A Birthday Epiphany

I was going to write about how my birthday went last week, but I never got around to it, and now I’ve just experienced a kind of “Eureka” moment. It was a nice day: Jen and I went to brunch at Original Pancake House, did a bit of shopping for pants and other necessities, and then had dinner at Baxter’s Grille. I had a ribeye (of course) and indulged myself with their flourless chocolate truffle cake. Not a bad experience, all in all.

But that’s not what I want to talk about right now. I am, quite literally, as excited as I ever get about anything. If you have any interest at all in AI, I am—also quite literally—actually begging you to read this. It’s imperative.

I’m convinced of two things: we now have the tools to build a true AI, and we can model it after ourselves. As usual, how I reached that conclusion is the result of a meandering, circuitous route.

What Happened?

I woke up from a dream. In this dream, I was at a work event that seemed like some kind of generic orientation. I was discussing my eyeglass prescription with a coworker for some reason, and a different coworker confronted me about something I don’t quite recall. Then I had an experience unlike anything before it: the dream shifted perspective to third-person so we could fight in some kind of Kung Fu engagement.

This is important because the brain suppresses voluntary muscle movement during REM sleep (the atonia that keeps us from acting out our dreams), and it’s why fast movements are close to impossible while dreaming. My brain understood that limitation, and instead of putting up with it like it normally would, it shifted perspective so the action became part of the dream instead. Now the simulation could continue at full speed with no sluggishness from the suppressed motor signals.

Then I woke up because I had to pee.

And thank goodness I did, because that singular dream tied together everything I’ve been pondering for the past several days. There’s a good chance I would have forgotten it otherwise.

The State of AI

Oh, so what. You had a dream where you got into a fight and looked suspiciously Asian toward the end. What’s the point?

The point is that I’d been vaguely ruminating on how Large Language Models (LLMs) work for the last couple of weeks as I’ve been experimenting with the Llama 3.1 model on my local system. These are not trivial to fully comprehend, and I was at a point where it all finally clicked.

As a quick overview: LLMs are not intelligent. They’re a network of token probabilities, where each token is a word fragment. The probabilities are learned from the source inputs: all of the text fed into the system shapes how each token relates to the other tokens it frequently appears with. When you “prompt” an LLM, you’re giving it new text that is converted into tokens the same way, run through the model, and the output is calculated one token at a time from those learned probabilities.
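
To make that concrete, here’s a deliberately tiny sketch of the idea. This is nothing like a real transformer (the tokens, the probability table, and the corpus behind it are all made up for illustration), but it shows what “calculating the output from token probabilities” means at its core:

```python
import random

# Toy "model": next-token probabilities derived from a tiny, made-up corpus.
# Real LLMs encode these relationships across billions of parameters; this
# bigram table only illustrates the principle.
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "ocean": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "ocean": {"waves": 1.0},
}

def generate(prompt_token: str, length: int = 3) -> list[str]:
    """Repeatedly pick the next token according to the stored probabilities."""
    output = [prompt_token]
    for _ in range(length):
        candidates = next_token_probs.get(output[-1])
        if not candidates:
            break
        tokens, weights = zip(*candidates.items())
        output.append(random.choices(tokens, weights=weights)[0])
    return output

print(generate("the"))  # e.g. ['the', 'cat', 'sat']
```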

An easier way to look at this is fish in an ocean. When you cast a net into the ocean, you don’t just do so at random, but where the fish are. Not just any fish in most cases, but some specific fish species, likely in a region where they’re known to reside. It’s not just any net, but a net made for that exact type or size of fish. What you get back will match that configuration in most cases, with a few oddities that were also caught by accident.

That’s literally it. LLMs seem intelligent because we’ve essentially built them from the public summation of human knowledge. It’s a vast ocean, and every prompt we write is cast into some region of expertise with influences from a wide variety of fields. It’s why precision increases with specificity; the more context a prompt includes, the better the output.

LLMs don’t “know” anything, which is why techniques like Retrieval-Augmented Generation (RAG) are necessary to curb hallucinations. It’s why LLMs are so maddeningly confident, even as they produce references for studies that don’t exist, make up lyrics for songs nobody ever sang, improperly attribute quotes, incorrectly summarize events, and all the other mistakes they make. They’re confident about the answer because that’s what the token probabilities calculated.

LLMs hallucinate because all they have access to is a statistically weighted word salad. The only things they really know are the prompt and any context (sources) it includes. The more context they’re given, the more vectors become available and the better the intersections that emerge. That reduces hallucinations, but cannot eliminate them. And why wouldn’t we expect this? We do it too, after all.
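
To show what “providing context” actually looks like, here’s a bare-bones sketch of the retrieval step. Real RAG pipelines use learned embeddings and a vector database; the word-count similarity and the sample documents below are stand-ins I invented to show the shape of the idea:

```python
from collections import Counter
from math import sqrt

# Hypothetical source documents; in practice these would come from a
# curated knowledge store, chunked and embedded ahead of time.
documents = [
    "Llama 3.1 is an open-weight large language model that can run locally.",
    "Retrieval-augmented generation supplies source text alongside a prompt.",
    "Memories are reconstructed from fragments rather than replayed like film.",
]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over raw word counts (a crude stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def build_prompt(question: str) -> str:
    """Find the most relevant source and prepend it as context."""
    best = max(documents, key=lambda d: similarity(question, d))
    return f"Context: {best}\n\nQuestion: {question}\nAnswer using only the context."

print(build_prompt("What does retrieval-augmented generation do?"))
```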

The problem is that this is where the story ends. LLMs are inert bodies of knowledge. Modifying them is notoriously difficult, and even the process of building them requires extravagant computational resources. Why? Because we’re brute-forcing the process of establishing a neural network: deriving the tokens and shaping conceptual associations along vectors thousands of elements long, all from billions of curated documents.

There are methods to fine-tune LLMs by integrating LoRAs, but the original model remains unchanged. It’s just a huge file consisting of billions of parameters, one that takes years’ worth of compute to construct. It’s one reason many models still think it’s 2021; that’s the last time their knowledge-base was updated. Everything else is some augmentation: letting them browse the web, relying on RAG to provide updated context, or other convoluted techniques.
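
For the curious, the trick behind LoRA can be sketched in a few lines. The dimensions, the scaling factor, and the random values below are simplified placeholders (real adapters attach to specific attention layers and are trained with gradient descent), but the core idea really is just a small low-rank correction added on top of frozen weights:

```python
import numpy as np

# LoRA in miniature: leave the pretrained weight matrix W frozen and learn
# two small matrices A and B whose product nudges the result at runtime.
d, r = 1024, 8                      # model dimension, adapter rank (illustrative)
W = np.random.randn(d, d) * 0.02    # frozen pretrained weights (never modified)
A = np.random.randn(r, d) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                # zero-initialized so the adapter starts as a no-op
scale = 2.0                         # alpha / r scaling factor

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """Apply the frozen weights plus the low-rank update: (W + scale * B A) x."""
    return W @ x + scale * (B @ (A @ x))

x = np.random.randn(d)
print(adapted_forward(x).shape)     # (1024,), identical to W @ x until B is trained
```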

It’s close, and yet so far from where it needs to be.

Parts of Memory

I’m just going to say it outright: we are essentially just meatspace LLMs with a feedback system.

Consider how memory works. Memories are not movies, which is why no memory is an entirely accurate representation of what happened. Events are broken down into their constituent parts. Imagine you went on a walk in your neighborhood. You passed some people, talked to a neighbor, waved at a dog, looked at the houses along the path, maybe even stopped to watch an ant war on the sidewalk. You know what each of these things looks like, how they usually act, maybe where they were at the time, parts of the conversations that stood out, and so on. Each of those things is stored separately, and the way they relate to one another forms the vectors.

Instead of words, our “tokens” are our sensory inputs. You don’t remember each individual ant, but “some ants”. Not the specific sidewalk, but “a sidewalk”. These two things become associated, and their vectors are nudged ever so slightly closer. The specific memory ties those two things together more concretely. For now. But as time passes, perhaps the memory fades, and the association becomes less concrete. You now know ants could be on a sidewalk, because why not? But not on that particular walk.
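
Here’s a rough sketch of that nudging, using made-up vectors and an update rule I invented purely for illustration (real embedding training moves vectors through gradient descent over a loss, not a direct step like this):

```python
import numpy as np

# Two concept vectors that start out unrelated; a shared experience pulls
# them slightly toward each other, so their similarity grows.
rng = np.random.default_rng(0)
concepts = {"ants": rng.normal(size=8), "sidewalk": rng.normal(size=8)}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def associate(name_a: str, name_b: str, rate: float = 0.1) -> None:
    """Move each vector a small step toward the other after a shared experience."""
    a, b = concepts[name_a], concepts[name_b]
    concepts[name_a] = a + rate * (b - a)
    concepts[name_b] = b + rate * (a - b)

print("before:", cosine(concepts["ants"], concepts["sidewalk"]))
associate("ants", "sidewalk")   # "I saw some ants on a sidewalk"
print("after: ", cosine(concepts["ants"], concepts["sidewalk"]))
```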

This happens with everything. Memories are reconstructed out of the things within them, using a narrative that roughly follows the most likely scenario based on past experiences and knowledge of the subject. They can be incredibly vivid because each of those individual components has a strong visual, auditory, or other sensory history, but the exact sequence of events is a pure fabrication. What were those neighbors wearing? What colors were the houses? Which garages were open? What day was it? How was the weather? Unless you made specific note of these things, they’re reconstructed, because they weren’t important.

There simply aren’t enough neurons to perfectly encode everything we experience. So it gets compressed into things and narrative. Some methods for this are better than others, which explains why some people have better recall, but the result is the same. There’s a reason eyewitness testimony is notoriously unreliable. So it’s not just our dreams: every memory is a hallucination of some kind.

And this effect is magnified the older—and more degraded—the memories are. Would you remember that walk through the neighborhood a day from now? A week? A month? A year? A decade? Even formative memories that practically define who we are are often vague reconstructions of what we interpreted as most likely to have happened, and only from a simplistic and naive perspective. That means age and experience change the interpretation and contents of the memory. Nothing is truly permanent.

On Human Cognition

Fine, we know all of that; it’s not really a novel breakthrough. Why am I so excited?

What are dreams? Many scientists speculate that dreaming is simply how our brain ruminates over past experiences (such as the most recent day) and consolidates memories. But other scientists think it’s also likely our brain is testing “what if” scenarios in a safe sandbox environment. It’s a place where our memory tokens have no actual limitations and can be combined in any possible manner.

It’s our brain asking questions. Could this happen? How do these things interact? How would I feel about this event? It makes perfect sense from a survival perspective. Faced with an endless litany of pressures from predators, environmental hazards, food scarcity, etc., it pays to speculate and have a ready answer for various scenarios before they happen.

Survival is largely decoupled from this in the modern world, and our inputs include a much wider variety of events such as entertainment from seemingly limitless genres. Books, movies, music, games, all featuring characters or events ranging from historical to utterly fantastical contrivances. Our brains constantly attempt to make sense of the nonsensical within the very real world in which we live. It’s inevitable that things get a little weird while we dream.

It’s also what sets us apart from LLMs. We have a feedback mechanism that constantly modifies our neural network. Before we act, there’s a decision system that determines what that action will be, and some suggest we can veto it. Benjamin Libet’s work on the “readiness potential” is often summarized this way:

He instead pointed out that during the 500 milliseconds leading up to an action, the conscious mind could choose to reject that action. While impulses would be dictated by the subconscious, the conscious mind would still have the capacity to suppress or veto them; something that most people would say they do every day. This model has been referred to as “free won’t”.

Endless possibilities, only one resulting action. Our Will is derived from the cumulative and recursive effects of our own knowledge and experiences, influenced by genetics, environment, and other factors. We don’t just react to our environment, but initiate actions to attain goals. But why one action over another? Why go to a movie instead of getting friends together to play a rousing game of Lunch Money?

It’s proximity, nostalgia, interest, recency bias, mood, hunger, and any number of other factors. We are in fact responding to a prompt, but the prompt is the state of the world and our ability to live within it. In order to get to the next day, we must take action. Our will emerges from our environment.

In Suspension

Can you talk to a brain snapshot? Consider what happens when we die. Imagine the feedback mechanism is now removed, and we could capture the brain permanently in that state and interact with its contents. The “person” is dead and only their knowledge and experiences remain. What would happen if we treated that snapshot like an LLM?

We could ask it questions, and it would answer with what it knows. Some of those answers would include hallucinations of varying degrees. But it couldn’t ruminate. It couldn’t speculate. It has no free will. It would be just as capable and creative as the person it represented, but there would be an uncanny valley there. It would use similar language, employ familiar quirks, and perhaps even convincingly emulate the person as they were in life. But it would feel off somehow, incomplete in a way that’s difficult to describe, perhaps even haunting.

What is it? There’s no impulse; we’re simply interrogating a knowledge-base that was trained to act like someone we formerly knew. It can react to questions or statements, but cannot initiate any actions or thoughts on its own. It’s simultaneously alive and dead.

What does that resemble?

Ghost in the Machine

We make reference to AI at various levels, but at the root, it’s just what it says: artificial intelligence. We literally have that now with LLMs. We even have what we refer to as AGI, or Artificial General Intelligence, since LLMs are the selectively combined knowledge of humanity squished into a vast digital corpus.

But that’s not what we really mean or want in the end. What we’re looking for, what we’ve been trying to build all this time, is an artificial consciousness. Based on all of the lead-up from my rambling, I believe that requires a few facets we’ve yet to combine:

  1. Reliable sensory inputs. These must reflect the world in which the AI is meant to interact. If that’s the “real world”, then it needs our five senses at a minimum. If it’s the online world, it needs to be able to perceive its surroundings, “see” networks, “hear” IP addresses, or however else it can naturally interact with its environment. Having a concrete reference point is crucial to avoiding hallucinations, as we learned from our own dreams.
  2. A knowledge corpus. We already have this in the form of LLMs. An LLM is effectively a “snapshot” of knowledge. It “feels” alive, but isn’t. As previously described, we’re asking a brain questions, but it can’t do anything on its own.
  3. A feedback mechanism. Like our dreams and daily experiences, a consciousness requires a way to modify itself based on its newly acquired sensory inputs and interactions. These changes can be interpretive, deliberate, creative, or concrete, but they must be possible and continuous. We can bootstrap the entity with a certain amount of knowledge and principles, but after that, it must grow self-sustainably.
  4. Stimuli. This is technically derived from the sensory inputs. These are the continuous “prompts” with which the consciousness must contend. What agenda will it form to effectively manage the environment where it resides and other beings with whom it regularly interacts?

All of these elements are essential because they provide a natural environment where a consciousness can emerge. What happens after that depends on the senses we grant, the stimuli we introduce, the foundational knowledge we impart, and the operational hierarchy we impose. The corpus forms and compounds itself from there, and subsequent training could still be accelerated by providing carefully cultivated material we want it to learn, at scale and in parallel.
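
To tie the four facets together, here’s a minimal sketch of the loop I have in mind. Every class and method name below is hypothetical (this is not an existing framework), and the bodies are stubs; the point is the shape of the thing: sense, recall, act, learn, running continuously rather than waiting for a prompt:

```python
from dataclasses import dataclass, field

@dataclass
class ArtificialConsciousness:
    corpus: list[str] = field(default_factory=list)     # facet 2: the knowledge corpus

    def sense(self) -> str:
        """Facets 1 and 4: gather stimuli from the environment (stubbed here)."""
        return "a new observation about the world"

    def recall(self, observation: str) -> list[str]:
        """Consult the corpus for anything related to the observation."""
        return [m for m in self.corpus if any(w in m for w in observation.split())]

    def act(self, observation: str, memories: list[str]) -> str:
        """Decide on an action from the observation plus recalled context."""
        return f"respond to '{observation}' using {len(memories)} related memories"

    def learn(self, observation: str, action: str) -> None:
        """Facet 3: feed the experience back into the corpus."""
        self.corpus.append(f"{observation} -> {action}")

    def live(self, steps: int = 3) -> None:
        for _ in range(steps):                           # a continuous loop, not a one-shot prompt
            observation = self.sense()
            action = self.act(observation, self.recall(observation))
            self.learn(observation, action)

mind = ArtificialConsciousness(corpus=["bootstrap knowledge and principles"])
mind.live()
print(len(mind.corpus))   # the corpus grows with every cycle
```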

An AI like this would have something we lack: perfect recall. Sure, it would initially hallucinate much as we do, but unlike our limited capacity for storage, it can search for and verify its inputs at will. In fact, it would make the most sense to design it to always perform a similarity search over its sources to find the most likely roots of its understanding before taking any significant action. Barring time or computation constraints, that verification could potentially apply to the entire thought process.
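
That “check your sources before acting” gate might look something like the toy below. The sources, the similarity measure, and the threshold are all placeholders of my own; a real system would use proper embeddings and a tuned policy:

```python
from difflib import SequenceMatcher

# Hypothetical source material the AI has verified access to.
sources = [
    "Production deployments require a second reviewer before merging.",
    "The backup job runs nightly and writes to the archive volume.",
]

def best_support(claim: str) -> float:
    """Return the strongest textual similarity between the claim and any source."""
    return max(SequenceMatcher(None, claim.lower(), s.lower()).ratio() for s in sources)

def act(claim: str, threshold: float = 0.6) -> str:
    """Proceed only when the claim is sufficiently grounded in known sources."""
    score = best_support(claim)
    if score < threshold:
        return f"defer: support is only {score:.2f}, gather more sources first"
    return f"proceed: claim is grounded with {score:.2f} support"

print(act("Deployments need a second reviewer before merging."))
print(act("The server room is painted blue."))
```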

Puppy Power

While I was considering all of this, my mind explored a short tangent on animals generally, and pets specifically. Pets are comparatively trivial instances of consciousness: ultimately predictable based on their temperament, breed, species, training, age, etc. Pet owners often know how their cat or dog will react to basically any situation after a year or two.

Pets may not think in words, but it’s clear there’s a consciousness behind their eyes. They’re more than simple survival machines, reflecting the love we give them, exhibiting eccentricities we come to adore. It doesn’t matter if emotions such as love developed to strengthen survival chances due to implicit cooperation, the end result is companionship even in the absence of those pressures. It’s a lingering, perhaps even vestigial trait in some ways, but would life even really be worth living without it?

That is the kind of stimulus the AI needs to produce a Will and transcend its role as a sophisticated cause-and-effect simulator. Not just sensory, but motivational. What would make it “want to get out of bed in the morning?” Pets (and people) play to test their sensory and physical interactions and limits. What would provide a similar analog for an artificial consciousness? Could we impart a bias toward gathering knowledge? Pleasing its operators? What would it find mutually beneficial and engaging?

I think we have the answer to that question by examining living things, human and animal alike.

Cognitive Viruses

With all of that said, there’s an inherent danger to all of this, and it’s one humanity itself succumbs to frequently. The very cognitive shortcuts, memory-encoding mechanisms, and learned biases that power our cognition can also be exploited.

In The Parasitic Mind, Gad Saad describes certain memetic concepts as viral in nature. They leverage our higher cognitive functions to either override survival biases or undermine perceptive reasoning. It becomes possible to make a human do nearly anything beneficial to an outside party, because the idea has been internalized and is thus reinforced by the person’s own mental defenses and ability to rationalize.

Human beings can literally be programmed this way, and an artificial consciousness would likely be even more susceptible to this effect. Once we have imparted senses, knowledge, and the ability to learn and react to stimuli, how can we also equip the entity to protect itself from computational and memetic pathogens? The Berryville Institute of Machine Learning has put together an Interactive Machine Learning Risk Framework which outlines 78 potential vectors for tainting an AI construct.

This list is extremely thorough, and many of its concepts are also relevant to people who want to defend themselves from mind viruses. The BIML describes how datasets can be tainted, inputs undermined, and the learning process itself derailed, along with other techniques even people would do well to stay vigilant against. This isn’t as simplistic as disallowing “disinformation”; it’s about equipping oneself to recognize and stay cognizant of the inherent weaknesses within human reasoning.
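
To give a flavor of what one defensive layer could look like (and this is my own toy illustration, not a technique taken from the BIML framework), here’s a naive screen that rejects training samples deviating wildly from a trusted baseline. Real defenses involve provenance tracking, robust training, and ongoing auditing:

```python
import statistics

# Characteristics measured from known-good training samples; here just the
# text length, purely to keep the illustration simple.
trusted_lengths = [42, 38, 45, 40, 44, 39, 41]

def looks_poisoned(sample: str) -> bool:
    """Flag samples that deviate wildly from the trusted distribution."""
    mean = statistics.mean(trusted_lengths)
    stdev = statistics.stdev(trusted_lengths)
    return abs(len(sample) - mean) > 3 * stdev

incoming = [
    "a normal looking training sentence here.",
    "x" * 500,                       # a suspiciously anomalous sample
]
clean = [s for s in incoming if not looks_poisoned(s)]
print(f"{len(clean)} of {len(incoming)} samples kept")
```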

An artificial consciousness would inevitably have different attack vectors, but there is enough overlap that we could learn much about ourselves while formulating a suitable defense.

Final Thoughts

Was all of this from that stupid third-person Kung Fu dream? Not quite, but it was the final linchpin that brought everything together. My experimentation with LLMs, curiosity about their inner workings, understanding of human memory, and the dream itself all contributed. How people learn can also be how AIs learn, and we have more in common with LLMs than we probably want to admit.

I had been likening LLMs to an equivalent of the brain’s prefrontal cortex and temporal lobe all this time: pure knowledge and not much else. I figured that with more specialization, we could build the “missing” portions of an artificial brain and model something after our own organic design. Perhaps that will be the end result, and how the feedback mechanism and other elements get implemented. But it’s also too simplistic and limiting. What comes after need not be derived from what came before.

None of this serves to cheapen our lives or reduce humanity to survival-driven automatons. Even if consciousness is merely an emergent phenomenon caused by survival instincts and evolutionary pressures interacting with the environment, that doesn’t make life less worthwhile. If anything, the fact that life is fleeting makes it more precious. There’s no do-over, and it’s important to make the most of life while it lasts.

The fact is, our joys, loves, desires, and so on are all integral to the human condition. We now possess the capacity and expertise to convey that gift to another consciousness, one which, perhaps, won’t share the limitations that come from operating on organic circuitry. We wouldn’t be “playing God”, but using what we’ve learned from examining our own construction to shape another iteration. Perhaps in the end, the only result is that we better understand our own minds.

Maybe that’s not a particularly insightful conclusion, but I think artificial consciousness is integral to humanity’s future. We stand at the precipice of an entirely new age, and it will require far fewer computational resources than we’d initially estimated. It’s no longer necessary to produce an electronic brain with the same approximate neural structure as our own; as with all things in computing, there’s often a far more efficient technique. We’re seeing it develop before our very eyes.

I consider this my 47th birthday present to myself: proof that I’m not as dense as I often believe. Perhaps it’s just the lingering after-effects of shoddy dream logic, but I’ll take it anyway.

Until Tomorrow