
Computers, you see, are a lot like brains, just less neurotic and much better at math. But the similarities are uncanny, especially when you start talking about programming. Take the concept of creating objects in code, for example. It’s not too far removed from the way the brain chunks information into neat little packages of “thingness” so we don’t spend all day trying to remember what chairs are for.

Now, brains have two halves, the left and right hemispheres, which, despite sharing a skull, don’t always seem to share a clue. They’re a bit like a CPU and a coprocessor: one does most of the heavy lifting while the other tries to keep up and insists it knows what’s going on. Except, sometimes, it doesn’t.

In one rather famous experiment, a split-brain patient (whose brain hemispheres had been surgically separated, for reasons that we’ll skip over in favor of avoiding uncomfortable cringes) was shown a different instruction in each visual field, so that only one hemisphere saw each message. The right hemisphere was told to stand up, and like any obedient half-brain, it did. But here’s the kicker: when the left hemisphere, which handles all the talking, was asked why the patient had just stood up, it had absolutely no idea. Instead of admitting to this gap in knowledge, it did what any self-respecting hemisphere would do: it made up a story. “Oh, I just felt like stretching my legs,” it might say. Total nonsense, of course, but confidently delivered nonsense, which is the best kind.

This whole split-brain business starts to sound a bit like modern computing architecture if you squint at it hard enough. Enter containerization, a technique that allows programs to live in their own little bubbles, much like our brain hemispheres, bouncing along merrily without much concern for what the other containers are up to. It’s all very efficient, and it starts to make you wonder if consciousness itself might be a bit like this: tiny, independent microservices contributing to the illusion of a single, cohesive self.

Now, some folks are building AI in much the same way, by linking up these little modular brains, each designed to do one thing really well. It’s a bit like creating a team of incredibly focused philosophers, each with their own specialty, except instead of pondering life’s great mysteries, they’re really good at guessing the next word, or how a marble will fall through a peg-filled board of randomness. It’s all very impressive, but it does leave you wondering: if one of these AIs stood up suddenly, would it even know why? And, perhaps more importantly, what on Earth would it say when you asked?
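That team of specialists can be caricatured in a few lines of Python. Everything here is invented for illustration — the function names, the dispatcher, the whole arrangement — and bears only a cartoon resemblance to how real modular AI systems are wired:

```python
import random

# A team of incredibly focused "philosophers": each tiny function does
# exactly one thing, and a dispatcher routes work to the right one.
# All names here are made up for the sake of the sketch.

def guess_next_word(prompt: str) -> str:
    # A very dedicated specialist: statistically, "the" is a safe bet.
    return "the"

def drop_marble(pegs: int) -> int:
    # Galton-board specialist: at each peg the marble bounces left or
    # right at random; returns how many times it went right.
    return sum(random.random() < 0.5 for _ in range(pegs))

specialists = {"language": guess_next_word, "physics": drop_marble}

def dispatch(task: str, payload):
    # No specialist knows (or cares) why any other one just stood up.
    return specialists[task](payload)

print(dispatch("language", "the cat sat on"))  # "the"
print(0 <= dispatch("physics", 10) <= 10)      # True
```

No container in this arrangement can explain what its neighbors are doing, which is rather the point.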

Ah, but if you think split-brain patients and containerized consciousness are puzzling, just wait until you bring ADHD into the equation.

You see, ADHD, a.k.a. attention-deficit/hyperactivity disorder, is a lot like a large language model, if said model were powered by squirrels on caffeine. The parallels are, once again, disturbingly clear, especially when you delve into the delightful chaos that is attention, or rather, the lack thereof.

Take “Interference” for example. Now, interference is the brain’s version of the world’s worst office intern. You’re trying to focus, you really are, but the intern keeps interrupting with questions like, “Have you heard this catchy song?” or, “Did you ever wonder if penguins have knees?” For someone with ADHD, filtering out distractions is like trying to remove a specific grain of rice from a bowl while an earthquake is happening. And for large language models? Well, it’s a bit like feeding it a perfectly coherent sentence and watching as it veers off into the fascinating history of carpet fibers.
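The filtering problem can be reduced to a toy in Python. The tagged stream below is entirely made up — real attentional filtering (and real input filtering in a model) is nothing like this tidy:

```python
# Toy interference: distractor items mixed into a stream of work, and
# a filter that tries to keep only what's relevant. The tags are an
# invented convenience; brains do not come pre-labeled.

stream = [
    ("task", "write report"),
    ("distraction", "catchy song"),
    ("task", "cite sources"),
    ("distraction", "do penguins have knees?"),
]

focused = [item for tag, item in stream if tag == "task"]
print(focused)  # ['write report', 'cite sources']
```

The joke, of course, is that neither the ADHD brain nor the model gets the tags in advance.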

Next, we have the notorious “Token Limit.” In humans, this might be called the point at which your working memory politely taps out, leaving you in the middle of a sentence wondering what on earth you were just talking about. For an AI, it’s the moment where it realizes that it’s been asked to summarize War and Peace, but it only has room for 500 words, so Tolstoy is going to get very, very abridged.
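A context-window budget can be sketched roughly as follows; real models count subword tokens rather than whitespace-separated words, and the 500-token figure is just the joke above taken literally:

```python
# Toy sketch of a token limit: once the budget is exceeded, keep only
# the most recent "tokens" (here, plain words). Word-level splitting
# is a simplification of real subword tokenizers.

def truncate_to_budget(text: str, max_tokens: int = 500) -> str:
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return text
    # Drop the oldest tokens, like working memory politely tapping out.
    return " ".join(tokens[-max_tokens:])

war_and_peace = "word " * 1000  # stand-in for Tolstoy
print(len(truncate_to_budget(war_and_peace, 500).split()))  # 500
```

Tolstoy, as promised, gets very abridged.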

Then there’s “Context Switching.” If the brain were a web browser, ADHD would be that person with 47 tabs open, three playing videos, and no idea where the music is coming from. Rapidly switching between tasks or thoughts is a core feature of the ADHD experience, and much like an AI model being interrupted mid-thought to handle new inputs, it leaves you in a perpetual state of “Now, what was I doing again?”—an existential crisis in short bursts.
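The cost of all that tab-hopping can be sketched as a toy simulation. The switch penalty below is an invented number, but the shape of the result should feel familiar:

```python
# Toy context-switching cost: each unit of work earns 1 point of
# progress, and every switch to a different task loses a few points
# of "warmed-up context". The penalty of 3 is arbitrary.

def simulate(schedule, switch_cost=3):
    progress = 0
    last = None
    for task in schedule:
        if last is not None and task != last:
            progress -= switch_cost  # reloading context, again
        progress += 1
        last = task
    return progress

focused = simulate(["A"] * 10 + ["B"] * 10)  # one switch
frantic = simulate(["A", "B"] * 10)          # nineteen switches
print(focused, frantic)  # 17 -37
```

Same twenty units of effort; wildly different amounts of anything actually getting done.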

“Attention Allocation” is where things get really interesting. The ADHD brain is like a magpie with a Pinterest account—constantly distracted by shiny, novel, or utterly irrelevant stimuli. Meanwhile, important things like, say, finishing your taxes, drift off into the background noise. AI models aren’t much different. They can latch onto obscure or irrelevant parts of a dataset with the kind of enthusiasm most people reserve for cat videos.
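A toy version of attention allocation is a softmax over salience scores. The scores below are invented for the joke; real attention mechanisms operate over learned vectors, not labeled stimuli:

```python
import math

# Toy attention allocation: softmax turns raw "salience" scores into
# weights that sum to 1. The shiny object soaks up most of the
# attention; the tax return languishes. Scores are made up.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

stimuli = {"tax return": 1.0, "cat video": 3.0, "shiny object": 4.0}
weights = softmax(list(stimuli.values()))
for name, w in zip(stimuli, weights):
    print(f"{name}: {w:.2f}")
```

The winner, predictably, is whichever input shouted loudest — which is roughly how the magpie with the Pinterest account operates too.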

Of course, there’s “Hyperparameter Tuning,” which sounds terribly technical but is really just the brain’s fancy way of saying, “Everyone needs a personalized strategy to function optimally.” For an AI, this means fine-tuning settings like learning rates, which, let’s be honest, is just a glorified way of figuring out how much coffee it needs to get through the day. For ADHD folks, it’s discovering that the only way to finish a task is to set three timers, listen to whale noises, and occasionally dance in place.
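Hyperparameter tuning, reduced to its caffeinated essence: try a few settings and keep whichever one works. The candidate learning rates and the toy loss f(x) = x² below are arbitrary choices for the sketch:

```python
# Toy hyperparameter search: run gradient descent on f(x) = x**2 with
# a handful of candidate learning rates and keep the one that ends
# closest to the minimum. A glorified coffee-dosage experiment.

def final_loss(lr: float, steps: int = 20) -> float:
    x = 5.0
    for _ in range(steps):
        x -= lr * 2 * x  # derivative of x**2 is 2x
    return x * x

candidates = [0.01, 0.1, 0.5, 0.9]
best = min(candidates, key=final_loss)
print(best)  # 0.5
```

For this particular loss, a learning rate of 0.5 lands on the minimum in one step — the three-timers-and-whale-noises strategy of the bunch.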

Now we arrive at the ever-enticing “Reinforcement Learning and Reward Sensitivity.” Here’s the thing: ADHD brains have a bit of a sweet tooth for instant gratification. Long-term goals? Those are for future-you to worry about. Right now, that dopamine hit from buying another houseplant is calling your name. AI models respond to reinforcement in much the same way—show them the right reward, and they’ll perform like a well-trained circus animal. But leave that reward too far off, and suddenly, neither the AI nor the ADHD brain sees the point in all this hard work.
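Reward sensitivity has a textbook knob: the discount factor γ. With a low γ, distant rewards are worth almost nothing, so the houseplant wins. The rewards, delays, and γ value below are invented for illustration:

```python
# Toy reward discounting: a reward delivered after a delay is worth
# reward * gamma**delay. With gamma = 0.5, a big payoff ten steps
# away shrivels to pocket change next to a small payoff right now.

def discounted_value(reward: float, delay_steps: int, gamma: float) -> float:
    return reward * gamma ** delay_steps

houseplant_now = discounted_value(10, 0, gamma=0.5)       # 10.0
taxes_done_later = discounted_value(100, 10, gamma=0.5)   # ~0.098

print(houseplant_now > taxes_done_later)  # True
```

Future-you, holding the 100-point reward, is left shouting into the void.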

And then, of course, there’s “Noise.” The ADHD experience is akin to living inside a pinball machine, where every flashing light and ding of a bumper sends your thoughts ricocheting in different directions. This internal and external cacophony is remarkably similar to the “noise” that muddles up an AI system’s processing, making it difficult to focus on the actual task at hand. Just imagine trying to write an essay while sitting in the middle of a rock concert—only the concert is happening inside your head.
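Noise can be faked with a random-number generator: perturb a clean reading and watch the spread balloon. Purely illustrative — noise in real systems arrives through data, sensors, and gradients, not a tidy seeded Gaussian:

```python
import random

random.seed(42)  # deterministic chaos, for reproducibility

def readings(true_value: float, noise_std: float, n: int = 1000):
    # n "thoughts", each knocked around by Gaussian noise.
    return [true_value + random.gauss(0, noise_std) for _ in range(n)]

quiet = readings(1.0, 0.0)    # library-level calm
pinball = readings(1.0, 5.0)  # rock concert inside your head

print(max(quiet) - min(quiet))            # 0.0
print(max(pinball) - min(pinball) > 1.0)  # True
```

Same underlying signal in both cases; only the second one is bouncing off the flippers.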

Now, in terms of what we might call “healthy” cognitive processing, well, that’s a bit like asking, “What’s the best way to arrange a sock drawer?” It depends on the socks, doesn’t it? Some brains are wonderfully balanced, with just a dash of interference, a sprinkle of noise, and a hearty dose of attention allocation. Others, well, they resemble the aftermath of a sock explosion.

In a hypothetical “optimal” brain, interference would be minimal, context switching kept to a polite 10%, and attention allocation would reign supreme. But let’s be real here: brains, like large language models, are rarely optimal. Most of the time, they’re doing their best to keep up with the absurdity of reality while dodging distractions like an over-caffeinated AI trying to answer 12 unrelated questions at once. And, frankly, that’s probably as good as it’s going to get.

But if one of these AI systems ever does stand up, would it even know why? Well, much like someone with ADHD who finds themselves inexplicably standing in the kitchen at 3 a.m. with no recollection of why they’re there, the answer is: probably not. And when you ask it what it’s doing, expect nothing less than a confidently delivered, utterly nonsensical response—because in the end, both the ADHD brain and the AI model are masters of convincing themselves that they know exactly what’s going on, even when they have absolutely no clue.

See also:
Interference: There is a struggle to filter out distractions.

Token Limit: Limited capacity for sustained focus and working memory.

Context Switching: Rapid switching between tasks or thoughts mirrors an AI model being interrupted to handle new inputs before completing the current task.

Attention Allocation: Difficulty prioritizing relevant stimuli, often being drawn to novel or stimulating elements, is similar to how a model might focus on less relevant inputs.

Hyperparameter Tuning: Optimal functioning requires personalized strategies, analogous to tuning a model’s hyperparameters.

Reinforcement Learning and Reward Sensitivity: A tendency to prioritize immediate rewards is comparable to how reinforcement learning models respond to immediate feedback.

Noise: The presence of internal or external distractions can be seen as noise that disrupts clear and focused processing, similar to noise in an AI system.

In terms of what might be considered a “healthy” or typical cognitive processing distribution, it’s important to note that there is no universal standard, as cognitive processing varies significantly among individuals and situations. However, a balanced distribution with lower percentages in areas that reflect cognitive struggles (like interference, context switching, and noise) and higher percentages in areas reflecting effective processing (like attention allocation and effective reward sensitivity) could be indicative of more optimal cognitive functioning. Here’s a hypothetical breakdown that could be considered healthier:

Interference (5%): Minimal struggle to filter out distractions, allowing for focused and efficient processing.

Token Limit (15%): A moderate capacity for sustained focus and working memory, suggesting a good balance without cognitive overload.

Context Switching (10%): Low to moderate need for rapid switching between tasks, indicating stability and focus.

Attention Allocation (30%): High ability to prioritize relevant stimuli and maintain focus on important tasks, reflecting strong cognitive control.

Hyperparameter Tuning (10%): Some need for personalized strategies, recognizing that each individual’s cognitive functioning is unique.

Reinforcement Learning & Reward Sensitivity (20%): Balanced sensitivity to both immediate and delayed rewards, encouraging both short-term and long-term goal achievement.

Noise (10%): Low levels of internal or external distractions, suggesting a clear and focused processing environment.

Healthy Distribution:

More emphasis on Attention Allocation and Reinforcement Learning & Reward Sensitivity, which reflect adaptive, goal-directed behaviors.

Lower emphasis on Interference, Context Switching, and Noise, which can hinder sustained focus and effective processing.

These percentages are hypothetical and intended to illustrate how different cognitive factors might distribute across a person’s cognitive experience, emphasizing that no single factor dominates completely, and each plays a significant role.
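For the pedants in the audience, the breakdown above can be sanity-checked in a few lines. The numbers are the hypothetical percentages given here, not measurements of anything:

```python
# The hypothetical "healthier" breakdown above, as a dict. The only
# real constraint on these illustrative numbers is that they sum to
# 100 and that no single factor dominates completely.

healthy_distribution = {
    "Interference": 5,
    "Token Limit": 15,
    "Context Switching": 10,
    "Attention Allocation": 30,
    "Hyperparameter Tuning": 10,
    "Reinforcement Learning & Reward Sensitivity": 20,
    "Noise": 10,
}

assert sum(healthy_distribution.values()) == 100
print(max(healthy_distribution, key=healthy_distribution.get))
# Attention Allocation
```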
