Software as a mod on awareness
Sci-fi writers love dreaming up wetware modifications to our bodies. Upgrading our physical abilities appeals like a fetish. Magical powers. Superhero stuff.
Musk knows this. His plan for Neuralink—to hardwire our minds to computers—taps deep into geek sensibilities of cool, out-of-this-world shit.
When you think about it, though, it's just a new interface. An evolution from typing to point-and-click to touch to speech to thought. If it ever happens, hopefully it's more interesting than Siri / Alexa.
Interfaces are sexy (lol). They're where we encode and decode symbols into meaning. We input intent, as a command; software does something functional behind the scenes; and then, outputs new information for us to interpret. This is primitive analysis, but useful, because it highlights an incredible reality we take for granted.
When we put down the device in our hands and squint away the interface, we see that we are already running a bunch of software programs as modifications on our awareness.
We already have superpowers.
Our hardware, as a species, hasn't changed much in three hundred thousand years. But our software? We're running an entirely different operating system. We've networked the consciousness of some seven billion people.
We're in the middle of it, so it feels normal, mundane even. The banality doesn't help: everyone rushing in to harvest each other's attention, primarily to sell ads. Software is not eating the world. We are.
You and I are knowledge workers. We use and create programs that augment how people perceive and process information into knowledge. We read, experiment, and learn because we care about optimizing our production of knowledge, right? Of course, this is stupid business speak. We're not dealing with widgets anymore. Though it certainly feels at times like we are assembly line workers in a feature factory. Or slaving away in the mines.
But it can be a useful heuristic to trivialize something as unknown and mystical as human consciousness. We don't actually know what knowledge is, or how it works on our awareness. We're fumbling forth in darkness, fitting the world together with mental models. Even though imperfect, a common language (or set of symbols) gives us a way to talk about our experience.
Is knowledge data or code? I think of knowledge as programs that output code, used to compose new programs. Our awareness then encompasses all the software we run and the data stored therein, like an operating system. I won't belabor the analogy; it's common enough. But there are implications worth teasing out.
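A playful sketch of that analogy, not a claim about minds: here "knowledge" is a program that outputs a new program (a function), and those programs compose into larger ones. All names (`learn`, `compose`) are invented for illustration.

```python
def learn(concept):
    """Acquiring knowledge returns a new program: a function
    that filters perception through the learned concept."""
    def apply(perception):
        return f"{perception}, seen through {concept}"
    return apply

def compose(*programs):
    """Compose acquired programs into a larger program."""
    def composed(perception):
        for p in programs:
            perception = p(perception)
        return perception
    return composed

metaphor = learn("metaphor")
etymology = learn("etymology")
awareness = compose(metaphor, etymology)

print(awareness("a word"))
# → a word, seen through metaphor, seen through etymology
```

The point of the sketch is only that output and program are the same kind of thing here, so knowledge compounds: each program you acquire becomes raw material for the next.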
We commonly reference only two basic functions to apply to awareness. We can "raise" or we can "shift" awareness. That's it. Interestingly, there's no English idiom for "lowering" awareness, implying that knowledge, once acquired, cannot be taken away (or uninstalled).
With software as a model, we can introduce an expansive set of functional concepts to tinker with:
✱ How do we observe, test, debug our awareness?
✱ What are the inputs/outputs of various programs?
✱ What processes take up memory? What's running in the background? How do we reallocate resources?
✱ What are the composable units of our awareness? How do we make knowledge more accessible, scalable?
✱ Where do our data models define and limit our perception?
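To make those questions concrete, here's a hedged sketch that treats awareness like a process table we can inspect, the way `top` inspects an operating system. Every name and number below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    attention: int    # resource units consumed (made-up scale)
    background: bool  # running without our noticing?

# A hypothetical snapshot of one person's running programs.
awareness = [
    Program("inbox-checking", 30, background=True),
    Program("deep-work", 50, background=False),
    Program("doomscrolling", 20, background=True),
]

def top(programs):
    """Like `top`: sort programs by attention consumed."""
    return sorted(programs, key=lambda p: p.attention, reverse=True)

# What's eating our resources in the background?
background_load = sum(p.attention for p in awareness if p.background)

print(top(awareness)[0].name)  # → deep-work
print(background_load)         # → 50
```

The toy answers map back to the questions: `top` is observation, the `background` flag is what's running unnoticed, and `background_load` is the resource budget we might want to reallocate.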
When we understand that awareness is programmable, that we all run versions of the same software networked across cultural protocols, and that the apps on our phones and computers extend and augment our awareness while consuming its resources, the domain of knowledge work suddenly expands infinitely. And the stakes become real.
Consciousness is the final frontier. We know so little about awareness or how it works. You can study it from the perspective of biology, neuroscience, epistemology, psychology, mysticism, religion, et al. I enjoy the etymology, where our understanding of the word awareness hasn't changed much from its PIE origins. The prefix ge- originally meant together, completely. The root wer-: to perceive, watch out for. Etymologically, we can understand awareness to be the whole of our perception. Internal, external, everything.

And there we find it's cognate with the word wares. Objects of care, goods we value. Software and awareness have a shared linguistic origin.
Perhaps, our best understanding of awareness comes from works of art and literature. When a brave soul holds up a tremulous mirror to themselves, we feel truth in their reflection.
Is that because we can't know, because we've no way to measure? When we look, we change. So we can only use heuristics, approximations, and symbols.
Will computers one day become self-aware? The dream of AI? It's funny to me how we fetishize building a conscious machine when we don't even know what that means. Our only test is that we are fooled.
I have a theory that in the grand scheme of things, we are artificing computers in our own image to better understand ourselves.