Sunk Cost, the Meaning Crisis, & Taste
Last night, I listened to an interview with Gwern Branwen. Since he is an anonymous online writer, his identity was concealed using a computer-generated avatar and an AI-generated voice. The interview got me thinking, as I’m inclined to do, so I’m going to let my thoughts wander and see what appears on the page. Branwen’s writings cover a broad range of topics, but his main focus is artificial intelligence: specifically, that these models are scaling in a predictable fashion and that, as a consequence, artificial general intelligence (AGI) is just around the corner, perhaps three years away, and by 2030 at the latest. In the interview, he more or less takes this as a given and considers what living in such a world might look like and, more pertinently, what this future might mean for how we live our lives now. I am also going to take his timeframe as a given.
Before I reflect on the content of the interview, I want to say a little about what has been called the “meaning crisis.” I think learning a new skill has, in some respects, been rendered useless. I am dyslexic, which has its positives and negatives; dyslexia comes with an alternative information-processing style, but it obviously makes language a challenge. I can’t say that learning a second language would be impossible, but I can say it’s definitely not going to happen. Even for a neurotypical individual, learning a second language is a challenge. Nevertheless, people do it for many reasons: to communicate with others who speak that language, to immerse themselves in different cultures, to advance their careers, to read literature, or even for love. They sink the cost in order to reap the rewards. But what if there were a shortcut that got you most of the way there: a device in the ear, say, that perfectly and seamlessly translated every word in real time into your mother tongue and vice versa? Would the cost still be worth sinking? If you’d already put in the effort, I doubt you would regret it, but if you were considering learning a language now, on this side of the equation with the device in hand, what then?
I think there is still a case to be made; there is surely a degree of cognitive flexibility, greater abstract reasoning, and a kind of subjective breadth that comes with learning a second language, but the rewards are greatly diminished. The assumption is that AGI will make such a device possible within the next few years. Honestly, we’re pretty close as it is. My guess, then, is that if this technology became ubiquitous, far fewer people would sink the cost.
I’m really just using language learning here as a concrete example of something far more wide-reaching: the arrival of AGI will have the same effect across the board. There will be a surplus of intelligence; it will be possible to outsource any cognitively demanding task to a computer. Now, of course, that does not necessarily mean that all cognitive effort will be outsourced; we will have a choice. We can do what is pleasurable to us and outsource what is not, such as navigating the intricacies of the health insurance landscape.
So, the point is that all this impending change greatly alters the sunk-cost equation before us. Let’s take another example: say you wanted to learn how to use some architectural design software. There are many reasons why you might want to do this, but chief among them is probably the economic value that such a skill may afford you. Perhaps it would eventually lead to freelance work drawing up plans, or maybe even a job with an architectural firm. Even dabbling with architectural design software might open new opportunities. The mere possibility generates a lot of meaning. But now, it’s not possible.
It’s not possible because there will be a surplus of intelligence. The economic value of any heavy-lifting cognitive effort will be rendered negligible. This changes the equation dramatically. It changes much of what has been conventionally thought of as meaningful and renders it meaningless. What are you good at? Meaningless. What is your job? Meaningless. In a culture where the first question people ask is, “What do you do?” we are fast approaching a meaning crisis.
I believe there is meaning to be found, but it requires viewing things very differently. We have to look past many of the cultural and social norms of our age. I think games—specifically computer games—are a useful tool for exploring at least one aspect of meaning. In a game, you generally have an objective. Within the context of the game, what is your meaning, your purpose? It is to work towards that objective. Beyond the game, in the real world, the game is meaningless, but in the game, the game is everything. A game, by definition, creates artificial limitations—parameters that, if you abide by them, give meaning. Imagine if life were like this: a series of games, a series of objectives.
Now to say something about the Branwen interview. One of his underlying assumptions is that it is wrong to think of AGI as human-like intelligence, though it is often erroneously described that way. Just because artificial intelligence can perform heavy-lifting cognitive efforts as well as, or better than, the best human does not mean it experiences reality as we do. Human intelligence and computer intelligence are very different. In particular, humans experience reality subjectively in a way that computers cannot (in this interview, he does not get into whether this may change). As such, there is something of value we can do now that AGI cannot replicate better, faster, or cheaper: express a preference.
I hadn’t considered this before, but I think there is something quite profound in this observation. Preferences, desires, evaluations, judgments (all the stuff of subjectivity) cannot be made wrong. This subjectivity may even become a kind of commodity, valuable in the age to come. I’m talking about taste. What should we be doing over the next few years? We should be writing down what we like, and the challenging part, I think, is really trying to express why we like it. Interrogating the why.
It reminds me of High Fidelity, the John Cusack romantic comedy from 2000. I’m pretty sure there’s a line in that movie about how “it’s what you like, not what you are like, that matters.” On one level, that comes off as shallow and trite, but perhaps the character was hitting on something deeper: maybe it is what we like that will define meaning in the years ahead. Right, I’m off to rewatch that movie and figure out why it’s so good.