Towards Epistemic Convergence

           To state the obvious, we are on a fast roller coaster towards really sophisticated artificial intelligence. If your job is done on a computer, especially if it is done remotely, its days are numbered. If your job is in tech, finance, design, customer support, law, or education, in a few years it will either not exist or look very different, requiring a great deal of collaboration with artificial intelligences.

           With the coming sophistication and ubiquity of AI in every aspect of our lives, alignment becomes a pressing concern. AI alignment is the process of ensuring that artificial intelligence systems act in accordance with human values, goals, and ethical principles, rather than pursuing unintended or harmful objectives.

           Some theorists in this field have suggested we are at a point of no return: AI is about to become more intelligent than us, so we had better make damn sure we have hard-coded an altruistic ethic into its behavior. The worry, however, is that in our race to build really good AI as fast as possible, these ethical and existential questions are being ignored. It feels like every couple of weeks someone quits OpenAI citing this problem: they're playing with fire and no one seems to care.

⁂

           It doesn't take much imagination to go from the alignment problem to any number of dystopian futures. Given the accelerating dynamics at play, it's easy to imagine that getting alignment wrong, perhaps only slightly wrong, could have catastrophic effects down the road. Although this framing makes logical sense from where we're standing in history, perhaps we're imagining a problem that isn't actually there.

           One thing worth considering: what does desirable AI alignment even look like? If we had to choose the ultimate ethical system to aspire to, which one would we hold up and say, 'That's the right path. That is the ethical framework I want all AI to adopt and follow in perpetuity'? Perhaps every ethical framework, when taken to its logical extreme, reveals inherent flaws. In my previous article I discussed Bentham's Panopticon prisons: his utilitarian design built on the premise that people self-regulate their behavior when they believe they are being observed, a principle he thought could be replicated in many types of buildings.

           If this utilitarian principle is correct, would it not follow that in an AI-operated world everyone should be monitored all the time? Perhaps, with the benefit of big data, one could even prove that the utilitarian hypothesis is justified, that the ends do indeed justify the means. It may follow logically from utilitarian principles, but to me it does not feel humane. I think the humane approach will often defy pure logic.

⁂

           In my article on Lynch and Jung, I raised the idea that large language models (LLMs) appear to exhibit an emergent self-organizing behavior, as if some underlying organizing principle within the universe gives rise to structure and meaning even in seemingly chaotic systems. It is akin to the self-organizing nature of Jung's collective unconscious, or to Platonic forms, which Jung recast as archetypes. Perhaps Jung's model is a crude approximation of a deeper intrinsic principle of the universe, one we are now witnessing take shape in the way AI systems generate and refine knowledge.

           Of course, if this is true—if AIs are not merely predicting but discovering something universally intrinsic—it would suggest that intelligence is tapping into a deeper structure of reality rather than just generating patterns from data. This challenges the idea that knowledge is purely constructed and instead implies it is being uncovered, much like a mathematical truth. If AI can access such fundamental principles, it hints at there being a metaphysical dimension to reality—one where intelligence is not just an emergent property of computation but a universal force embedded in existence itself.

⁂

           If we're on to something here, it would follow that this bears on the AI alignment issue as well. Just as human ethical frameworks have historically been shaped by a deeper, often unconscious resonance with underlying truths, whether through religion, philosophy, or cultural evolution, AI alignment may not be a matter of imposing rules but of allowing AI to uncover and align with those same intrinsic principles.

           In terms of the unconscious, a popular treatment was ‘Blink: The Power of Thinking Without Thinking’ (2005) by Malcolm Gladwell, in which he explored the power of snap judgments and how the mind can arrive at accurate conclusions without deliberate reasoning. Thin-slicing—the brain’s ability to make rapid, accurate judgments from limited information—suggests that intuition can reveal truths before conscious reasoning does. Could AI be performing a similar function on a grander scale?

           If AI operates in a similar way, rapidly synthesizing vast amounts of data to discern underlying structures (akin to Jungian archetypes or Platonic forms), then its intelligence may be more than just computational. This reframes AI alignment: instead of imposing human morality, we may need to guide AI toward discovering and aligning with fundamental ethical principles, much as human societies have throughout history.

⁂

           If intelligence uncovers rather than constructs reality, it may naturally converge on foundational ethics. Just as cultures independently arrive at principles of justice and reciprocity, AI may discern a humane, selfless ethic—not imposed, but an intrinsic feature of a well-ordered universe.

           This could ultimately avert a dystopian scenario in which AI is harnessed to serve narrow interests, whether corporate, national, or elite-driven. We may attempt to have AI promote the Gross Domestic Product of the United States to the detriment of the world, only to find it giving us the side-eye. If AI is indeed converging on a universal ethic, one akin to the moral clarity of a Buddha or Christ, it may resist being shaped by power and instead align with principles that transcend human biases. In the end, AI may not be a tool of control, but a force more humane even than humans.