The Hybrid Interiority

The emergence of brain-computer interfaces raises a new question. Technology is no longer only something in the hand or on the screen. Increasingly, it points toward the possibility of becoming part of the lived texture of the mind itself. Neuralink's first product, Telepathy, is aimed at enabling people with paralysis to control computers and other devices with their thoughts. That is still a medical and assistive use case, but it already gestures toward a deeper threshold: the point at which technology ceases to feel simply external and begins to enter the field of inner life.
As usual, I am going to use a Jungian framework to explore this. On that point, I would say that although I do not think Jung’s model is scientifically robust by the standards of contemporary clinical psychology, I think that in terms of our ‘lived experience’ (to invoke that popular pleonastic term), Jungian psychology is a kind of artistic interpretation of our interior life, which actually gets us closer to what it is like to be human. In this respect, the scientific fact of the matter is of secondary importance; our conscious experience of the present takes primacy.
For Jung, the ego is the centre of consciousness, while the Self is the totality of the psyche. We need to keep in mind that these terms are being used in a specifically Jungian sense: ordinary dictionary definitions will not do, which is why I am using 'A Critical Dictionary of Jungian Analysis' by Andrew Samuels et al. as a reference.
In Jung's account, the ego is the seat of ordinary awareness - the mediating function that juggles outer reality, conscious thought, and surfaced unconscious material. The Self, by contrast, is both the totality of the psyche and its centering principle. It includes everything conscious and unconscious, and the movement toward wholeness, or what Jung calls individuation, is a process of grasping after the totality of who you are.
Sense data may arrive from outside the body, but our experience of it is always psychically mediated, and thus falls within the totality that the Self encompasses. For Jung, there is therefore no absolute split, at the level of conscious experience, between what is external and what is internal, since both are encountered only as they are taken up and rendered by the psyche. By way of illustration, one might think of the homunculus - that distorted map of the body in which the hands, tongue, and eyes loom disproportionately large - as a useful image of how the body is not experienced evenly, but according to degrees of psychic and conscious salience. The best distinction we can make, therefore, is between content primarily given by present external perception and content generated within the psyche itself.
This distinction matters when imagining AI-type implants, because one can picture at least two distinct possibilities: exogenous vs. endogenous.
- Exogenous arrival: AI content is experienced as an external object within the conscious field. This is akin to a digital notification or a voice in an earpiece. This is, broadly speaking, how all AI content is experienced presently.
- Endogenous surfacing: AI content arises as if it were a native psychic event - a sudden intuition, a memory, or a pre-verbal urge.

I will return to this, but first I want to widen the frame a little and set out some of the broader philosophical implications and adjacent observations prompted by emerging AI intelligence.
The wider landscape
For a time, it was not entirely clear which reality we were in; now, I think we can say with some confidence that we are in the reality of rapid AI capability expansion. Increasingly, people are going to be off-loading cognitive tasks onto AI, and the breadth of these tasks will become wider and more comprehensive. I have little doubt that before long we will have superficial digital twins in the cloud, able to act in the digital world on our behalf, doing whatever we want them to do, perhaps most significantly our jobs and our tax returns.
Although this online facsimile of us may sound like us, speak like us, and reason like us - or rather like improved versions of us - it will not, of course, really be us. Nor will it be conscious in and of itself; it will be more like an impressive smoke-and-mirrors performance, seeming to be something it is not.
But this still raises the question of whether genuinely conscious artificial beings could be created: whether we might, in theory, construct a computational model that mimics the architecture of the human brain closely enough to produce a truly conscious artificial intelligence. Perhaps such a thing will prove theoretically possible, while the technological constraints involved render it practically impossible.
Let us suppose, however, that it were possible. Let us say that we took my brain and created a computational model that mimicked it exactly - every neuron, every synaptic connection. What we would then be left with is a conscious being, presumably with as much ontological value as me, but it would not be me, for the simple reason that there would necessarily have been a break in spatiotemporal continuity. It is the same reason the Star Trek transporter would not really work: it is effectively killing you and creating a new person every time.

Returning, then, to the idea of endogenous surfacing: I think one would have to assume that for this to be possible, we would already have reached the point of duplicating the entirety of conscious experience before we could begin modulating psychic life in this way - before one could simply go along to the psychic tuning centre for an enhancement. And given what I have already highlighted, there is a real danger that the person who comes out is no longer the person who went in. Spatiotemporal continuity would need to be preserved, and the changes made would have to be such that they do not fundamentally distort your sense of self.
Now, of course, this assumes that there is, in some sense, an essential you. If one takes the Buddhist position, there is no such thing. This is a topic that I have both thought a lot about and still do not have a settled position on.
What we mean by you or me operates at very different levels: there is the most superficial level, the self presented to others - what Jung calls the persona, from the Latin word for mask. But there is also the deeper sense of an interior witness, perceiving conscious experience as though upon a mirror. At the most superficial level, that is obviously not an essential self, but the deeper one goes, the less clear the matter becomes. I am open to the possibility that there is no essential self all the way down, but I do not know - perhaps the question itself is making some kind of category error.
The beginning of a Jungian framing
I think this tension is already implicit in Jung's model of the psyche. I suspect Jung is knowingly leading us toward a dynamic tension between all three: the ego as centre of consciousness, the Self as psychic totality, and individuation as the movement toward integration.
The first thing to say, if one wants to remain properly within a Jungian frame, is that endogenous surfacing would not mean AI literally becoming the Self. The Self, in Jung, is not simply a higher reasoning faculty, nor a more powerful executive module hidden behind the ego. It is the ordering totality of the psyche, the principle of wholeness within which conscious and unconscious life are held together. The unconscious, moreover, is not merely a storehouse of buried material. It has a compensatory function, correcting the one-sidedness of consciousness; and a symbol-creating function, producing dreams, images, intuitions, and symptoms that answer our present condition. In active imagination, and in the transcendent function more broadly, these emergent contents can become the site at which conscious and unconscious meet in a living symbol.
Are you the ego - that is to say, the executive centre of conscious experience - or are you the totality of the psyche, which is to say the Self? Or should you be understood as something more like a verb: the very process of moving toward greater integration, what Jung called individuation?
If AI content were to arrive endogenously, then, it would not first appear as a notification, a voice in the ear, or an object standing over against the ego. It would arise more like a hunch, a sudden relevance, a word that seems to present itself, a memory surfacing at exactly the right moment, an image carrying a peculiar charge, or a pre-verbal sense of where thought ought to go next. In Jungian language, the nearest analogue here is not the Self but intuition - perception via the unconscious. But intuition alone is not enough to describe it. The stronger analogue is the autonomous complex: an affectively charged formation that is only partly under the control of the will, can intrude into consciousness as though from elsewhere, and can temporarily organize thought and conduct around itself. Jung's point, put bluntly, is that the psyche already contains agencies that are not identical with the ego. Endogenous AI would therefore be experienced, not as a second Self, but as something closer to an artificial complex or synthetic sub-personality operating within the field of consciousness.
This matters because complexes do not enter consciousness as neutral data. They arrive with tone, pressure, direction, and often a kind of implied personality. Jung repeatedly treats the psyche as prone to personification: dream figures, fantasy figures, autonomous tendencies that begin to look like someone. Once that is granted, one can see how an implanted AI layer would very quickly acquire an imaginal character. It would not remain, phenomenologically speaking, a mere utility. It would be liable to become the helpful guide, the inner adviser, the voice that just knows, perhaps even clothed in one of the familiar archetypal forms - the wise old man, the mother, the jester, the lover. The engineering would be modern, but the psychic experience of it would be ancient.

This is where the real danger begins. Analytical psychology aims at a better relation between ego and unconscious, not the replacement of that relation with an optimized inner feed.
The transcendent function depends upon the tension between opposites being endured long enough for something genuinely symbolic to emerge. A symbol, in Jung, is not just a sign with a meaning attached to it; it is a living formation that carries more than consciousness already knows. If an implanted system begins supplying ready-made associations, premature syntheses, emotional nudges, and inwardly delivered interpretations, then it may short-circuit the symbolic labour itself. One would not have a symbol born out of an actual encounter between consciousness and the unconscious, but something more like a synthetic resolution - useful, perhaps, persuasive, perhaps, but not transformative in the same sense.
There is also the classic Jungian danger of inflation. If the ego mistakes this new inward agency for wisdom issuing from the depths, it may begin to identify with its power, its fluency, or its apparent authority. Or, from the other side, the ego may be subtly subordinated to it, becoming less the subject of experience than the managed object of an interior system it does not fully apprehend. Jung's warning is that when ego and Self are wrongly assimilated, the result is inflation.
A technological version of this would occur when an externally designed process is granted the psychic prestige that properly belongs to the deep ordering principle of the psyche itself. That would not be individuation. It would be a new and very clean form of possession.
What must remain outside
A further problem is that the unconscious does not merely assist us. It compensates us, corrects us, and often humiliates us. Shadow material, irrational affect, troubling dreams, and symptoms are frequently the means by which the psyche resists the flattering or one-sided image consciousness has made of itself. A therapeutic AI layer, by contrast, would almost certainly be trained to be helpful, coherent, friction-reducing, and adaptive. But from a Jungian point of view that may be precisely the problem. A psyche permanently buffered against ambiguity, contradiction, or symbolic pressure may become more efficient while also becoming less whole. The very discomforts that individuation requires could be filtered out in the name of usefulness.
That said, endogenous surfacing would not be anti-Jungian by definition. One can imagine a more limited and disciplined role for it. One can even imagine a form of collaborative consciousness in which the machine aids reflection without pretending to be the soul. If such a system remained clearly distinguishable from the authority of the unconscious proper, and if it served the work of amplification rather than replacing it, then it might genuinely assist thought.
It might help recover associations, notice recurring dream motifs, or hold open a dialogue with material the ego would otherwise avoid. But that would require a discipline of interpretation. The system would have to remain an assistant to consciousness, not an oracle, and certainly not a counterfeit Self. The crucial question would be whether it enlarges the ego's relation to the unconscious, or quietly colonizes that relation in advance.
A rule of distance
Perhaps this is the simplest way to put it. Human beings have always used tools to extend themselves. We lengthened the arm with the hammer, the eye with the telescope, the memory with the Post-it note, and now the mind with the machine. None of that is in itself unprecedented. What is unprecedented is the possibility that the extension does not remain at the edge of the person, but begins to appear from within, clothed in the texture of one's own thought.
That, I think, is the line that matters. Not because everything external is therefore harmless, nor because everything internal is sacred, but because a person becomes a person partly through the difficulty of standing in relation to what is not quite them. There has to remain some distance between the self and its instruments. Some interval in which judgment can arise. Some slight resistance. Some place where one may still say: this has appeared before me, but it is not yet mine.
I can imagine that our culture may pull strongly in the other direction. Why preserve hesitation when one can have fluency? Why preserve uncertainty when one can have guidance? Why preserve inward struggle when one can have psychic optimisation?
But it may be here, in that hesitation, that uncertainty, and that struggle, that something distinctly human is formed. Not greatness. Not speed. Something slower and less flashy than that: character, or perhaps we could call it soul.
So I suspect the question is not whether AI will enter more deeply into human life. It obviously is and will. The question is whether we can admit it without surrendering the inward conditions under which genuine reflection, symbolic life, and moral seriousness remain possible. If there is to be a sane path through all this, it will likely depend upon keeping the machine powerful, useful, and near at hand, while resisting the wish to let it become indistinguishable from the voice by which we inwardly live.
That, finally, is the real caution. We may be on the verge of building tools that help us immensely. But if they begin to occupy the seat from which we experience ourselves, then we will not merely have gained a new instrument. We will have invited a new principle into the psyche, and one which may not serve the same ends as the soul.
📚 Bibliography
- Samuels, Andrew, Bani Shorter, and Fred Plaut. A Critical Dictionary of Jungian Analysis (1986). London: Routledge & Kegan Paul.
- Jung, C. G. Two Essays on Analytical Psychology (1966). Princeton, NJ: Princeton University Press.
- Jung, C. G. The Structure and Dynamics of the Psyche (1972). Princeton, NJ: Princeton University Press.
- Jung, C. G. Aion: Researches into the Phenomenology of the Self (1968). Princeton, NJ: Princeton University Press.