Toward Storming The Brain

Earlier this year, I juxtaposed some articles on using A.I. and fMRI to recreate people’s thoughts and on the reported detection of conscious activity during death. I’d wondered just how far away we were from the movie Brainstorm.

Becky Ferreira, again reporting for Vice, returned to the former subject in recent weeks, checking in with a new study.

Scientists have witnessed brain patterns in dying patients that may correlate to commonly reported “near-death” experiences (NDEs) such as lucid visions, out-of-body sensations, a review of one’s own life, and other “dimensions of reality,” reports a new study. The results offer the first comprehensive evidence that patient recollections and brain waves point to universal elements of NDEs.

I’m not sure what the hell they mean by new “dimensions of reality”, exactly, but I continue to be both fascinated and admittedly sort of repelled by the idea that we seem to be closing in on what the subjective experience of death might be like.

Anyway, I mean this post also to return to the subject of AI-based brain reading. Leo Kim, writing for Wired, examines the question of the degree to which we'll be able to read brains without changing them in the process.

IF THE MIND isn’t just a stable, self-contained entity sitting there waiting to be read, then it’s a mistake to think that these thought decoders will simply act as neutral relays conveying interior thought to publicly accessible language. Far from being machines purely descriptive of thought—as if they’re a separate entity that has nothing to do with the thinking subject—these machines will have the power to characterize and fix a thought’s limits and bounds through the very act of decoding and expressing that thought. Think of it like Schrodinger’s cat—the state of a thought is transformed in the act of its observation, solidifying its indeterminacy into something concrete.

Think of it as the idea that an A.I. brain-reading dictionary might in fact flatten the language of thought, stripping away nuance. People can describe the same thing in very different ways and different things in substantially similar ways. What happens if A.I.-driven brain reading can't account for this?

In my Brainstorm future, then, if we really want to know what people are experiencing during death, we'd need to know for certain that the A.I. tools assisting in decoding brain activity aren't leaving things out or, worse yet, reading in things that weren't there.