WHAT THE BRAIN SEES VERSUS WHAT THE EYE SEES
(Courtesy of Shinji Nishimoto)
Developing the Non-invasive and Non-harmful Thought Cam
The year 2011 gives us a glimpse of what the future of neuroscience holds. The research team of Shinji Nishimoto and Jack Gallant at UC Berkeley leads the study of a type of brain scan that can reconstruct internal visual images from a person’s brain.
If the research and development of the technology go as planned, the scientists hope to develop a system that can look into a person’s brain, see what that person is thinking, and capture the visual activity. On this noninvasive front, the goal is to produce scans accurate enough to clearly represent a person’s memories, hallucinations, and dreams, among other things, with the end result being external digital images or even videos. Just the idea of getting closer to being able to watch actual dream or hallucination videos on a screen is incredibly surreal, even disturbing.
This cutting-edge breakthrough in neuroscience paves the way for a technology previously seen only in science fiction and fantasy. In 1983, the movie Brainstorm was released, a sci-fi film about a system that allows researchers to record and play back people’s experiences. In 2000, The Cell, another Hollywood sci-fi movie, told of a technology that allows a person to enter another person’s mind and access that person’s dreams and memories. In Harry Potter, a series of seven fantasy novels about a hidden magical world, memories and thoughts are treated as tangible objects that can be extracted from the mind, viewed by others, or even stored for future perusal.
How it’s done
To procure the images, fMRI (functional magnetic resonance imaging) is used in conjunction with computational models. A subject (Nishimoto himself served as one, to establish an original baseline) lies still inside the fMRI scanner for hours at a time, watching movie clip after movie clip after movie clip.
The fMRI measures blood flow through the visual cortex, and the captured signals are fed to a computer, which sections the recordings into small 3D cubes known as voxels, or volumetric pixels. Each voxel has a model that describes how shape and motion information is encoded in brain activity. The subjects viewed random movie clips while their brains were simultaneously scanned, which allowed the scientists to connect brain activity with whatever the subject was seeing. Through this, the scientists learned how to interpret the scanned internal visuals.
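The per-voxel modeling described above can be sketched in a few lines. This is a minimal illustration on synthetic data, not the study's actual pipeline: the real Gallant-lab models used motion-energy features of the movies and regularized regression, whereas here the feature vectors, dimensions, and plain least-squares fit are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each moment of the movie is summarized by a small
# feature vector (the real study derived these from motion-energy filters),
# and each voxel's response is modeled as a weighted sum of those features.
n_timepoints, n_features, n_voxels = 200, 10, 5
features = rng.normal(size=(n_timepoints, n_features))   # stimulus features over time
true_weights = rng.normal(size=(n_features, n_voxels))   # the unknown "neural code"
responses = features @ true_weights + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Fit one linear encoding model per voxel via least squares.
weights, *_ = np.linalg.lstsq(features, responses, rcond=None)

# The fitted model can now predict voxel activity for a new stimulus.
new_features = rng.normal(size=(1, n_features))
predicted_response = new_features @ weights
print(predicted_response.shape)  # (1, 5): one predicted value per voxel
```

Once such a model exists for every voxel, predicting "what the brain would look like" while viewing any given clip becomes a simple matrix multiplication, which is what makes the later matching steps possible.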
It first started with still images. Once the scientists had associated specific brain activity with visual patterns, the subjects looked at black-and-white photographs while their brains were simultaneously scanned. The computational models reconstructed the brain scans into images distinguishable enough to precisely “guess” which picture the subject was looking at.
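The "guessing" step above amounts to identification: predict a brain-activity pattern for each candidate photograph, then pick the candidate whose prediction best matches the observed scan. Here is a hedged sketch on synthetic data; the candidate count, voxel count, noise level, and use of simple correlation are all assumptions, not the study's published parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical identification: an encoding model has already produced one
# predicted brain-activity pattern per candidate photograph.
n_candidates, n_voxels = 120, 50
predicted_patterns = rng.normal(size=(n_candidates, n_voxels))

# Simulate a noisy scan taken while the subject views candidate #42.
true_index = 42
observed_scan = predicted_patterns[true_index] + 0.3 * rng.normal(size=n_voxels)

def correlate(a, b):
    """Pearson correlation between two activity patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Score every candidate against the observed scan; the best match is the guess.
scores = [correlate(observed_scan, p) for p in predicted_patterns]
guess = int(np.argmax(scores))
print(guess)  # recovers 42, the picture actually "seen"
```

Note that the decoder never reconstructs the photograph pixel by pixel here; it only decides which of a known set of pictures the scan most resembles, which is why the early still-image results could be so precise.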
According to Shinji Nishimoto, natural visual experiences are much like watching a movie: our memories and dreams play back in our minds the way a film is viewed. Before we can understand how our brains work under natural conditions, we must know what goes on inside the mind while watching a movie. For the latest experiment, moving pictures were used.
Random videos totaling 18 million seconds, about 5,000 hours, were analyzed by the computer, which matched each brain scan to the 100 clips it deemed most similar to what the subject had most likely seen. The resulting images were far from high definition, not even close to VGA quality, but they were distinguishable enough to be eerie. They looked the way dreams and memories are often visually remembered: indistinct lines, washed-out color, and altogether fuzzy, much like video captured by a CCTV camera. Despite the poor quality, you can tell what most of the images are: a man talking and gesticulating, a close-up of a person’s face, and so on.
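The matching-and-averaging step described above also explains the fuzziness: blending the 100 best-matching clips necessarily smears detail. A minimal sketch, assuming synthetic data (the library size, voxel count, frame representation, and dot-product scoring are illustrative stand-ins, not the study's actual machinery):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical library: for each clip we have a predicted brain-activity
# pattern (from the encoding models) and one representative frame.
n_library, n_voxels, frame_pixels = 5000, 50, 64
library_patterns = rng.normal(size=(n_library, n_voxels))
library_frames = rng.random(size=(n_library, frame_pixels))

# Simulate a noisy scan recorded while the subject watches clip #7.
observed_scan = library_patterns[7] + 0.5 * rng.normal(size=n_voxels)

# Rank every clip in the library by similarity to the observed scan.
scores = library_patterns @ observed_scan
top100 = np.argsort(scores)[-100:]

# The reconstruction is the average of the 100 best-matching frames,
# which is exactly why the output looks blurry and indistinct.
reconstruction = library_frames[top100].mean(axis=0)
print(reconstruction.shape)  # (64,): one averaged, smeared frame
```

The true clip lands among the top matches, but its frame is diluted by 99 merely similar ones, producing the CCTV-like haze the article describes.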
This is merely the early phase of the research, but it illustrates the vast potential of the technology, something that could advance into territory reminiscent of science fiction. It is, in fact, the first step toward decoding brain activity and reconstructing it into images that anyone can view on a screen.
What’s the point?
According to the researchers, the purpose of developing this technology is to “communicate” with those who suffer from neurodegenerative diseases, coma, paralysis, or anything else that leaves them with no means of communicating. But this kind of advanced technology is open to abuse, as most innovations are. It could mean the end of lie detectors, since minds could simply be read; it could mean the end of privacy; and it could be the beginning of various ethical violations.
Gallant admits that this technology could carry serious ethical and privacy implications. But he does not expect noninvasive brain-decoding technology to be sophisticated enough for such uses for a few decades.
The concern, though, is not only those acting unethically, immorally, or illegally; it extends to those acting within ethical, moral, and legal bounds. This is typified by the human experiments that continually surface, and further typified by President Clinton’s apology, delivered while he was the sitting president, for thousands of human experiments.
This development parallels the invention of video, television, and film: it started with still images, then black-and-white silent movies, and then audio was introduced. Perhaps in a few more decades, “brain movies” will be projected in 3D.