Virtual Reality: Do We Live in our Brain's Simulation of the World?
"A 22-year-old man has been instantaneously transported to his family's pizzeria and his local railway station by having his brain zapped." NOT WHAT IT SEEMS
The intro to the New Scientist article¹ could have been lifted from the back of a sci-fi novel. It hardly seemed a fitting opener for a respected science publication, but then again, truth is often stranger than fiction. This wasn't a next-generation virtual reality (VR) demo, nor a trip on psilocybin. This was real.
"A 22-year-old man has been instantaneously transported to his family's pizzeria and his local railway station by having his brain zapped."
Pierre Mégevand and his colleagues at the Feinstein Institute for Medical Research in Manhasset, New York, wanted to pinpoint the area of the brain that processes locations and places. They scanned the brain of a volunteer while showing him images of various objects and scenes, recording the areas that lit up in response. They had found their itch; now it was time to scratch. When the researchers stimulated the area, a complex visual hallucination transported the volunteer back to work at the pizzeria. Stimulation of a nearby area summoned the hallucination of a staircase and a blue closet in his home. Repeated stimulation of the same areas brought about the same hallucinations.
These illusory simulations of the mind are helping scientists map how visual information is handled in the brain and, on a deeper level, understand what reality really is. Nature has given us our very own internal VR headset, with its own coordinates between the ears. We think the world we experience is external, outside, real, but these studies suggest otherwise. Here, exceptionally, man is playing 'God' and manipulating normality: complex virtual reality conjured through nothing more than targeted stimulation of the brain. But it turns out that the fence between virtual reality and reality is lower and more rickety than we believe.
In his book Hallucinations, neurologist Oliver Sacks walks us through many surreal cases of visual phenomena he has encountered in patients. One of them is Rosalie², a resident in her nineties at one of the nursing homes where he worked. She had recently started to see "incredibly real hallucinations", and Sacks had been called in to assess the emergency. More surprising still, she was completely blind. Standing in front of Dr. Sacks at the home, she claimed to be having hallucinations at that very moment.
"What sort of things?" Sacks asked.
"People in Eastern dress!" she bellowed.
Rosalie was experiencing what's known as Charles Bonnet syndrome (CBS), a term originally used to describe any hallucinations related to eye disease or other ocular problems, but which has since come to encompass neurological aspects of hallucination as well. CBS occurs in people with impaired vision: when a sustained lack of light passing through the eyes starves the visual centres of the brain of input, they come to lean more heavily on visual memory.
Vision in the blind is a very real phenomenon, although people born blind, having never experienced vision³, produce only very simple hallucinations: basic shapes and patterns. The vast majority of people with CBS, including Rosalie, are able to dissociate real life from hallucination. Indeed, the greatest relief often comes with being able to put a label of sanity on personal accusations of insanity². Were it not for this awareness - absent in some CBS patients - internal projections would unnervingly become part of the believed external reality; a line beyond which the question of what reality is becomes even harder to answer. We wind up like the characters in Christopher Nolan's 2010 film Inception, who are unable to distinguish between dreams and reality.
But before you count your blessings: if you've ever thought you saw a ghost, or a person you know, only to look again and realise you were wrong, then you have been on the receiving end of your mind's projections. The only difference is that you're able to reframe the error afterwards. In effect, you have a totem that stops spinning⁴.
For those with poor eyesight, having hallucinations turns out to be the rule rather than the exception. If eyesight is even moderately blurry, one does not need to get one's brain 'zapped' in order to experience internal projections. Sacks shares an experiment conducted in 1999: "Robert Teunisse and his colleagues, studying a population of nearly six hundred elderly patients with visual problems in Holland, found that almost 15 percent of them had complex hallucinations—of people, animals, or scenes—and as many as 80 percent had simple hallucinations—shapes and colors, sometimes patterns, but not formed images or scenes."⁵ (Emphasis mine.)
Without forceful stimulation, you're not likely to find yourself back at work (God forbid). But in anyone with less than perfect vision, simple hallucinations are far more common than reported. You and I are wired to see, and when our visual systems are not adequately stimulated, a portion - or all - of the visual field becomes "concocted by [the] brain or mind."⁶ Less apparent is that social stigma keeps hallucination under-reported, which helps explain its perceived scarcity. ("Mum, I think Granny has finally lost her marbles.") Although, as Sacks notes, many cultures around the world celebrate hallucination, particularly those with shamanic traditions⁷.
These examples demonstrate that our sanity, which I'll define here as the ability to perceive and live in a unanimously coherent fashion, is a precious state of mind indeed. A well targeted brain-zap, the removal of light to the eye for an extended period of time, and it becomes clear that external reality doesn't exist only externally.
"Life comes at us very quickly, and what we need to do is take that amorphous flow of experience and somehow extract meaning from it."⁸ WHAT IS REALITY?
What is reality? 15 seconds, apparently.⁹
More to the point: the 15 seconds leading up to this very moment. It's a theory that was coined in early 2014 as the 'continuity field'; the discovery that we seem to merge together similar schemata seen within the previous 15-second time frame.
Vision scientist and associate professor of psychology at UC Berkeley, David Whitney, explains in an interview: "The continuity field smoothes what would otherwise be a jittery perception of object features over time."¹⁰ Without a visual system that recalibrates the current scene against what we already know and believe, daily life would feel more like an overwhelming, "jarring acid trip"¹¹. (A concrete example of this process in action is our tendency to miss continuity errors in films¹¹.) More pertinently, it's also hypothesized that this 15-second delay actually shields us from hallucinogenic experiences by stabilising the incoming flow of visual information¹², which invites the question of whether hallucinogenic drugs disrupt this function in particular.
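To make the idea concrete, here is a loose analogy in Python - a sketch of my own, not the model used in the study. If each new glimpse of a feature is blended with everything seen over the preceding 15 seconds, a jittery input becomes a stable but slightly inaccurate percept, the very trade-off Whitney describes.

```python
# Loose analogy for the 'continuity field' (not the study's actual model):
# each new sample of a feature, e.g. an edge's orientation, is averaged with
# everything seen in the preceding ~15 seconds, trading accuracy for stability.
import random
from collections import deque

WINDOW_SECONDS = 15
SAMPLES_PER_SECOND = 10            # hypothetical sampling rate

def smoothed_percepts(raw_samples):
    """Return each sample averaged with the last ~15 seconds of input."""
    window = deque(maxlen=WINDOW_SECONDS * SAMPLES_PER_SECOND)
    percepts = []
    for value in raw_samples:
        window.append(value)
        percepts.append(sum(window) / len(window))   # stable, but slightly 'wrong'
    return percepts

# A jittery 45-degree edge settles into a smooth, slightly lagging percept.
raw = [45 + random.gauss(0, 5) for _ in range(300)]
print(round(smoothed_percepts(raw)[-1], 1))          # ~45, jitter averaged away
```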
"This is surprising because it means the visual system sacrifices accuracy for the sake of the continuous, stable perception of objects."¹⁰ As with breakthroughs in the understanding of many psychological phenomena, we're offered a rare window into the inner workings of the brain when things go awry.
"Psychiatric patients sometimes have delusive beliefs, as if they are in an alternative reality, and schizophrenics may also experience perceptual hallucinations – literally seeing things that are not there," says Keisuke Suzuki in The Guardian¹³. He's the lead author of a paper describing The Substitutional Reality system (SR), developed by researchers at the RIKEN Brain Science Institute's Laboratory for Adaptive Intelligence.
"Our motivation is to explore the cognitive mechanisms underlying our strong conviction in reality. How can people trust what they perceive?" Suzuki asks. The SR system has been developed to manipulate healthy "participants' perception of reality", and could become a important tool in understanding the capacity to discern between simulations of the brain and objective reality; particularly in psychiatric conditions such as schizophrenia.
In an experiment designed around this technology, a participant sat in a room wearing a VR headset similar to the Oculus Rift. They were then shown, alternately, live and recorded footage of the same scene captured at different times, to see whether they noticed the switch. Critically, the majority of participants failed to distinguish between what was real (live) and what was not (recorded). Perfectly normal, healthy individuals were unable to reframe the error after the fact. This time, the totem kept on spinning⁴.
"Seven out of ten participants could not detect that the given scene was recorded [...]. The participant was not certain whether he was experiencing live or recorded scenes."¹⁴ To understand why this is, it will help if we draw a picture of how visual information is handled once inside the brain. In the illuminating book Phantoms of the Brain, neuroscientist V. S. Ramachandran reveals that within the brain, “there are over thirty different maps concerned with vision alone. [Likewise for tactile or somatic sensations—touch, joint and muscle sense]."¹⁵
It's these maps which simulate the human body in physical space. Every part of the body is plotted in a corresponding map within the brain and categorised by part. The body map for the hand lies next to the body maps for the face and upper arm; the genitalia sit next to the feet¹⁶. The finite size of these maps (even with the increased surface area from being scrunched up within the skull) limits the amount of information that can be processed and stored from the outside world.
You might almost imagine a collection of scrunched up world maps for each body part, juxtaposed to each other in a narrow school locker. With a touch on the body - a forearm, index finger, the lower back - a small light is seen appearing at different points on the maps to represent the firing of associated clusters of nerve cells. (In neuropathologies, such as after a stroke, a touch of the face can be experienced in the hand; an orgasm can be experienced in the foot¹⁷.) Given the brain's comparatively small size against the information overload of objective reality, it might be said that evolution has made an extremely smart compromise internally, so that we can go about our lives in a coherent fashion.
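As a toy illustration of that locker-of-maps picture - a sketch of my own, listing only the adjacencies mentioned above, not a complete homunculus - the neuropathological cases make sense as a lookup in an adjacency structure: when one map loses its input, activity from a neighbouring map can be read as coming from the missing part.

```python
# Toy adjacency list for the body maps described above (not a full homunculus).
SOMATOTOPIC_NEIGHBOURS = {
    "hand": {"face", "upper arm"},
    "face": {"hand"},
    "upper arm": {"hand"},
    "genitals": {"foot"},
    "foot": {"genitals"},
}

def referred_sensation(touched_part, silent_part):
    """If `silent_part`'s map has lost its normal input (e.g. after a stroke),
    touch on a neighbouring part may be felt there as well."""
    if touched_part in SOMATOTOPIC_NEIGHBOURS.get(silent_part, set()):
        return f"a touch on the {touched_part} may also be felt in the {silent_part}"
    return f"a touch on the {touched_part} is felt only on the {touched_part}"

print(referred_sensation("face", "hand"))   # adjacent maps -> referred sensation
print(referred_sensation("foot", "hand"))   # not adjacent -> no referral
```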
But granting that we can only process a limited feed of the world doesn't by itself explain why experience seems coherent. By limiting information, the external world becomes largely an assumptive blur to the brain¹⁸, concentrating limited resources on the narrow window of objective reality under the focal crosshair of our direct gaze - the word you're looking at right now through your fovea, a small depression in the retina of your eye where visual acuity is highest. This modest feed is then heavily augmented through memorised experience, à la the continuity field. (Incidentally, this necessary augmentation to our conscious lives may explain the mystery of childhood amnesia, the inability to recall experience before the age of three. Only implicit memories can be formed before this period¹⁹. A foundational 'continuity field' may be needed for experience to be coherent enough to become conscious.)
Significantly, however, this peripheral blur has necessitated an extra sensitive 'danger-trigger', ensuring that any possible threat is brought into focus with a turn of the head, or a shifting of the eyes²⁰. All things considered, it is an effective adaptation for maximising coherence and survivability in a universe incomprehensible to a "three-pound mass of jelly" (as Ramachandran has been known to call it).
Historically, this adaptation might have produced a, "could that be a sabre-toothed tiger in the bush?" as we walk down the path. In the modern world, though, we're more likely to get the chronic danger-trigger: "Is that war 6,000 km away a possible threat to me?" Worryingly, our best response to remediate this inflamed trigger today seems to be through apathy, medication or both²¹. (Although some might have reduced the problem somewhat with a smart low-information diet²².) Apathy and medication are surface-level responses, however. Less apparent is another of nature's ingenious adaptations to resolving the cognitive dissonance of this great uncertainty: give the conscious mind the impression that it is getting the full picture¹⁸.
"Our nervous system uses past visual experiences to predict how blurred objects would look in sharp detail."¹⁸ The overwhelming blur in our perceptual lives is filtered for the conscious mind to be as concrete as the word you're now looking at. This adaptation has assisted us in coping with the chaotic uncertainty we face every day.¹⁸ ²³
A fascinating psychopathology, anosognosia, is an extreme example of this mechanic gone wrong. Also in Phantoms of the Brain:
“Anosognosia is an extraordinary syndrome about which almost nothing is known. The patient is obviously sane in most respects yet claims to see her lifeless limb springing into action—clapping or touching my nose—and fails to realize the absurdity of it all."²⁴
Consider the case of Mrs Dodds²⁵, who had been left completely paralyzed on the left side of her body after a stroke:
Ramachandran: “Mrs. Dodds, can you touch my nose with your right hand?"
She did so with no trouble.
Ramachandran: "Can you touch my nose with your left hand?"
Her hand lay paralyzed in front of her.
Ramachandran: "Mrs. Dodds, are you touching my nose?"
Mrs Dodds: "Yes, of course I'm touching your nose."
The discrepancy between the internal maps of reality and the external feed offers a rare glimpse into the mind's compulsion to create coherence in the world. As Ramachandran notes, "the odd behavior of these patients can help us solve the mystery of how various parts of the brain create a useful representation of the external world and generate the illusion of a "self" that endures in space and time."²⁶
Of larger consequence, perhaps, this natural limitation has led to a systemic false confidence, producing completely unreasonable economic gambles in modern society²⁷. We've been making confident decisions on the assumption that we're getting 100% of the picture, when really we're getting something closer to 10% of the picture surrounded by 90% peripheral blur.
More accurately: former chief engineer for Sun Microsystems Michael Deering published research in 1998 finding that, "across the entire visual field, the human visual system can perceive approximately only one fifteenth the visual detail that would be discernible if foveal resolutions were available for the entire field."²⁸ In humans, the fovea provides high resolution but a very narrow field of view of about 1-2 degrees. For this reason, the eyes dart around constantly in rapid jumps known as saccades - stabilised against head movement by the vestibulo-ocular reflex (VOR) - fixating on novel and memory-directed stimuli of interest and building a three-dimensional map in the brain with an emphasis on the novel and interesting. Again, this is to conserve resources: we favour the contextual gestalt over accurate minutiae. (To demonstrate the fallibility of this system and the role of memory, consider the mysterious Spelunker's illusion - the 'confident' belief that one can see one's own hands in complete darkness, yet not the hands of someone else.)
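A rough back-of-envelope makes vivid how little of the scene is ever rendered at full sharpness. Treating the fovea as a 2-degree circle within a binocular field I'll assume spans roughly 200 by 130 degrees (the field size is my assumption, not Deering's), the fraction of the field covered at foveal resolution is tiny; Deering's one-fifteenth figure for overall detail is much larger only because acuity falls off gradually with eccentricity rather than dropping to zero outside the fovea.

```python
import math

# Crude flat-angle approximation; the field dimensions are assumed values.
FOVEA_DIAMETER_DEG = 2.0                            # ~1-2 degrees, per the text
FIELD_WIDTH_DEG, FIELD_HEIGHT_DEG = 200.0, 130.0    # rough binocular visual field

fovea_area = math.pi * (FOVEA_DIAMETER_DEG / 2) ** 2   # ~3.1 square degrees
field_area = FIELD_WIDTH_DEG * FIELD_HEIGHT_DEG        # ~26,000 square degrees

print(f"fovea covers ~{100 * fovea_area / field_area:.3f}% of the visual field")
# -> roughly 0.01%: everything else is the 'peripheral blur' the text describes.
```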
Peripheral blur is, in a very real sense, subject to the same conditions as visual impairment. As such, it can produce the same hallucinations in perfectly healthy people. We are leading ourselves into increasingly punitive crises around the world, all the while masking the source of the problem. Hindsight appears to be 20/20, and these fundamental limitations of thought lead us to believe that we can learn from our mistakes. By no fault of our own, we find ourselves underestimating the complexity of the future. As long as humans are in the driving seat and technology continues to empower individuals (particularly those with economic power) there's no immediate reason why these baseline mechanics won't produce global events with ramifications so huge we can barely appreciate them today.
By reconciling uncertainty, nature has allowed the conscious thinker to get out of bed in the morning without caving under the immense conditionality of life, but it's necessarily turned us into decision-making buffoons. (Particularly when it involves predicting the inherently unpredictable future.)
To make things even worse, once we have made a decision, we unwittingly anchor ourselves to it through a whole host of heuristics and biases²⁹ which serve to further increase our perceived sense of self-importance, -righteousness and -control in the world. (An additional layer of coherence.) To name just three biases which further part us from objective truth:
THE CONFIRMATION BIAS
The confirmation bias narrows our world view in order to have us confirm what we already know, closing our attention to new and - in particular - conflicting information. The result is reduced cognitive dissonance and an increase in perceived happiness. (The world becomes ostensibly less complex, reducing anxiety and effectively increasing wellbeing.)
THE HINDSIGHT BIAS
The hindsight bias leads us to simplify the past to a form-factor we can understand. Steve Jobs once said that you can only join the dots looking back³⁰. This statement is not just a shrewd philosophy. Presently and in the future, there are too many 'dots' to comprehend. Our brain unconsciously rearranges and simplifies³¹ our memories so that when we look back, we can make sense of our lives. This gives us the retrospective illusion that we can apply (simplified) past lessons to the (incomprehensible) future. We assume the dot-density never changes.
Existential coherence, it might be said, is a retrospective illusion, caricatured in Geschwind syndrome; the neurotic capacity to find exquisite meaning in life. More normally, to expect coherence presently and in the future is to invite depression³².

THE OPTIMISM BIAS
While ambiguity accentuates our danger-trigger at one extreme, it increases our disposition towards seeing the world through rose-coloured glasses³³ at the other. Fantasies thrive on having freedom against the heavy burden of objective reality. This bias is constantly recalibrated throughout life, from the starry-eyed kindergartner all the way to the jaded air-traffic controller.
DO WE LIVE IN OUR BRAIN'S SIMULATION OF THE WORLD?
The simple answer is no. The brain works in tandem with the objective world, albeit through a narrow window bolstered heavily through pre-existing memories of the past so as not to be overwhelmed. Further enquiry today, however, passes the torch to philosophy, with its millennia of polarised Eastern and Western thought questioning the true nature of reality. It suggests we meet somewhere in the middle.
At one end, the East practices holistic subjectivity with a fatalistic acceptance of the world as it is; a sacred gestalt. At the other, the insatiable objectivity of the capitalist West with its obsession with reductionism; a scientific pursuit of the truth which winds up lifeless³⁴. We see the dots rather than the Dalmatian. The better (strictly Western) question to ask perhaps lies in determining the actual ratio between subjective and objective reality that produces what we call experience. But the problem with the pursuit of quantification is that we enter a world of contradiction as we leave the safe walls of Newtonian determinism for the random universe of Heisenberg's uncertainty principle. We attempt to study the objective brain through a fundamentally subjective lens - the brain. On the fringe of our worldly understanding, we bump up against the Pandora's box of quantum mechanics; the double-slit experiment, as well as other unsolved, possibly neuro-related mysteries in physics.
This ratio, if there is such a thing, will fluctuate between people along an axis of conditions like those we've explored in this piece: neural stimulation, the degree of pathology, self-awareness. Certainly, the science of hallucination and its surprising ubiquity make it clear that reality is more subjective than most of us would think. And this makes sense: we assume that what we see originates deterministically outside of ourselves, and reason sensibly from there. But that insight takes us barely any closer to a satisfying scientific answer.
Given that subjective experience is different for everyone, a good starting place is perhaps the integrity of one's optical and neural visual systems (eyes and brain) set against the flow of visual information available to those systems (outside light and stimulation). With any one cog departing from what could be considered normal, neural artifacts begin to appear very quickly and the brain takes an increasingly sizeable piece of the perceptual pie. We saw this with our volunteer, vividly transported back to his family's pizzeria through brain stimulation, and, more severely, in conditions like Charles Bonnet syndrome and schizophrenia.
We know people can be drawn into a maze contrived solely from their own mind, whether through neuropathology (of which Hollywood has bountiful accounts) or, in healthy brains, through dreaming. But how far does the rabbit hole go the other way? Is it epitomized by an over-stimulating acid trip, when the neural barriers on objectivity are compromised? Or are we back to the brain again?
Disconcertingly, experiments like those conducted through the Substitutional Reality system (SR) reveal that even fully healthy individuals are not exempt from a failure to calibrate subjective and objective reality. Whether we like it or not, the world is blurry to us all. Pinpointing the transition between recorded and live reality seems intractable, but it does invite one consideration; that maybe it's time to start looking for our own totems.
Coding Consciousness
Many of the brightest minds today are at the forefront of understanding our perceptual relationship with technology, but few are as well poised to make stunning breakthroughs in this arena as virtual reality company Oculus, now owned by Facebook. They have deeper pockets than the most Ivy League of university research departments, and the intellectual muscle of some of the world's smartest scientists when it comes to bridging the digital and biological divide.
That bridge, however, still has to span a chasm of ignorance. For one thing, we barely have a framework for how the human brain works. In early 2014, a mouse brain was mapped for the first time³⁵; a mammoth achievement in connectomics (the field of brain-mapping), involving a single connectome of 75 million neurons. But a complete map of the human brain is still nowhere to be seen - a feat that would bring a degree of understanding that would doubtless translate into better virtual reality technology.
But the leap from one to the other is significant; not unlike a high school science teacher drawing planets on the blackboard. Planets can be abstracted into categories and isolated for human understanding, but it's difficult to communicate the objective scale we're dealing with here. For instance, even a modestly detailed connectome of a mouse brain represents more than 1.8 petabytes of data. (That's 1.8 million gigabytes.) The human brain, by contrast, would likely require 98,000 petabytes of data.³⁶
And that's just the bridge problem, a theoretically knowable path. The chasm problem, however, is far less clear; its depth impossible to know. On one end, the digital mechanics of VR stay relatively logical and error-free. On the other, the human brain processes information through estimated pattern recognition and an error-prone memory. We're dealing with fundamentally different paradigms of learning. And whether for good or bad, these apparent flaws and idiosyncrasies in biology must be heeded, and designed for, when reproducing a convincing virtual environment for the human player.
In this piece we'll look at some of the significant challenges currently posed in VR - both known and unknown - which make this bridge so long, and the chasm so deep. It's a mightily exciting area of study that we're only just beginning to explore and understand, and it's a privilege to unearth just a footnote of an industry that's set to radically change the human experience.
BRIDGING DIGITAL AND BIOLOGY
One aspect of the brain that makes it so difficult to replicate through artificial intelligence is its proficiency (and compulsion) at creating meaning in a world of uncertainty³⁷. If human brains see in grey, then computers see only in black and white. Sit a dog behind a fence and ask a 9-year-old to fill in the blank:
Behind the fence is the head of a _____.
A child will most likely tell you, first, that it's a dog (a task that would tax even the most powerful of computers, which might extrapolate a new species of fence-dog). Harder still for a computer, the child understands implicitly that beneath the head (and behind the fence) is a body which completes the dog. What comes naturally to us - telling a dog apart from something else - is notoriously hard for a computer. There is a simple explanation for why: they are two entirely different paradigms of learning.
Given that we barely have a framework for the human brain today, we're not quite sure what paradigm the brain uses, let alone to what extent we can replicate it. The best we can do today is effectively to increase memory capacity, while still never understanding exactly what is being seen. A computer will label every colour, breed and orifice of a dog, yet it still wouldn't understand what a dog was if one entered the room, barked a little, and wagged its tail for a treat. Case in point: for a computer to beat a human at a general knowledge show, it first needs to memorise 200 million pages of content, including the full text of Wikipedia³⁸. And even then it will tell you that it estimates (estimates!) Barack Obama to be the current president with sub-100% certainty.
One way to frame the difference is that computers aren't using the five senses that are second nature to human brains. As a result they have to overcompensate by continually increasing the computing horsepower thrown at tasks the brain performs effortlessly, thanks to its optimisation for the environment through evolution. A blind person, for instance, may find themselves taxing their memory to recall the layout of a building that a sighted person would process visually, without conscious effort.
Take 16,000 computer processors with one billion connections and give them unlimited access to YouTube to recognise the content of the most commonly occurring images; still the technology is outwitted by a 9-year-old. In 2012, Google's state-of-the-art artificial brain with this digital horsepower was able to muster 81.7 percent accuracy in detecting human faces, 76.7 percent accuracy when identifying human body parts and 74.8 percent accuracy when identifying cats³⁹. Such percentages would be ludicrously low for a child.
In trying to reproduce human brain intelligence, we are using a fundamentally different paradigm. The result is a lot of unexplained phenomena for which we've bodged together equally perplexing labels: change blindness, continuity of experience, the continuity field which we touched on earlier, and persistence of vision which we'll revisit later. They are problems that some of the world's brightest minds today are grappling with.
VIRTUAL REALITY ISN'T THAT SIMPLE
Virtual reality isn't just about "putting a display an inch in front of each eye and rendering images at the right time in the right place"⁴⁰, says Michael Abrash, chief scientist at Oculus VR. Miniaturise your living room in front of your face and switch off the lights, and the brain is still aware that it's watching a 3D simulation of a 2D projection. What we really want is a 3D simulation of a 3D projection, thereby tricking the brain into thinking it's the real world. This is a far more difficult charade to maintain.
"There are three broad factors that affect how real – or unreal – virtual scenes seem to us"⁴⁰, he says. The first two have a more objective emphasis on the technological side - they're known as tracking and latency. These are problems a static TV setup doesn't encounter as much when the eyes are equally static, and can be near-eliminated through increasing refresh rates and increasing blur when the camera is rotating. Every time the eyes reorient themselves spatially, however, as they do in VR but not on a static display, there are a panoply of physio- and neurological processes active while the brain calibrates to the new scene.
(To repeat the passage from earlier: in humans, the fovea provides high resolution but a very narrow field of view of about 1-2 degrees. For this reason, the eyes dart around constantly in rapid jumps known as saccades - stabilised against head movement by the vestibulo-ocular reflex (VOR) - fixating on novel and memory-directed stimuli of interest and building a three-dimensional map in the brain with an emphasis on the novel and interesting.)
This whole complex system seems to give rise to a host of enduring problems in VR, such as judder; "the fact that during each frame pixels remain illuminated for considerable periods of time"⁴¹, says Abrash. With normal 2D-3D your eyes and brain know they're looking at a picture, but a truly convincing 3D-3D image is "expected to stay stable with respect to the real world as you move"⁴², says Abrash. It's like comparing your reaction to a bear on a movie screen with a bear in real life. In 3D-3D, the brain notices every minute inconsistency in the world, because, instinctively, it now believes your life could be at stake. Your survival depends on being able to pick up subtle inconsistencies in the environment. This is just one reason why VR is so hard. It's like having the harshest food critics with the most sensitive of palates visit your restaurant. They notice everything.
Defining tracking, latency and the other problem areas in more depth is beyond the scope of this piece (and, as Abrash will attest, there is still much to be understood), but to explain simply: if you've ever noticed the second hand on a clock or watch seeming to hang for longer than a second when you first look at it, you have already experienced the brief period where the brain seems to idle before resuming normal function. During this pause the brain is actually reconstructing the present through a tenth-of-a-second window⁴³ in which recent information is collected and a coherent picture is formed, as we saw with the continuity field earlier. This particular phenomenon is called the stopped-clock illusion, an example of chronostasis; which itself is a product of saccadic masking, "where the brain selectively blocks visual processing during eye movements in such a way that neither the motion of the eye [...] nor the gap in visual perception is noticeable to the viewer."⁴⁴ It goes some way to explaining why images appear to judder to eyes that constantly move in VR, but not on a static display viewed with equally static eyes.
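A deliberately simplified numerical sketch of that account (the 80-millisecond saccade duration below is an assumed figure, not one taken from the sources): the interval masked during the eye movement is filled in, retroactively, with the image seen on landing, so the first tick of the second hand appears to last longer than a second.

```python
# Simplified sketch of the stopped-clock account described above.
SACCADE_DURATION_S = 0.08   # assumed ~80 ms eye movement to the clock
TICK_INTERVAL_S = 1.0       # the second hand moves once per second

# Saccadic masking blanks vision during the movement; the post-saccade image is
# 'backdated' to roughly when the eye started moving, so the first tick is
# perceived as lasting the normal interval plus the backdated gap.
perceived_first_tick = TICK_INTERVAL_S + SACCADE_DURATION_S
print(f"first tick feels like ~{perceived_first_tick:.2f} s instead of 1.00 s")
```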
Reconciling the temporality of this non-static visual system with digital displays lies at the heart of creating a virtual reality experience that is indistinguishable from real life. (In VR, this problem is known as persistence - related to the persistence of vision introduced earlier.) It's intuitive to think that the brain indiscriminately presents the world around us at any given moment, much as a digital display would, in an absolute fashion. This fundamental difference between 'absolute digital' and 'relative biology' is another perspective on why it's so difficult to translate biology into digital. It demands that we design in line with the same seemingly arbitrary compromises that evolution has given us (arbitrary only until we understand them better, that is), rather than build a theoretically superior HMD (head-mounted display) from scratch.
The third factor from Abrash is more subjective and not focused on digital technology; the visual system itself - the eyes and the brain - which we've looked at in some depth. And this third factor is really the focus of this piece. Understanding how exactly the brain processes virtual reality brings us back to the central question: what can virtual reality tell us about how the human brain works? In answering the question we come back to trying to draw the line between real and virtual as a starting point. But once more, it turns out to be a harder problem than we might think.
Psychologist and director of the research center for virtual environments at the University of California, Jim Blascovich, stands at the podium of Winnipeg's first TEDx conference⁴⁵. He expresses gratitude to the audience before engaging them with a curious question: "Where will you be during my talk?". The audience lets out a nervous laugh, sensing an unintuitive answer coming. "Right here listening to me?", he offers. "Not completely. If you're like the average North American, your mind will take you some place else 40 times in the 18 minutes I have for this talk."
I'll ask again: where do we draw the line between real and virtual reality? When you go elsewhere in your head, is that still reality? Or is it more accurate to call it biological virtual reality, like our man at the pizzeria at the beginning? A compelling answer, perhaps, is that reality is relative. Blascovich continues: "Our position is inherent in our notion of what we call psychological relativity. Now scientists have proven that motion is relative. As I stand here tonight, it doesn't look like I'm moving very quickly or very far. But that perception is only accurate from points of view in this room. From a much more distant perspective, I am moving quite far and quite fast, and so are you. In Winnipeg at this latitude, we are moving at more than 1500 km an hour, as the earth spins on its axis."
It's about perspective. What you hear when a gun is fired changes fundamentally depending on whether you're standing 10 metres away or 500 metres away; it takes time for sound to travel through space and into your ear. The gunshot does not exist objectively, only in agents with the apparatus to register and react to the sound - and in the case of humans, perhaps, to register, react and reflect. The proverbial tree that falls in a forest with no one around; does it make a noise? Well, science suggests that it doesn't. Blascovich makes it clear that the line between reality and the virtual is very blurred indeed, which seems to have been an unnerving theme while I've been writing this.
But if we take a closer look, there may be more of a difference than we think. In fact, MRI scans of the brain show that there is a difference so profound that the digital world may never be able to perfectly replicate the real world from the brain's perspective. This doesn't mean the conscious mind cannot be tricked, but it might mean that we have a biological totem.⁴ (A fact that would be reassuring in the future if we come to have conscious access to that information at will.)
To continue, we'll assume that tracking and latency are solved problems in VR, and start with an important distinction between how the real world is perceived and how virtual reality is perceived through an HMD. That is, we assume that when we look left in virtual reality, the world moves in relation to our head movement exactly as it does in real life. The first principle is this: what the visual system perceives is caused by the corresponding cluster of photons from outside hitting the retina. This is where the profound difference lies. As Abrash says, "The overall way in which display-generated photons are presented to the retina has nothing in common with real-world photons"⁴⁰.
And recent research confirms this. In late 2014, neurophysicists found that space-mapping neurons in the brain actually react differently to the real world and to the digital world of VR⁴⁶. It's well known in the neuroscientific community that when a person enters a new environment, neurons in the hippocampus - a region of the brain associated with memory formation - selectively activate to form a 'cognitive map' of that area. The hippocampus calculates distances between notable landmarks such as mountains and buildings, and information from all the senses is interpolated to form a three-dimensional map of the world. It had been assumed that the brain would react the same way in virtual reality as it does in the real world.
Researchers fitted rats with a mini-harness and placed them on a treadmill completely surrounded by a virtual world, then compared brain activity in this environment with that in a real room designed to look exactly the same. What they found surprised them: brain patterns in the two scenarios were night and day. While neuronal activity was orderly and sensible in the real world, in the virtual world neurons seemed to activate randomly. The uncorrelated activity suggested the rat was completely baffled, even though its behaviour was the same in both cases. "The 'map' disappeared completely," said Mehta, the study's senior author, before concluding, "We need to fully understand how virtual reality affects the brain." An insight of the utmost importance, no doubt, as the VR industry begins picking up speed.
Michael Abrash brings a deeper insight here. "Real-world photons are continuously reflected or emitted by every surface, and vary constantly. In contrast, displays emit fixed streams of photons from discrete pixel areas for discrete periods of time, so photon emission is quantized both spatially and temporally."⁴¹ What we have in virtual reality is a rudimentary version of reality which goes part of the way to fooling the conscious mind, but never the unconscious. Possibly a blessing in disguise.
SPECS FOR THE MATRIX
In Michael Deering's 1998 paper which we referenced earlier²⁸, multiple monitor configurations were used to form a theoretical model of visual system saturation. I quote: "Assuming a 60 Hz stereo display with a depth complexity of 6, we make the prediction that a rendering rate of approximately ten billion triangles per second is sufficient to saturate the human visual system."
Once this point is reached, Deering noted, the ultimate limits of human visual perception would need to be included in hardware trade-offs. In fact, it appears that in some sense we have already crossed this point. Digital monitors don't discriminate between the direct, narrow gaze of the fovea and our blurred periphery - where an opportunity lies to conserve resources by leaning more heavily on remembered experience in place of uncertainty. Simply put, displays output the same detail whether you're looking at the top left of the screen or the bottom right. To quote Deering - even as far back as 1998 - CRTs exceeded "the maximum spatial frequency detection capability of the visual system, in regions away from where the fovea is looking."
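Unpacking the arithmetic behind Deering's figure (simple division on the parameters he quotes, not a reconstruction of his model) gives a feel for the scale:

```python
# Back-of-envelope on Deering's saturation figure, using the quoted parameters.
TRIANGLES_PER_SECOND = 10_000_000_000   # "ten billion triangles per second"
REFRESH_HZ = 60                         # 60 Hz display
EYES = 2                                # stereo
DEPTH_COMPLEXITY = 6                    # each pixel covered ~6 times over

per_eye_per_frame = TRIANGLES_PER_SECOND / (REFRESH_HZ * EYES * DEPTH_COMPLEXITY)
print(f"~{per_eye_per_frame / 1e6:.0f} million triangles per eye per frame")
# -> roughly 14 million visible-surface triangles per eye, every frame.
```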
There is, however, a digital example of peripheral processing being treated differently from the action being directly experienced, and it's understood through what's known as the Nyquist-Shannon sampling theorem⁴⁷. In 2012⁴⁸, programmer and founder of Epic Games Tim Sweeney gave a presentation on the future of gaming in which he explained the theorem: "The screen resolution limits the amount of graphics data we need to process - beyond this limit, any extra data is wasted." This is to say that only the images on screen for any given resolution need to be sharp (the foveal equivalent). Action outside of the screen view can be on a lower power setting and consume less memory (the peripheral blur equivalent). This is known as the Nyquist limit.
For this reason, Sweeney says the ultimate true-reality HMD would have a resolution of 8000 x 4000 pixels with a 90-degree FOV. With that setup, we would need somewhere in the region of 20-40 billion triangles on the screen per second. Sweeney predicts this technology to be just 2-3 generations away. (A relevant note: the increased FOV in VR introduces another challenge. A wider FOV means an HMD necessarily has to push more pixels at any one time than a conventional display. Few gamers would be content to play decade-old games, alongside their TV counterparts, just to get the same smoothness.)
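A quick, hedged sanity check on those numbers - the 60 cycles-per-degree foveal acuity value below is a commonly cited textbook figure I'm assuming, not something from Sweeney's talk:

```python
# Angular resolution of the proposed spec, and the finest pattern it can show.
H_PIXELS, FOV_DEG = 8000, 90
FOVEAL_ACUITY_CPD = 60                       # assumed peak acuity, cycles/degree

pixels_per_degree = H_PIXELS / FOV_DEG       # ~89 pixels per degree
nyquist_limit_cpd = pixels_per_degree / 2    # Nyquist: 2 samples per cycle

print(f"{pixels_per_degree:.0f} px/deg -> can show up to ~{nyquist_limit_cpd:.0f} cycles/deg")
print(f"peak foveal acuity assumed at {FOVEAL_ACUITY_CPD} cycles/deg")
# ~44 cycles/deg: approaching, though still short of, the assumed foveal limit,
# which is consistent with treating this spec as a near-'retinal' ballpark.
```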
Adding to our hypothetical component list of the true-reality HMD which hijacks the conscious mind, Abrash has said that a "1000 Hz display would very likely look great, and would also almost certainly reduce or eliminate a number of other HMD problems, possibly including motion sickness, because it would interact with the visual system in a way that mimics reality much more closely than existing displays."⁴¹ But rather disappointingly he goes on to say, "I have no way of knowing any of that for sure, though, since I’ve never seen a 1000 Hz head-mounted display myself, and don’t ever expect to."
More realistically, a 120Hz display would promise noticeable improvements. But while technically possible to build, there is no reason to build them today from an economic standpoint. Current Rift dev kits are made using phone displays, which have no use for 120Hz panels in their primary market. As Abrash explains, "It’s certainly possible to do so, and likewise for OLED panels, but unless and until the VR market gets big enough to drive panel designs, or to justify the enormous engineering costs for a custom design, it won’t happen."⁴²
THE HUMAN COMPONENT
Ultimately, Abrash admits, "Someone has to step up and change the hardware rules". The obstacles today to making VR all it can be are not just a question of technology - we can't just throw more computing power at them and expect to resolve grand engineering challenges. Give the smartest scientists in the world computers of infinite power and we still wouldn't be able to recreate reality in VR. The problem is also a social one, and that comes down to innovation; a sadly undervalued characteristic in a world that gives inordinate attention to technology, which is easier to quantify and value.
Experts such as futurist and inventor Ray Kurzweil⁴⁹ trace Moore's law (the observation that computing power doubles roughly every 12 to 18 months) from 1968 onwards to track and predict the advancement of technology, and conclude that computers will surpass human intelligence within the next 20 years. But while such extrapolation could fairly accurately predict artificial intelligence milestones like Deep Blue beating a chess grandmaster by 1998, and anticipated the demise of the Soviet Union and even Google's self-driving car, it may not be a paradigm that translates to the extreme diversity of human intelligence which has evolved over millennia.
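The compounding arithmetic behind that kind of extrapolation is simple enough to sketch (taking an 18-month doubling period, the upper end of the range quoted above):

```python
# Compound doubling, the curve such extrapolations project forward.
DOUBLING_PERIOD_YEARS = 1.5     # ~12-18 months; taking the upper figure
HORIZON_YEARS = 20

growth_factor = 2 ** (HORIZON_YEARS / DOUBLING_PERIOD_YEARS)
print(f"~{growth_factor:,.0f}x more computing power in {HORIZON_YEARS} years")
# -> roughly a 10,000-fold increase over two decades.
```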
Our motor skills, alongside our opposable thumbs, in combination with pattern-recognition and problem-solving minds, present a mammoth challenge for technology, which is predisposed to specialisation. (And is rather autistic in its nature.) Task by task, computers beat us without dispute. But in terms of adaptability and flexibility, and the intuition which can arise from that freedom, we take the cake. Computers blow our minds at being the best at a growing list of systematic tasks - following instructions - but they're not likely to be competing for minimum wage against the house-cleaner any time soon. Activities presenting a high degree of uncertainty and/or demanding dextrous motor control are far less understood by machines, and it's difficult to put a timeline on technology that can replace us in those tasks. Even the incredible self-driving cars are more akin to a train on exquisitely detailed tracks than a truly liberated car.
Crucial here is the challenge of innovation. "Facebook could have been developed 10 years earlier [on 1994 technology]," says Sweeney in his talk. So why didn't we have Facebook then? Well, it just hadn't been invented yet by a human. Computers aren't great at invention. At least not yet. Innovation often comes from being in the right place at the right time, with both the brains and brawn of a human being.
Fortunately for experts like Michael Abrash, it seems he's in exactly the right place to put the biggest dent in the universe with his prodigious knowledge of VR. To help usher in, one might imagine, a world in which players are handed their very own totem before plugging into the virtual economy; spurring a new industry within a new industry, within a new industry. A virtual reality in the brain that indisputably fools the conscious mind, but gravely never the unconscious.
Lest we get lost forever.
CITATIONS
Thomson, Helen (2014) Induced hallucination turns doctors into pizza chefs, http://www.newscientist.com/article/dn25459-induced-hallucination-turns-doctors-into-pizza-chefs.html#.VHHgForfXCQ
Sacks, Oliver (2012) Hallucinations, “Silent Multitudes: Charles Bonnet Syndrome”, 3-5
Gillette, Dan (n.d.) Can a person born blind still experience visual hallucinations? https://www.healthtap.com/user_questions/340024-can-a-person-born-blind-still-experience-visual-hallucinations
Wikia (2014) Totems in Inception, http://inception.wikia.com/wiki/Totem
Sacks, Oliver (2012) Hallucinations, “Silent Multitudes: Charles Bonnet Syndrome”, 10
Sacks, Oliver (2012) Hallucinations, “Introduction”, ix
Sacks, Oliver (2012) Hallucinations, “Altered States”, 92
Doolittle, Peter (2013) How your "working memory" makes sense of the world, http://www.ted.com/talks/peter_doolittle_how_your_working_memory_makes_sense_of_the_world
Anwar, Yasmin (2014) Scientists pinpoint how we miss subtle visual changes, and why it keeps us sane, http://newscenter.berkeley.edu/2014/03/30/continuityfield/
Whitney, David (2014) Scientists pinpoint how we miss subtle visual changes, and why it keeps us sane, http://newscenter.berkeley.edu/2014/03/30/continuityfield/
Jacobs, Ryan (2014) Why You Rarely Notice Major Movie Bloopers, http://www.psmag.com/navigation/health-and-behavior/dont-always-notice-movie-bloopers-77804/
RT (2014) Brain ‘15-second delay’ shields us from hallucinogenic experience – research, http://rt.com/news/brain-neuroscience-visual-information-709/
Costandi, Mo (2012) Inception helmet creates alternative reality, http://www.theguardian.com/science/neurophilosophy/2012/aug/26/inception-helmet-alternative-reality?CMP=twt_gu
Suzuki, Keisuke et al. (2012) Substitutional Reality System: A Novel Experimental Platform for Experiencing Alternative Reality, http://www.nature.com/srep/2012/120621/srep00459/full/srep00459.html?message-global=remove
Ramachandran, S. Vilayanur et al. (1999) Phantoms of the Brain, "Chasing the Phantom", 39
Ramachandran, S. Vilayanur et al. (1999) Phantoms of the Brain, "Knowing Where to Scratch" - “Why not a hand fetish or a nose fetish? I suggest that the reason is quite simply that in the brain the foot lies right next to the genitalia”, 32
Ramachandran, S. Vilayanur et al. (1999) Phantoms of the Brain, "Knowing Where to Scratch" - "Doctor, every time I have sexual intercourse, I experience sensations in my phantom foot. How do you explain that? My doctor said it doesn't make sense." "Look," I said. "One possibility is that the genitals are right next to the foot in the body's brain maps. Don't worry about it.", 32
Neuroscience News (2014) How the Brain Leads Us to Believe We Have Sharp Vision, http://neurosciencenews.com/visual-perception-sharper-images-neuroscience-1451/
Rovee, C. and Rovee, D. T. (1969) “Conjugate reinforcement of infant exploratory behavior,” Journal of Experimental Child Psychology, 33–39, via Hood, Bruce (2012) The Self Illusion: How the Social Brain Creates Identity, Oxford University Press.
Wikipedia (2014) Negativity bias, http://en.m.wikipedia.org/wiki/Negativity_bias
Kinderman, Peter (2014) Why We Need to Abandon the Disease-Model of Mental Health Care, http://blogs.scientificamerican.com/mind-guest-blog/2014/11/17/why-we-need-to-abandon-the-disease-model-of-mental-health-care/
Williams, Alex (2007) Too Much Information? Ignore It, http://www.nytimes.com/2007/11/11/fashion/11guru.html?pagewanted=all&_r=0
Wikipedia (2014) Defense Mechanisms, http://en.m.wikipedia.org/wiki/Defence_mechanisms
Ramachandran, S. Vilayanur et al. (1999) Phantoms of the Brain, "The Sound of One Hand Clapping", 95
Ramachandran, S. Vilayanur et al. (1999) Phantoms of the Brain, "The Sound of One Hand Clapping", 92
Ramachandran, S. Vilayanur et al. (1999) Phantoms of the Brain, "The Phantom Within", 15
Wikipedia, Financial crisis of 2007–08, http://en.m.wikipedia.org/wiki/Financial_crisis_of_2007–08
Deering, Michael (1998) The Limits of Human Vision, 2
Kahneman, Daniel et al. (1974) Judgment Under Uncertainty: Heuristics and Biases, http://psiexp.ss.uci.edu/research/teaching/Tversky_Kahneman_1974.pdf
Jobs, Steve (2005) 'You've got to find what you love,' Jobs says, http://news.stanford.edu/news/2005/june15/jobs-061505.html
Wikipedia (2014) Elaborative encoding, http://en.m.wikipedia.org/wiki/Encoding_(memory)
Plante, G. Thomas (2011) Could Lower Expectations Result in a Happier Life? http://m.psychologytoday.com/blog/do-the-right-thing/201102/could-lower-expectations-result-in-happier-life
Boseovski, J. Janet (2010) Evidence for “Rose-Colored Glasses”: An Examination of the Positivity Bias in Young Children’s Personality Judgments, http://onlinelibrary.wiley.com/doi/10.1111/j.1750-8606.2010.00149.x/abstract
McGilchrist, Iain (2009) The Master and His Emissary, "In Eric Fromm's study On Disobedience, he describes modern man as homo consumens: concerned with things more than people, property more than life, capital more than work. He sees this man as obsessed with the structures of things, and calls him ‘organisation man’, flourishing, if that is the right word, as much under the bureaucracy of communism as under capitalism. There is a close relationship between the mentality that results in bureaucratic organisation and the mentality of capitalism. Socialism and capitalism are both essentially materialist, just different ways of approaching the lifeless world of matter and deciding how to share the spoils.", 759
Hongkui, Zeng et al. (2014) A mesoscale connectome of the mouse brain, http://www.nature.com/nature/journal/v508/n7495/full/nature13186.html
Scott, Cameron (2014) Network of 75 Million Neurons of the Mouse Brain Mapped for the First Time, http://singularityhub.com/2014/04/14/network-of-75-million-neurons-of-the-mouse-brain-mapped-for-the-first-time/
Hsu, Ming et al. (2005) Neural Systems Responding to Degrees of Uncertainty in Human Decision-Making, http://www.citeulike.org/user/oamg/article/465989
Wikipedia (ND) http://en.m.wikipedia.org/wiki/Watson_(computer), "Watson had access to 200 million pages of structured and unstructured content consuming four terabytes of disk storage[8] including the full text of Wikipedia,[9] but was not connected to the Internet during the game."
Wired UK (2012) Google’s Artificial Brain Learns to Find Cat Videos, http://www.wired.com/2012/06/google-x-neural-network/
Abrash, Michael (2013) Why virtual isn’t real to your brain, http://blogs.valvesoftware.com/abrash/why-virtual-isnt-real-to-your-brain/
Abrash, Michael (2014) My Steam Developers Day Talk, http://blogs.valvesoftware.com/abrash/
Abrash, Michael (2012) Latency – the sine qua non of AR and VR, http://blogs.valvesoftware.com/abrash/latency-the-sine-qua-non-of-ar-and-vr/
Eagleman, David (2014) Brain Time, "This brief waiting period allows the visual system to discount the various delays imposed by the early stages; however, it has the disadvantage of pushing perception into the past. There is a distinct survival advantage to operating as close to the present as possible; an animal does not want to live too far in the past. Therefore, the tenth-of- a-second window may be the smallest delay that allows higher areas of the brain to account for the delays created in the first stages of the system while still operating near the border of the present. This window of delay means that awareness is postdictive, incorporating data from a window of time after an event and delivering a retrospective interpretation of what happened.", http://eagleman.com/blog/item/6-brain-time
Wikipedia (ND) Saccadic Masking, "Saccadic masking, also known as (visual) saccadic suppression, is the phenomenon in visual perception where the brain selectively blocks visual processing during eye movements in such a way that neither the motion of the eye (and subsequent motion blur of the image) nor the gap in visual perception is noticeable to the viewer.", http://en.m.wikipedia.org/wiki/Saccadic_masking
Blascovich, Jim (2011) Digital freedom: Virtual reality, avatars, and multiple identities, http://m.youtube.com/watch?v=bgEA4iM8CHc
Aghajan, M Zahra et al. (2014) How does the brain react to virtual reality? Completely different pattern of activity in brain, http://www.sciencedaily.com/releases/2014/11/141124162926.htm
Wikipedia (ND) Nyquist-Shannon sampling theorem, http://en.m.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
Sweeney, Tim (2012) The Future of Gaming - Tim Sweeney (Epic) DICE 2012 Session, http://m.youtube.com/watch?v=XiQweemn2_A
Kurzweil, Ray (2005) The accelerating power of technology, http://www.ted.com/talks/ray_kurzweil_on_how_technology_will_transform_us/transcript?language=en