Almost every day, we hear about revolutions in technology. A smartphone is often said to have a revolutionary camera, display, or design. This kind of marketing hyperbole is present throughout the tech industry, to the point where parodies of it have appeared on TV.

We often hear the same language when discussing VR as well. For the most part, I haven’t seen anything to show that VR is worthy of such hyperbole, but I had only experienced mobile VR while covering mobile devices. Almost every mobile VR headset like Gear VR or Google Cardboard just doesn’t work with glasses very well, if at all. Pretty much every time I’ve tried Gear VR, I found it to be a cool experience, but not anything life-changing. I just didn’t see the value of a video in which I have to be the cameraman. Probably the best example of this was the OnePlus 2 launch app, in which I would miss bits of information simply because I wasn’t looking in the right place when something appeared and disappeared.

A week ago, I was discussing my thoughts on VR with HTC when they realized that I had never tried their VR headset, the HTC Vive. A few days later, I stepped into one of HTC’s offices in San Francisco, expecting the experience to be similar in feeling to what I had seen before.

The room I was in was relatively simple. A desktop PC ran the whole system; judging by the 10-12" video card (a single GPU) and what looked like an aftermarket heatsink and fan on the CPU, it packed a good amount of computing power, though I wasn’t able to get any details about the specific components. Other than this, two Lighthouse tracking stations were mounted on top of shelves in the corners of the VR space.

Sitting in the middle of the VR space was an HTC Vive. Cables ran out of the headset, but the two controllers were completely wireless. The display is said to be 1080x1200 per eye, refreshing at 90 Hz with a field of view of 110 degrees or greater. The headset itself contains one HDMI port and two USB ports to connect it to the PC. The motion tracking is also said to have sub-millimeter precision, with angular precision to a tenth of a degree. With the two tracking stations, the maximum area for interactivity is a 15-foot square.

Putting the headset on was simple. Adjustable straps hold the display against the eyes, and there was more than enough space inside to let me wear my glasses and still see the sharpness of the display. Right away, I noticed that it was important to keep the straps tight: if I pushed down on the display, it would lose clarity until I pushed the headset back up into the right position. I also noticed that the subpixels of the display were subtly visible when looking at a pure white background, which suggests there is room to improve in the resolution department.
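Some rough arithmetic suggests why those subpixels show. Assuming the quoted 110-degree field of view spans one eye's 1080-pixel horizontal axis (a simplification on my part, since the exact optical mapping hasn't been disclosed), the angular pixel density is roughly

$$\frac{1080~\text{px}}{110^\circ} \approx 9.8~\text{px per degree},$$

well short of the roughly 60 px per degree usually cited as the threshold for 20/20 visual acuity, so it isn't surprising that the pixel structure is still resolvable.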

At this point, the person managing the demo held out the controllers. One of them had a color palette on its touchpad, but the controllers were otherwise rendered grey in this virtual world. I reached out and grabbed both controllers on the first attempt. I couldn’t see my hands, but the controllers moved with my arms, and I could walk around the area. If I tried, I could inflate a balloon and hit it with my hands. The balloon bounced in reaction, as a balloon should.

The demo loop started after this small tutorial. These were the same demos Ian had seen before, but now I was experiencing them for myself. I was standing on a wrecked ship at the bottom of the ocean. Fish swam around me, and if I walked around the ship or flailed my arms, the fish and the water reacted. An enormous whale swam by, and then everything went black.

The next demo was a job simulation. The kitchen around me was stocked with various ingredients, ranging from eggs to tomatoes. A robotic voice read out a list of ingredients, and I turned to look at where the voice came from: a robot with a display showing a checklist of the ingredients it had just described. I instinctively reached toward some tomatoes, picked them up, and dropped them in the pot, repeating this until a soup can appeared. I picked it up and dropped it in as well. The next task was to make a sandwich. I placed an egg and other ingredients between two slices of bread, which completed the sandwich.

The final demo was familiar. A robotic voice asked me to place each controller in a receptacle to be charged, then asked me to open and close various drawers. A robot then walked into the room, and I stepped away as it stumbled in unpredictably; it was much bigger than I was. I was told to press some buttons and pull some levers to open it for repair, and I cautiously walked toward it to do so. Directions for fixing the robot were spoken too quickly to keep up with. Eventually the entire robot fell apart onto the floor and was taken away. The walls and floor of the room began to disappear until a single platform remained, and I stepped back to avoid falling into the abyss. Finally, GLaDOS appeared to criticize the work, and the room sealed itself completely.

That was the end of my experience. I took off the headset and headphones and set the controllers down. In some ways I felt a bit groggy, as if I had just woken up from a dream. I was reflecting on what had happened when I tried to look closely at my phone. I immediately got a sense of vertigo and had to sit down to regain my bearings. The room wasn't spinning, but I was definitely disoriented.

In some ways, the fact that I got vertigo is a bad sign. When I thought about it, I realized the problem: the HTC Vive isn't a perfect simulation of human vision. In the shipwreck demo, my eyes were always focused on the display at a fixed distance, which appeared far away. Yet fish swam by extremely close to my eyes and stayed perfectly in focus, with no double-vision effects. When I tried to do something similar after the demo, I was disoriented because the real world doesn't work the way the Vive does.
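This mismatch is what vision researchers call the vergence-accommodation conflict: the eyes rotate to converge on a virtual object at its apparent distance, while their focus stays locked at the fixed distance set by the headset's optics. As a rough illustration, taking a typical interpupillary distance (IPD) of about 63 mm, the convergence angle for an object at distance $d$ is

$$\theta = 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right),$$

about 1.8 degrees for an object 2 m away but about 14.4 degrees for one at 25 cm. The eyes cross dramatically for a close-up fish while focus never changes, and that decoupling is exactly what the brain has to unlearn when the headset comes off.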

Tiltbrush on HTC Vive with Glen Keane

In my mind, the Vive had already become my reality. Even in this pre-release state, it was so incredibly convincing that it became my reality for half an hour. I was fully aware that none of it was real and that I could take off the headset at any time, but at a subconscious level I was reacting to what I saw as if it were real. In that sense, the HTC Vive is almost dream-like: it feels real while you're interacting with the world contained within the headset, and only when you take it off do you realize what was strange about it. Unlike a dream, though, you can go back just by putting the headset on again.

In a lot of ways, the HTC Vive is hard to describe because experiences like it are so rare. I've always been around personal computers, and while the modern smartphone was a great innovation, to me it has always been a connected mobile computer. There are other VR headsets out there, to be sure, and they were all neat to use, but the HTC Vive is life-changing. It is a revolution.

Comments

  • Ian Cutress - Monday, September 14, 2015

    Hi bji, this wasn't a review, which is why it's a single-pager in the Pipeline section of the website. This was Josh's first proper VR experience (rather than the Gear), and he wanted to put a lot of his subjective thoughts on paper; it should be read as such, in an anecdotal format. I did the same thing when I tested the Vive back at MWC. Until we get a device in-house and devise a testing strategy that is actually meaningful and beneficial beyond a user-experience analysis, we won't be publishing a full, AnandTech-class review.

    My thoughts on the demo I had:
    http://anandtech.com/show/9048
  • bji - Tuesday, September 15, 2015

    Thank you for the info and the link. I find it interesting that the demos are always limited to around 20 minutes ... for Valve, Oculus, and Morpheus. I have a gut feeling that they believe 20 minutes is all that someone new to VR can handle before having a significant chance of developing VR sickness. I also have to wonder if there isn't some wrangling with whatever government agency would have the responsibility for ensuring the safety of these devices, to give those agencies time to test the units and ensure they don't pose an unnecessary risk to eyesight or some other health risk. Could this be why these devices are taking so long to get to market?
  • IanCutress - Tuesday, September 15, 2015

    For my Vive demo, it was purely because at MWC they had two devices for the whole show and it was invite-only for press - one person per publication. The schedule was tight and they needed everyone to be in and out on time in order to process everyone through the show.

    Plus, there's the argument of best foot forward. As HTC are the only company with prototypes, they get to dictate the demo and focus the experience on whatever they deem shows the device at its best.

    With VR like this, there are two types: free movement, or sitting. For most home users, sitting will be the majority use model, whereas business is more likely to embrace the free movement aspect in order to accelerate workflows. Of course, on the business side of things, you have to be insured and regulated up the wazoo. As for long-term effects, there have been studies on altered vision perception (beyond glasses/lenses) which they can draw on, if it differs much from watching a monitor with glasses. Standard government suggestions apply, I would assume - a break every hour, etc.
  • edzieba - Monday, September 14, 2015

    It may be an artefact of having worked with the DK1 and DK2 for quite some time, but using the Vive I experienced LESS visual adaptation(/re-adaptation) after removing it. This could either be a result of the Vive's improved tracking and higher refresh rate, or simply that with habitual (or even regular) VR use the brain adapts to switch between real-world accommodation/vergence matching and VR accommodation/vergence mismatch (fixed accommodation) more quickly and easily.

    It'll be interesting to see which solution for replicating accommodation becomes viable first: lightfield displays (with their hit to effective resolution, something VR is already starved for by several orders of magnitude), which offer the 'perfect' implementation, or adaptive lenses (e.g. oil-filled lenses) combined with high-speed eye-tracking. While adaptive lenses can only replicate accommodation cues correctly for the foveal region, that's the only region that can reliably DETECT accommodation changes, so it's a minimal issue. You also get all the other useful things eye-tracking enables, like foveated rendering, lens axis offset compensation, depth-correct DoF blur, saccade blanking tricks, etc.
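To make the adaptive-lens approach described above concrete, here is a minimal sketch of the per-frame focus calculation such a system might run. It assumes a simple thin-lens model; the function name, the 1.5 m fixed focal distance, and the clamp value are all illustrative, not drawn from any shipping headset.

```python
def lens_offset_dioptres(gaze_depth_m: float,
                         fixed_focus_m: float = 1.5,
                         min_depth_m: float = 0.1) -> float:
    """Extra optical power an adaptive lens would add so the eye can focus
    at the virtual depth under the tracked gaze, instead of at the headset's
    fixed focal plane. Thin-lens model: power (dioptres) = 1 / distance (m).
    """
    depth = max(gaze_depth_m, min_depth_m)  # clamp to avoid divide-by-zero
    return 1.0 / depth - 1.0 / fixed_focus_m

# A fish at 0.3 m needs ~+2.7 dioptres of extra power; a wreck at 10 m
# needs ~-0.6 dioptres (the lens relaxes focus past the fixed plane).
print(lens_offset_dioptres(0.3), lens_offset_dioptres(10.0))
```

In a real system the gaze point would come from the eye tracker and the depth from the renderer's depth buffer each frame, which is why adaptive lenses are only practical when paired with the high-speed eye-tracking mentioned above.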
  • bji - Monday, September 14, 2015

    You propose some tech that I had not heard discussed before with regard to better emulation of focal depth, accommodation, etc. I don't know if any of them are practical, but what I do know is that over the next 5 - 10 years we're going to see an amazing evolution of VR tech, with ideas like these, or similar ones, being incorporated into the designs. I have no doubt that within the next 10 years we're going to get "close enough" to perfect visual fidelity in the VR experience for it to be a completely "solved" problem.

    Kinesthetic sense is going to be the real difficult nut to crack ...
  • edzieba - Monday, September 14, 2015

    Haptics is the "we don't even know where to practically start" problem of consumer VR. Actuated exoskeletons (e.g. Cybergrasp) aren't going to come down in price due to sheer part complexity, and have significant liability concerns. Plus, all current systems need one or more extra people to help the user get into and out of the device safely. Combinations of tactile actuators and TENS stimulators are mechanically simpler, but still have the liability concerns (and issues with correct electrode placement for untrained users). And even these only stimulate a limited number of your proprioceptive senses.

    In the near term, limited haptic devices that use clever tricks to fool your senses in a small number of scenarios are probably going to be the norm. Tactical Haptics have a neat system of moving handgrips that can effectively simulate some of the effects of a mass moving in your hand. It stimulates the tactile skin sensation of a weight causing an object to shift in your grip, but does not impart any force to the muscle spindles in your forearm.

    When a lot of people in the field are of the opinion that "it'll probably end up being easier to invasively stimulate nerves directly than to replicate the stimuli externally", you know it's not going to be as easy a problem to solve as vision or audio.
  • TheFuzz77 - Monday, September 14, 2015

    Imagine 2 weight lifting resistance ropes attached to the ceiling or outer wall with wraps around your hand. Program the resistance with the game/simulation to control movement. It's a limited concept, but add enough ropes... Bondage style, anything like that out there?
  • Yaldabaoth - Monday, September 14, 2015

    There are plenty of awesome tension/predicament rope bondage sites...
    ...wait.... VR? Never mind.
  • edzieba - Monday, September 14, 2015

    There's a '3D force feedback mouse' design that uses a single controlled point and a set of cables, one running from each corner of a frame to a motor/encoder, with the controlled point in the centre. e.g. the Inca 6D (https://www.youtube.com/watch?v=MWbFMP6rcSs).

    Unfortunately, this is no good if you want to effectively walk around blindfolded while using it. Even worse if you want two or more points. You're more likely to strangle yourself than build an effective system.
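For the curious, the core control problem in a cable rig like this is resolving a commanded force at the controlled point into tensions for cables that can only pull. A minimal sketch, with made-up anchor geometry purely for illustration:

```python
import numpy as np
from scipy.optimize import nnls

# Four cable anchors, e.g. the upper corners of a frame (metres).
anchors = np.array([[ 1.0,  1.0, 1.0],
                    [-1.0,  1.0, 1.0],
                    [ 1.0, -1.0, 1.0],
                    [-1.0, -1.0, 1.0]])
effector = np.array([0.0, 0.0, 0.0])  # the single controlled point

# Unit direction of the pull each cable exerts on the effector.
dirs = anchors - effector
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Cables can only pull, so a commanded force f must be resolved as
# dirs.T @ tensions = f with every tension >= 0 (nonnegative least squares).
force = np.array([0.0, 0.0, 5.0])  # 5 N straight up
tensions, residual = nnls(dirs.T, force)
print(tensions, residual)  # symmetric geometry: four equal tensions, ~2.2 N each
```

Note that with all anchors overhead, no nonnegative combination of pulls can produce a downward force at all, which hints at why these rigs don't generalize to a full walk-around volume.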
  • JeffFlanagan - Monday, September 14, 2015

    >Almost every mobile VR headset like Gear VR or Google Cardboard just
    >doesn’t work with glasses very well

    Cardboard works fine with glasses, and Gear VR isn't supposed to be used with glasses. You just adjust the dial until the screen is in focus. If you're too near or far-sighted to be in the range it can adjust to, you may have to wear contacts to use it. I'm near-sighted, and Gear VR focuses fine for me.

    Gear VR is a lot of fun despite only being able to play phone games. Vive and OR should be fantastic.
