
In Theory: Valve's Wearable Computing Concept

Digital Foundry assess the potential of "Terminator Vision"

So the cat is finally out of the bag. "I'm working at Valve on next-gen gaming hardware," Jeri Ellsworth tweeted last week, after playfully revealing that "it's fun working on gaming hardware. Makes me smile."

Ellsworth, an "American entrepreneur and self-taught computer chip designer" joined Valve last November, and is part of an R&D "dream team" looking to redefine the way that we play games. It's a team that's set for some rapid expansion too, with Ellsworth drawing the games community's attention to a couple of ads on the Valve website. The firm is looking for engineers and designers who'll be "doing hardware design, prototyping, testing, and production across a wide range of platforms. We're not talking about me-too mice and gamepads here - help us invent whole new gaming experiences."

"Valve's 'Terminator Vision' concept sees computer generated overlays added directly over your field of view in order to create a revolutionary new approach to gaming."

So, as good as confirming the mythical Steambox then? Not quite. In a highly personal blog post, former id Software legend Michael Abrash discussed his move to Valve, and the complete lack of a hierarchical structure there, which essentially left him to his own devices, pursuing the areas that most interested him.

That project now stands revealed - Valve is working on a gaming platform, but it's no me-too console or re-factored PC. Abrash's team is working on wearable computing - the notion of computer imagery being overlaid on reality, or as he prefers to describe it, "Terminator Vision".

"The underlying trend as we've gone from desktops through laptops and notebooks to tablets is one of having computing available in more places, more of the time," he continues. "The logical endpoint is computing everywhere, all the time."

Abrash believes that within two decades this will be "standard", with images supplied via glasses, contact lenses and, if that wasn't scary enough, eventually through a direct Borg-style neural interface. In terms of the fundamentals required - input, output, processing, form factor - Abrash reckons that the necessary component hardware is progressing quite nicely, with a viable platform achievable within three to five years.

This is thanks in no small part to the meteoric rise in the capabilities of mobile technology, propelled by the emergence of the smartphone as a mainstream gaming platform. As discussed in last week's Digital Foundry column on the evolution of iOS hardware, by next year mobile graphics tech should have surpassed the capabilities of current generation HD platforms. Speaking with us last year, PowerVR maker Imagination Technologies (IMG) told us to expect GPU power to increase by a factor of 100 within five years - so rendering power shouldn't be a problem, especially if the device is augmenting a scene rather than generating all of it.
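As a quick back-of-the-envelope check on that claim, a 100x improvement over five years implies a compound growth rate of roughly 2.5x per year - comfortably outpacing a Moore's Law-style doubling every two years:

```python
# Back-of-the-envelope: IMG's claimed 100x GPU power increase over five years
# implies a compound annual growth rate of 100^(1/5), i.e. ~2.51x per year.
annual_growth = 100 ** (1 / 5)
print(f"Implied growth: {annual_growth:.2f}x per year")  # -> 2.51x per year
```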

The notion of us carrying around so much graphical potential has seen some - like Abrash's old colleague John Carmack - envisaging the end of the console as we know it, with the smartphone (or whatever its successor may be) directly interfacing with whatever display is conveniently available at any given point. "Terminator Vision" is perhaps a natural extension of that.

"The challenges of incorporating rendered elements into a real-life scene are numerous - stereoscopy seems like a pre-requisite and tracking motion accurately would be very difficult indeed."

In terms of the gaming possibilities, on the face of it the concepts are perhaps similar to what's currently achievable on portable gaming machines like PlayStation Vita and Nintendo 3DS, albeit on a much grander scale. Both of these systems have utilised augmented reality in their existing libraries of games. Put very basically, the unit's camera captures the real world to serve as the playfield, with 3D objects overlaid on top - sometimes anchored to real-life props that can be easily tracked by the hardware. The result can be quite uncanny.

Valve's wearable computing concept offers up so much more, but presents many more technological challenges in achieving a seamless mix of rendered and real-life objects. 3DS and Vita can create quite authentic-looking augmented scenes because they are rendering onto a flat 2D screen that the eyes focus upon. In utilising glasses or contacts, rendering convincing overlays is much more problematic - for a start, we're almost certainly looking at stereoscopic 3D rendering as a pre-requisite for realistic objects placed in 3D space, not to mention some mechanism for determining where the eye is focused.
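To give a flavour of what stereoscopic rendering entails, here's a minimal sketch of how per-eye view matrices might be derived from a single head pose. The 65mm interpupillary distance and the translate-along-the-eye-axis approach are illustrative assumptions rather than anything Valve has described:

```python
import numpy as np

IPD = 0.065  # assumed interpupillary distance in metres (typical adult average)

def eye_view_matrices(head_view, ipd=IPD):
    """Derive left/right eye view matrices from a single head view matrix.

    Each eye sits half the IPD from the head centre along its local x axis;
    rendering the scene once per eye with these matrices produces the
    horizontal disparity the brain needs to fuse objects at believable depths.
    """
    views = {}
    for name, sign in (("left", +0.5), ("right", -0.5)):
        offset = np.eye(4)
        offset[0, 3] = sign * ipd  # sideways shift applied in eye space
        views[name] = offset @ head_view
    return views

# Example: identity head pose (eyes at the origin, looking down -z)
views = eye_view_matrices(np.eye(4))
print(views["left"][0, 3], views["right"][0, 3])  # 0.0325 -0.0325
```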

Moving forward from that, the system would need some sense of depth in the environment around you, perhaps suggesting that some kind of Kinect-style z-sensor would be needed. By interposing its own 3D objects into the existing world, the device would need to know where environmental items sit in order for interactions between render and reality to work convincingly. As an example, let's say that a games developer creates an augmented reality FPS game: ideally you'd want the opponents to duck behind real-life scenery, jump over objects on the ground, and so on. Of course, first-gen AR systems may well be far less complex and more along the lines of how AR is presented on current systems.
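To illustrate why that depth information matters, here's a minimal compositing sketch, assuming the device can supply a per-pixel depth map of the real scene from a Kinect-style z-sensor. All the array names and values here are hypothetical:

```python
import numpy as np

def composite(camera_rgb, real_depth, virtual_rgb, virtual_depth):
    """Overlay rendered pixels onto the real view, respecting occlusion.

    A virtual pixel only wins where it is nearer to the viewer than the
    real-world surface behind it - this is what lets an AR opponent
    'duck behind' real scenery instead of floating in front of it.
    """
    virtual_wins = virtual_depth < real_depth          # per-pixel depth test
    out = camera_rgb.copy()
    out[virtual_wins] = virtual_rgb[virtual_wins]      # keep real pixels elsewhere
    return out

# Toy 2x2 example: the virtual object is nearer than the real scene in the
# top row only, so only those pixels show the rendered object.
camera = np.zeros((2, 2, 3), dtype=np.uint8)           # real view (black)
real_z = np.array([[2.0, 2.0], [0.5, 0.5]])            # real surface depths (m)
virt   = np.full((2, 2, 3), 255, dtype=np.uint8)       # rendered object (white)
virt_z = np.array([[1.0, 1.0], [1.0, 1.0]])            # rendered depths (m)
print(composite(camera, real_z, virt, virt_z)[..., 0]) # top row 255, bottom row 0
```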

In terms of how convincing it could be, and where the potential is for gaming, one of the existing major platform holders offers up some compelling demonstrations using today's technology. This intriguing presentation from Sony shows PlayStation Vita hosting some truly impressive augmented reality demos, including a prototype game.

Vita obviously doesn't have a depth sensor, so instead the view from the camera is scanned for "interest points", which appear to be mapped and tracked in 3D space, perhaps with assistance from Vita's onboard gyroscope. The more interest points found, the more accurately the 3D world is realised, and objects can then be superimposed: the Vita demo includes a "Dragon's Den" game where holes are mapped onto the real-life ground with dragons popping up out of them, the player moving in real life to avoid their attacks.
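A plausible sketch of how that gyroscope assistance could work - to be clear, not Sony's actual algorithm - is to use the rotation measured between frames to predict where each tracked interest point should reappear, shrinking the image area the tracker has to search. The camera intrinsics below are invented for the example:

```python
import numpy as np

# Assumed pinhole intrinsics for a Vita-class camera (illustrative values only)
FX = FY = 500.0          # focal length in pixels
CX, CY = 320.0, 240.0    # principal point (640x480 frame)

def rotation_from_gyro(omega, dt):
    """Small-angle rotation matrix from angular velocity (rad/s) over dt seconds."""
    wx, wy, wz = omega * dt
    # First-order approximation: R = I + [w]x, valid for the tiny rotations
    # that occur between consecutive video frames
    return np.array([[1.0, -wz,  wy],
                     [ wz, 1.0, -wx],
                     [-wy,  wx, 1.0]])

def predict_points(points_px, omega, dt):
    """Predict next-frame pixel positions of interest points under pure rotation."""
    R = rotation_from_gyro(omega, dt)
    predicted = []
    for u, v in points_px:
        ray = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])  # back-project to a ray
        x, y, z = R @ ray                                    # apply the gyro delta
        predicted.append((FX * x / z + CX, FY * y / z + CY)) # re-project to pixels
    return np.array(predicted)

# Example: a 0.5 rad/s yaw over one 30fps frame shifts points ~8 pixels sideways
pts = np.array([[320.0, 240.0], [400.0, 200.0]])
print(predict_points(pts, np.array([0.0, 0.5, 0.0]), 1 / 30.0))
```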

"Some of the R&D work on augmented reality carried out by Sony for PlayStation Vita has been remarkable, showing some of the potential for Valve's new approach to gaming."

With the view presented on the 2D screen mounted on the Vita unit itself, Sony neatly side-steps the other major problem facing Valve in its efforts: how to control these games. As Abrash muses, "what does a wearable UI look like, and how does it interact with wearable input?"

Actual interaction with the augmented scene is one of the least convincing elements of Google's recent Project Glass demo, which sees smartphone functions mapped onto a glasses-based HUD. Aside from some vague hand-waving, the video offers little in the way of viable solutions for how the user actually accesses the functions on the HUD. It's inevitable that this new hardware is going to require some way of interfacing your hands with the augmented world - somehow a joypad seems rather low-tech for this new era of gaming.

"[Isaac Asimov] says that the ultimate interface to a computer isn't a probe that jacks into your head, it's where you insert your hands into this device. You have so much bandwidth going through your fingers," Sony's Doctor Richard Marks, creator of PlayStation Move, once told us. "You have so much fidelity with your fingers and wrists. It's such a high dynamic input."

For gaming we should obviously expect some kind of controller, or "wearable input" as Michael Abrash prefers to call it, and this is an opportunity to revolutionise gaming in itself. Stereoscopic rendering and the notion of full 3D objects literally right in front of your eyes basically demand a means by which the player can interact with them directly. Gloves tracking finger movement may well be the way to access the "bandwidth" that Richard Marks describes and to provide next-gen fidelity in terms of interaction - and there are other cool bonuses too. For example, props you hold in your hand can benefit from rendered overlays: Star Wars Lightsabre remote training, here we come.
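As a toy illustration of the kind of "bandwidth" finger tracking offers, here's a sketch of a pinch-gesture detector running over hypothetical glove data - the fingertip positions and trigger threshold are invented for the example:

```python
import numpy as np

PINCH_THRESHOLD = 0.025  # metres; assumed trigger distance for a pinch

def is_pinching(thumb_tip, index_tip):
    """A pinch fires when thumb and index fingertips nearly touch."""
    return np.linalg.norm(thumb_tip - index_tip) < PINCH_THRESHOLD

# Hypothetical glove samples: 3D fingertip positions in metres
open_hand  = (np.array([0.00, 0.00, 0.30]), np.array([0.06, 0.02, 0.30]))
pinch_hand = (np.array([0.00, 0.00, 0.30]), np.array([0.01, 0.01, 0.30]))
print(is_pinching(*open_hand))   # False - fingertips over 6cm apart
print(is_pinching(*pinch_hand))  # True  - fingertips ~1.4cm apart
```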

"The applications for wearable computing tech go beyond gaming - it makes sense for the hardware to be integrated into next-generation smartphones."

So is the future of gaming as Valve sees it tied exclusively to augmented reality? Well, there's absolutely nothing to stop the real-life view being completely blocked out, or as near as, leaving just the rendered overlay, giving us an effect somewhat akin to what is achieved with the Sony HMZ-T1 personal 3D viewer. Elements in real life, such as the user's hands, could always be rendered in 3D - provided they are being tracked, of course. The advantage of providing this kind of isolated viewpoint is that stereoscopy - while preferable - isn't as essential as it may be with augmented reality, and the levels of immersion are still quite extraordinary: IMAX gaming anywhere is quite a draw. Not only that, it's hard to imagine content-driven AAA-style games working in an augmented reality environment.

And of course, the applications for such a device also extend beyond gaming, as the Google Project Glass video demonstrates rather well. It makes sense to marry a games platform like this into existing smartphone functionality - comms, navigation, voice recognition - which may provide some context to the reports that Apple CEO Tim Cook arrived at Valve HQ last Friday for reasons unknown.

Even if the meeting did happen (there has been no substantiating report, eye-witness account or photo), the notion of Apple recruiting Valve to help with a games console, as some are reporting, seems quite bizarre - bearing in mind that Valve doesn't particularly care for closed platforms, and Apple doesn't really need any help from anyone to implement the iOS platform in a prospective HDTV.

It's more likely that Steam could play a big part in boosting the Mac's grossly under-developed gaming credentials, but clearly the notion of a wearable computer dovetails much more closely with the smartphone business and the concept is perhaps exciting enough to warrant the attention of the CEO of the world's richest technology firm...
