In Theory: Valve's Wearable Computing Concept

Digital Foundry assess the potential of "Terminator Vision"

So the cat is finally out of the bag. "I'm working at Valve on next-gen gaming hardware," Jeri Ellsworth tweeted last week, after playfully revealing that "it's fun working on gaming hardware. Makes me smile."

Ellsworth, an "American entrepreneur and self-taught computer chip designer" joined Valve last November, and is part of an R&D "dream team" looking to redefine the way that we play games. It's a team that's set for some rapid expansion too, with Ellsworth drawing the games community's attention to a couple of ads on the Valve website. The firm is looking for engineers and designers who'll be "doing hardware design, prototyping, testing, and production across a wide range of platforms. We're not talking about me-too mice and gamepads here - help us invent whole new gaming experiences."

"Valve's 'Terminator Vision' concept sees computer generated overlays added directly over your field of view in order to create a revolutionary new approach to gaming."

So, as good as confirming the mythical Steambox then? Not quite. In a highly personal blog post, former id Software legend Michael Abrash discussed his move to Valve, and the complete lack of a hierarchical structure there, which essentially left him to his own devices, pursuing the areas that most interested him.

That project now stands revealed - Valve is working on a gaming platform, but it's no me-too console or re-factored PC. Abrash's team is working on wearable computing - the notion of computer imagery being overlaid on reality, or as he prefers to describe it, "Terminator Vision".

"The underlying trend as we've gone from desktops through laptops and notebooks to tablets is one of having computing available in more places, more of the time," he continues. "The logical endpoint is computing everywhere, all the time."

Abrash believes that within two decades this will be "standard", with images supplied via glasses, contact lenses and, if that wasn't scary enough, eventually through a direct Borg-style neural interface. In terms of the fundamentals required - input, output, processing, form factor - Abrash reckons that the necessary component hardware is progressing quite nicely, with a viable platform achievable within three to five years.

This is thanks in no small part to the meteoric rise in the capabilities of mobile technology, propelled by the emergence of the smartphone as a mainstream gaming platform. As discussed in last week's Digital Foundry column on the evolution of iOS hardware, by next year mobile graphics tech should have surpassed the capabilities of current-generation HD platforms. PowerVR maker IMG told us last year to expect GPU power to increase by a factor of 100 within five years - so rendering power shouldn't be a problem, especially if it is augmenting a scene rather than generating all of it.

The notion of us carrying around so much graphical potential has seen some - like Abrash's old colleague John Carmack - envisaging the end of the console as we know it, with the smartphone (or whatever its successor may be) directly interfacing with whatever display is conveniently available at any given point. "Terminator Vision" is perhaps a natural extension of that.

"The challenges of incorporating rendered elements into a real-life scene are numerous - stereoscopy seems like a prerequisite and tracking motion accurately would be very difficult indeed."

On the face of it, the gaming possibilities are perhaps similar to what's currently achievable on portable gaming machines like PlayStation Vita or Nintendo 3DS, albeit on a much grander scale. Both of these systems have utilised augmented reality in their existing libraries of games. Put very basically, the unit's camera generates a playfield, with 3D objects overlaid, sometimes on top of real-life props that can be easily tracked by the hardware. The result can be quite uncanny.

Valve's wearable computing concept offers up so much more but presents many more technological challenges in presenting a seamless mix of rendered and real-life objects. 3DS and Vita can create quite authentic-looking augmented scenes because they are rendering onto a flat 2D screen that the eyes focus upon. In utilising glasses or contacts, rendering convincing overlays is much more problematic - for a start, we're almost certainly looking at stereoscopic 3D rendering being a prerequisite for realistic objects rendered in 3D space, not to mention some mechanism for determining where the eye is focused.
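To put some rough numbers on why stereoscopy becomes a prerequisite here, consider the horizontal parallax between the two eyes' views of a virtual object. This is a minimal back-of-the-envelope sketch with entirely hypothetical figures (the focal length and interpupillary distance are illustrative assumptions, not anything Valve has disclosed):

```python
# Illustrative sketch: horizontal screen parallax for a virtual object
# at distance z, seen by two eyes separated by the interpupillary
# distance (IPD). All figures are hypothetical, not Valve's.

def pixel_parallax(z_m, ipd_m=0.063, focal_px=1000.0):
    """Horizontal offset in pixels between the left- and right-eye
    projections of a point z_m metres away, for a pinhole camera
    with the given focal length in pixels."""
    return focal_px * ipd_m / z_m

# A virtual object one metre away shifts roughly 63 px between the
# eyes; at ten metres the shift shrinks to around 6 px - which is why
# distant overlays can almost get away without stereo, but anything
# within arm's reach absolutely cannot.
near = pixel_parallax(1.0)
far = pixel_parallax(10.0)
print(round(near, 1), round(far, 1))
```

The point of the sketch: parallax falls off with distance, so a mono overlay that looks fine on a far wall will visibly float at the wrong depth the moment it comes close to the viewer.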

Moving forward from that, the system would need some sense of depth in the environment around you, perhaps suggesting that some kind of Kinect-style z-sensor would be needed. By interposing its own 3D objects into the existing world, the device would need to know where environmental items sit in order for interactions between render and reality to work convincingly. As an example, let's say that a games developer creates an augmented reality FPS game: ideally you'd want the opponents to duck behind real-life scenery, jump over objects on the ground, and so on. Of course, first-gen AR systems may well be far less complex and more along the lines of how AR is presented on current systems.
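The occlusion idea described above boils down to a per-pixel depth test: a rendered pixel is only drawn where the virtual surface is closer to the viewer than the real one. This is a toy sketch of that principle, not anyone's actual implementation - the depth maps and resolutions here are made up for illustration:

```python
import numpy as np

# Minimal sketch (not Valve's implementation): compositing a rendered
# object into a camera image using a Kinect-style depth map. A virtual
# pixel is only drawn where it is nearer than the real-world surface,
# which is what lets rendered enemies "duck behind" real scenery.

def composite(camera_rgb, real_depth, virtual_rgb, virtual_depth):
    """Per-pixel depth test: keep the virtual pixel only where the
    virtual surface is closer to the viewer than the real one.
    Depth arrays are in metres; use np.inf for 'no virtual content'."""
    visible = virtual_depth < real_depth          # boolean mask, HxW
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]
    return out

# Toy 2x2 example: the real wall is 2m away everywhere; the virtual
# object occupies the left column at 1m (in front, so it's drawn) and
# the right column at 3m (behind the wall, so it's occluded).
cam = np.zeros((2, 2, 3), dtype=np.uint8)            # black camera feed
wall = np.full((2, 2), 2.0)                          # real depth map
obj = np.full((2, 2, 3), 255, dtype=np.uint8)        # white virtual object
obj_z = np.array([[1.0, 3.0], [1.0, 3.0]])           # virtual depth
result = composite(cam, wall, obj, obj_z)
print(result[:, :, 0])   # left column drawn, right column occluded
```

In practice a real system would also have to cope with noisy, incomplete depth data and soft edges, but the core test is exactly this comparison.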

In terms of how convincing it could be, and where the potential is for gaming, one of the existing major platform holders offers up some compelling demonstrations using today's technology. This intriguing presentation from Sony shows PlayStation Vita hosting some truly impressive augmented reality demos, including a prototype game.

Vita obviously doesn't have a depth sensor, so instead the view from the camera is scanned for "interest points", which appear to be mapped and tracked in 3D space, perhaps with assistance from Vita's onboard gyroscopes. The more interest points found, the more accurately the 3D world is realised, and objects can then be superimposed: the Vita demo includes a "Dragon's Den" game where holes are mapped to the real-life ground with dragons popping up out of them, the player moving in real life to avoid their attacks.

"Some of the R&D work on augmented reality carried out by Sony for PlayStation Vita has been remarkable, showing some of the potential for Valve's new approach to gaming."

With the view presented on the Vita's own 2D screen, Sony neatly side-steps the other major problem facing Valve in its efforts: how to control these games. As Abrash muses, "what does a wearable UI look like, and how does it interact with wearable input?"

Actual interaction with the augmented scene is one of the least convincing elements of Google's recent Project Glass demo, which sees smartphone functions mapped onto a glasses-based HUD. Aside from some vague hand-waving, the demo offers little in the way of viable solutions for how the user actually accesses the functions on the HUD. It's inevitable that this new hardware is going to require some way of interfacing your hands with the augmented world - somehow a joypad seems rather low-tech for this new era of gaming.

"[Isaac Asimov] says that the ultimate interface to a computer isn't a probe that jacks into your head, it's where you insert your hands into this device. You have so much bandwidth going through your fingers," Sony's Dr Richard Marks, creator of PlayStation Move, once told us. "You have so much fidelity with your fingers and wrists. It's such a high dynamic input."

For gaming we should obviously expect some kind of controller, or "wearable input", as Michael Abrash prefers to call it, and this is an opportunity to revolutionise gaming in itself. Stereoscopic rendering and the notion of full 3D objects literally right in front of your eyes basically demands a means by which the player can interact directly with them. Gloves tracking finger movement may well be the way to access the "bandwidth" that Richard Marks describes and to provide next-gen fidelity in terms of interaction, and there are other cool bonuses too. For example, props you hold in your hand can benefit from rendered overlays: Star Wars Lightsabre remote training, here we come.

"The applications for wearable computing tech go beyond gaming - it makes sense for the hardware to be integrated into next-generation smartphones."

So is the future of gaming as Valve sees it tied exclusively to augmented reality? Well, there's absolutely nothing to stop the real-life view being completely blocked out, or as near as, leaving just the rendered overlay, giving us an effect somewhat akin to what is achieved with the Sony HMZ-T1 personal 3D viewer. Elements in real life, such as the user's hands, could always be rendered in 3D - provided they are being tracked, of course. The advantage of providing this kind of isolated viewpoint is that stereoscopy - while preferable - isn't as essential as it may be with augmented reality, and the levels of immersion are still quite extraordinary: IMAX gaming anywhere is quite a draw. Not only that, it's hard to imagine content-driven AAA-style games working in an augmented reality environment.

And of course, the applications for such a device also extend beyond gaming, as the Google Project Glass video demonstrates rather well. It makes sense to marry a games platform like this into existing smartphone functionality - comms, navigation, voice recognition - which may provide some context to the reports that Apple CEO Tim Cook arrived at Valve HQ last Friday for reasons unknown.

Even if the meeting did happen (there has been no substantiated report, eye-witness account or photo), the notion of Apple recruiting Valve to help with a games console, as some are reporting, seems quite bizarre, bearing in mind that Valve doesn't particularly care for closed platforms, and Apple doesn't really need any help from anyone to implement the iOS platform in a prospective HDTV.

It's more likely that Steam could play a big part in boosting the Mac's grossly under-developed gaming credentials, but clearly the notion of a wearable computer dovetails much more closely with the smartphone business and the concept is perhaps exciting enough to warrant the attention of the CEO of the world's richest technology firm...



Latest comments (11)

Tim Carter Designer - Writer - Producer 9 years ago
'..."help us invent whole new gaming experiences."'

God help us.

Can you just give us a stable PC/console with a set of tools that work all the time? Not part of the time, all the time?

Edited once. Last edit by Tim Carter on 18th April 2012 6:49pm

Ken Varley Owner & Freelance Developer, Writer, Devpac 9 years ago
This ain't gonna happen in my lifetime. Even Photoshop struggles to auto-mask around shapes. A little eye glass ain't gonna think about it.
Aleksi Ranta Category Management Project Manager 9 years ago
"Terminator Vision" might be possible down the road, in 10-20 years, but it won't be available anytime soon at a mass-market price point. The same goes for Google's Project Glass. Those shots of Sergey Brin wearing the glasses, although nice, hide the fact that he is carrying a backpack worth of gear doing the actual computing and such. And I would wager Google is further along in the tech than Valve.

But I guess we will see soon what, if anything, Valve is working on.
Peter Stirling Software Engineer, Firelight Technologies 9 years ago
I read that blog post yesterday. The author of this article seems to have entirely missed the point. Here is a quote:

"To be clear, this is R&D – it doesn’t in any way involve a product at this point, and won’t for a long while, if ever – so please, no rumors about Steam glasses"
@Aleksi: I will bet that the Google glasses connect to a Android powered phone (Bluetooth, or something else). It makes perfect sense really, and the glasses then only have to handle local comms, have a mic, camera, and project the vision display. All the processing is done on the phone.

As far as this article goes, I'm slightly bemused at how easily they dismiss the Google effort. It seems to cover exactly what Steam is trying to do, and more - and they may have a version ready for release in the next 12 months.
Greg Wilcox Creator, Destroy All Fanboys! 9 years ago
Good lord, it's just the next Opti-Grab waiting for a class action lawsuit once someone steps off a curb and gets pancaked by a bus.
Fran Mulhern, Recruit3D 9 years ago
Can't see these happening anytime soon for personal users. Nothing quite screams "mug me!" like a pair of Google glasses. Fantastic use for the military, though - helmet-mounted displays are already the norm in a lot of modern aircraft and helicopters (albeit with the processing being done by the main computer).
Klaus Preisinger Freelance Writing 9 years ago
What happens if Terminator vision is granted to people who are not robots sent to kill from the future:

****Exact lamppost recognized*****
****accessing Valvetube*****
****play video "roxxor23 urinates on lamppost"***
+++exceed roxxor23's dare to earn 10 achievement points+++

We shall live like rockstars fornicating on the streets, living off small advertisement banners in videos, broadcasting our exploits to everybody who is unfortunate enough to own a device picking up on our face or special locations.
Aleksi Ranta Category Management Project Manager 9 years ago
@Michael Shamgar Yes, eventually they will connect to a phone or similar device.
My point was mainly about the maturity of the tech at the moment. We are, in my opinion, discussing things that are way in the future, while thinking that Valve might release something in the near future ("Terminator Vision" as stated in the article).

Yes, doable, but not at a mass-market price in the near future - maybe something for people willing to spend the cash. And with no mass-market penetration, development interest will suffer.
Morville O'Driscoll Blogger & Critic 9 years ago
I wonder what the medical tech benefits of this would be? Super-imposing an X-Ray picture over the patient's lungs or spine, maybe?
Klaus Preisinger Freelance Writing 9 years ago
Why would you spend millions of dollars on equipment to produce a high-resolution picture of internal organs and then project it onto a surface subject to quality loss? There is no additional information gained by superimposing images.

I believe the key feature of "Terminator Vision" is recognition, as seen in the picture chosen by the author of the article. Technology will assist the user in recognizing the significance of the objects in his vision. Starting with landmarks, crossing over to AR games, and into information people are willing to broadcast walking down the street. Think of it as digital tattoos, a new way of sending information beyond wearing fancy clothes and hipster glasses.

I do not think communication is the product being sold, I rather think Terminator Vision is the idea of selling answers.
