
Which next-gen features matter?

Second-screen, motion controls, virtual reality; BioWare Montreal's Ian Frazier tells devs how to figure out which ones are right for them

Hardware has always influenced game design, so new hardware demands new models of design. In a presentation at the Montreal International Game Summit today, BioWare Montreal lead designer Ian Frazier took time away from working on the next installment of Mass Effect to share his views on how the next generation of game machines will open up new design opportunities for developers to explore.

For the purposes of the talk, Frazier laid out what he means by "next-gen," since different people use the term differently: mobile, virtual reality systems, Xbox One, and PlayStation 4. So given that, how does a developer design a game for next-gen platforms?

On the surface, Frazier said the answer to that question is, "Mostly the same way they always have." But after reflecting on the topic a bit longer, Frazier decided that the difference was in the new tools these platforms give developers. It's a problem that has come up in previous generations, but one that developers keep fumbling.

He pointed to the advent of 3D games as an example, saying developers converted everything to 3D whether it fit the game or not. Monkey Island was one franchise he singled out, saying the shift to crude 3D made the series look worse than its 2D predecessors without adding anything to the gameplay. Motion control was another next-gen misstep, Frazier said, shoehorned into games where it didn't really contribute anything. He pointed to using Sixaxis control to balance in the original Uncharted, saying it worked fine but didn't add anything to the core game and was mostly forgettable.

Before a developer looks at next-gen features, they need to understand exactly what their game's vision and player fantasy is, whether that's racing a really fast car, being Batman, or anything else. New features should tie into the central design tenets of the game, and serve to further engage players by playing into the main drives that keep them coming back.

"The fundamentals of what makes a good game haven't changed. Next gen doesn't mean, 'We're throwing it all away. Do it over again!'"

Frazier has a list of eight "drives to play": reasons players are going to want to play your game, and then reasons they'll want to continue playing. The drives are feeling it (escapist immersion), learning it (system mastery), beating it (skill mastery), seeing it all (content), helping your friends (cooperative play), crushing your enemies (competitive play), impressing everyone (peacocking), and making it your own (creation). Some of these drives are tied into one another, and some work best when employed simultaneously. For example, the competitive drive and the skill mastery drive are naturally intertwined, while the drive to create and the drive to impress others can be seen at work in titles like Minecraft.

Frazier then laid out the next-gen tools developers will be using to play on those drives: improved processor speed and memory, motion detection, voice recognition, video sharing, touch screens, second-screen functionality, companion apps, and virtual reality.

Better hardware means games can look prettier, which Frazier said is great, but it also allows for better physics and AI simulations. The added horsepower allows for larger play areas and bigger worlds, and putting more characters on screen lets developers populate those worlds more densely. On the other hand, the higher expectations associated with that capability mean everything is costlier to develop.

For motion detection, the big pro is that players can control their games with natural physical movements, lowering the barrier to entry for non-gamers. Motion control can also improve immersion for physical tasks, if what players do with their hands mirrors what their characters do. There are some drawbacks, however. Motion controls can require certain space and lighting conditions to work, and playing a game like Dance Central with them can be physically demanding. And if you're used to a controller, motion controls can actually be harder to use and feel like a step back.

For voice recognition, Frazier said it's also a natural, accessible interface that helps non-gamers get used to new titles. It can also be used simultaneously with motion or traditional controls, and makes for a potentially more immersive experience. On the other hand, it requires a quiet environment to work well and can bother non-players in the same area, so developers can't rely too much on it.

On the sharing front, Frazier said it can greatly assist in community building. It also gives developers greater visibility into what their players are doing. However, he worried it may break immersion and pull players out of the experience as they wrangle with the controls to share their gameplay videos with others.

The pros and cons of touch screens are largely known by now. They allow for fluid user interfaces, and are inherent to the mobile platform and (sort of) the PS4, via its controller's touchpad. However, there's no touch screen on the Xbox One controller, so developers can't rely on it as a standard feature. That brings up second-screen functionality, which has its own benefits. Freeing up valuable screen real estate by offloading elements to a tablet or phone is great, Frazier said, but some players will struggle to multitask. And again, not everyone has a smartphone or tablet, so it's not a feature developers can bank on. Companion apps share that same concern, but they do allow players to connect with the game experience when away from the console.

Virtual reality has some huge opportunities, specifically the potential for immersion. It also allows camera control without players having to input anything at all. Unfortunately, VR headsets are expensive peripherals, so not everyone will have one, and they can cause disorientation and nausea for some players.

So how to decide which tools to use in next-gen game design? Frazier said each drive has its own correlates among the next-gen features. For example, "Feeling it" is helped by more powerful hardware, motion detection, voice recognition, virtual reality, and possibly second-screen functionality. Frazier suggested a next-gen Wing Commander, where a virtual reality headset lets players look around the cockpit, motion controls allow them to flip key switches in the cockpit, voice recognition lets them communicate with wingmen, and a traditional controller gives them direct control of the flight yoke.

If skill mastery is the primary drive, Frazier said taking advantage of faster hardware will allow developers to push games to the super-smooth 60fps action that competitive gamers demand, while improved motion detection will let developers require more manual dexterity from players, whether it's for dance games or lock-picking minigames. On top of that, the ability to share gameplay video will let those who have mastered a game show off their skills to the world (which obviously ties into the "peacocking" drive as well).

Ultimately, Frazier said it's up to each developer to figure out what next-gen features will work best for their own projects.

"The fundamentals of what makes a good game haven't changed," Frazier said. "Next gen doesn't mean, 'We're throwing it all away. Do it over again!'"

What it does mean is that developers will need to intelligently identify which new features and areas of the game experience they are going to invest in.


Brendan Sinclair

Managing Editor

Brendan joined GamesIndustry.biz in 2012. Based in Toronto, Ontario, he was previously senior news editor at GameSpot in the US.