PS4, Xbox One and the Quest to Escape the Uncanny Valley

Square Enix's Julien Merceron sees the new consoles allowing for better AI, more believable characters in games

Epic Games' Tim Sweeney said recently that in about a decade we should be seeing video game graphics that are "indistinguishable from reality." He went on to say, however, that this huge boost in graphical fidelity will also lead to new problems or will amplify existing ones, particularly when it comes to simulating human intelligence. AI is one of the keys to creating more human-like virtual characters, which could one day enable game developers to exit the uncanny valley, but progress in the field of AI has been somewhat slower than many would like.

Speaking with GamesIndustry International, Square Enix worldwide technology director Julien Merceron said he's hopeful that the next-gen systems will lead to significant advancements in AI, but he agrees that completely escaping the uncanny valley will remain incredibly difficult.

"I still believe it's going to be a long journey before we can sit together and you can show me a game that has nailed it," he said. "It's our duty to make progress on this front in this generation."

PS4 and Xbox One graphics will certainly help add to a game's realism, but initially that next-gen leap may be somewhat lacking. "If improved graphics aren't combined with better animation, all these games are going to dive into the uncanny valley. So it's possible around the launch timeframe that we won't actually see huge improvements in graphics because the developers still have to sort out how they will approach animation. That generally takes more time than upgrading your rendering engine," Merceron noted.

"That's the thing with the uncanny valley - you have to think not only about emotions and facial movements, but at some point we need to have deeper AI"

AI and animation will be hugely important for developers attempting to create characters that can truly engage players. It doesn't take much for an in-game character to feel awkward or distinctly non-human.

"I think the human eye is capable of seeing a lot of subtleties... Static characters will look good very soon, animated non-interactive characters a bit later, and fully interactive characters could take time, especially for some types of games. That's the thing with the uncanny valley - you have to think not only about emotions and facial movements, but at some point we need to have deeper AI. We need to be able to interact with the characters and then there's a new dimension of complexity," Merceron continued.

PS3 and Xbox 360 games at times showcased deeper AI, but on the whole developers may not have been able to devote the resources necessary to really innovate in the field of AI. Merceron said that it wasn't due to a lack of desire, but that differences in hardware required studios to allocate more resources to other areas. Consumers' expectations are incredibly high for even an average game, so it's not like a studio can really skimp on presentation.

"If you look at pure AAA gaming on consoles, working on PS3 hasn't been a walk in the park. The architecture was quite complex and when you spend a lot of resources on fixing things that you probably wouldn't have to focus on if the architecture had been simpler, well then you have lost some opportunities. The more resources you put on trying to understand the architecture, the less you have for other aspects," Merceron acknowledged.

With the PS4 and Xbox One offering more similar, PC-like architectures, Merceron believes some of the roadblocks to AI progress will be removed.

"Definitely the fact that the architectures of the consoles are simpler is probably going to make it easier, along with the fact that we'll have additional resources from a CPU perspective and memory perspective that we can use to present more interesting AI. So I think there's a greater chance that we'll see more focus on AI from developers, and better achievements," he said.


Creating better AI isn't just about coming up with smarter code, however. Merceron stressed that AI is shaped by each game and by each territory in the world, meaning that what American gamers find acceptable could differ from what players in, say, Japan or Brazil expect, and a game's AI must reflect its players' society in order to be believable.

"AI tends to be something that is very game specific in general, like our AI mechanisms in Final Fantasy are very different from Thief and Tomb Raider and Hitman. I still believe that moving forward there are a lot of aspects of AI that need to stay game specific. But what we would like to do is to actually create solid bases for designing NPCs, and it means graphically, from an animation perspective, physics, procedural elements like cloth, hair, and also from a behavior perspective. How can we interact as naturally as possible with virtual characters in a game?" he asked.

"What I'm very interested in these days is the change in expectations from gamers... It used to be really weird to see someone speaking alone in the street with a mobile phone. Now it's completely accepted. With Google Glass, would I really walk down the street with something like this on my head? I don't know. But in a few years from now, that will probably be totally accepted. The evolution of what's socially acceptable is very interesting - having a camera in your living room that watches you and lets you start your console, speaking to your TV - there are tons of things right now that still feel a bit weird. We have to look at our community of players and see what their habits are. What are the things that they would like to do? What are the things that they would not?"

"Maybe today people are not ready to speak through the TV to a character in the game and have the character engaging the player in a conversation, but there are technologies that are starting to support this"

Merceron continued, "So coming back to characters, there are probably some interactions with NPCs that people will not be open to today, but we have to think about the type of experience that players want to have with NPCs. So when we think about AI, maybe today people are not ready to speak through the TV to a character in the game and have the character engaging the player in a conversation, but there are technologies that are starting to support this. Internally, we are setting up milestones of how we want our NPCs to evolve over time and we're carefully watching the evolution around us to see when people will be ready for new interactions."

Advancements in AI won't be possible if studios don't manage their resources properly. AAA game development is already enormously expensive and risky, but in today's connected world, it's becoming easier for developers to have distributed teams around the globe. Merceron sees this as a key to next-gen development. Talent must be able to work in unison on a project no matter the location.

"We've seen the size of the teams growing over time on HD titles and I think the industry needs to continue improving workflows and tools. Now we have real-time editing and live editing in our engine, so we have teams that can work very fast, but as you grow the size of the teams you get more complex operations happening simultaneously, so concurrent editing management eventually starts to need additional enhancements to allow everyone to work comfortably without messing up the work of other people. Enabling studios to have very distributed teams around the world is going to be key," Merceron said.

"You need to be able to work with experts wherever they are. You might have a guy in Boston that doesn't want to leave Boston and he needs to be able to have access to the same resources as the others around the world so he can collaborate on content. That's a challenge we want to tackle because we strongly believe in the fact that in order to move toward designing the best products you need the best people, wherever they are."

One way some companies are streamlining development and mitigating costs is by using one or two proprietary engines across projects and teams. EA, for example, is clearly making more widespread use of DICE's powerful Frostbite engine. That may not be a suitable approach for everyone, however, and Merceron urged caution when it comes to engines.

"If the number one reason for sharing technology is cost reduction, it's a very bad decision. Your number one reason should be the fact that it will lead to better quality in your games," he said.

One of the fears of using the same technology for multiple projects is that your games may start to become too similar. "If you have a great technology used to ship an awesome game and you get another team to use that same technology, instead of starting from nothing, they start from a very solid base... They can use all their resources to add more features and to eventually polish or modify some areas of the pipeline that are specific to their game. If you think from a quality perspective you might win, but if your first goal is to reduce the costs then you might enter a vicious cycle and just end up with 'me too' games," Merceron warned.

"One of the very important aspects of technology sharing for me is that it has to be proven before you share it. For example, when you build a new technology from scratch - like Luminous or Glacier2 - it is very wise to prove it first by building an awesome game with it; then after that you can eventually start sharing it with more teams. Of course, when designing that technology in the first place, you might want to make it such that it will have the potential to be shareable."

None of this is to say, however, that technology can't be a huge asset. Better tools are clearly going to help streamline development and potentially free up those needed resources to work on things like AI.

"Tools are key in relation to production costs and in terms of how fast you can iterate on your content, so it also has an impact on the quality. Time and quality are definitely things that a strong pipeline can solve. And the easier it becomes to craft more complex AI, the more we're going to see great AI. One of the reasons we see limited innovations in some domains is not just related to the simulation aspects, but it's also sometimes because the tools are not easy to manipulate. Some operations are so tricky and complex, and things are difficult to balance, that at some point artists give up on actually going to that level. So tools are a fundamental part of enabling innovation, ambition and also controlling your development costs and your quality," Merceron said.


Latest comments (3)

Cale Barnett Animator 4 years ago
It will be interesting to see where we are sitting with this topic by the end of this coming generation, with all the work already being put into mo-cap and facial animation, coupled with optimized performance on both consoles.
Already we're seeing some amazing work though, take the female doctor from the Quantum Break trailer - that's well past the uncanny valley imo.
Morgan King Animator 4 years ago
That Yasser3D facial capture demo that was floating around a couple months ago seems to have largely solved facial capture with a small rig. If that's where we're at now, it's incredible where we'll be in 5 years.
Tucson K Bagley Junior Artist, Dreamgate Studios 4 years ago
I believe the best way to avoid the Uncanny Valley phenomenon is to stay away from attempting photo-realism and instead move on to more stylized works, or at closest a sort of game-equivalent to hyperrealism. At the moment all we're achieving by maintaining this stubborn obsession with "photo-realism" is making games even less believable, and I find it akin to what rotoscoping does to 2D animation.

Though, comparing it to that sounds like I somehow came to the conclusion that it's the fault of Mo-Cap and Facial-Cap usage, and to be clear I absolutely do not think that is the case in most situations. Rather, in terms of animation, I think we need to stop trying to copy the movements down to the tiniest detail and start focusing more on what makes what we get from the motion more believable, and for graphics in general we need to take some cues from existing mediums. Things can be recognizable, believable and, most importantly, immersive, without being 100% photo-realistic, and best of all this can take most elements of Uncanny Valley out of the equation entirely.
