The Game Developers Conference AI Summit concluded Tuesday evening with a rant session, a rapid-fire series of nine quick talks from various AI experts given a few minutes each to rail against whatever bugged them about the current state of the field.
The rants ranged from practical advice (iterate, show don't tell) to wish lists (games that took advantage of Internet of Things devices, more descriptive labels for jobs in the increasingly diverse and fractured AI space), but arguably the most topical rant was that of Spirit AI character engine product manager Emily Short.
Short's rant was focused on "Stupid Questions About AI". She works on interactive characters with modelled knowledge and feelings, she said, so she finds herself in lots of conversations with members of the public, players, and press that start "from a difficult, or possibly even stupid, space."
"I get questions like, 'Does the AI really understand me?' Well, you tell me how you think human understanding works, and then I'll tell you if I think the AI does that. 'Can I hurt the AI's feelings?' Well, your interactions could trigger a mode representing sadness, so does that count? 'Does the AI ever do something you didn't expect?' The answer of course is yes, because it could have bugs, and they're never cool with that answer!"
But the big question that Short focused on for most of her rant was 'Can the NPC catch you in a lie?' The answer is yes, she said, considering there are simple branching narrative games where NPCs respond to a specific deceptive dialog choice by calling the player out on it.
"But that's clearly not what the asker meant at all," Short said. "What they mean is something more like this."
Short then detailed the underpinnings a game would need to create the scenario the asker is imagining. It would need to model states of the world, so there's a certain objective truth to things. Then it would need an NPC knowledge model of the world; an NPC model of what other NPCs and the player character know of the world; an NPC model of conversation and how statements relate to the NPC's knowledge of the truth and to its model of the player's knowledge; an inference engine to determine the implications of all those models; a narrative model to discard the 99.9 percent of instances where the NPC catches you in a lie too boring to argue about; and a social engine to model when and how the NPC should confront you about your lie.
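To make the layering concrete, here is a minimal, hypothetical sketch of the first few subsystems Short lists: a world-truth store, an NPC belief model, and the NPC's model of what the player knows. All names and structures here are illustrative assumptions, not drawn from any real engine (Spirit AI's or otherwise), and the whole narrative and social layers are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class NPC:
    beliefs: dict = field(default_factory=dict)     # what the NPC believes is true
    player_knows: set = field(default_factory=set)  # facts the NPC thinks the player knows

    def catches_lie(self, fact: str, claimed_value) -> bool:
        """The NPC can catch a lie only if it holds a belief about the fact,
        believes the player knows it too, and the claim contradicts that belief."""
        return (
            fact in self.beliefs
            and fact in self.player_knows
            and self.beliefs[fact] != claimed_value
        )

# Objective world state: the "certain objective truth to things."
world = {"dagger_location": "library"}

guard = NPC()
guard.beliefs["dagger_location"] = world["dagger_location"]  # the guard saw the dagger
guard.player_knows.add("dagger_location")                    # ...and saw the player see it

print(guard.catches_lie("dagger_location", "cellar"))   # True: contradicts shared knowledge
print(guard.catches_lie("dagger_location", "library"))  # False: the statement is truthful
```

Even this toy version shows where the hard parts start: the `player_knows` set is a second-order model (the NPC's beliefs about the player's beliefs), and everything downstream of it, inference, narrative filtering, and social confrontation, is left out entirely.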
"Now, make all of that authorable. And make it run on any machine, ever," Short said, adding, "Somewhere between those two things is the good version of this system. But 'Can NPCs detect lies?' is a lazy question, and it leads to lazy formulations of systems. Because even if you did somehow build the hard version of this, and optimize it, and write an authoring tool for it, it would still not actually be good enough to write a good game with."
Even if someone rigorously conceived a system complex enough to capture all of that, whoever is supposed to be making content would still have to learn how to use it. A good AI system is an instrument on which developers create art, Short said, but the people making content on any sufficiently complex system are learning it as if they were musicians handed a newly invented instrument.
So "Can the NPC catch you in a lie?" is a lazy question. And that's fine, Short said, coming from a random person at a cocktail party who's just trying to make small talk and understand what it is an AI developer does.
"It's a little bit less fine if you're being asked questions of this general nature by journalists," Short said. "And in my opinion, we should really push back on bad questions from journalists however we can, because now more than ever, it's important for the general public to understand better what AI is actually for, and what it can actually do. But there are lots of ways we actually do this to ourselves."
The producer version of the same question is, "How soon can we have NPCs with procedural backstory?" That's tough to answer precisely, Short said, because "procedural backstory" could mean any of a million things; more details are needed before a proper answer can be given. A better approach for producers would be to center the questions on the player, because that's where the complexity is. In the case of the question about catching players in a lie, the player-centric question might be, "Can the player manipulate knowledge to produce different responses?"
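The player-centric framing can be sketched in a few lines: the same NPC line varies with what the player has caused the NPC to believe. This is a hypothetical illustration of the idea, with invented function and fact names; it is not a real design from the talk.

```python
def npc_greeting(npc_beliefs: dict) -> str:
    """Pick a response based on the NPC's current beliefs about the player."""
    if npc_beliefs.get("player_is_thief"):
        return "Guards! Seize them!"
    if npc_beliefs.get("player_returned_dagger"):
        return "Welcome back, friend."
    return "Who are you?"

beliefs = {}
print(npc_greeting(beliefs))               # "Who are you?"
beliefs["player_returned_dagger"] = True   # the player plants a favorable fact
print(npc_greeting(beliefs))               # "Welcome back, friend."
```

The design question is no longer "can the NPC detect lies?" but "which belief changes can the player cause, and which responses should they unlock?", which is a game-design and knowledge-systems question, exactly as Short says next.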
"It sounds like a question about game design and knowledge systems, because it is," Short admitted. "But those are also the questions that will lead us to a well-engineered piece of AI that supports the kind of experience we want the player to have."
Short concluded her talk with the suggestion that developers should ask, and insist on answering, better questions about AI.