
Disney's Stephen Wadsworth

The games president talks strategy, platforms, online safety and the future

GamesIndustry.biz How much of that experience from the Interactive part of the business is shared with the rest of the company?
Stephen Wadsworth

Disney Channel in particular has a very robust consumer insights function, and we work pretty tightly with that group, as we do with our own consumer insights as well. And as we're talking to these kids every day, we're watching their behaviour in our products.

So more and more across the company that kind of insight gets shared and becomes key learning that we can use when we're building our product or changing what we're doing.

GamesIndustry.biz How does Disney approach the subject of online safety? Are the solutions mainly technology-based, is there a large degree of human moderation as well, or is it a case of building things from the ground up with that in mind?
Stephen Wadsworth

It's kind of all of the above. We have some very robust technology for our in-product chat systems that all our virtual worlds share, across all the products. We have filtering technology that watches what's being said - or what players attempt to say, because a lot of it doesn't make it through. But based on what's attempted we are all over that, and will cut people off pretty quickly.

It requires parents' approval for a kid to get to the point where they can do anything outside of the pre-canned phrases - they have to specifically let them type something - but when they do that we're filtering it pretty aggressively.
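The gating Wadsworth describes - pre-canned phrases for everyone, free typing only with parental approval and then only through an aggressive filter - could be sketched roughly like this. Everything here is illustrative (the phrase list, blocked-word list and function names are invented for the example, not Disney's actual system):

```python
# Hypothetical sketch of safe-chat gating: canned phrases are always allowed,
# free-typed text needs parental approval and still passes a word filter.
PRE_CANNED = {"Hi!", "Good game!", "Follow me!"}
BLOCKED_WORDS = {"address", "phone"}  # stand-in for a much larger filter list

def allow_message(text, parent_approved_free_chat):
    """Return True if the message may be sent, False if it is blocked."""
    if text in PRE_CANNED:
        return True                  # canned phrases are always safe
    if not parent_approved_free_chat:
        return False                 # free typing requires parental approval
    words = text.lower().split()
    return not any(w in BLOCKED_WORDS for w in words)
```

In a real system the filter would be far more sophisticated than a word list, but the layering - whitelist first, permission gate second, filter last - matches the approach described.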

On top of that we've got a whole range of moderation, our customer support reps who spend a tonne of time in the games, in these online experiences, watching what's going on.

Part of our technology means that, as soon as somebody gets flagged for trying to say something inappropriate, our reps have some pretty good tech to use - the whole log of what was attempted pops up for them. If they don't like it, you're out for a period of time depending on how severe they think it is.

A note goes out immediately explaining that they've been banned for 24 hours because they attempted to say the following... The message itself pretty much never goes through - just the attempt is enough. And if a kid raises an alert about somebody else's behaviour, the reps will jump in and look at it - but at the same time we have the filtering technology there to ensure it doesn't happen in the first place.
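The moderation loop described above - blocked attempts are logged per player, a rep pulls up the full log, and a ban of rep-chosen length triggers an immediate notice - might look something like this in outline. All names and structures here are assumptions for illustration, not Disney's actual tooling:

```python
# Illustrative sketch of the moderation flow: log blocked attempts,
# let a rep review the full log, and issue a timed ban with a notice.
import time
from collections import defaultdict

attempt_log = defaultdict(list)   # player_id -> list of blocked attempts
bans = {}                         # player_id -> ban expiry timestamp

def record_blocked_attempt(player_id, text):
    attempt_log[player_id].append(text)   # the message never reached chat

def review_and_ban(player_id, hours):
    """Rep reviews the attempt log and bans the player for a chosen duration."""
    log = attempt_log[player_id]          # full history pops up for the rep
    bans[player_id] = time.time() + hours * 3600
    # stands in for the immediate note sent to the player explaining the ban
    return f"Banned for {hours} hours after attempting: {log!r}"

def is_banned(player_id):
    return bans.get(player_id, 0) > time.time()
```

The key design point from the interview is that enforcement acts on the *attempt*, not on a message that reached other players - the filter and the moderation log are separate layers of the same system.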

So we have layers of capability built into this, and it's been fantastic. It's not just about appropriateness and safety between kids within the environment - we also have the ability to watch what a kid says about their own personal life. There have been times where we've had to contact the authorities because a kid has said something that has nothing to do with this world, but suggests there may be an issue here.
