
ESRB wants to make it easier to harvest kids' personal data | This Week in Business

Ratings board teams up with SuperAwesome and Yoti to get approval for a lower-friction way for parents to approve collection of children's info

This Week in Business is our weekly recap column, a collection of stats and quotes from recent stories presented with a dash of opinion (sometimes more than a dash) and intended to shed light on various trends. Check back every Friday for a new entry.

Is technology that guesses people's age based on their appearance a valid tool to confirm parental consent?

The ESRB is hoping so, as the North American ratings board is asking the US Federal Trade Commission to make facial age estimation one of the approved methods of verifiable parental consent (VPC) under the Children's Online Privacy Protection Act (COPPA).

Joining the ESRB in this request are Yoti, a developer of such technology, and SuperAwesome, which runs the free parental consent management platform Kids Web Services as well as kid-focused advertising and influencer marketing businesses.

The application was first made in June, but when word of it started making the rounds this week, it prompted concerns about a shiny new technology based on a machine learning algorithm being aimed at children (or rather, aimed at preventing children from passing themselves off as adults).

Given the litany of harmful activities already empowered by black box technology that was not created with the explicit goal of causing harm, some pushback should be expected. After all, such technology is already revolutionizing the fields of automobile collisions, legal malpractice, worker exploitation, sexual extortion, eating disorders and union busting AT THE SAME TIME, racist policing, stalking, and so much more.

The ESRB would rather not be associated with any of that, naturally, and once the news of its FTC application started making the rounds, it reached out with a statement.

QUOTE | "First and foremost, this application is not to authorize the use of this technology with children. Full stop. Nor does this software take and store 'selfies' of users or attempt to confirm the identity of users. Furthermore, this application makes no mention of using age estimation to prevent children from purchasing and/or downloading restrictively rated video games, nor do we intend to recommend its use in that way…

"To be perfectly clear: Any images and data used for this process are never stored, used for AI training, used for marketing, or shared with anyone; the only piece of information that is communicated to the company requesting VPC is a 'Yes' or 'No' determination as to whether the person is over the age of 25." – An ESRB spokesperson volunteering comment after reports on the application began circulating.

There's nothing wrong with that statement, but it's striking how many concerns the ESRB is trying to address at once. It wants to soothe people's fears about protecting children, privacy, stricter ratings controls, unethical acquisition of AI model training material, and targeted marketing, to name the most obvious ones. It's a real testament to how many times and in how many ways the public has been abused by tech companies with a well-documented contempt for privacy and properly informed user consent.

This seems like a good place to note that the ESRB's co-applicant SuperAwesome is a subsidiary of Epic Games, which the FTC might remember as the company that had to pay more than half a billion dollars last year to settle charges of using dark patterns to trick players into buying stuff and violating COPPA for (among other things) collecting kids' personal information without bothering to get parental consent.

Would you trust this man with your children's privacy?

While we're at it, we'll also point out that the co-founder and CEO of the other co-applicant, Yoti, is Robin Tombs, who previously co-founded gambling company Gamesys, stepping away in 2019 around the time the company was fined £1.2 million by the UK Gambling Commission for failing to live up to its social responsibilities, ignoring problem gambling red flags among users, and flouting rules meant to prevent money laundering.

Oh, and even if you don't share my view that the ESRB has been woefully negligent when it comes to addressing concerns around loot boxes, it's still a part of the same organization that carelessly mishandled the personal information of thousands of media professionals over several years, subjecting a number of them to threats and harassment, so maybe we should take the applicant's assurances about privacy and safety with a deli counter's worth of salt.

(Obligatory note that the ESRB insists it operates independently from the ESA, but the fact remains that if you pay the former to go through the ratings process, it shows up on the latter's books.)

What does age estimation tech do?

But let's stow the trust issues for the moment, and look at what Yoti's facial age estimation technology does. First off, the company is quite clear it's not facial recognition tech. It can't tell who you are; it just looks at a picture of you and takes a guess at how old you are.

QUOTE | "The system takes a facial image, converts it into numbers, and compares those numbers to patterns in its training dataset that are associated with known ages." - The application to the FTC reassures us all that the computer doesn't really see a person's image, just numbers that represent it, numbers like 1s or 0s or... hey, wait a second!

Yoti has proposed some interesting things apps could do once they know the user is a child, like turning off excessive notifications, turning off geolocation by default, minimizing data collected and not storing it, explaining things in easy-to-understand language, and protecting their data from being used for things not in their interest. Or as they're otherwise known, "things apps should do for every user regardless of age."

But that's not really what this is about. This is about whether age estimation technology counts as parental consent under COPPA that would allow companies to collect, use, and even disclose children's personal information to third parties.

There's no real child-protection angle to this... it's more a question of whether firms will be able to lower the hurdles to exploiting children's personal information

Right now, COPPA has a few approved consent methods, including use of a credit card or driver's license. You could also have users mail or fax in signed parental consent forms (quaint, slow), or have parents call a toll-free number or hop on a video call with trained personnel (not scalable due to human costs).

Realistically, any kid asking a parent to give an app their driver's license or credit card number faces a huge hurdle. The request raises questions about what those details will be used for, whether the parent trusts the app, and so on. It makes it more likely the parent will actually read the app's fine print about what's going on and why it would need these things.

Frankly, it's also too much work for some parents. And I don't say this in a strictly judgmental way; I am just trying to be honest about the human tolerance for hassles as someone who once decided to purchase World Heroes on the Wii Virtual Console, made it all the way to the checkout screen, and then gave up on the idea when I realized I would have to get up and cross my two-bedroom apartment to retrieve my wallet and enter the credit card information. And I probably like World Heroes as much as some (not great) parents like their kids.

Picture of a World Heroes landmine deathmatch where one character in a wrestling ring is completely engulfed in flames. The opponent, a Hulk Hogan look-alike named Muscle Power, has a look of shock on his face
World Heroes' person-on-fire effect was worth a purchase, but not worth the effort of making that purchase

If the kid of those not-great parents asks mom or dad to break out the credit card, they're probably not getting what they want from that app. But if a parent just needs to stare vacantly at a phone for a few seconds, that's a significantly lower hurdle to clear.

There's no real child-protection angle to this because as the ESRB's comment above says, it doesn't even intend Yoti's technology to be used to restrict children's access to playing M-rated games or anything of the sort. Instead, this is more a question of whether companies will be able to lower the hurdles to exploiting children's personal information the same way they do the personal information of adults.

That's the heart of the issue here. But because AI is new and shiny and promising to do things we haven't seen computers do before, there's an ancillary question we also have to ask.

Does age estimation tech actually work?

STAT | 2.9 years – How close Yoti says the tech gets to a person's actual age, on average.

That's not bad, right? It's good enough that Yoti should be able to make ends meet as a carny if the whole verifiable parental consent gig doesn't pan out. A couple more rounds of VC funding and maybe it can run the Guess Your Weight booth as well.

But naturally, there are some caveats.

For one, self-reported figures may be unduly favorable to the technology.

QUOTE | "For a couple of reporters, the estimated age range was right on target, but for others it was off by many years. For instance, it estimated one editor was between the ages of 17 and 21, when they’re actually in their mid-30s." - A CNN story from last year says the network was running into significant problems with Yoti even when it just had several employees over the age of 25 try it out.

For another, Yoti's accuracy numbers assume the users are playing fair, and unless kids today are better behaved than my generation, I don't think they're going to play fair.

The application does state that Yoti has verification checks that ensure there is "a live human face" in the frame to prevent a child from using a still image like one might find on a book cover to verify their age. However, it doesn't detail how it would cope with, say, a kid pointing the phone at a monitor to show the live human face of their favorite Twitch streamer, or the latest AI-generated photorealistic video tech demo.

On a side note, I for one am not looking forward to the arms race between AI tech that proves things and AI tech built to thwart that other AI tech, and I worry we are about to drown in an uncanny valley flooded with sewage.

Back on topic, even if we were to concede that nobody would ever cheat the system and the tech gets an answer in the right ballpark most of the time, there are other issues.

QUOTE | "While bias exists, as is inherent in any automated system, this is not material, especially as compared to the benefits and the increase in access to certain groups of parents." – The joint application to the FTC acknowledges the tech is a wee bit bigoted.

The app performs differently depending on gender and skin tone, with accuracy increasing for men and lighter skin tones

According to Yoti's own numbers, the app performs differently depending on gender and skin tone, with accuracy increasing for men and lighter skin tones.

On a weighted basis, dark-skinned black women between the ages of 25 and 35 were more than four times more likely to be falsely appraised as a child than dark-skinned men of the same age group, and 140 times more likely to be falsely appraised than people of the group with the highest accuracy, white men between the ages of 36 and 60. (Once again, the world caters to me and my nigh-bioluminescent pallor.)

As for false positives, the technology is slightly less tilted in its mistakes on that front. Girls between the ages of 6 and 17 are twice as likely as boys to be mistaken for 25-year-olds. Yoti also gives darker-skinned girls false positives at a slightly lower rate than lighter-skinned girls, while darker-skinned boys get false positives at three times the rate of lighter-skinned boys.

This is not entirely surprising, given that optical sensing technology was giving us racist sinks, webcams, and Xbox 360 peripherals long before people were training algorithms on material scraped from anonymous message boards. (For the record, Yoti says its model has been trained on users of its digital ID app, which is accepted by a variety of businesses and UK government agencies.)

Character witless

Yoti says even with the relative discrepancies around skin tone and gender, the actual number of people for whom the technology would not work is small enough to be immaterial, and it's plenty good enough for some big companies already.

QUOTE | "In November 2022, for example, social media platform Instagram rolled out age assurance using facial age estimation in order to check that users are the age they claim to be when trying to change the age on their social media account. Similarly, Facebook announced it would use facial age estimation to prevent users under 18 years old from accessing its Facebook Dating service." – The application brags that Yoti's age estimation tech has won the confidence of Facebook and Instagram.

Because when you're trying to convince the FTC your technology adequately addresses privacy concerns, it's definitely a good idea to say it's good enough for Facebook and a company owned by Facebook.

QUOTE | "Facebook has repeatedly violated its privacy promises. The company's recklessness has put young users at risk, and Facebook needs to answer for its failures." – FTC Bureau of Consumer Protection director Samuel Levine, announcing in May that the agency was going after the company for privacy failures for the third time because it violated its settlements from the previous two times.

Oh, and this new charge has an extra violation of COPPA thrown in to really hammer home how meaningless Facebook's assessment of the tech should be here.

What happens when Yoti gets it wrong?

Getting back to that relative handful of people who Yoti's tech won't work for, those edge cases still need to be accounted for. And the application says anyone turned away by the tech will be prompted to verify parental consent through one of the existing processes, either by attaching a credit/debit card to the account or by scanning an approved ID like a driver's license or passport.

A picture of a dark-skinned woman holding up a photo ID card at the camera
Yoti also sells a PASS ID Card that might help people like this woman when the facial age estimation tech fails

Given the flaws of Yoti's technology, such backup methods of verification are necessary. When the system fails people because of their gender or the color of their skin, there's at least an alternative way for them to reach the same end point.
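For what it's worth, the consent flow the application describes boils down to a few lines of branching logic. Here's a rough, hypothetical sketch of it; none of these function names come from the application or from Yoti, and the stubs obviously don't do anything real.

```python
# A back-of-the-napkin sketch of the consent flow described in the application.
# All names and stubs here are hypothetical -- this is not Yoti's or the ESRB's code.

def estimated_over_25(face_image) -> bool:
    """Stand-in for the facial age estimation check (the yes/no answer)."""
    return False  # placeholder

def attach_credit_or_debit_card() -> bool:
    """Stand-in for the existing card-based consent method."""
    return False  # placeholder

def scan_approved_id() -> bool:
    """Stand-in for scanning a driver's license or passport."""
    return False  # placeholder

def verifiable_parental_consent(face_image) -> bool:
    # Try the low-friction facial age estimation first.
    if estimated_over_25(face_image):
        return True
    # Turned away by the tech? Fall back to the existing, higher-friction methods.
    return attach_credit_or_debit_card() or scan_approved_id()
```

That's the whole thing: one new low-friction branch sitting in front of everything that already existed.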

This new technology cannot prevent the collection and disclosure of children's personal information; it can only increase it

But having the existing methods of giving consent as fallbacks means this new technology is doing nothing but adding more loopholes through which crafty kids or inattentive parents may give companies approval they shouldn't have. This cannot prevent the collection and disclosure of children's personal information; it can only increase it.

But looking around at the COPPA violations of Epic Games, Facebook, Microsoft, Google, Disney, and others, it seems the problem facing the industry is that too many minors are being taken advantage of like adults, not too few.

I don't see how approving this facial age estimation tech serves the interest of protecting kids' privacy in any way.

What I do see is how approving it would serve the industry's interests in collecting the most personal information from the most vulnerable users with the least friction.

If anything, this industry needs more friction on these matters, not less.

The rest of the week in review

QUOTE | "We want to make sure the atmosphere and the feeling of the game matches Tove's style. But it is a game and gameplay is the main focus so it's kind of tricky to go into the darkest corners of Moominvalley within that format." - Are Sundnes, co-founder of Norway-based developer Hyper Games, talks with us about Snufkin: Melody of Moominvalley and the challenges of adapting Tove Jansson's series to games.

QUOTE | "There's never a reason to comment on [someone's body]. I don't compliment people on their bodies, that's not my business. That's a really easy one to avoid." – We Have Always Lived In The Forest founder Chantal Ryan offers a tip in our article compiling advice about what men can do to stamp out harassment in the games industry.

QUOTE | "From a strategic and business perspective, there are some IPs where the initial spark of the idea comes from the external games team, but we still don't come up with the game design. I think it's important we still ask our developers to come back with a game pitch so that it doesn't become a very work-for-hire relationship." – Netflix VP and head of external games Leanna Loombe talks us through the streaming service's approach to working with non-Netflix developers.

QUOTE | "Who are we going to hire in five years if we don't create trainee projects, if we don't hire juniors, if we don't educate them, if we don't give them an opportunity to gain commercial experience, and so on and so forth?" – In a feature on the state of the developer recruitment market, Values Values founder Tanja Loktionova says the industry's preference for hiring experience has drawbacks in the long run.

QUOTE | "Digital distribution is clearly a positive thing for the games business. Yet I feel in our rush to get there, we are at risk of leaving behind potential customers we wouldn't otherwise have." – With UK supermarket chain Tesco no longer selling games, our own Chris Dring wonders if the sooner-or-later death of physical games will cost the industry access to some mainstream audiences. Speaking of the decline of gaming retail…

STAT | 1 year – How long Diana Saadeh-Jajeh lasted as GameStop's CFO before filing her resignation this week. Her predecessor Mike Recupero also only stuck around for about a year. The retailer terminated its CEO Matt Furlong in June and has not yet announced a replacement or even an interim appointment. "This is fine," as the coffee-drinking dog says.

STAT | 2 million – Copies of Street Fighter 6 sold in its first month of release. That's great, but to give some perspective on the relative size of Capcom's big franchises, the Resident Evil 4 remake sold 3.73 million in its first week on sale earlier this year.

STAT | 100 – Number of CD Projekt Red employees to be given a pink slip between now and the end of March. Eight months' notice (and then presumably severance) is pretty good as far as treatment of laid-off employees goes in this industry. Let's hope CD Projekt has improved its treatment of existing employees to match.

QUOTE | "A proper legend of game development." – Ripstone head of tech Paul Hughes remembers programmer John Gibson, a veteran of UK studios like Imagine Software, Denton Designs, Psygnosis, Warthog Games and Evolution Studios who we found out died this week.

Brendan Sinclair: Brendan joined GamesIndustry.biz in 2012. Based in Toronto, Ontario, he was previously senior news editor at GameSpot.