There’s a growing push to mandate age verification across popular online services in the U.S. and other countries, but here’s one place that won’t be introducing the rule anytime soon: Australia.
For the past couple of years, Australia—along with the U.K.—has been leading efforts to pass laws requiring websites to actually check the ages of their users before letting them in. Porn sites are obvious targets, but so are the likes of Facebook and Reddit, which have rules stating users must be at least 13 but which are often used by younger kids.
However, the Australian government just decided to pull back from that plan, largely because of an analysis of the practicalities by the country’s e-safety commissioner, which was submitted to the government back in March and finally published yesterday. The gist: There’s currently no way to make online age verification work without introducing new problems.
Pay attention, legislators in the U.S. and elsewhere.
The first and most obvious problem is privacy—there are many good reasons why no one should be keeping registers of who visits which adult site. Then there’s security, as any database of sensitive information is a big risk. But those problems mainly arise when users have to hand over credit card details or some other form of identification.
The alternative is to verify someone’s age without identifying the individual, and that’s where a new breed of biometric services comes into play. The most notable example is probably Yoti’s facial-scanning tool, which Instagram and Facebook Dating have been using for a year, and which makes a pretty good case for being a pro-privacy age-verification solution—it determines age rather than identity, it claims to delete the images as soon as that’s done, and the researchers contracted by the e-safety commissioner concluded that its accuracy was “quite good.”
However, the commissioner’s report also noted that the dataset Yoti used to train its tool “skews towards males, particularly those with light skin tones.” At this stage in the machine-learning game, we should all be able to guess where that’s going: According to the report, Yoti’s technique beats trying to estimate someone’s age from their voice, but it “also carries the potential for bias, including higher accuracy rates for males than females or lighter skinned people compared to those with darker skin.”
“Closing this gap may take time and increased variety within A.I. training sets,” the report said. “These technologies are still evolving, and the global market remains relatively immature.” Yoti disputed the results of the testing, citing a small sample size and technical limitations.
I think most people recognize that, in an ideal world, online age verification would be a good thing; we don’t let kids wander into adult cinemas, and the online world shouldn’t operate according to different rules. But, as France’s privacy authority also warned earlier this year, the solutions that are out there now are risky in other ways. Perhaps it’s just a matter of time before the likes of Yoti can fix these problems, but lawmakers should beware of passing age-verification laws before there are demonstrably safe and inclusive ways to comply.
David Meyer