As Congress Rushes To Force Websites To Age Verify Users, Its Own Think Tank Warns There Are Serious Pitfalls
from the a-moral-panic-for-the-ages dept
Fri, May 5th 2023 09:21am - Mike Masnick
We’re in the midst of a full-blown mass hysteria moral panic claiming that the internet is “dangerous” for children, despite little evidence actually supporting that hypothesis, and a ton arguing the opposite is true. States are passing bad laws, and Congress has a whole stack of dangerous “for the children” bills, from KOSA to the RESTRICT Act to STOP CSAM to EARN IT to the Cooper Davis Act to the “Protecting Kids on Social Media Act” to COPPA 2.0. There are more as well, but these are the big ones that all seem to be moving through Congress.
The NY Times has a good article reminding everyone that we’ve been through this before, specifically with Reno v. ACLU, a case we’ve covered many times before. In the 1990s, a similar evidence-free mass hysteria moral panic about the internet and kids was making the rounds, much of it driven by sensational headlines and stories that were later debunked. But Congress, always happy to announce it has “protected the children,” passed Senator James Exon’s Communications Decency Act, which Exon claimed would clean up all the smut he insisted was corrupting children (he famously carried around a binder full of porn, which he claimed came from the internet, to convince other Senators).
You know what happened next: the Supreme Court (thankfully) remembered that the 1st Amendment existed, and noted that it also applied to the internet, and Exon’s Communications Decency Act (everything except for the Cox/Wyden bit, which is now known as Section 230) got tossed out as unconstitutional.
It remains bizarre to me that all these members of Congress today don’t seem to recognize that the ruling in Reno v. ACLU exists, and that all their bills simply ignore it. But perhaps that’s because it happened 25 years ago and their memories don’t stretch back that far.
But, the NY Times piece ends with something a bit more recent: it points to an interesting Congressional Research Service report that basically tells Congress that any attempt to pass a law targeting minors online will have massive consequences beyond what these elected officials intend.
As we’ve discussed many times in the past, the Congressional Research Service (CRS) is Congress’ in-house think tank, well known for producing nonpartisan, highly credible research that is supposed to better inform Congress, and perhaps stop it from passing obviously problematic bills that members don’t understand.
The report focuses on age verification techniques, which most of these laws will require (even though some of them pretend not to: the liability for failure will drive many sites to adopt age verification anyway). But, as the CRS notes, it’s just not that easy. Almost every solution out there has real (and serious) problems, either in how well it works or in what it means for user privacy:
Providers of online services may face different challenges using photo ID to verify users’ ages, depending on the type of ID used. For example, requiring a government-issued ID might not be feasible for certain age groups, such as those younger than 13. In 2020, approximately 25% and 68% of individuals who were ages 16 and 19, respectively, had a driver’s license. This suggests that most 16 year olds would not be able to use an online platform that required a driver’s license. Other forms of photo ID, such as student IDs, could expand age verification options. However, it may be easier to falsify a student ID than a driver’s license. Schools do not have a uniform ID system, and there were 128,961 public and private schools—including prekindergarten through high school—during the 2019-2020 school year, suggesting there could be various forms of IDs that could make it difficult to determine which ones are fake.
Another option could be creating a national digital ID for all individuals that includes age. Multiple states are exploring digital IDs for individuals. Some firms are using blockchain technologies to identify users, such as for digital wallets and for individuals’ health credentials. However, a uniform national digital ID system does not exist in the United States. Creating such a system could raise privacy and security concerns, and policymakers would need to determine who would be responsible for creating and maintaining the system, and verifying the information on it—responsibilities currently reserved to the states.
Several online service providers are relying on AI to identify users’ ages, such as the services offered by Yoti, prompting firms to offer AI age verification services. For example, Intellicheck uses facial biometric data to validate an ID by matching it to the individual. However, AI technologies have raised concerns about potential biases and a lack of transparency. For example, the accuracy of facial analysis software can depend on the individual’s gender, skin color, and other factors. Some have also questioned the ability of AI software to distinguish between small differences in age, particularly when individuals can use make-up and props to appear older.
Companies can also rely on data obtained directly from users or from other sources, such as data brokers. For example, a company could check a mobile phone’s registration information or analyze information on the user’s social media account. However, this could heighten data privacy concerns regarding online consumer data collection.
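To put the CRS’s licensure numbers in perspective, here’s a quick back-of-the-envelope sketch in Python. The licensure rates are the ones quoted from the report; everything else is just arithmetic on them, showing how many teens a driver’s-license requirement would lock out.

```python
# Back-of-the-envelope: share of teens excluded by a driver's-license
# requirement. The licensure rates come from the CRS report (2020 figures
# quoted above); the calculation is just the complement of each rate.

license_rate = {16: 0.25, 19: 0.68}  # share holding a driver's license

for age, rate in license_rate.items():
    excluded = 1 - rate
    print(f"Age {age}: ~{excluded:.0%} could not pass a license-based age check")

# Output:
# Age 16: ~75% could not pass a license-based age check
# Age 19: ~32% could not pass a license-based age check
```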
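The CRS’s point about AI age estimation struggling with “small differences in age” is easy to see with a toy simulation. The sketch below assumes a hypothetical estimator whose output can be off by up to two years; that error band is made up for illustration (it is not Yoti’s or Intellicheck’s published accuracy), but any nonzero error band produces the same pattern: the gate misfires most for the users closest to the cutoff, which is exactly the population these laws target.

```python
import random

# Toy model of an AI age gate. ESTIMATOR_ERROR_YEARS is a hypothetical
# error band chosen for illustration; it is not a vendor's published figure.
ESTIMATOR_ERROR_YEARS = 2.0
GATE = 18  # minimum age the site must enforce

def estimated_age(true_age: float) -> float:
    """Simulate a face-based age estimate: truth plus uniform noise."""
    return true_age + random.uniform(-ESTIMATOR_ERROR_YEARS, ESTIMATOR_ERROR_YEARS)

def error_rate(true_age: float, trials: int = 100_000) -> float:
    """Fraction of trials in which the gate gets this user wrong."""
    wrong = 0
    for _ in range(trials):
        passed = estimated_age(true_age) >= GATE
        should_pass = true_age >= GATE
        wrong += passed != should_pass
    return wrong / trials

for age in (15, 17, 18, 19, 21):
    print(f"True age {age}: misclassified ~{error_rate(age):.0%} of the time")

# Users well away from 18 are almost always handled correctly; users aged
# 17-19 are wrongly blocked or wrongly admitted a large share of the time.
```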
In other words, just as the French data protection agency found, there is no age verification solution out there that is actually safe for people to rely on. Of course, that hasn’t stopped moral-panicky French lawmakers from pushing forward with a requirement for one anyway, and it looks like the US Congress will similarly ignore both its own think tank and Supreme Court precedent, and push forward with its own versions as well.
Hopefully, the Supreme Court actually remembers how all this works.