Please remember that I am not an expert in anything related but end-userdom, am certainly not qualified to write this up, and no doubt made significant mistakes or unintended misinterpretations. I also don't have reasonable access to resources for the legal research that would clarify my ignorance: the software for that is very expensive, and the nearest access is about two hours away.
KOSA critique session
Portions of this bill are obviously well-intended and perhaps even functional. However, there are two or three glaring and massive flaws in this bill that scream "Kill me now!" There are also a number of things I'd consider less than ideal, but the largest issues, IMO, are the following two.
Covered platform definition is far too broad. This is one of the disastrous impositions of the bill and warrants a special paragraph at the start, as its sweeping nature is that of the "Hot" in "Hot Karl". Unless I missed a clarification somewhere while skimming this bill, it covers every single website, online-capable video game, messenger app, streaming video service, or "online platform that connects to the internet". To be covered it must also be "reasonably likely to be used by a minor", which uses our new magic mystery word, REASONABLE, a word whose legal meaning is often up for opinion (at least until a group or person likely smarter than you or I gets around to telling us what it actually means).

An easy interpretation, however, is that any website not actively verifying the age of its users will run afoul of this reasonability test and risk the expense of a civil court case, since arguing as much is no difficult stretch in any court. In this way the bill's co-sponsors and supporters are being disingenuous when they claim the bill has no outright age-verification requirement for accessing websites; while this is true in the wording of the bill, it is functionally untrue in practice for anyone trying to comply with it. Since state AGs will be flinging shit like this as part of their new AND sweeping enforcement duties, an attempt will almost certainly be made (among throwing many things to see what sticks) by at least some state AGs to effectively make age verification a requirement by arguing for favorable precedent in at least some jurisdictions. Of course this will only amount to a ton of court hours unless the courts' definitions of reasonability align, which would prove the bill's supporters who claimed otherwise to have been dishonest, so there's not much of a silver lining here.
The Enforcement section also requires a reporting system and makes it mandatory to respond to reports within 21 days for sites under (IIRC) 10 million users (and that's it, no further tiers except above and below 10M). This is an absurd requirement, as it is far too easily abused to harass websites and put them at risk of liability, especially if they are not well staffed like a big tech company. There is also an even stricter requirement to respond immediately to reports of imminent harm, though I'm not sure "imminent harm" is specifically defined anywhere, or if it's yet another thing for courts to determine the scope of. Especially nefarious actors could also pair this with a "raid" that floods the site with reportable material.
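The tiering as I read it can be sketched like this; the function name and the shorter window for larger platforms are my own assumptions for illustration, not quotes from the bill:

```python
from datetime import date, timedelta

# Hypothetical sketch of the reporting deadlines as I read them; the names
# and the above-10M window are my assumptions, not bill text.
def response_deadline(received: date, monthly_users: int,
                      imminent_harm: bool) -> date:
    if imminent_harm:
        return received  # reports of imminent harm require an immediate response
    if monthly_users < 10_000_000:
        return received + timedelta(days=21)  # the under-10M tier described above
    return received + timedelta(days=10)  # assumed shorter window above 10M

print(response_deadline(date(2024, 1, 1), 5_000_000, False))  # 2024-01-22
```

Note how little room this leaves a small, volunteer-run site: the same 21-day clock runs regardless of staffing.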
The second major disaster in this bill is the above: vague and undefined reasonability standards of simply astounding breadth, handed off to state AGs for enforcement.
SEC. 3. Duty of care.
(a) Prevention of harm to minors.—A covered platform shall act in the best interests of a user that the platform knows or !reasonably should know! is a minor by taking !reasonable measures! in its design and operation of products and services to prevent and mitigate the following:
The crux of the issue in this (a) is that "reasonably should know" is not clearly defined. If state AGs are involved as detailed in the enforcement section, this basically leaves it to the court system to define the "reasonability" standards throughout this bill. Since state AGs are allowed to bring civil cases in a federal district with jurisdiction, in addition to state courts, there will likely be federal circuit precedent (the circuits each cover multiple states) quite quickly. I think it will be a patchwork across federal districts and circuits until the Supreme Court takes the case(s).
The same problem exists for "reasonable measures": I have no idea how to define it (I did Ctrl+F the term, but it was sadly neglected in the definitions section).
In fact, if you Ctrl+F the document for "reasonable" or "reasonably", you should be absolutely HORRIFIED by the number of times they appear in this bill, though many of them are repeats of the same defined term.
reasonable: 10 matches
reasonably: 23 matches! That is a little less than one usage per two pages of the bill, and that's only this form of the word. Since there are also sections where a reasonability standard is left entirely as a mystery to the universe, the word runs THICK through the sections guilty of this confounding failure to narrowly tailor the law.
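For anyone who wants to reproduce the count, a quick sketch; the filename is a placeholder, and any plain-text copy of the bill will do:

```python
import re

# Count standalone occurrences of each term; the \b word boundaries keep
# "reasonable" from also being counted inside "reasonably" and vice versa.
def count_terms(text: str, terms: list[str]) -> dict[str, int]:
    return {t: len(re.findall(rf"\b{re.escape(t)}\b", text, re.IGNORECASE))
            for t in terms}

# e.g. text = open("kosa.txt").read()  # placeholder filename
sample = "Reasonable measures, reasonably likely, reasonable care."
print(count_terms(sample, ["reasonable", "reasonably"]))
# {'reasonable': 2, 'reasonably': 1}
```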
(2) Patterns of use that indicate or encourage addiction-like behaviors.
This is also not defined and shares the above problems. I myself partook of video games pretty much every day after school in my youth. I am quite certain that would run afoul of this in at least some jurisdictions (likely many, given the ongoing big-money public scare). While this would have meant I would be less able at relatively useless things, such as shooting pixelated planetmans in Planetside, it also would have meant much less time with a medium that taught me a great many useful things, in a format I was interested in and voluntarily participating in.
(b) Limitation.—Nothing in subsection (a) shall be construed to !require! a covered platform to prevent or preclude—
(1) any minor from deliberately and independently searching for, or specifically requesting, content; or
(2) the covered platform or individuals on the platform from providing resources for the prevention or mitigation of suicidal behaviors, substance use, and other harms, including evidence-informed information and clinical resources.
I'd suggest rewriting (b) in better legalese than I've set down here, along the lines of:
(b) Limitation.—Nothing in subsection (a) shall be construed to require a covered platform to provide mandatory and unfettered capability of access, or to prevent or preclude—
SEC. 4. Safeguards for minors.
(a)(1)
(A) Great
(B) Awesome
(C) Quite understandable within reason, which I would suggest includes better definitions. I'd also replace "compulsive" with "coercively encouraged", as I think that better fits what the sentence is attempting, judging from the examples. "Compulsive", I fear, could be used to describe a sincere desire to play video games for more than an arbitrary period, particularly if defined by someone who does not approve of such a fine hobby, and oddly without distinguishing genres, as I would have expected at the least. For example, my initial typing lessons were in 2nd grade of elementary school, using various educational games for improving typing ability (though it was playing online games with only a keyboard that forced me to raise my words per minute, to warn my team or the public about the train of mobs incoming to the zone, and so forth).
(D) Fine. However, I wonder why (i) isn't "set to the strictest possible setting by default". Instead of allowing an "opt out" within (i), I'd suggest that the default be to deactivate the personalized recommendation system (algorithmic and/or advertising, I assume) for known-minor accounts, starting opted out and allowing opt-in. I do warn that while losing the advertising system won't be a hardship, losing search functionality can make some websites difficult or impossible to use, which would make opting in more desirable.
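A minimal sketch of the default I'm suggesting; all names here are mine, not the bill's. Personalization starts off for known-minor accounts and requires an opt-in, while search stays available:

```python
# Sketch of my suggested defaults for known-minor accounts; all names here
# are illustrative, not drawn from the bill.
def default_settings(is_known_minor: bool) -> dict:
    return {
        # personalization defaults to off for minors (opt-in, not opt-out)
        "personalized_recommendations": not is_known_minor,
        "personalized_ads": not is_known_minor,
        # search stays on: losing it can make some sites unusable
        "search": True,
    }

print(default_settings(True))  # minors: personalization off, search on
```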
(E) I'd suggest just banning geolocation of minors outside of emergencies or other extreme circumstances. At the least it could default to off; location is required for things like maps and is sometimes useful when ordering packages, but a minor can probably ask a parent or another adult to use their device for that, which would also give more control to the parent. There could also be a way to re-enable it (perhaps for a window of time before it reverts to off) while keeping the protections. I also advise reading the Techdirt articles on the rare circumstances where too much parental control is itself a risk in unenviable living environments. As to ways to avoid collecting geolocation data for the next big corporate or agency hack while still being able to recall it for emergencies, I have no idea; I am not an expert, so I don't know if there is a way to securely recall geolocation data during an emergency if it is not kept in logs.
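The revert-to-off window I'm imagining could look something like this sketch; the class and method names are mine, purely illustrative:

```python
from datetime import datetime, timedelta

# Sketch of default-off geolocation for minors with a parent-granted
# temporary window that reverts automatically; names are illustrative.
class MinorGeoSetting:
    def __init__(self):
        self.enabled_until = None  # off by default

    def parent_enable(self, hours: float, now: datetime) -> None:
        # a parent grants location access for a limited window
        self.enabled_until = now + timedelta(hours=hours)

    def is_enabled(self, now: datetime) -> bool:
        # after the window passes, the setting reverts to off on its own
        return self.enabled_until is not None and now < self.enabled_until

setting = MinorGeoSetting()
t0 = datetime(2024, 1, 1, 12, 0)
setting.parent_enable(2, now=t0)
print(setting.is_enabled(now=t0 + timedelta(hours=1)))  # True: inside window
print(setting.is_enabled(now=t0 + timedelta(hours=3)))  # False: reverted to off
```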
Sec 5
Nothing glaringly bad here, AFAIK. The requirement for parental notification upon creating an account could be a problem for some young people, however. There should probably be some reasonable alternatives for minors in difficult parental situations, and also to facilitate account creation in reasonable places where a parent is not present, such as school.
SEC. 6. Transparency.
(2) SYSTEMIC RISKS ASSESSMENT.—The public reports required of a covered platform under this section shall include—
This is absurdly broad, and I think it will be vulnerable to being struck down for vagueness. Otherwise, once again, it's a patchwork of whatever political party A doesn't like about party B's voters, in as large a jurisdiction as they can finagle precedent for. This is explicitly written into the bill:
(F) an evaluation of any other relevant matters of public concern over risk of harms to minors.
What I think this means is that whenever a public scare, right or wrong, enters the public arena, every covered platform (basically the whole internet, minus ISPs, who I note are big donors) will have to take time out to write this up. This is probably fine for big tech companies, the supposed target of this bill, but what about websites that lack a massive staff?
(3) "automated detection mechanisms for harms"
Are these free? Do all websites use them? Is this now a mandate? A report must now be written that includes them. A skeptic might say this may lead to data showing a "public need" for client-side scanning or other intrusive techniques, as are being pushed by what appear to be security-software interests lobbying in many capitals.
SEC. 7. INDEPENDENT RESEARCH.
(a)
(2)(A) is not clearly defined. The current standard of anonymized data is reputedly easy to identify people from, as it includes identifiers such as IP addresses. This might be abused through the lack of definition in this bill; notably, it is another usage of the reasonability standard: "reasonably linkable". Maybe that standard already has a defining precedent; I didn't check. Otherwise I think it will vary: some may find nothing more than a person's name or username to be "reasonably linkable", while allowing less obvious information that is provably usable to identify the user.
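As a toy illustration of why "anonymized" identifiers like IP addresses remain linkable (my own example, not anything from the bill): hashing an IPv4 address is not real anonymization, because the address space is small enough to brute-force.

```python
import hashlib

# Toy example: "anonymizing" an IPv4 address by hashing it is reversible
# in practice, because the whole IPv4 space (~4.3 billion values) can be
# enumerated and hashed until a match is found.
def pseudonymize(ip: str) -> str:
    return hashlib.sha256(ip.encode()).hexdigest()

released = pseudonymize("203.0.113.7")  # what a "de-identified" dataset might publish

# The attacker hashes candidate addresses; a /24 here keeps the demo tiny.
candidates = (f"203.0.113.{i}" for i in range(256))
recovered = next(ip for ip in candidates if pseudonymize(ip) == released)
print(recovered)  # 203.0.113.7: the "anonymous" record is linked right back
```

A keyed hash or salt slows this down but doesn't change the principle while the key exists; that is why "reasonably linkable" matters so much as a standard.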
(B) also uses the reasonability standard, and sadly many "covered platforms" already find it quite alright to market user data in ways in which the user can be identified, so the idea of a reasonability standard is already a bad one in practice.
Note: due to (7), this section isn't as bad as it seemed on my first reading.
(3)(A)(i) is fine
(ii) is worrisome because these nonprofits are intricately regulated in some instances and can, in theory at least, be abused powerfully, though this isn't polite to mention. If this is the type of 501(c) I'm thinking of, they can in some circumstances be funded as if they were federal agencies (and, making a probably incorrect presumption, state agencies?) but without the lawful requirements placed on federal agencies. (I think this requires specific statutory authorization, so it is probably not very widespread, but I also would not be surprised if there were a more catch-all statute; I don't know without studying harder.) This is an odd legal grey zone in that there are, AFAIK, still federal circuit splits on whether this constitutes an agency relationship between the nonprofit and the (federal) government. The Supreme Court seems to be in no hurry to clarify the matter, though I haven't been keeping a close eye on it and may have missed something over the past several years.

A second concern was theorized in a by-now relatively old law review article (or something like it) that I read once upon a time, as a rare example of criticism: such nonprofits may be vulnerable to gradual infiltration and co-option, particularly by the religious right and, financially, by security interests, due to an attractive aura of holiness that makes them politically difficult to regulate. Moreover, they can be portrayed in a saintly and untouchable light in order to justify overspending on the government dime (I'd guess this referred to those accepting public funds) in ways that would, in the article's theory, fund an increasingly insular network seeking ever greater authority and funding. If we speculate this can be true at least in theory, perhaps influence through advisory research as well?
(4) is problematic for relying on the vagueness in (3)(A). Though not linked in the bill, I think that when (not if) a state AG politically defines something as a harm to children to justify its silencing, this section mandates access to the internal data of sites with over 10 million users. Unless I'm wrong, this means the organizations (or, if not 501(c)s themselves, allied 501(c)s) pushing this bill for the stated purpose of purging unfavored but legal voices from the internet will have direct access to data at massive scale.
(5) is fine on its face; however, a side effect is that it limits research to 501(c) nonprofits and university researchers, all of whom must be approved due to (7):
(7) This would limit who can use this section to a very winnowed field approved by the Assistant Secretary of Commerce for Communications and Information. The current holder of that office seems a good choice whom I expect would do well, and this cancels some of my previously detailed concerns for this section. However, due to the breadth and vagueness of this bill, I still hold this to be an eventual problem were the bill to pass: if the regulations can be rewritten later (I don't know whether that is feasible), or if approvals are not examined very carefully for political motivations.
(b)
(2)(A) says this research only applies to covered platforms with more than 10,000,000 users, which is a much more reasonable scope that the rest of this bill should largely replicate (after fixing the other issues).
(3) As a whole this seems reasonable, AFAIK, and gives the Assistant Secretary an opportunity to rectify some of the issues I've lamented above. However, once established, are these regulations set in stone, so to speak? As I noted, the Assistant Secretary seems a good choice due to the person currently holding the office.
(c) SAFE HARBOR FOR COLLECTION OF DATA FOR INDEPENDENT RESEARCH REGARDING IDENTIFIED HARMS TO MINORS
I'll withhold judgment on the immunity-from-ToS-violations aspect, other than to say that would be a careful balance to examine. I do appreciate that privacy violations are the one exception to the immunity; however, the bill does not clarify whether these privacy violations must be immediate and knowing, or whether "down the chain" privacy violations count. If it is the former, this (c)'s exception to immunity seems easy to bypass, as current standards for what counts as anonymized are lax enough to match people to the data, and the bill is not clear whether acts of a purchaser of anonymized data that de-anonymize it would incur liability for the initial collector. Perhaps there is precedent on the matter that I, as a non-expert, am unaware of, but I don't think there is. Presumably the allowed non-disclosure agreements would help control that, but I don't think that would be substantial enough.
Section 8 Market Research
This seems fine, except that it would be much better to require an affirmative opt-in rather than enrolling users by default and requiring an opt-out. Default enrollment leads to shenanigans like a "show more options" button hidden somewhere away from a plainly obvious "Next" button, and takes advantage of the general tech illiteracy of the public. I would also like one of these protections for all people. My car is apparently spying on me, according to a recent Mozilla report. I'm not even fancy; I don't know why actually fancy people aren't more concerned by this. I doubt they are exempt.
SEC. 9. AGE VERIFICATION STUDY AND REPORT
This is a section I don't have much criticism for, as it establishes a group to study the issue. I think OS-based verification is the least bad way to go about this, so long as it is unburdensome to both minors and adults, though I am not convinced of the wisdom of the scheme in the first place. I would suggest finding ways to avoid impinging on adult citizens who would prefer anonymity (as much as that is possible anyway) and who would rather not provide information to public or private verifiers, including instant ID or Real ID or whatever (which does not seem bad for many things, but in application to internet free speech it is), for internet use, even as a one-time initial setup. For example, Windows 11 now requires a Microsoft account and an internet connection to complete an OS re-install from recovery (a reset) once updated to the 22H2 version through the mandatory updater. This effectively ruined the OS reset option for me, as I don't have a Microsoft account, refuse to comply with the requirement, and don't always have internet.
For the eventual study, my public comment would be that a plan to make a specific OS for minors, on devices marketed specifically for them, would be a less invasive way to accomplish your ends than mandating that all OS installs require intrusive age or identity verification, which would be a significant problem. Perhaps distinct hardware would make manufacturers balk (though I think a goofy case design could be the only hardware change needed). So that manufacturers can use the same manufacturing lines (though I do like the idea of a Pikachu-shaped cell phone), I'd recommend that these children's OSes be available on removable media that even the most flustered parent can easily install on various specified device types: for example, a USB drive with a "Kindows" installer for a Windows box, or an SD-card installer for Android, Apple, and other phone manufacturers. Alternately, a way to re-install the OS as a children's version could be built into the near-ubiquitous device reset in the settings. Unless certain parts of the internet are firewalled to allow only the children's OS access (or the OS of a user is somehow publicly visible), I don't think this would enable extra spoofing or infiltration of such places, while still accomplishing many of your ends. I think subsidizing parents' costs for this OS may be a good idea. For bonus points, give an install option (with preview descriptions of the games to help a parent decide which to checkbox) to load it up with quality educational games that aren't just bloatware: things like the old MECC/The Learning Company genre, my apologies for not knowing who the big names in that market are nowadays.
Section 10
Read section 12 first; this seems reasonable but hinges on that section. Edit: Sec. 12 seems OK on its face, but the makeup of the group will be determinative.
SEC. 11. ENFORCEMENT
(a) FTC stuff; seems less prone to problems than the following.
(b) ENFORCEMENT BY STATE ATTORNEYS GENERAL
(A) is odd in that it authorizes state AGs to choose a federal jurisdiction as well as a state one, though perhaps that is relatively common in bills of this sort and I don't know. (It is a civil action, but I'd expect DoJ, not state AGs, in federal courts; or state AGs limited to state court for the first bite at the apple with no DoJ; or all DoJ in federal jurisdictions. I'm not wise enough to have a preference.) However, due to the incredibly widespread use of the reasonableness standard throughout this bill, this (A) will give state AGs much faster access to federal circuit courts, for very fast imposition across an even broader area than individual states. With the way the rest of this bill is written, this section seems to allow a blitz through the federal courts in search of rapid precedent-setting decisions. I also note that state AGs tend to be elected (I think in a handful of states they are appointed?) and that granting them sweeping authority in (b) will see at least some handful treat the law as a cudgel against countercultural nonconformity, for populist and/or elitist reasons.
(2) allows the FTC to intervene against roguish state AGs, but only enables it to (i) politely complain and (ii) file a petition to appeal, presumably against the resulting court order?
If that assumption is correct, state AGs can't be reined in without many court hours, even in the most abject of cases. I'm not familiar with the procedure, though, so the filing might be possible earlier than I assume, and if so it would allow the FTC more impact on AGs.
Sec. 12
Callback to the 501(c) concerns above. Also note that whether the advice is as good as it should be will depend on the group's makeup, i.e., don't stack it with $ecurity interests and distinguished but still culture-warring arch-conservative psychologists to the degree that distinct groups with different reasons for similar ends are overrepresented.
Sec. 13: date.
Section 14
This section says the act does not require age verification. But the rest of the bill sets up a situation where age verification is nearly mandatory, or at least a relief-bearing and easily evident course of action, even without precedent, for avoiding suit by state AGs or the FTC were this bill to pass. The section is technically correct in the most disingenuous way. In addition, I am unsure whether the advisory board's findings on such verification standards would technically count as reviewable, or would survive review of this bill's law in front of a judge (or would even be seen as a separate, non-reviewable thing beyond the question of whether the group the bill establishes could be established at all).
SEC. 15. Severability.
I despise severability sections, because they justify (politically, not ethically) throwing everything together in one massive bill that pushes the shit with the salad, and then set the courts to figure it out, with innumerable court hours spent when the courts are already backlogged with real problems, and when cost (lawyers' fees, court fees, etc.) already keeps poor people out of the non-criminal side to some degree.