They'll probably try to implement some kind of content recognition or correlation system. You can't check every website, but you can probably do pretty well at generating a suspicion level based on the content of the site. After all, they already do some level of content recognition for targeting ads. Keywords and links to other known problem sites probably go a long way toward generating such a number. If a site's suspicion level is high enough, it could be reviewed manually. If it's higher still, you could even block the ads automatically, pending a formal request to review the site.
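Roughly the kind of thing I have in mind, as a sketch. All the keywords, domains, and thresholds here are made up for illustration; a real system would obviously tune these against actual data:

```python
# Hypothetical suspicion scoring: weight keyword hits and outbound links
# to known problem sites, then bucket the score into actions.
# Every keyword, domain, weight, and threshold below is invented.

KEYWORD_WEIGHTS = {"free download": 3, "crack": 5, "keygen": 5}
KNOWN_PROBLEM_DOMAINS = {"badsite.example", "warez.example"}

REVIEW_THRESHOLD = 5    # queue for manual review
BLOCK_THRESHOLD = 10    # pull ads automatically, pending formal review

def suspicion_score(page_text: str, outbound_domains: set) -> int:
    text = page_text.lower()
    score = sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)
    # Each link to a known problem site adds a fixed penalty.
    score += 4 * len(outbound_domains & KNOWN_PROBLEM_DOMAINS)
    return score

def classify(score: int) -> str:
    if score >= BLOCK_THRESHOLD:
        return "block-pending-review"
    if score >= REVIEW_THRESHOLD:
        return "manual-review"
    return "ok"
```

So a page mentioning "keygen" and linking to a flagged domain would clear the block threshold, while a clean page scores zero and is left alone.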
Videos are a harder problem, but I'm guessing you can do something similar. Recognizing content in the video itself would probably not be worth the trouble of implementing, but you can still do correlation based on the channel it's part of, if any. If a channel is a known troublemaker, or if the video title contains certain words, or if the video has embedded links to known problem videos or sites, you can flag it as suspicious. Videos with more hits could then be prioritized for more frequent manual review.
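The video side would be even simpler, since it's pure correlation with no content analysis at all. Again, the channel IDs, title words, and link list here are placeholders I invented:

```python
# Hypothetical video flagging by correlation only: channel reputation,
# title keywords, and embedded links. No analysis of the video itself.
# All the names and lists below are invented placeholders.

FLAGGED_CHANNELS = {"channel-123"}
SUSPICIOUS_TITLE_WORDS = {"leaked", "full movie"}
KNOWN_PROBLEM_LINKS = {"badsite.example"}

def is_suspicious(channel, title: str, embedded_links: set) -> bool:
    # channel may be None for videos not attached to a channel.
    if channel in FLAGGED_CHANNELS:
        return True
    if any(word in title.lower() for word in SUSPICIOUS_TITLE_WORDS):
        return True
    return bool(embedded_links & KNOWN_PROBLEM_LINKS)
```

Any one signal is enough to flag; the cost of a flag is just a manual review, so erring toward false positives seems tolerable here.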
Actually, doesn't YouTube already do some kind of audio analysis to try to detect if you upload a video containing copyrighted music? It might not be that hard to have it also recognize suspicious phrases or words, although the false positives and negatives would probably be a problem.
The real question still comes down to economics, I guess. Is it worth it to Google to implement something like that? Maybe.