Are Google and Meta using their own AI effectively?

Rob Leathern’s latest Substack post discusses how Meta and Google are spending billions on new AI compute infrastructure, but questions whether they will use these powerful technologies to better protect users from scams and deceptive ads. Specifically:

  • Meta and Google dominate online advertising, so it's reasonable to expect them to explain how they'll use AI to defend their platforms from bad actors.

  • Scam ads using deepfakes of celebrities appear to be spreading widely on YouTube and Meta's platforms. There are links to recent examples we found ourselves, as well as examples reported by 404 Media.

  • Meta and Google currently use a mix of human review and algorithms to catch policy-violating ads, but AI has advanced significantly in the past 18 months. Rob explains that for just $20/month, a consumer can use ChatGPT to reliably flag likely scam ads (not something most users will do, but see the sketch after this list); the tech giants should be able to do far more.

  • The author agrees that platform protection is an adversarial problem, but argues that Meta and Google should still apply the latest AI to prevent harm, especially given the billions in ad revenue they collect.

  • The perception that these companies profit from deceptive ads despite having the technology to stop them could hurt their brands long-term.

  • The author calls on Meta and Google to explain to the public how they'll leverage powerful new AI models to better protect users from bad ads, even if it reduces their short-term revenue.
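
To make the $20/month point concrete, here is a minimal sketch of the kind of check the post describes: handing an ad's text and destination to a general-purpose LLM and asking for a scam verdict. The model name, prompt wording, helper function, and example ad are all assumptions for illustration; the post doesn't publish an exact setup.

```python
# Minimal sketch of the idea in the post: ask a frontier LLM whether an ad
# is a likely scam. The model, prompt, and example ad below are assumptions,
# not details from the post.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def likely_scam(ad_text: str, advertiser: str, landing_domain: str) -> str:
    """Return the model's verdict ('scam' or 'legit') plus one sentence of reasoning."""
    prompt = (
        "You review online ads for policy violations. "
        "Does this ad look like a scam (e.g. a celebrity deepfake, fake "
        "giveaway, or impersonation)? Answer 'scam' or 'legit', then give "
        "one sentence of reasoning.\n\n"
        f"Advertiser: {advertiser}\n"
        f"Landing domain: {landing_domain}\n"
        f"Ad text: {ad_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption; the post only says "ChatGPT"
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output suits a classification task
    )
    return resp.choices[0].message.content


# Hypothetical example resembling the celebrity-deepfake ads described above.
print(likely_scam(
    ad_text="Elon Musk reveals passive-income platform. Limited spots!",
    advertiser="Quantum Wealth Ltd",
    landing_domain="quantum-wealth-now.example",
))
```

If an individual can run something like this per ad for a $20/month subscription, platforms operating at scale, with access to advertiser history and payment signals the model above never sees, should plausibly be able to do much better.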
