Meta is preparing to automate a significant portion of the privacy and risk evaluations that accompany new features and updates to its applications, including Instagram and WhatsApp. According to internal company documents cited by NPR, Meta intends to shift up to 90% of its product risk assessments to an AI-powered system.
Currently, these reviews, which evaluate potential harms and privacy implications before product releases, are conducted primarily by human evaluators. The practice stems from a 2012 agreement between the company (then Facebook) and the U.S. Federal Trade Commission, which requires thorough privacy assessments and careful review of any new features or changes.
Under Meta’s proposed AI-centric system, product teams will complete standardized questionnaires describing a new feature or update. The AI will then flag potential risks and return an “instant decision” that spells out the requirements a team must meet before launch.
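For readers curious what a questionnaire-driven triage of this kind might look like, here is a minimal, purely hypothetical sketch in Python. Nothing below is drawn from Meta’s actual system: the questionnaire fields, rules, and decision labels are invented for illustration, and a real deployment would presumably rely on a trained model and far richer inputs rather than a handful of hand-written rules.

```python
# Purely illustrative sketch, NOT Meta's system. It imagines how answers to a
# standardized launch questionnaire might be turned into an "instant decision",
# with anything novel or high-risk escalated to human reviewers.
from dataclasses import dataclass, field


@dataclass
class LaunchQuestionnaire:
    """Hypothetical answers a product team might submit about a new feature."""
    feature_name: str
    collects_new_user_data: bool
    shares_data_with_third_parties: bool
    affects_minors: bool
    changes_existing_privacy_settings: bool


@dataclass
class Decision:
    # Possible outcomes: "approved", "approved_with_requirements", "human_review"
    outcome: str
    requirements: list[str] = field(default_factory=list)


def instant_decision(q: LaunchQuestionnaire) -> Decision:
    """Apply simple triage rules; escalate high-risk signals to humans."""
    # High-risk signals skip automation entirely, mirroring the company's
    # stated intent to keep humans in the loop for non-trivial cases.
    if q.affects_minors or q.shares_data_with_third_parties:
        return Decision("human_review")

    requirements = []
    if q.collects_new_user_data:
        requirements.append("Update the data inventory and retention policy.")
    if q.changes_existing_privacy_settings:
        requirements.append("Notify users and default to the more private setting.")

    if requirements:
        return Decision("approved_with_requirements", requirements)
    return Decision("approved")


if __name__ == "__main__":
    q = LaunchQuestionnaire(
        feature_name="story_reactions_v2",
        collects_new_user_data=True,
        shares_data_with_third_parties=False,
        affects_minors=False,
        changes_existing_privacy_settings=False,
    )
    d = instant_decision(q)
    print(d.outcome, d.requirements)  # approved_with_requirements [...]
```

The sketch makes one design point concrete: the interesting question is not the rules themselves but where the escalation boundary sits, since everything below it never reaches a human reviewer.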
Supporters argue the automated approach will streamline product development and speed up application updates. Critics, however, including at least one former Meta executive who spoke with NPR, worry that leaning too heavily on AI-generated reviews raises risk: automated assessments may fail to catch complex or unforeseen privacy issues, making it more likely that problems surface only after launch.
Responding to questions about the changes, Meta confirmed that adjustments were underway but stressed that automation would apply only to “low-risk decisions.” Human oversight and expertise, the company said, will remain integral, particularly for novel or complex issues, to uphold rigorous privacy and safety standards.