🤖 AI Summary
Culture critic Zeba Blay warns that a new wave of hyperreal AI “influencers” — lifelike Black female avatars populating TikTok, Instagram and YouTube — is reviving digital blackface under the guise of engagement-driven content. Using generative video models (examples named include Veo and Sora 2), creators can synthesize viral clips from a few prompts, producing uncanny faces that perform caricatured Black femininity, use tokenized AAVE, and push shopping and aspirational-lifestyle scripts. Meta’s 2023 experiment “Liv” — an AI persona with an inconsistent backstory and stereotyped language — exemplifies how these systems reproduce developers’ biases, prioritize engagement over truth, and monetize Black expression without accountability.
The piece situates this as an extension of historical minstrelsy and a technical problem of dataset and design bias: generative models trained and deployed by overwhelmingly non-Black teams can produce inauthentic, dehumanizing outputs that scale harmful stereotypes. Implications include reputational harm, cultural extraction for profit, distorted labor markets for real creators, and environmental costs from data-center demands. Blay and quoted scholars urge transparency, regulation, and platform responsibility, plus supporting real creators — while noting the fraught ethics when Black people themselves participate to survive an exploitative attention economy.