Artists who use the human body have long been familiar with censorship online, but recent events have triggered an unsettling trend that has AI artists particularly on edge. Hounded by bias, accused of “sexual solicitation,” and gaslit over shadowbans, artists’ accounts and even careers have lived and died by ever-changing algorithms and fickle moderation. Those who’ve made it through have seen quite the evolution, including tech companies recently being held to account by increased transparency regulations and external bodies like Meta’s Oversight Board. More transparency and accountability mean that users have more insight into how and why they may be actioned. They also shed stark light on the fundamental inequities artists face online.
Artists are accustomed to censorship over “sexual intent” because of their use of the body, whether they are working with sexual themes or not. Besides notorious social media restrictions, artists have been accused of selling pornography on their websites and terminated from payment processors, email platforms, and online marketplaces. Many, such as renowned digital artist Simone Garcia, also known as “cymoonv,” are cornered into submitting to the mislabeling of their artwork in order to continue posting it. She told Hyperallergic that images of her artwork, meant to “subvert” sexualization of the body, could only stay online on various platforms by being continually tagged as “sexual activity,” “harmful,” or “soliciting sex.”
Faced with this dragnet of violations, certain artists have made a project of skirting the censors and appealing to the baser instincts of an internet bent on ignoring artistic intention.
“My artwork has often worked within the algorithm to try to follow the rules while bending them creatively,” explained artist Leah Schrager, whose popular online persona OnaArtist exemplified performative sexiness. The subversive project resonated through the internet and the art world alike, so it struck an odd note when her new body of work, which uses generative AI to create dreamy, abstract images inspired by her art and by prompts from her Instagram comments, received shockingly little engagement.
“Since starting to share my AI images, I’ve found the majority of them get very little reach to [either] non-followers or current followers,” Schrager told Hyperallergic. “Either my followers don’t like my AI work, or Instagram is not letting my AI work get reach.”
In her case, it was the latter. Recent transparency legislation prompted Instagram to introduce “recommendation guidelines” violations — essentially, a shadowban made visible. A violation, such as for supposedly “sexually suggestive” content, similarly stunts a user’s ability to engage with new audiences, and many see less engagement overall. When she received such a violation, her options were to either remain less visible or delete her artwork herself.
The drive for internet regulation, particularly in the name of protecting children online, has put pressure on tech companies to action content that could be interpreted as illicit, or face consequences. Companies’ zealous overcorrection of their previously lax approach has chilled artistic expression, particularly damaging artists who use the body in their work, no matter how unrealistic or artistic that work may be. The effect on an artist’s career can be immediate and lasting, severely limiting their access to opportunities. As Garcia recounted to Hyperallergic, “My [Twitter] account was permanently marked as sensitive, removed from the search tab, and my content hidden even from existing followers. This was the main platform for networking in the crypto art space, so being shadow banned meant less autonomy and opportunities to make a living as a digital artist.”
A concurrent phenomenon has further complicated tech companies’ scramble to adapt to transparency legislation: generative AI. The powerful and game-changing tool for artists is also an ominous instrument of harm. There is increasing — and founded — anxiety over AI-generated illicit material online, particularly Child Sexual Abuse Material (CSAM), which is rapidly proliferating, according to the Internet Watch Foundation. Tech companies are being pressured to show what they are doing to stem the rise of CSAM on their platforms, which, in Meta’s case, includes a pending response to inquiries from the European Union, set against the company’s shameful track record.
But Meta’s content moderation AI, which is capable of detecting how an image is made, also catches non-harmful work made by artists who employ generative AI as a tool — likely why Schrager’s patently unreal artworks, often only vaguely bodily, triggered such severe suppression.
As platforms and governments set their sights on AI-generated illicit imagery, artists would be wise to be concerned; tech companies have done little to integrate artistic perspectives into their moderation, and routinely sacrifice free expression in the name of tracking down harmful material and avoiding penalties.
Advancing technology opens a new world where artists have the opportunity to create art free from conventional judgment. It would be a pity if this opportunity, too, became collateral damage of overzealous moderation and tech companies’ bottom lines.