This week, Twitter announced a new policy giving itself the power to restrict manipulated media shared on the platform. Rolling out on March 5th, 2020, the rules will allow the platform to label tweets found to contain altered or fabricated videos, images and other media, or remove them altogether if they are deemed likely to cause serious harm.

Announced in a series of tweets by Twitter's official account @TwitterSafety on February 4th, the new ruleset, titled "Synthetic and manipulated media policy," is the product of an effort to address synthetic and manipulated media that Twitter has been working on since October 2019.

According to the new policy, Twitter may label content that is significantly and deceptively altered or fabricated, as well as content that is shared in a deceptive manner. Posts that exhibit both of these characteristics have a higher chance of being labeled. Finally, if such content is likely to impact public safety or cause serious harm, it is likely to be removed altogether. Repeat offenders posting content that violates all three criteria may be permanently suspended.

Twitter's announcement describes the measures as follows:

In most cases, if we have reason to believe that media shared in a Tweet have been significantly and deceptively altered or fabricated, we will provide additional context on Tweets sharing the media where they appear on Twitter. This means we may:
* Apply a label to the content where it appears in the Twitter product;
* Show a warning to people before they share or like the content;
* Reduce the visibility of the content on Twitter and/or prevent it from being recommended; and/or
* Provide a link to additional explanations or clarifications, such as in a Twitter Moment or landing page.

While Twitter has not addressed the implications for memes shared on the platform, the effects can be expected to be similar to those of the recently introduced fact-checking system on Instagram. Launched on December 16th, 2019, that system, which relies on manual fact-checking by organizations such as Lead Stories, Agência Lupa and Africa Check, has resulted in certain memes shared on the platform being tagged as "False Information."

If the same holds true for Twitter, memes such as Bernie Sanders reading the Hot Chip and Lie copypasta and Carpe Donktum's notorious edits could end up being flagged as false information.

On Instagram, however, the introduction of the fact-checking system was quick to spawn memes of its own, such as Africa Check memes and posts baiting the automated system into flagging them. So while it is too early to judge how the new system will impact memes that rely on making a danker version of reality, it seems all but certain to spawn at least a few memes of its own.




Comments 2 total

wisehowl_the_2nd

I am of the opinion that fact checking/information policing services are a futile effort doomed to fail. You either implement a blunt automated system that cannot keep up with the nuance of human communication (à la YouTube's content systems), or you have a few too many individuals of a particular ideology who ruin your reputation for being unbiased (à la Snopes). In any case, moderation of a service requires participants to trust the moderation, which I think no system can hope to achieve so long as it relies on automated systems and biased moderators.

6

OppoQuinn

Twitter couldn't even be trusted with something as basic as verifying account identities, since the "verification" system just turned into a badge of honour that gets revoked when a person is out of favour with the company -- regardless of whether that account is still known to be held by that person or not.

4