Self-Censoring: Twitter WANTS YOU to be Civil?
Twitter is officially rolling out a new reply feature meant to curb "harmful or offensive" language by making users aware when what they're saying isn't very nice.
This feature is an extension of the "do you want to read this article before retweeting it" prompt Twitter recently added as part of the company's overall "healthy conversation" effort.
This new "self-censor" feature uses algorithms to detect harmful or offensive language and uses a prompt to ask the user to review their reply before posting, pointing out that their response could be considered hateful.
Twitter found that after receiving the prompt, 34% of people revised their initial "hateful" reply or decided not to send it at all, and 11% opted to use less "offensive" language.
Could self-censorship on Twitter actually work?