Attention must be paid to ethics given the sensitive nature of our research. Twitter is relatively new and is quickly becoming an essential part of society. Users on Twitter may not be aware that research is being conducted on the content they publish, and the company itself is a private entity. There is still debate about who actually owns the content published on Twitter, and who has the authority to moderate that content.
Twitter is a champion of “keeping the conversation healthy.” They give users tools to block other users and to curate their feed to exclude content that they would rather not see. But this does nothing to prevent users from posting abusive content in the first place.
The platform is understandably concerned with protecting users' freedom of expression. If Twitter were to start moderating content before it was posted, thereby preventing abuse from ever being published, it would become the authority on what is and is not acceptable behavior in the public sphere. Twitter's success so far is likely due in part to the fact that it does not censor the content published on its platform; it leaves that job to its users.
But even if we accept that Twitter's hands are tied, there is still a human cost to the manual review that is part of Twitter's reporting process. When an abusive Tweet is reported, it is routed to a real person sitting in an office to be evaluated. We have the technology, and arguably the moral responsibility, to try to remedy this situation so that no one is ever faced with the need to report content, or at least so that we are not employing people to look at abusive content for 40 hours a week.