Sean Li explains the moderation of audio content on a social network

Content shared on social platforms can be inconspicuous, but it can also cause problems and even amount to hate speech, necessitating moderation. So far, companies have regulated their content largely on their own, but more stringent requirements introduced by state legislation could affect parts of the digital communication sector in ways as yet unknown.

Sean Li gave an interview on the topic ‘The Challenges of Audio Content Moderation’ to the hosts of the podcast ‘Arbiters of Truth’ by Lawfare. Li is a lawyer and former Head of Trust and Safety at Discord, a social communications platform offering text, audio, and video chat. At this point, Discord’s larger channels, called servers, have some 800,000 users each, not to mention the many smaller ones. The smaller servers in particular allow for personal interaction, fostering inclusion and a sense of belonging. What is more, conversations can be very creative.

Trust and Safety, spun off from the company’s User Support, manages disputes in communication, which may arise when hate speech or other problematic content is shared. It is “no longer acceptable to sit back and let go,” Li said, explaining that under a ‘safety first’ policy, some users are kicked off the platform. Offenders are excluded not “for fun” but when “mitigating or aggravating factors” apply, for instance relapses by users who had already been in conflict with the company’s regulations. Such cases rendered bans necessary, Li pointed out. Thousands of cases have been recorded and resolved by Discord.

Diverging points of view

Not all users offend with bad intentions. Some might commit missteps “for the right reasons.” In either case, Discord applies its policies and procedures as well as support. Cases of appalling content are not always black and white, but misuse is sanctioned. While moderation is in part a legal endeavor, it is also a humanities issue. People at times disagree about what is acceptable and what is not, owing to diverging points of view or outlooks on the world. Some users simply like polarization.

Li described his unit as similar to a smaller-scale government within an “online economic ecosystem,” but one that in many cases can “react faster” in implementing and executing measures to reduce dissent and enforce principles and standards. In less severe cases, users can block other users. At other times, when alerted by the platform’s filters, safety personnel step in. Thanks to moderation, the platform has gained in value and will continue to do so in the long run, Li emphasized.

Text is easy to analyze; audio is not

Asked to what extent content is mixed with extremist ideas, Li responded that audio content is more difficult to scrutinize than compact text. It requires much data and storage. Whenever bad words and hate messages, such as anti-Semitic remarks, are reported, user conversations are investigated relatively fast. Audio chats and conferences are a newer kind of medium and require new responses from moderators. As platforms grow, additional resources will be devoted to transparency reports, which social networks now publish frequently.

There are many unknowns when it comes to the removal of unacceptable content. How much extremist content actually exists can only be estimated. “Human interaction is complex, and is frequently subtle.” A statement can therefore be a racial slur or a subtle in-group code, but it can also comprise rap lyrics and, on a more positive note, journalism. While government protects its citizens, Li stated his conviction that “fundamental responsibilities should lie with the platforms.” Simply pushing users out cannot be the sole means of keeping conversations safe and comfortable for users to lead.

Some companies “could suffer”

Take-down, the “default model” of content regulation, can be disappointing. At times, conversations are deleted in cases of doubt, with little justification. Whether to combat hate or protect free speech – there are good arguments for either, Li said. Lately, however, politics has leaned towards more rigorous regulation. There has, in parallel, been much press coverage of social media “missteps.” The question of how much content is problematic also depends on whether more manpower is mobilized and more cases are resolved, or on whether routines are improved, Li implied.

Bigger companies can spend more money once stricter government regulations arrive. However, recent developments affect business models and could accelerate a balkanization of the Internet, Li predicted. Moreover, small platforms and those with “extra-territorial reach” could suffer under stricter content regimes. Still, self-regulation will continue to be practiced. Exactly what effects regulation will have on the web “at large” remains unclear for now.

Thorsten Koch, MA, PgDip
23 April 2021

Listen to the podcast at:
https://www.lawfareblog.com/lawfare-podcast-challenges-audio-content-moderation

