It was an ordinary night. In a live TikTok chat on the evening of December 7, 2022, a group moderator called for ethnic cleansing against those she deemed her enemies. More than a hundred people were online when the account creator, Workeinou, told her followers to begin killing civilians solely because of their ethnicity. The Reporter quickly reported the live chat to the platform.
It took TikTok more than 48 hours to ban the account, however. While there is no way to determine whether the account owner lives in Ethiopia, messages like these have the potential to spark violence in the nation of 120 million people, which is already witnessing unprecedented levels of internal conflict.
In the Internet age, ignoring misinformation for even a moment can be deadly, and the steady stream of live conversations and TikTok videos inciting violence is once again among the factors hindering Ethiopians from achieving the peace they crave. Social media is unquestionably not the sole cause of unrest in Ethiopia, but its role in exacerbating an already volatile situation cannot be disputed.
TikTok, which has over a million active users in Ethiopia, is becoming a platform where violence is promoted without censorship, endangering the lives of many. In a world where social media has become a breeding ground for hate speech and misinformation, this may come as no surprise.
In recent years, however, there have been some improvements as a result of pressure from governments across the globe and institutions combating misinformation.
In the case of Ethiopia, Facebook has implemented numerous reforms over the past two years, including the deployment of more than three dozen moderators to monitor activity in the country and take action against accounts disseminating hate speech and fake news. Twitter has also made some progress, but this does not mean the gap has closed.
In the case of TikTok and Telegram, however, which are far worse due to the circulation of graphic content without warnings, little effort has been made to prevent the spread of disinformation and hate speech.
During the conflict in North Ethiopia, Telegram was used to disseminate gruesome images of the dead bodies of fighters from both sides. It is still used by some to share graphic images depicting the situation in West Oromia. Such content would have been banned had it been shared on platforms such as Facebook or Twitter, however quickly those platforms responded.
TikTok is also popular among those who seek to distribute fake news, whether intentionally or not, as well as those who wish to spread violent imagery. Its live chat feature, though predominantly used for entertainment, is regularly exploited by individuals or groups advocating violence to achieve political or personal objectives.
Tolera Fekadu, an expert in fact-checking with 15 years of experience in the media industry, is among those who have witnessed the expanding influence of TikTok in Ethiopia’s social media environment and how its unchecked power is inciting violence in the country.
“Contents with the potential to ignite a civil war in Ethiopia are being circulated on TikTok,” Tolera said, adding, “It is also overlooked by fact checkers, whose attention has been concentrated on texts and postings made on Facebook and Twitter instead.”
Even though the firms that run social media platforms have grown increasingly strict with written posts, they have not yet mastered the art of managing short-form video content. Just last week, for example, several deceptive TikTok clips circulated concerning the ongoing situation in Oromia and the protest at a public school in Addis Ababa. None of these clips carried a label informing users that the content was misleading. Plenty remain in circulation, even though some have been deleted or their account owners banned.
It is a major drawback given the volume of short-form videos flooding every major platform, with big firms like Meta-owned Facebook and Instagram, as well as Google's YouTube, embracing the format to compete with TikTok's rapid growth. As more videos appear on these platforms, there is a greater chance that these short clips will contain misleading or deceptive information.
The fact that even people with few followers can reach audiences tens, hundreds, or even thousands of times larger than their follower counts has made the situation worse, letting videos reach millions of people without being checked, according to Tolera.
“As technology evolves, the fight against misinformation must get stronger, but that’s not happening on platforms like TikTok,” Tolera said, urging organizations that want to stop misinformation to work very hard before the situation spirals out of control.
Ethiopia is not the only country hurt by misleading information spread on social media, including through TikTok. Dozens of children around the world have died taking part in challenges that spread on the same platform.
As the "blackout" challenge became popular on TikTok, kids all over the world were seen choking themselves with household items until they passed out, then filming the rush of adrenaline they felt when they came to and posting the videos on social media. Businessweek gathered information from news stories, court records, and interviews with family members about the deaths of at least 15 children in the US aged 12 or younger over the previous 18 months.
The platform has also recently been accused of ignoring disinformation about the Russia-Ukraine conflict. The popular app, used by over one billion people, has amplified footage of old battles, movie scenes, and even video game combat as if it were live footage from the front lines.
The situation is no different in Ethiopia.
During both the war in North Ethiopia and the internal conflicts in Oromia and other regions, doctored images and campaigns laced with ethnic insults circulated on the platform unchecked.
Human rights activist Befiqadu Hailu has monitored the evolution of social media in Ethiopia, as well as the proliferation of disinformation and hate speech. He argues for an international legal framework that governs the owners of these platforms globally and spells out the repercussions if they violate it.
“It is fortunate that this is already a crime in Ethiopia. The establishment of a proclamation against hate speech and misinformation was a positive step forward, but it has not been utilized, and I fear that it will be mostly used to suppress government opponents,” Befiqadu said.
The Hate Speech and Disinformation Prevention and Suppression Proclamation took effect in March 2020. It requires that "any enterprise that provides social media services should endeavor to suppress and prevent the dissemination of disinformation and hate speech through its platform."
Under the law, social media service providers must act within 24 hours to remove or take out of circulation disinformation or hate speech upon receiving notification of such communications or posts.
“It is not clear who really regulates the social media activities and whether they are respecting the law. The Justice Ministry is also silent on the issue, even though social media outlets are becoming more destructive than ever,” said an expert who participated in the making of the hate speech proclamation. “The Council of Ministers, although it was expected to come up with regulations to outline what is expected from the platforms, has done nothing in this regard.”
Befiqadu and Tolera believe awareness-creation programs must be encouraged to fight misinformation and hate speech on the users' side. "I say civil societies and entities must step up in sensitizing the public on how they should report contents that promote violence," Befiqadu said.
Tolera agrees. “Adding to fact-checking organizations, the platforms should contribute more in creating awareness. For instance, I know TikTok is edging closer to launching a program in Africa, including Ethiopia, against misinformation.”
Cormac Keenan, Head of Trust and Safety at TikTok, wrote in September 2022 that a proactive detection program had launched with fact-checkers who flagged new and evolving claims they saw across the internet. “This allows us to look for these claims on our platform and remove violations. Since starting this program last quarter, we identified 33 new misinformation claims, resulting in the removal of 58,000 videos from the platform.”
Time will tell if TikTok will follow suit in Ethiopia and shield its users from the app’s potentially lethal content.