Many nations have legislation in place to safeguard children from harmful content.

War Against Cyber-bullying and Pornography

In an era when IT behemoths have become the world's rulers, when even the most advanced users cannot keep up with the latest technological developments, and when an outage of the world's largest services is viewed as a catastrophe for all of civilization, user protection has become the main global trend. Many users still lack the means to counter a wide range of risks, from identity theft to an unpleasant and hostile Internet experience. In recent years, unsafe or destructive content that both adults and children face has stood out among the Internet's global problems: anyone can be subjected to bullying and slander, as well as manifestations of hatred or misinformation, all of which take a toll on even the most resilient individuals.


In recent years, there has been a strong push around the world to safeguard users, particularly children. Many nations have legislation in place to protect children from harmful content, and new initiatives on the subject keep emerging. In September of last year, for example, the British government imposed restrictions on gaming platforms, social media platforms, and streaming services. In the near future, IT firms will no longer be allowed to employ "enticing" algorithms and features to push youngsters to consume more material. Automatic video playback, for example, has been prohibited, according to authorities, since it distorts perception and encourages young users to spend all their spare time online.


If a service is found to be in breach of the regulation, the platform faces a punishing fine of up to 4% of its total global revenue. Some of the most prominent social media platforms, such as TikTok, Instagram, and YouTube, have already implemented restrictions. TikTok, a popular app among teenagers, will stop sending notifications after 21:00 to anyone under the age of 16; for young people aged 16-17, this "curfew" begins at 22:00. Instagram has made it more difficult for strangers to contact minors: an adult can no longer message a young user who does not follow them.


For users under the age of 18, YouTube disabled targeted advertisements and automatic video playback.


The developments in the United Kingdom follow the government's decision some years ago to make the country's virtual environment "the safest location for internet communication in the world." The services themselves were made responsible for the quality of material on their sites, as well as for user protection.


To promote digital literacy, platforms were asked to develop frameworks dubbed Safety by Design, which restrict harmful material both at the level of the algorithms that run sites and applications and at the level of users. At the same time, the Online Harms White Paper was released, which detailed the different types of dangerous content: harmful information with a legal definition (for example, terrorism), content without a specific status (intimidation, trolling, justification of self-harm, etc.), and legal content not intended for children. Then-Prime Minister Theresa May chastised the tech behemoths for being reckless with their users; now the platforms would have to work hard to regain their trust.


European governments have been working on social media concerns for a long time. Residents of Italy, for example, have the right to request privately that harmful content be removed from a platform; the platform has 48 hours to comply. That regulation was enacted in 2017, the same year that Germany's Network Enforcement Act, which regulates social media, went into effect.


This approach to working with services is called voluntary-compulsory because the categories of dangerous and banned information are not spelled out, yet the platforms must still take responsibility for security. Dangerous content can be blocked or removed by the platforms themselves, public groups, or government authorities (in difficult cases, additional proceedings are possible). Violators face significant fines of up to 500,000 euros, and specific types of breaches carry a penalty of up to 5 million euros. As a result, the services keep a close eye on how the regulations are being followed. “Today, many nations are adopting the required measures to protect minors online,” says Alexander Zhuravlev, chairman of the Moscow chapter of the Russian Bar Association's commission on legal assistance for the digital economy.


The regulatory regime in some Asian nations is considerably stricter: since May 2021, Indian providers have been required to moderate material themselves. Each site must also verify the age of its users if its content is restricted to those aged 18 and up. This idea resembles a British proposal for document verification: a few years ago, local authorities suggested a system of "passes" for adult-only sites. Citizens of the United Kingdom were supposed to go through a document check at their local post office before being granted access to such portals. It was subsequently discovered that building the infrastructure for such a system would be extremely costly.


As part of its campaign against gaming addiction, the Chinese government has placed limitations on video games for children. Minors can now only play for an hour a day, and only on Fridays, weekends, and public holidays. Excessive use of gadgets by minors is likewise covered by the current British Online Harms White Paper, falling into the last category of legal but undesirable content. In addition, since January 2021, Chinese users have been able to demand that a site immediately stop publishing data that is potentially harmful to minors.


A law on social network self-regulation has been in force in Russia since February 1, 2021. It requires site owners to monitor published content and block illegal information: for example, posts that defame people on the basis of nationality or race, gender, age, occupation, or place of residence; insults and offensive language; calls for extremism or rioting; and false statistics and other misinformation. The law applies to platforms with more than half a million daily users. In addition to content control, they must open a representative office in Russia and keep a log of user complaints. Experts, however, believe that not all platforms manage to carry out these obligations.



["IT firms have recently given a lot of attention to the problem of safety in terms of safeguarding minors on their websites from unlawful and damaging information. The Alliance for the Protection of Children is a wonderful example of the IT sector working forces to combat dangers in the digital world. However, because the amount of destructive content and unlawful actions against minors is only increasing, it is necessary to collaborate between IT companies, the state, and public organizations, as well as separate regulatory regulation in this area, to effectively counteract and protect minors on the network."]

Rustam Sagdatulin, Director of ROCIT


The alliance that ROCIT director Rustam Sagdatulin mentioned was formed only last September. It is a group of operators, Internet businesses, and media holdings (including VimpelCom, Megafon, Rostelecom, Mail.ru Group, Yandex, and Kaspersky Lab) that have set themselves the goal of building a safe digital environment and fighting new threats. Similar voluntary agreements have been reached on the market before: the so-called anti-piracy pact, signed a few years ago, led to major changes in the consumption of legal content and is a good example.


Members of the new child protection alliance expressed hope that the virtual environment would no longer be a source of danger to children; in a world where the real and virtual worlds are blurring together, this problem is becoming increasingly acute. Now that it has been created, the Alliance pledges to assist with self-regulation and digital literacy, as well as promote media hygiene concepts and build a system of guidelines for responsible conduct by children on the network.


The new organization will probably be tasked with filling in the missing pieces of the mosaic that forms the picture of a more or less healthy virtual world. The new legislation on social networks was expected to place a certain level of responsibility on platforms, taking it away from users, onto whom many services had previously shifted that duty. Even when actual bullying or false information was circulating, administrators and moderators frequently refused to step into user conflicts and come to the rescue in difficult situations. No dramatic changes were expected, of course, but network owners did begin to receive more complaints from users.


At the same time, each user's security remains in their own hands and depends directly on their level of digital literacy. After all, it is not so much about blocking or banning as it is about raising the awareness of people so that they do not publish or share fakes, do not support bullying, and try to behave appropriately in the virtual world.


Meanwhile, officials believe that the majority of material containing potentially harmful information is shared on Facebook. YouTube came in second in this unenviable ranking, while Twitter came in third. The study looked at the kinds of harmful content that courts have deemed illegal, such as fakes, calls to suicide, and extremist propaganda. According to publicly available statistics, Twitter failed to delete 192 items of illegal content and YouTube failed to remove 4,624; nevertheless, the number of such "non-removal" cases is steadily declining, particularly in the areas of drug trafficking and child pornography.


The Federal Service for Supervision of Communications, Information Technology, and Mass Media said at the end of September 2021 that it would create a register of social networks in order to check their compliance with the regulations. The agency underlined that the amendments to the legislation will enable the rapid removal of harmful information, such as so-called trash streams. The register already includes Facebook, Instagram, Twitter, YouTube, VKontakte (VK) and Odnoklassniki (OK), as well as TikTok and Likee.



The author, Sofia Kodochnikova, is a Russian journalist.



[This article first appeared on Lenta.Ru on October 06, 2021]
