
‘Countering Disinformation and Hate Speech Online’

  • Posted By: 10Pointer
  • Categories: Polity & Governance
  • Published: 27th Jan, 2021
  • Context

    To align the utility of social media platforms with the welfare of citizens, while safeguarding the right to free speech, India’s current regulatory framework needs an overhaul to curb hate speech and fake news online.

  • Background

    • The use of social media for peddling fake news and hate speech is not a new phenomenon.
    • Before the pandemic, episodes of information dumping peaked during elections and socio-political movements, or in attempts to manipulate financial markets.
    • The COVID-19 pandemic has shown how fast and wide information can spread: so fast that the phenomenon was given a name, the “infodemic”.
    • Amidst the COVID-19 crisis, it has become apparent that widespread fake news can threaten public health. Public awareness is key in battling a health crisis.
    • However, if the regulation of misinformation is concentrated in the hands of platforms or government agencies, it becomes susceptible to perception-alteration tactics.

    Example (scrutiny and fact-checking)

    • Facebook, for one, can be a highly powerful tool: it has over 290 million users in India, its largest user base in the world.
    • In recent times, however, various governments have begun scrutinising the platform for what they allege to be its lackadaisical approach to hate speech.
    • In April 2020, Facebook flagged 50 million posts with warning labels; it argued that once content is flagged, 95 percent of end-users do not access it (a simplified sketch of such a flag-and-label workflow appears after this list).
    • Fact-checking organisations are also working to counter fake news campaigns, including, in India, reports about purported “cures” for COVID-19.
    • According to a Reuters report, between January and March 2020 there was a 900-percent increase in fact-checks related to COVID-19.
    • The same report indicates that a mere 20 percent of the total misleading content in that period came from prominent public figures, yet it attracted 69 percent of all engagement.
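
    The following is a minimal, purely illustrative Python sketch of the flag-and-label workflow described above. It is not Facebook's actual system: the verdict values, label text, and function names are assumptions made for illustration only.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Post:
        text: str
        fact_check_verdict: Optional[str] = None   # e.g. "false", "partly_false"
        warning_label: Optional[str] = None
        behind_interstitial: bool = False          # shown only after a click-through warning

    def apply_fact_check(post: Post, verdict: str) -> Post:
        """Record a fact-checker's verdict; if the content is rated false,
        attach a warning label and place the post behind an interstitial."""
        post.fact_check_verdict = verdict
        if verdict in {"false", "partly_false"}:
            post.warning_label = "Independent fact-checkers say this post is misleading."
            post.behind_interstitial = True
        return post

    # Example: a post pushing a purported COVID-19 "cure" is rated false.
    # Per the figures cited above, most users never click past such a warning.
    flagged = apply_fact_check(Post(text="Miracle drink cures COVID-19 overnight!"), "false")
    print(flagged.warning_label, flagged.behind_interstitial)
    ```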
  • Analysis

    To what extent is social media to blame?

    • Vulnerable to abuse: Social media platforms facilitate the sharing of information and enhance connectivity and civic engagement. At the same time, however, they are vulnerable to abuse by malicious actors who use the channels to spread misinformation and hateful and divisive content. Behind the veil of protecting free speech, tech companies in India remain oblivious to such potential misuse.
    • Conflicts: Social media platforms may have democratised the internet, but the same technology can create conflicts as it enables the proliferation of erroneous information at an unprecedented pace.
    • Lack of quick identification: The companies do not have adequate resources to quickly identify such content and remove it.
    • Numerical advantage: Fake news thrives on dissemination through surplus or deficit information models. Under the surplus model, if enough users share the same information, it appears to validate itself by sheer numerical advantage, especially when the gatekeepers of information (such as journalists or politicians) endorse it.
    • Widespread impact: The impact of fake news is amplified by a lack of access to correct information, the limited prominence of fact-checking outlets, its overwhelming volume, or the user’s inability to comprehend its consequences.
    • Higher interaction: Of all the content on these platforms, extremist, fake, and populist material is often found to garner the highest “interaction” numbers.
      • Facebook, for example, took down 40 million misleading posts in March 2020 alone, and another 50 million the following month.
    • Targeted advertisement: The algorithms of these platforms record the user’s past interactions and fill their feed with content matching their identified interests; this facilitates the targeted advertisements from which the platforms earn their revenue (a simplified sketch of such interest-based ranking follows).
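
    The sketch below is a toy illustration of the interest-based feed ranking described above, assuming a simple score of interest match multiplied by predicted engagement. The topic tags, scoring formula, and function names are assumptions for illustration and do not represent any platform's actual algorithm.

    ```python
    from collections import Counter
    from typing import Dict, List

    def build_interest_profile(interaction_history: List[str]) -> Counter:
        """Count how often the user has engaged with each topic tag."""
        return Counter(interaction_history)

    def rank_feed(candidates: List[Dict], profile: Counter) -> List[Dict]:
        """Order candidate posts by interest match times predicted engagement,
        so content matching recorded interests rises to the top of the feed."""
        def score(post: Dict) -> float:
            interest = profile.get(post["topic"], 0)
            return (1 + interest) * post["predicted_engagement"]
        return sorted(candidates, key=score, reverse=True)

    # Toy example: a user who has repeatedly engaged with political content
    # keeps seeing more of it, alongside ads targeted at that interest.
    profile = build_interest_profile(["politics", "politics", "sports", "politics"])
    feed = rank_feed(
        [
            {"id": 1, "topic": "politics", "predicted_engagement": 0.9},
            {"id": 2, "topic": "sports", "predicted_engagement": 0.6},
            {"id": 3, "topic": "cooking", "predicted_engagement": 0.8},
        ],
        profile,
    )
    print([post["id"] for post in feed])   # politics-heavy ordering: [1, 2, 3]
    ```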
  • India’s Regulatory Framework: An Overview

    Fake News

    • There is inadequate regulation of fake news under Indian law.
    • Due to the various types of fake news, their motivations, and the ways they are shared, the regulatory challenge is daunting.
    • To combat fake news, the first imperative is to identify the different forms:
      • ‘Misinformation’ is the inadvertent sharing of false content.
      • ‘Disinformation’, by contrast, is the deliberate sharing of false content with an intent to deceive.
        • Its sub-types are:
          • misleading content
          • imposter content
          • fabricated content
          • false connection
          • false context
          • manipulated content
          • satire or parody
      • The Indian Ministry of Electronics and Information Technology (MeitY) has recognised the potential for misuse of platforms and even broadly defined ‘disinformation’.
      • However, the term is yet to be adopted under the IT Act or any provisions of the penal code.
      • Section 505(1)(b) of the Indian Penal Code and Section 54 of the Disaster Management Act, 2005, both provide broad recourse against cases with severe consequences for public wellbeing; they fall short, however, against the rapid pace of social media.
      • These regulations also lack precedent or uniform application against multiple types of fake news.

    Hate Speech

    • Absolute free speech laws that protect against any type of censorship inadvertently render protection to hate speech as well.
    • In India, hate speech is not extensively restricted; it remains undefined, with no appropriate IT Act provisions or regulatory mechanism for online content.
    • Absent appropriate codes or regulations for intermediaries, those who tend to have a louder voice—such as politicians or celebrities—can harness this capacity to incite anger or divide communities without being threatened by any form of liability.
    • India’s multiple laws on sedition, public order, enmity between groups, and decency and morality, broadly form the country’s jurisprudence on what is known as “hate speech”, without using the term itself.
    • Since Section 66A of the IT Act was struck down as unconstitutional, no provision under the Act currently aims to curtail ‘hate speech’, whether online or offline.
    • The most frequently invoked provisions, Sections 153A and 295A of the Indian Penal Code (IPC), are also inadequate to deal with the barrage of online hate content.
    • The Parliamentary Standing Committee has recommended changes to the IT Act incorporating the essence of Section 153A.
      • The report also suggests stricter penalties than prescribed under Section 153A due to the faster and wider spread of information in online spaces.
      • It advocates, for example, criminalising “innocent forwards” with the same strictness as applies to the originator of the content.
  • How are other countries handling these platforms?

    Many countries have initiated inquiries into the role played by these platforms in spreading extremist, hateful or fake content.

    • Germany, Singapore, and France can now levy significant fines against platforms that fail to restrict illegal content after due process of notice and flagging.
    • The United Kingdom (UK) is debating an Online Harms White Paper.
    • The European Commission has proposed two legislative initiatives, the Digital Services Act (DSA) and the Digital Markets Act (DMA), to create regulatory mechanisms to counter online harms.

    In the United States in early January 2021, platforms like Twitter provided a peek into their ability to counter disinformation, directing end-users to reliable sources and suspending the account of then-President Donald Trump “due to the risk of further incitement of violence.”

  • Challenges/Issues

    • No definition: India’s challenge in building consensus to counter ‘hate speech’ and ‘fake news’ extends to how the terms are understood in the real/offline world. Both remain undefined under any domestic legal mandate, including the IT Act.
    • Ethical-legal gap: The difficult question for any hate speech or fake news legislation is the existing ethical-legal gap, with the executive response departing from a conservative understanding of online spaces and data.
    • Lack of effective regulation: While disruptive technologies evolve rapidly, regulations fail to close the gaps needed to deter unethical behaviour.
    • Lack of an approach to counter manipulation and hate speech: The platforms alone are not equipped to oversee a remodelled approach to countering manipulation and hate speech.
    • Difficulty in removing risky content: Because such content crosses jurisdictions and multiplies easily, taking it down is not a silver bullet against hate speech and fake news.
    • Lack of accountability and transparency: The lack of accountability and transparency calls for a rethinking of social media platforms’ role and structure in order to counter their misuse.
    • No liability: In India, social media platforms are not liable under any rules or regulations. They function in a regulatory vacuum and are not bound by any industry standards for the functions they perform.
  • Framing India’s Approach (Guiding Principles)

    • The Indian response must be driven by four guiding principles:
      • Accountability and transparency in decision-making by tech platforms, state, and non-state actors
      • Consistency and collective will, ensured through inclusive stakeholder engagement in all decision-making processes
      • Respect for human rights standards and the humane application of technology, with incentives for innovative, redesigned tech products that pre-empt online harms and provide safeguards against them
      • Legal certainty for the consistent application and execution of stakeholders’ duties and rights
  • Conclusion

    The evolving nature of online harm necessitates an appropriate response from regulatory bodies. Additionally, the unprecedented nature of the pandemic, compounded by the weaponisation of information-sharing models, benefits a few while negatively affecting large populations. Intervention is therefore necessary. However, any restriction cannot be vaguely or hastily drafted, lest it allow selective and arbitrary application by either tech companies or government authorities. A balance must be found, defining the roles of the various stakeholders in a co-regulatory model.
