UNESCO appoints international expert group to draft global recommendation on the ethics of AI

  • Posted By
    10Pointer
  • Categories
    World Affairs
  • Published
    13th Mar, 2020
  • Context

    • Mr. Amandeep Singh Gill, a national of India, was appointed by UNESCO Director-General Audrey Azoulay as one of the 24 members of the international expert group charged with drafting a recommendation on the ethical use of Artificial Intelligence (AI).
  • Background

    • There is currently no global instrument that covers all relevant fields to guide the development and application of AI in a human-centered approach.
    • UNESCO Director-General Audrey Azoulay has therefore appointed 24 of the world’s leading experts working on the social, economic and cultural challenges of artificial intelligence to draft internationally applicable recommendations on ethical issues raised by the development and use of AI.
    • This follows the decision by UNESCO’s 193 Member States, at the 40th session of its General Conference in November 2019, to task the Organization with developing the first global normative instrument on this key issue.
    • UNESCO has accordingly embarked on a two-year process to elaborate this first global standard-setting instrument on the ethics of artificial intelligence.
  • Who are included in the task force?

    • The international expert group, which is composed of women and men from diverse cultural backgrounds and all geographical regions, includes leading scientists and professionals with extensive knowledge of the technological and ethical aspects of AI.
    • This inclusive and multidisciplinary process will include consultations with a wide range of stakeholders, including the scientific community, people of different cultural backgrounds and ethical perspectives, minority groups, civil society, government and the private sector.
  • What will the task force do?

    • During its first meeting, from 20 to 24 April 2020, the group will start examining the complex ethical choices that confront us in the emerging age of AI.
    • The expert group has been tasked with the production of a draft text, which will be presented to various stakeholders at the national, sub-regional and regional levels for their comments this spring and summer.
    • The text will then be submitted to UNESCO’s Member States for adoption at the next General Conference.
  • How will it work?

    • The process will build on the preliminary study completed by UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST).
    • This study emphasizes that no global instrument currently covers all the fields needed to guide the development and application of AI in a human-centered approach.
  • What are the ethical issues regarding AI?

    • What happens after the end of jobs?
      • The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the pre-industrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.
      • For Example: Trucking employs millions of individuals worldwide, many of whom could lose their jobs with the arrival of self-driving trucks. On the other hand, if we consider the lower risk of accidents, self-driving trucks may seem like an ethical choice.
    • How do we distribute the wealth created by machines?
      • Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage. The majority of companies are still dependent on hourly work when it comes to products and services.
      • But by using artificial intelligence, a company can drastically cut down on relying on the human workforce, and this means that revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money.
    • How do machines affect our behaviour and interaction?
      • Artificially intelligent bots are becoming better and better at modelling human conversation and relationships.
      • For Example: In 2014, a chatbot named Eugene Goostman was widely reported to have passed a Turing test for the first time, convincing about a third of the human judges that they had been talking to a human being.
      • This milestone is only the start of an age where we will frequently interact with machines as if they are humans; whether in customer service or sales. When used right, this could evolve into an opportunity to nudge society towards more beneficial behavior.
    • Artificial stupidity. How can we guard against mistakes?
      • Intelligence comes from learning, whether you’re human or machine. Systems usually have a training phase in which they "learn" to detect the right patterns and act on their input. Once a system is fully trained, it goes into a test phase, where it is presented with further examples to see how it performs.
      • Obviously, the training phase cannot cover all possible examples that a system may deal with in the real world, so these systems can be fooled in ways that humans wouldn't be (a minimal code sketch at the end of this list illustrates the idea).
    • Racist robots. How do we eliminate AI bias?
      • Though artificial intelligence is capable of a speed and capacity of processing that’s far beyond that of humans, it cannot always be trusted to be fair and neutral.
      • For Example: Google and its parent company Alphabet are among the leaders in artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.
      • Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.
    • How do we keep AI safe from adversaries?
      • The more powerful a technology becomes, the more it can be used for nefarious ends as well as for good.
      • This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously.
      • Because these fights won't be fought on the battlefield alone, cybersecurity will become even more important.
    • Evil genies. How do we protect against unintended consequences?
      • In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which a wish or goal was specified.
      • For example: Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer – by killing everyone on the planet. The computer would have achieved its goal of "no more cancer" very efficiently, but not in the way humans intended.
    • How do we stay in control of a complex intelligent system?
      • Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.
      • This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.
    • Robot rights. How do we define the humane treatment of AI?
      • While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals.
      • Right now, these systems are fairly superficial, but they are becoming more complex and life-like.
      • Once we consider machines as entities that can perceive, feel and act, it's not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of "feeling" machines?
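
      The "artificial stupidity" point above can be made concrete with a small, hypothetical sketch. The library (scikit-learn), the synthetic one-dimensional data and the choice of model are illustrative assumptions, not anything specified by UNESCO or the expert group. The code simply walks through a training phase, a test phase on similar data, and then a confidently wrong answer on an input the training data never covered.

        # Minimal sketch (illustrative only): training phase, test phase,
        # and a confident prediction on an input far outside the training data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Training phase: the system "learns" patterns from the examples it sees.
        # Synthetic data: class 0 clusters around 1.0, class 1 around 3.0.
        X_train = np.concatenate(
            [rng.normal(1.0, 0.3, 100), rng.normal(3.0, 0.3, 100)]
        ).reshape(-1, 1)
        y_train = np.array([0] * 100 + [1] * 100)
        model = LogisticRegression().fit(X_train, y_train)

        # Test phase: on inputs similar to the training data, it performs well.
        X_test = np.concatenate(
            [rng.normal(1.0, 0.3, 20), rng.normal(3.0, 0.3, 20)]
        ).reshape(-1, 1)
        y_test = np.array([0] * 20 + [1] * 20)
        print(model.score(X_test, y_test))  # close to 1.0 on familiar inputs

        # But the training phase cannot cover every real-world case: an input
        # far outside anything seen in training still receives a confident
        # label, a mistake a human reviewer would immediately question.
        print(model.predict([[-50.0]]), model.predict_proba([[-50.0]]).round(3))

      The model behaves sensibly only within the experience its training data gave it; the same blind spot scales up in real systems, and becomes more costly as the decisions entrusted to them become more consequential.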
  • What is the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST)?

    • The World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) is an advisory body and forum of reflection that was set up by UNESCO in 1998.
    • The Commission is composed of eighteen leading scholars from scientific, legal, philosophical, cultural and political disciplines from various regions of the world, appointed by the UNESCO Director-General in their individual capacity, along with eleven ex officio members representing UNESCO's international science programmes and global science communities.
    • The Commission is mandated to formulate ethical principles that could provide decision-makers with criteria that extend beyond purely economic considerations.
    • Since its inception in 1998, the functioning of COMEST has been guided by its Statutes adopted by the UNESCO Executive Board at its 154th session.
