
AI code of ethics

Contributor(s): Ivy Wigmore

An AI code of ethics, also called an AI value platform, is a policy statement that formally defines the role of artificial intelligence as it applies to the continued development of the human race. The purpose of an AI code of ethics is to provide stakeholders with guidance when they face an ethical decision regarding the use of artificial intelligence.

Isaac Asimov, the science fiction writer, foresaw the potential dangers of autonomous AI agents long before their development and created the Three Laws of Robotics as a means of limiting those risks. In Asimov's code of ethics, the first law forbids robots from actively harming humans or from allowing harm to come to humans through inaction. The second law orders robots to obey humans, unless doing so would conflict with the first law. The third law orders robots to protect themselves, insofar as doing so is in accordance with the first two laws.
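The defining feature of Asimov's code is strict precedence: each law applies only when every higher law is satisfied. As a purely illustrative sketch (the `Action` fields and `permitted` function below are hypothetical, not part of any real robotics API), that priority ordering might be expressed like this:

```python
# Hypothetical sketch: Asimov's Three Laws as a priority-ordered rule check.
# All names here are illustrative assumptions, not an actual standard or API.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False          # acting would injure a human
    inaction_harms_human: bool = False # NOT acting would let a human come to harm
    ordered_by_human: bool = False     # a human ordered this action
    protects_robot: bool = False       # the action preserves the robot itself

def permitted(action: Action) -> bool:
    """Return True if the action is allowed under the Three Laws.

    The laws are checked in strict priority order: a lower-numbered
    law always overrides the ones below it.
    """
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # must act to prevent harm, overriding any order
    # Second Law: obey human orders (only reached if the First Law holds).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, only when the first two laws permit.
    return action.protects_robot

# A human order that would harm a human is refused: the First Law wins.
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False
```

Note that the ordering of the `if` statements is the entire point: reordering them would change which law dominates, which is exactly the kind of precedence question a real AI code of ethics has to settle in prose rather than code.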

Although developers are still in the early stages of AI adoption, it's important for enterprises to take ethical and responsible approaches when creating AI systems. To that end, a non-profit institute founded by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn and DeepMind research scientist Viktoriya Krakovna worked with AI researchers and developers to establish a set of guiding principles, now known as the Asilomar AI Principles. This AI code of ethics states that:

  • The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
  • Investments in AI should be accompanied by funding for research on ensuring its beneficial use.
  • If an AI system causes harm, it should be possible to ascertain why.
  • Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
  • There should be a constructive and healthy exchange between AI researchers and policy-makers.
  • A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
  • Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
  • AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  • Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  • Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  • AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  • People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
  • The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  • AI technologies should benefit and empower as many people as possible.
  • The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
  • Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  • The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
  • Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
  • An arms race in lethal autonomous weapons should be avoided.
This was last updated in December 2018
