An AI code of ethics is a formal specification that guides both the human design of intelligent AI agents and the autonomous behavior of those systems. The ethics of the first concern, how humans design and deploy such systems, is known as roboethics; the ethics of the second, how the systems themselves behave, is known as machine ethics.
The code of ethics, like the social contract for humans, is designed to support the common good of humans and the world we live in. As such, it seeks to prevent artificial intelligence developers from purposely designing AI systems for the destruction of humans or the habitat that supports us. Similarly, it seeks to ensure that machine behavior does not evolve in ways that threaten humans.
Isaac Asimov, the science fiction writer, foresaw the potential dangers of autonomous AI agents long before their development and formulated the Three Laws of Robotics as a means of limiting those risks. The first law forbids robots from actively harming humans or, through inaction, allowing humans to come to harm. The second law requires robots to obey human orders, except where those orders would conflict with the first law. The third law requires robots to protect their own existence, insofar as doing so does not conflict with the first two laws.
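The essential structure of the laws is a strict priority ordering: the first law overrides the second, which overrides the third. As a purely illustrative sketch (not anything from Asimov or a real robotics system), that ordering can be expressed as a sequence of checks evaluated in priority order. The `Action` class and its boolean flags below are hypothetical assumptions; real systems cannot reduce judgments about harm to simple booleans.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action for a robot to evaluate.

    All flags are illustrative assumptions, not real perceptual inputs.
    """
    harms_human: bool = False           # Law 1: the action directly harms a human
    inaction_harms_human: bool = False  # Law 1: refusing to act lets harm occur
    obeys_human_order: bool = False     # Law 2: the action carries out a human order
    protects_self: bool = False         # Law 3: the action preserves the robot

def permitted(action: Action) -> bool:
    """Evaluate the laws in strict priority order: Law 1 > Law 2 > Law 3."""
    # Law 1: never harm a human, whether by action or by inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # acting is required to prevent the harm
    # Law 2: obey human orders (already known here not to violate Law 1).
    if action.obeys_human_order:
        return True
    # Law 3: self-preservation, only once Laws 1 and 2 are satisfied.
    return action.protects_self

# Example: a human order that would harm someone is refused,
# because Law 1 outranks Law 2; a harmless order is obeyed.
assert not permitted(Action(harms_human=True, obeys_human_order=True))
assert permitted(Action(obeys_human_order=True))
```

The point of the sketch is only the fixed hierarchy: each lower-priority law is consulted only after every higher-priority law has been satisfied.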