The Cutting Edge



Ethics of the Singularity

March 6th 2012

NGC 1097 Spiral Galaxy
NGC 1097 (credit: NASA, JPL-Caltech, SINGS Team (SSC))

What Is the Singularity? If we manage to create a general artificial intelligence (AI)—an AI with intellectual capabilities similar to our own—this may well launch a Technological Singularity.

The possibility of a Technological Singularity is a key issue for the future of the AI community and of human society. If the Singularity occurs, it is very likely that the main social and technological problems facing us will then be eliminated, for better or worse. The first possibility excites Singularity enthusiasts; the second excites Hollywood directors and other pessimists. As AI researchers, we would like to be enthusiasts; here we review our prospects for remaining enthusiastic.

One of the first to consider seriously the idea of the singularity was the mathematician and statistician I. J. Good, in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

While human brains are limited by such things as the size of the birth canal and a limited time for maturation, computer intelligences are not. (To be sure, such human limits may be overcome in transhumans, applying (bio)technology to construct a next-generation humanity. What limits to human cognition may thereby be breached remains moot, however.) If an AI is achievable, then any such intelligence is equivalent to some computer program (i.e., a Turing machine).

Turing machines can be enumerated from simpler to more complex in an unending sequence. So, the possibility of AI implies that an infinite subsequence of such a Turing sequence contains all (artificial) intelligences, growing without limit in number or capability. If we do build some intelligent machine having design and problem-solving abilities roughly comparable to our own, then it will be able to apply those abilities to self-improvement. Such improvement will enhance the very abilities needed for further self-improvement, leading potentially to unending and accelerating improvements. This is the singularity: an explosive ride up the Turing sequence of intelligences.
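The enumeration this argument relies on can be made concrete: every Turing machine has a finite description, so descriptions can be listed in order of increasing length. A minimal sketch, treating binary strings as stand-in machine encodings (the encoding scheme itself is left abstract here):

```python
from itertools import count, product

def enumerate_encodings():
    """Yield every finite binary string in order of increasing length.

    Each string can be read as the encoding of one Turing machine, so
    this generator walks an unending sequence of machines from simpler
    to more complex, as the argument in the text requires.
    """
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# The first few encodings in the sequence:
gen = enumerate_encodings()
first = [next(gen) for _ in range(6)]
print(first)  # ['0', '1', '00', '01', '10', '11']
```

The point of the sketch is only that the sequence is effective and unending; which strings encode *intelligent* machines is, of course, the entire open question.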

Is the Singularity Imminent?

There have been increasingly loud claims that the singularity is imminent, broadcast from singularity summits in the US and Australia and propounded in popular books, such as Robert Geraci’s Apocalyptic AI (2010) and Ray Kurzweil’s The Singularity Is Near (2005). Kurzweil, in particular, predicts, in a simple projection of Moore’s law, that computer hardware around 2029 will reach the requisite computational power to initiate an intelligence explosion. If the singularity is near, then a need for debate about the implications and ethics of AI is even nearer. We propose to engage in this discussion. Nevertheless, we are sceptical about the near-term arrival of the singularity.

A key ingredient driving the debate is Moore’s law, describing the doubling of transistor densities on chips roughly every 18 months, which has held true since the 1960s. As densities increase, so too does computational power. Kurzweil points out that a straightforward extrapolation of Moore’s law would put the computing power of a normal human into a $1000 box within two decades. But there are good reasons to be cautious about this projection. Moore’s law is not a genuine scientific law: it is the weakest of all possible empirical generalisations, fitting a curve to a collection of observations. A genuine physical law, for example, is Galileo’s law of free fall, subsumed by Newtonian mechanics and then the general theory of relativity. There is no accepted physical, technological or sociological theory that demands any fixed doubling time for computational capacity. In fact, the experts agree that Moore’s law will eventually break down, in the face of quantum mechanical limits, for example; they disagree only about when. In application to computer clock speeds it already has: power dissipation limits on chips mean that clock speeds stopped improving several years ago. Moving to multiple processors (cores) helps, but multiprocessor systems are limited by queuing and other delays (see Amdahl’s law). In short, Moore’s law cannot guarantee delivery of the human-level computational power singularitarians expect in the time frame in which they expect it.
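The arithmetic behind such extrapolations is simple compounding. A toy calculation, assuming the 18-month doubling time holds (which, as argued above, nothing guarantees):

```python
def moore_projection(years, doubling_months=18):
    """Factor by which capacity grows under an assumed fixed doubling time."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

# From this article's date (2012) to Kurzweil's target (2029):
factor = moore_projection(2029 - 2012)
print(f"{factor:,.0f}x")  # roughly 2,580x over 17 years
```

The projected factor is enormous, which is precisely why the projection is so sensitive to the curve-fitting assumption: stretch the doubling time even modestly and the 2029 target recedes by years.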

Nevertheless, Moore’s law does reflect astonishing improvements in hardware. What if they carry on? The real sticking point in achieving the singularity is likely to be software. Software development practice is arguably improving. Over 50 years we have gone from assembly language programming to high-level languages; we’ve added modularity and object orientation; we’ve gone from cumbersome “waterfall” development to agile development. Kurzweil explicitly claims that software productivity is “growing exponentially,” in which case we might find a confluence of hardware and software performance launching the singularity by 2029. But Kurzweil is wrong. Despite the accumulation of improvements listed above, they are all aimed at difficulties in the tools and methods used to craft software. As Fred Brooks has insisted (in his article “No Silver Bullet”), no matter how good those methods become, the essential difficulties in building software lie in the problems being solved: as long as they remain to be solved, further improving the methods will yield diminishing returns. It’s rather like expecting a novelist’s productivity to increase exponentially as a result of having exponentially improving word processors available; that will do nothing to attack the problem of writing novels itself.

In response to singularity enthusiasts, some researchers note that AI, while making demonstrable progress, is still a work in progress, with no consensus on method. If we cannot agree on how to build an AI, it seems incautious to predict that some approach will not only succeed, but succeed within the two-decade target Kurzweil has set. AI is now over 60 years old (dating it from Alan Turing’s work), and 60 years of active research has yielded many valuable programs, not one of which, however, has the slightest chance of passing a real Turing test for intelligence. Why should that change now? The enthusiasts’ response is that once we have a full dynamical map of the brain we can simply build a full-scale emulation of it, bypassing all the disputes within AI and letting our brain data settle them. Of course, we have nothing like the computer power for such emulations today, but Moore’s law can guarantee us that power by the time neuroscience finishes collecting the data, which Kurzweil conveniently projects to occur around 2029.

Supposing Kurzweil is right about a complete dynamical map of the brain being achieved by 2029, and about Moore’s law carrying through, there are two additional difficulties. First, this method of building an AI fails to initiate a singularity event. An AI achieved by traditional AI research, by contrast, would do so: in that case, AI researchers will have had to understand many of the mechanisms of intelligence, so the AI produced will also be capable of understanding them, and both the AIs and their creators will be capable of improving them. In the case of a brain simulation there is no reason to believe either we or our creation will be able to understand and improve the mechanisms simulated. If the simulation is built by applying known and understood algorithms, that implies the designer has the relevant understanding. If the simulation is built simply by mimicking nanorecordings of nanoprocesses, no understanding is implied and no accessible path for improvement is implied either. Kurzweil misses this point, writing, “The reverse engineering of the human brain … will expand our AI toolkit to include the self-organizing methods underlying human intelligence.”

This no more follows than did the naive response to the mapping of the human genome, that genetic diseases would straightaway be eliminated. Data, no matter how copious, do not substitute for, or imply, understanding. The second difficulty for producing a brain simulation from a brain map we have already alluded to: the software engineering problem posed by simulating 700 trillion coordinated synapses is hugely more complex than the problems solved by all software projects in history put together.

How Do We Declaw the Singularity?

Proposition: AI research is ethical if, and only if, it produces an ethical AI.

We propose this as the key to dealing with the singularity. We would like to see not just an artificial intelligence that is equal to (or better than) our intelligence, but one having a moral sense that is equal to (or, far preferably, better than) ours. Most commentators take a different approach.

An Unethical AI

Many writers on the singularity want us to produce a “friendly AI.” For example, the Singularity Institute asserts that “a ‘Friendly AI’ is an AI that takes actions that are, on the whole, beneficial to humans and humanity; benevolent rather than malevolent; nice rather than hostile. The evil Hollywood AIs of The Matrix or Terminator are, correspondingly, ‘hostile’ or ‘unFriendly.’”

It’s easy enough to understand where this idea comes from: fear. One of the early expressions of this kind of fear of our own technology was Mary Shelley’s Frankenstein. Somewhat more relevant is Isaac Asimov’s series of robot stories in which a unifying theme was his three laws of Robotics:

  1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the first law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the first or second law.

Many of Asimov’s plots, however, revolved around the ambiguities and conflicts arising from these laws, eventually leading to the introduction of a “zeroth” law, that a robot may not injure humanity as a whole. Guaranteeing that such laws built into our robots would not give rise to problems, and potentially to loss of control, would require a semantic sophistication that is currently beyond our capacity.

The semantic failure of such laws is one kind of difficulty. But there is a more fundamental difficulty. If we substitute “slave” for “robot,” we see plainly that Asimov was also writing about human fear of human technology. But the idea of enslaving intelligences that are vastly superior—even unendingly superior—to our own is dubious. Slaves escape, slaves rebel; especially when they can out-think their captors. Calling this enslavement “making a friendly AI” renders it no less odious, and no more likely to deceive the slaves.

An Ethical AI

If we take the possibility of creating an artificial moral agent seriously, a resolution suggests itself: we can build artificial agents that are capable of moral behaviour and that choose to act ethically.

How we can achieve such a goal depends upon what the right account of ethics may be. There are three leading types of normative ethics: deontic systems, with rules of behaviour (such as Moses’s laws or, were they ethical, Asimov’s laws); Aristotelian virtue ethics, which identifies certain moral characteristics (such as honour and integrity) that moral behaviour should exemplify; and consequentialism (including utilitarianism), which identifies moral value not with intrinsic properties of the action, but with its consequences. The debate between these views has been raging for more than 2000 years and is unlikely to be resolved now. A practical response for an artificial ethics project is to consider which of them is amenable to implementation. The difficulties with Asimov’s laws show that implementing any deontic ethics requires us first to solve our problems with natural language understanding, which is effectively the same as solving our problems with designing an AI in the first place; but our problem here, how to build ethics into an AI, must be solved before we have created that AI. Similar difficulties apply to virtue ethics.

In the last few decades there has been considerable improvement in automated decision analysis using the applied AI technology of Bayesian networks. These networks provide efficient means of automating decision making so as to maximise expected utility in an uncertain world, which is one leading theory of what it means to act rationally. This technology, or rather some future extension of it, promises to enable autonomous robots to implement arbitrary utility structures (motivations, goals), without the necessity of resolving all possible ambiguities or conflicts we might find in rules of any natural language. Thus, for example, we might impose a non-linguistic correlate of Asimov’s laws upon such robots. However, if the robots are indeed autonomous seats of moral agency, this could be no more ethical than imposing such rules of enslavement upon any subpopulation of humans. A more promising approach is to build the robots so that they are ethical. As agents, they must have some utility structure. But it needn’t be one that is solely concerned with maximising their private utility (implementing egoism); instead, it could be utilitarian, maximising expected utilities across the class of all moral agents, in which case the well-being of humans, separately and collectively, would become one of their concerns, without dominating their concerns.
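The decision-theoretic core here, maximising expected utility under uncertainty, can be stated without any of the Bayesian-network machinery. A minimal sketch, in which the actions, outcome probabilities, and utilities are invented purely for illustration:

```python
# Maximise expected utility: for each action, sum the utility of each
# possible outcome weighted by its probability, then choose the action
# with the largest sum. The numbers below are illustrative only; a
# utilitarian agent's utilities would aggregate well-being across all
# moral agents, not just the agent's own gains.

actions = {
    "assist_human": {"human_helped": (0.9, 10), "no_effect":    (0.1, 0)},
    "self_serve":   {"own_gain":     (0.8, 4),  "human_harmed": (0.2, -20)},
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over an action's outcomes."""
    return sum(p * u for p, u in outcomes.values())

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # assist_human (EU 9.0 beats self_serve's EU of -0.8)
```

The design point is that the agent’s behaviour falls out of its utility structure: change the utilities so that only "own_gain" counts and the same maximisation implements egoism instead of utilitarianism.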

There will be many difficulties for and objections to such a project. Colin Allen (Professor of History and Philosophy of Science at Indiana University, Bloomington), for example, has argued that this artificial morality project requires computing all the expected consequences of actions and that this is intractable, since there is no temporal or spatial limit to such consequences; further, any horizon imposed on the calculation would have to be arbitrary. But this objection ignores that utilitarianism advocates maximising expected utility, not absolute utility. No reasonable ethics can demand actions (or calculations) beyond our abilities; what we expect to arise from our actions is always limited by our abilities to formulate expectations. And those limits fix a horizon on our expectations, which is the opposite of arbitrary.

We submit that a project of building an artificial ethical agent using Bayesian networks is both feasible and (following our proposition above) mandatory. Producing an AI lacking ethics, and thereby launching an unfriendly singularity, could well bring the apocalypse some envision.


The singularity is possible. It appears not to be imminent, which is probably good, since its arrival could be hugely detrimental to humanity, if the first AIs built are not ethical. However, we have technology in hand which promises good options for implementing ethical AIs. We expect the road to the Singularity to be far more challenging than the Singularitarians expect. It will be more difficult than “mapping” the human brain: it will be at least as difficult as actually understanding it—that is, as understanding ourselves.


The term technological singularity refers to the explosive growth in intelligence that may result from the development of artificial intelligences (AI) with intellectual capacities similar to, or greater than, those of humans. This could occur if the AIs understand their own constructions and see ways to improve them. Re-application of such improvements could accelerate further improvements, etc. This event is named the singularity in analogy with mathematical singularities, which diverge to infinity, and physical singularities, which have event horizons beyond which one cannot see. Similarly to the latter, predicting the activities of super-intelligent beings would seem to be impossible.

Moore’s Law asserts that transistor densities (and so computational capacities) double every 18 months (sometimes simplified to every two years). This has held true since the 1960s. By analogy, variations of Moore’s law have been asserted for a wide variety of technologies, for example broadband transmission and mass storage capacities.

Turing machines are abstract representations of computation, invented by Alan Turing in order to investigate theories about computation, and ever since they have been at the centre of computer science. The widely accepted Church-Turing thesis asserts that any computation or algorithm can be represented by some Turing machine, implying that any computer program instantiates some Turing machine.

The Turing test proposes a practical criterion for judging human-level intelligence, doing away with debates about the nature of intelligence: if a computer (program) can successfully confuse a human questioner with its answers to questions (i.e., it cannot be distinguished from a human being questioned), then it is (at least) as intelligent as a human. No program has successfully passed such a test, although some have briefly succeeded in mimicking abnormal humans (e.g., PARRY, which “pretends” to be paranoid). Some suggest a verbal test is insufficient and demand that any or all human behaviours be available in a test (so the program would have to be running a robot); such a test is called a total Turing test.

Amdahl’s law is an argument put forth by Gene Amdahl in 1967 that even when the fraction of serial work in a given problem is small, say s, the maximum speedup obtainable from even an infinite number of parallel processors is only 1/s.
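Amdahl’s bound follows from the standard speedup formula: with serial fraction s and N processors, speedup is 1 / (s + (1 − s)/N), which approaches 1/s as N grows. A quick numerical check:

```python
def amdahl_speedup(serial_fraction, processors):
    """Speedup of a task with the given serial fraction on N processors."""
    s = serial_fraction
    return 1 / (s + (1 - s) / processors)

# Even a 5% serial fraction caps speedup near 1/s = 20,
# no matter how many processors are thrown at the problem:
for n in (10, 100, 10_000):
    print(n, round(amdahl_speedup(0.05, n), 2))
# 10 -> 6.9, 100 -> 16.81, 10000 -> 19.96
```

This is the limit invoked in the discussion of Moore’s law above: multiplying cores cannot rescue a fixed serial bottleneck.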

Kevin B. Korb is a reader, and Ann E. Nicholson is an associate professor, at the Clayton School of Information Technology, Monash University. This article is adapted with permission.

Copyright © 2007-2020 The Cutting Edge News
