Nevertheless, let me introduce Kismet, a robot head whose social intelligence software and synthetic nervous system were designed with human models of intelligent behavior in mind.
Kismet can be in only one emotional state at a time. However, Dr. Cynthia Breazeal, Kismet's creator, states that Kismet is not conscious, so it does not have feelings. Point taken, but interestingly, the name "Kismet", which has Arabic origins, means "fate" or "destiny" in modern Turkish. Where does that lead?
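The one-state-at-a-time idea can be pictured as a simple state machine. The sketch below is a hypothetical illustration of that constraint, not Kismet's actual architecture; the state names and stimuli are invented for the example.

```python
# A minimal sketch of a one-emotional-state-at-a-time model, loosely
# inspired by the idea behind Kismet. The states and the stimuli that
# trigger them are hypothetical, not Kismet's real implementation.

class EmotionModel:
    # Hypothetical mapping from perceived stimulus to emotional state.
    TRANSITIONS = {
        "praise": "happy",
        "scolding": "sad",
        "surprise": "startled",
    }

    def __init__(self):
        self.state = "calm"  # the single current emotional state

    def perceive(self, stimulus):
        # The model is always in exactly one state: each recognized
        # stimulus replaces the current state rather than adding to it.
        self.state = self.TRANSITIONS.get(stimulus, self.state)
        return self.state

robot = EmotionModel()
print(robot.perceive("praise"))    # happy
print(robot.perceive("scolding"))  # sad
print(robot.state)                 # sad: only one state at a time
```

Because the model holds a single `state` attribute, being "happy" and "sad" simultaneously is impossible by construction, which is the point of the constraint.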
What is AI exactly?
There are three philosophical questions related to AI:
- Is artificial general intelligence possible? Can a machine solve any problem that a human being can solve using intelligence? Or are there hard limits to what a machine can accomplish?
- Are intelligent machines dangerous? How can we ensure that machines behave ethically and that they are used ethically?
- Can a machine have a mind, consciousness and mental states in exactly the same sense that human beings do? Can a machine be sentient, and thus deserve certain rights? Can a machine intentionally cause harm?
Can a machine be intelligent? Can it "think"? Here are some positions on that question:
- We need not decide if a machine can "think"; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.
- “Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.” This conjecture was printed in the proposal for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.
- "A physical symbol system has the necessary and sufficient means of general intelligent action." This is the physical symbol system hypothesis of Allen Newell and Herbert A. Simon, who argued that intelligence consists of formal operations on symbols. Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge.
- Kurt Gödel himself, John Lucas (in 1961) and Roger Penrose (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can consistently see the truth of their own “Gödel statements” and therefore have computational abilities beyond that of mechanical Turing machines. However, the modern consensus in the scientific and mathematical community is that these “Gödelian arguments” fail.
- The brain can be simulated by a machine; because brains are intelligent, a simulated brain must also be intelligent, and thus machines can be intelligent. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software, and that such a simulation would be essentially identical to the original.
- Machines are already intelligent, but observers have failed to recognize it. When Deep Blue beat Garry Kasparov in chess, the machine was acting intelligently. However, onlookers commonly discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence; by that standard, "real" intelligence becomes whatever intelligent behavior people can do that machines still cannot. This is known as the AI Effect: "AI is whatever hasn't been done yet."
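The behavioral criterion behind the Turing test, mentioned in the first position above, can be sketched as a toy imitation game: a judge sees only transcripts from two hidden respondents and tries to tell which one is the machine. The canned questions and answers below are invented stand-ins; a real test uses free-form human conversation.

```python
# A toy sketch of the Turing test's behavioral criterion: the judge
# never sees who produced a transcript, only the answers themselves.
# All questions and answers here are hypothetical examples.

QUESTIONS = ["Can machines think?", "What is your favorite color?"]

def human(question):
    # A stand-in human respondent with fixed answers.
    return {"Can machines think?": "In some sense, perhaps.",
            "What is your favorite color?": "Blue."}[question]

def machine(question):
    # A machine that has learned to give human-like answers;
    # here it simply reproduces the human's answers.
    return human(question)

def transcript(respondent):
    return [respondent(q) for q in QUESTIONS]

def judge(transcript_a, transcript_b):
    # Purely behavioral test: if the two transcripts cannot be told
    # apart, the judge can do no better than guess.
    return "undecided" if transcript_a == transcript_b else "distinguishable"

print(judge(transcript(human), transcript(machine)))  # undecided
```

When the judge's verdict is "undecided", the machine has passed this toy version of the test without anyone having to decide whether it really "thinks", which is exactly Turing's move.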
Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Researchers have described short-term research goals such as studying how AI influences the economy, working out the laws and ethics involved with AI, and minimizing AI security risks.
In the long term, researchers have proposed continuing to optimize AI's capabilities while minimizing the security risks that come with new technologies. Machines with intelligence have the potential to use that intelligence to make ethical decisions. Research in this area includes "machine ethics", "artificial moral agents", and the study of "malevolent vs. friendly AI".
A common concern about the development of artificial intelligence is the potential threat it could pose to mankind. This concern has recently gained attention after public statements by prominent figures including Stephen Hawking, Bill Gates, and Elon Musk.
The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI.
As for me, the scariest prospect would be a machine that started programming itself, or refused to be programmed…