Artificial intelligence (AI) has led to countless technological advancements in 2023, and the spread of automation and machine learning is expected to contribute to a more developed tech ecosystem in 2024. In 2022, AI adoption was twice as likely in larger companies, and 80% of retail executives plan to adopt AI-driven automation by 2025. This is largely because modern AI has only recently left its development stage and is now gradually maturing.
The AI market was projected to grow by 38% in 2023, with spending on AI projected to reach USD 6.8 billion to USD 7.2 billion.
However, many tech enthusiasts are skeptical about the growth of AI in 2024. According to some experts, building generative AI is currently an expensive venture and requires chips that are in short supply. These speculations have created a bubble around AI and forecasts of an “AI cold shower” in 2024. It is true that companies have jumped on the bandwagon and invested extensively in this niche without being able to predict its future accurately. Still, the “cold shower” remains speculation, and it cannot be said that it will arrive in full force in 2024.
From Kismet – The Robot Head
Kismet is a robot head created in the 1990s at MIT by Dr. Cynthia Breazeal. It was an experiment in affective computing, designed to recognize and simulate emotions. The name “Kismet” comes from a Turkish word meaning “fate” or “destiny.”
Equipped with various sensory inputs, Kismet could interact with humans naturally and expressively, displaying a range of facial expressions, vocalizations, and movements. Its social intelligence software system, or Synthetic (Artificial) Nervous System (SNS), was designed with human models of intelligent behavior in mind.
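For intuition only, here is a deliberately simplified Python sketch of the affective-computing loop described above: sensory cues are appraised into a coarse emotional state, which then drives an outward expression. The stimuli, states, and expressions are invented for illustration and do not reflect Kismet's actual SNS implementation.

```python
# Toy illustration only: NOT Kismet's actual SNS code, just a minimal sketch
# of mapping sensory input to an emotional state and an outward expression.

from dataclasses import dataclass

@dataclass
class Stimulus:
    face_detected: bool   # did the robot see a person?
    voice_pitch: float    # rough prosody cue, 0.0 (low) .. 1.0 (high)
    sudden_motion: bool   # abrupt movement in the visual field

def appraise(stimulus: Stimulus) -> str:
    """Map a sensory snapshot to a coarse emotional state."""
    if stimulus.sudden_motion:
        return "surprise"
    if stimulus.face_detected and stimulus.voice_pitch > 0.6:
        return "happiness"       # upbeat speech from a nearby person
    if stimulus.face_detected:
        return "interest"
    return "boredom"             # nothing engaging is happening

EXPRESSIONS = {
    "surprise":  "raise eyebrows, widen eyes",
    "happiness": "smile, perk ears",
    "interest":  "lean forward, track the face",
    "boredom":   "droop ears, look around",
}

if __name__ == "__main__":
    snapshot = Stimulus(face_detected=True, voice_pitch=0.8, sudden_motion=False)
    emotion = appraise(snapshot)
    print(emotion, "->", EXPRESSIONS[emotion])   # happiness -> smile, perk ears
```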
Kismet now resides at the MIT Museum. The scariest part, though, was the prospect of a machine programming itself, or refusing to be programmed. That’s no fantasy, as these headlines (and the short sketch after the list) suggest:
- AI Can Now Write Its Own Computer Code – That’s Good News For Humans
- Our Computers Are Learning How to Code Themselves – Human coders beware
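As a rough illustration of machines writing code, here is a hedged sketch that asks an off-the-shelf code-generation model to complete a function. The model name and prompt are merely examples of publicly available tooling, not the systems discussed in those articles, and any generated code still needs human review.

```python
# Sketch: ask an open code-generation model to complete a function.
# Assumes the `transformers` and `torch` packages are installed; the model
# below is one example of a small open code model, chosen for illustration.

from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
result = generator(prompt, max_new_tokens=64, do_sample=False)

print(result[0]["generated_text"])  # the prompt plus the model's completion
```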
To Ben Gaya – Europe’s First AI-Generated Pop Star
Ben Gaya is a young, AI-generated singer with distinctive features like blue eyes, freckles, and tousled hair. He was created by programmers from Bremen, Germany, using AI software such as Midjourney and Runway. Ben Gaya’s music and persona are entirely digital, showcasing the capabilities of artificial intelligence in the music industry.
Ben Gaya’s creation highlights the advancements in AI technology and its potential to revolutionize the music industry. He continues to gain popularity on social media platforms, where he shares his music and interacts with fans.
The Philosophy
There are three philosophical questions related to AI:
- Is artificial general intelligence possible? Can a machine solve any problem that a human being can solve using intelligence? Or are there hard limits to what a machine can accomplish?
- Are intelligent machines dangerous? How can we ensure that machines behave ethically and that they are used ethically?
- Can a machine have a mind, consciousness and mental states in exactly the same sense that human beings do? Can a machine be sentient, and thus deserve certain rights? Can a machine intentionally cause harm?
The Limits
Can a machine be intelligent? Can it “think”? Here are some opinions on that:
- We need not decide if a machine can “think”; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test, proposed by Alan Turing.
- “Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.” This conjecture was printed in the proposal for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.
- “A physical symbol system has the necessary and sufficient means of general intelligent action.” This hypothesis, advanced by Allen Newell and Herbert A. Simon, holds that intelligence consists of formal operations on symbols (a toy sketch of what that looks like follows this list). Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a “feel” for the situation rather than explicit symbolic knowledge.
- Kurt Gödel himself, John Lucas (in 1961) and Roger Penrose (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can consistently see the truth of their own “Gödel statements” and therefore have computational abilities beyond that of mechanical Turing machines. However, the modern consensus in the scientific and mathematical community is that these “Gödelian arguments” fail.
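To make the “formal operations on symbols” idea concrete, here is a toy Python sketch of a symbolic rule system: facts are plain symbols, rules rewrite them, and new conclusions follow purely from the form of the symbols. The facts and rules are invented for illustration and are not drawn from any particular AI system.

```python
# Toy physical-symbol-system illustration: on this view, intelligence is
# formal manipulation of symbols. The facts and rules are invented examples.

facts = {"socrates_is_a_man"}

# Each rule maps an antecedent symbol to a consequent symbol.
rules = [
    ("socrates_is_a_man", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_will_die"),
]

def forward_chain(known: set, rule_list: list) -> set:
    """Repeatedly apply rules until no new symbols can be derived."""
    derived = set(known)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rule_list:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die'}
```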
The Artificial Brain Argument
The brain can be simulated by machines and because brains are intelligent, simulated brains must also be intelligent; thus machines can be intelligent. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software, and that such a simulation will be essentially identical to the original.
The AI effect
Machines are already intelligent, but observers have failed to recognize it. When Deep Blue beat Garry Kasparov in chess, the machine was acting intelligently. However, onlookers commonly discount the behavior of an artificial intelligence program by arguing that it is not “real” intelligence after all; thus “real” intelligence is whatever intelligent behavior people can do that machines still cannot. This is known as the AI Effect: “AI is whatever hasn’t been done yet.”
The Risks
Moral Reasoning
Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists have described short-term research goals such as understanding how AI influences the economy, clarifying the laws and ethics involved with AI, and minimizing AI security risks. In the long term, they propose continuing to optimize AI capability while minimizing the security risks that come with new technologies. Intelligent machines also have the potential to use their intelligence to make ethical decisions. Research in this area includes “machine ethics”, “artificial moral agents”, and the study of “malevolent vs. friendly AI”.
Existential Risks
“The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
– Stephen Hawking
Potential Threats
A common concern about the development of artificial intelligence is the potential threat it could pose to mankind. This concern has recently gained attention after mentions by prominent figures including Stephen Hawking, Bill Gates, and Elon Musk. A group of tech titans, including Peter Thiel, Amazon Web Services, and Musk, committed US$1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development. Opinion among experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly capable AI.