Will computers become conscious?

A quip known as Tesler’s Theorem says “AI is whatever hasn’t been done yet.” By that measure, optical character recognition is frequently excluded from what counts as AI, having become a routine technology. Modern machine learning capabilities generally classified as AI include autonomously operating vehicles, competing at the highest level in strategic games, understanding and translating human speech, intelligent routing in content delivery networks, and more. As it currently stands, the traditional approaches of AI research include computational intelligence, statistical methods, and traditional symbolic artificial intelligence.

The holy grail of machine learning is artificial general intelligence (AGI): a machine that can understand or learn any intellectual task a human being can. Developing such “strong AI” is among the field’s long-term goals, and it sits at the heart of the question “Will computers become conscious?” The trouble is that we may never actually get beyond the level of “weak AI”. And how would we even know whether a computer had become conscious? In general, the most difficult problems for computers are informally known as “AI-hard”, meaning that solving them is equivalent to the general aptitude of human intelligence and beyond the reach of any purpose-specific algorithm.

Self-aware computers become conscious

AI-hard problems are hypothesized to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. As it stands, AI-hard problems cannot be solved with current computer technology alone; they still require human intervention, and hopefully always will, unless computers become conscious. If this trend holds, AI should not become self-aware.

So the doomsday conspiracy theorists are wrong: AI will not become the dominant form of intelligence on Earth, with computers and robots taking over the world. Still, there’s nothing wrong with taking a few precautionary measures to ensure that future superintelligent machines remain under human control. I simply don’t think a robot uprising is possible.

Nonetheless, there are those who believe that machines have minds, or soon will. That is why scientists have devised a number of experiments to probe the limits of artificial intelligence. The classic example is the Turing Test: a machine and a human both converse, sight unseen, with a second human, who must judge which of the two is the machine. The machine passes the test if it can fool the evaluator a significant fraction of the time.

Next, in the Coffee Test, a machine is required to enter a typical modern home and figure out how to make coffee. In other words, a robot must find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.


Simulating the mind

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects. General intelligence is among the field’s long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.
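As a deliberately toy illustration of two of those tools, mathematical optimization and artificial neural networks, the sketch below trains a single artificial neuron by gradient descent to learn the logical AND function. All names and numbers here are illustrative choices, not drawn from any particular AI system.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data for logical AND: two inputs, one target output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# One neuron: two weights plus a bias, all starting at zero.
w = [0.0, 0.0]
b = 0.0
lr = 0.5  # learning rate (illustrative choice)

for _ in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Gradient of the squared error, passed back through the sigmoid.
        grad = (y - target) * y * (1 - y)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

# After training, the neuron's rounded outputs reproduce the AND table.
predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(predictions)
```

This is "learning" only in the narrowest sense, which is exactly the point: the gap between fitting a four-row truth table and understanding anything is the gap the rest of this article is about.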

The field was founded on the assumption that human intelligence “can be so precisely described that a machine can be made to simulate it”. This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.


The Aquarius Bus

A metaphysical emporium.



