Artificial Intelligence (AI)


Artificial intelligence (AI) was first defined by the American computer scientist John McCarthy, who coined the term in 1956, as the “science and engineering of making intelligent machines.” This definition has held in its essence despite considerable shifts in technological paradigms, from the earlier emphasis on the creation of intelligent computer programs to the current stress on converging technologies. However, in the absence of an absolute definition of intelligence, only degrees of intelligence can be defined, with human intelligence serving as the benchmark against which other intelligences are compared. In addition, there is no consensus on which kinds of computational procedures can properly be termed intelligent.

While computers can carry out some tasks, they cannot carry out all of them, and they lack the crucial ability to reason. Computer programs may have tremendous speed and memory, but their abilities are circumscribed by the intellectual mechanisms that have been built into them. In fact, the ability to substitute large amounts of computation for understanding is what gives computers their seeming “intelligence.” Thus, for example, the chess-playing program Deep Blue substitutes millions of computations of possible moves for reason and intuition.
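To make concrete how exhaustive search can stand in for reasoning, the following is a minimal sketch of game-tree search with minimax, written in Python for a generic two-player game. It is not taken from Deep Blue or any particular chess engine; the GameState interface (the methods is_terminal, score, moves, and play) is a hypothetical assumption for illustration only.

    # Minimax sketch: rather than "reasoning" about a position, the program
    # simply evaluates every position reachable within a fixed depth.
    # The GameState interface (is_terminal, score, moves, play) is hypothetical.

    def minimax(state, depth, maximizing):
        """Return the best achievable score by exhaustive look-ahead."""
        if depth == 0 or state.is_terminal():
            return state.score()            # static evaluation of the position
        if maximizing:
            return max(minimax(state.play(m), depth - 1, False)
                       for m in state.moves())
        return min(minimax(state.play(m), depth - 1, True)
                   for m in state.moves())

    def best_move(state, depth):
        """Pick the move whose subtree evaluates best for the side to move."""
        return max(state.moves(),
                   key=lambda m: minimax(state.play(m), depth - 1, False))

The “intelligence” here lies entirely in the volume of positions examined: widening the depth parameter multiplies the number of computations, not the program's understanding of the game.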

Barriers to Furthering AI

The key barrier to the creation of AI remains the failure to duplicate the nebulous quality of human intelligence, which has been defined as the computational part of the ability to achieve goals in the world. A few experts believe that human-level intelligence can be achieved by amassing data in computers, but the general consensus is that, without a fundamental transformation, it cannot be predicted when human-level intelligence will be achieved (Lebiere, 1998).

Most of the problems of AI can be traced to the inability to replicate the indefinable nature of human thought processes. John McCarthy identified this difficulty in 1969 as the qualification problem. Human consciousness is also remarkable for the astronomical number of facts that are simply “known,” making it impossible to create an accurate replica of the complete knowledge base of human intelligence. Some attempts have been made to address these problems. For example, multi-agent planning uses the cooperation and contribution of many agents to achieve a given goal, and emergent behavior of this kind is also exploited by evolutionary algorithms and models of swarm behavior (Newell, 1990).
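To illustrate the idea of emergent behavior, here is a minimal sketch, not drawn from any of the cited work, of a swarm of simple agents in Python: each agent follows only a local rule (drift toward the average position of nearby agents, plus a little random wander), yet the population as a whole converges into a cluster. All names and parameter values are illustrative assumptions.

    # Emergent-behavior sketch (illustrative assumptions throughout):
    # no agent is told to "form a cluster", but each one nudges itself toward
    # its visible neighbors, and aggregation emerges from those local rules.
    import random

    NUM_AGENTS = 50
    NEIGHBOR_RADIUS = 0.3   # how far an agent can "see"
    STEP = 0.1              # how strongly it drifts toward neighbors
    NOISE = 0.02            # small random wander

    agents = [(random.uniform(0, 1), random.uniform(0, 1))
              for _ in range(NUM_AGENTS)]

    def step(agents):
        new_positions = []
        for (x, y) in agents:
            # local rule: average position of visible neighbors (including self)
            near = [(ax, ay) for (ax, ay) in agents
                    if (ax - x) ** 2 + (ay - y) ** 2 <= NEIGHBOR_RADIUS ** 2]
            cx = sum(ax for ax, _ in near) / len(near)
            cy = sum(ay for _, ay in near) / len(near)
            new_positions.append(
                (x + STEP * (cx - x) + random.uniform(-NOISE, NOISE),
                 y + STEP * (cy - y) + random.uniform(-NOISE, NOISE)))
        return new_positions

    for _ in range(200):    # after enough steps the agents bunch together
        agents = step(agents)

The design point is that the global pattern is never programmed anywhere; it arises from many simple agents interacting, which is the sense in which multi-agent and swarm approaches sidestep the need for a single, centrally specified model of intelligence.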

Nanotechnology and AI

Nanotechnology has opened up new possibilities in the quest to create AI. One key development is the possibility of using “distributed intelligence” rather than a central intelligence as the guiding factor behind AI. While earlier attempts to create AI focused on creating a centrally controlled machine, scientists are now in the earliest stages of creating distributed intelligence. Such distributed networks of agents mark a change from the earlier “top down” approach to a “bottom up” programming philosophy, which essentially means trying to define and regulate the behavior of individual agents at the lowest structural level without trying to govern the behavior of the system as a ...