Artificial Intelligence

Introduction

Artificial intelligence (AI) was first defined by the American computer scientist John McCarthy, who coined the term in 1956, as the “science and engineering of creating intelligent machines.” This definition has held in its essence, despite considerable shifts in technological paradigms, from the earlier emphasis on creating intelligent computer programs to the current stress on convergence technologies. However, in the absence of an absolute definition of intelligence, only degrees of intelligence can be defined, with human intelligence serving as the benchmark against which other intelligences are compared. There is also no consensus on the kind of computational procedures that can be termed intelligent (McCarthy, 2008).

Barriers to Furthering AI

The key barrier to the creation of AI remains the failure to duplicate the nebulous quality of human intelligence, which has been defined as the computational part of the ability to achieve goals in the world. Because programs cannot replicate essential features of human nature, such as common sense or intuition, attempts to create AI usually collapse under the heavy load of rules that must be written to deal with every problem. A few experts believe that human-level intelligence can be achieved by amassing data in computers, but the general consensus is that, without a fundamental transformation, no one can predict when human-level intelligence will be achieved (Nilsson, 2002).

Nanotechnology and AI

Nanotechnology has opened up new possibilities in the quest to create AI. One key development is the possibility of using “distributed intelligence” rather than a central intelligence as the guiding principle behind AI. While earlier attempts to create AI focused on building a centrally controlled machine, scientists are now in the earliest stages of creating distributed intelligence. This involves assembling myriads of tiny parts into intelligent machines: trillions of nanoscopic parallel-processing devices that function together, compare what they sense to recorded patterns, and then exploit the memories of all their previous experiments. Such distributed networks of agents mark a change from the earlier “top down” approach to a “bottom up” programming philosophy, which essentially means defining and regulating the behavior of individual agents at the lowest structural level without trying to govern the behavior of the system as a whole (McCarthy, 2008).
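
To illustrate the “bottom up” philosophy described above, the following minimal Python sketch defines only a local rule for each individual agent and never programs the system as a whole; the one-dimensional setting, sensing radius, and step size are illustrative assumptions rather than a description of any particular nanotechnology system.

# A minimal sketch of bottom-up, distributed agents: each agent follows only a
# local rule (drift toward the average position of nearby agents). No code
# anywhere specifies or controls the behavior of the whole swarm.
import random

NUM_AGENTS = 50
NEIGHBOR_RADIUS = 2.0   # how far an agent can "sense" (assumed value)
STEP_SIZE = 0.1         # how far an agent moves per tick (assumed value)

# Each agent is just a position on a line; start them scattered at random.
agents = [random.uniform(0.0, 10.0) for _ in range(NUM_AGENTS)]

def step(positions):
    """Advance every agent by one tick using only local information."""
    updated = []
    for me in positions:
        # An agent only looks at neighbors within its sensing radius.
        neighbors = [p for p in positions if abs(p - me) <= NEIGHBOR_RADIUS]
        target = sum(neighbors) / len(neighbors)  # local average (includes self)
        # Move a small step toward the local average; there is no coordinator.
        updated.append(me + STEP_SIZE * (target - me))
    return updated

for tick in range(200):
    agents = step(agents)

# The gradual clustering of the swarm is an emergent, system-level property:
# only the individual agents' local rule was ever programmed.
spread = max(agents) - min(agents)
print(f"spread after 200 ticks: {spread:.3f}")

The point of the sketch is the division of labor: the programmer specifies agent-level behavior, while system-level behavior (here, clustering) emerges rather than being written down anywhere.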

Computers and Robotics

Since the closest extant machines akin to human intelligence are computers, whose speed continues to grow exponentially, most AI research has hitherto been in the realm of information technology (IT). However, convergence technology has led to a certain change of goals, with artificial life replacing AI as a long-term computing goal. Current robotics, for example, tries to write programs that mimic the attributes of living creatures, such as adaptability, cooperativeness, the ability to learn, and the capacity to adjust to change. However, the inability to create what is known as “common sense” reflects the difficulty of adapting cognitive science to the creation of ...
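
As a rough illustration of the “adaptive” and “able to learn” attributes mentioned above, the following Python sketch shows a program that learns which of two actions pays off and re-adapts when the environment changes mid-run; the two-armed bandit setting, learning rate, and exploration rate are illustrative assumptions, not a description of any specific robotic system.

# A minimal sketch of an adaptive learner: it estimates the value of two
# actions from experience and adjusts when the environment changes.
import random

LEARNING_RATE = 0.1   # how quickly estimates track new experience (assumed)
EXPLORATION = 0.1     # fraction of the time the agent tries a random action (assumed)

estimates = [0.0, 0.0]        # the agent's current estimate of each action's value
reward_probs = [0.8, 0.2]     # hidden environment: action 0 is better at first

for t in range(2000):
    if t == 1000:
        reward_probs = [0.2, 0.8]   # the environment changes; the agent must adjust
    # Mostly pick the action currently believed best, but sometimes explore.
    if random.random() < EXPLORATION:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < reward_probs[action] else 0.0
    # Nudge the chosen action's estimate toward the observed reward.
    estimates[action] += LEARNING_RATE * (reward - estimates[action])

print("value estimates after the change:", [round(v, 2) for v in estimates])

After enough steps, the estimate for the newly rewarding action overtakes the old one, so the program adjusts to change without being told that the environment shifted; what it still lacks, as the paragraph above notes, is anything resembling common sense.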