A Brief History of Artificial Intelligence

[Image: the robot from "Robot" (Doctor Who); picture from Wikipedia]

The foundations of Associationism were laid down in 1690, when the English political philosopher John Locke (1632-1704) published his Two Treatises of Government and An Essay Concerning Human Understanding. In the Essay, Locke eschewed the innate universal realities of the philosophy of Plato and Descartes, saying in essence that these universal realities could be explained by an empirical description of their source in personal experience (Thorne & Henley, 2005). This was a rather radical idea for 1690, given the cultural and political history of the United Kingdom; we were still only a hundred years or so removed from absolute monarchy, where our political system was held to be bestowed upon the peoples of these islands by the Judeo-Christian God! Locke talked of Sensation and Reflection as the functional representations of our experience; he thought (and in this his Associationism is clear) that complex ideas could not be the stuff of innate reality, but arose from the coming together of simpler ideas to form a more complex whole. Locke also argued that accidental associations can be just as compelling as natural ones. He distinguished between the primary and secondary (as he called them) qualities of real objects: the primary being attributes like solidity, figure and mobility; the secondary being those "…qualities which in truth are nothing in the objects themselves but powers to produce various sensations in us…" (Thorne & Henley, 2005). There is an interesting philosophical notion here (of its time), and one that I think Locke was the first really to get to grips with: the idea that some forms of sensation within us are not generated by an object acting upon us, but by ourselves reacting to the object. We are, in a sense, the cause and not merely the embodiment of the effect. Locke held the view that the mind was in some way passive and that the sum of a human being is real experience, in effect the sum of sensory experience. Locke's ideas, particularly the Essay, influenced later philosophers such as the great David Hume (1711-1776).

Hume was in agreement with Locke that all the mind's contents come from our experience and our impressions, and that the primary quality of extension "…is entirely acquired from the senses of sight and feeling…" (Thorne & Henley, 2005). Hume "…sought to explain why we adopt a realistic interpretation of science and ordinary beliefs…", and he also drew out the difference between inductive and deductive reasoning, and the circularity of using inductive arguments to justify inductive reasoning itself (Rosenberg, 2000). Hume believed that we could only validate our sense of self from the mind's own feelings, and in this he doubted the existence of God and, by extension, of the external world itself: a somewhat radical position for the 18th century, but one which existential philosophers, psychologists and psychoanalysts spent considerable time cogitating on in the latter half of the 20th century.

Two other philosophers of the late 18th and early 19th centuries, James Mill (1773-1836) and his son John Stuart Mill (1806-1873), took the work of Hume and considered these paradigms in the light of new philosophical thought. John Stuart Mill had a considerable classical education at the hands of his father; he was perhaps the first empirically measured child prodigy, something that bore down heavily on his mental health in later life. Mill's most important work, A System of Logic, Ratiocinative and Inductive, Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation (or simply A System of Logic), considers whether it is possible for man to have a system of scientific logic and, if so, whether it could meet the increasingly rigorous demands of man's questioning of his environment and of the self. Mill thought that "…there is no reason [that psychology] should not be as much science as Astronomy…" (Mill, 1868).

Mill also coined the idea of Mental Chemistry to explain the process occurring in the mind when one experiences. He used the term "blending together" to explain how simple ideas become more complex; he considered that these simple ideas "generate" rather than merely "compose" the more complex ones. Here we clearly see associationist thought in action. In his book The Organization of Behavior, Donald Hebb (1904-1985) considered the idea that in newborns a random array of neural connections fire, and in this firing the surrounding neurons "reverberate" together and form connected assemblies, which when linked together form "phase sequences". He hypothesised that complex learning in adults was a direct result of the rearrangement processes he described. In machine learning, the Hebbian rule states that the strength of the connection between two units is increased in proportion to their joint activity: units that fire together wire together (Bernstein, Penner, Clarke-Stewart, & Roy, 2006).
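As a rough illustration, the Hebbian rule can be written as Δw = η · x · y: the change in a connection weight is proportional to the product of the pre-synaptic activity x and the post-synaptic activity y. The short Python sketch below applies that update to a single linear unit; the learning rate, the unit and the recurring pattern are invented purely for illustration and are not drawn from Hebb's own work.

```python
# A toy Hebbian update: each weight changes in proportion to the product of
# pre-synaptic activity x and post-synaptic activity y.
import numpy as np

def hebbian_update(w, x, y, eta=0.01):
    """Return the weights after one Hebbian step: w <- w + eta * y * x."""
    return w + eta * y * x

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)       # small random "neonatal" weights
pattern = np.array([1.0, 0.0, 1.0])     # a recurring sensory pattern

for _ in range(100):
    y = w @ pattern                     # post-synaptic response to the pattern
    w = hebbian_update(w, pattern, y)

print(w)  # the weights of the two co-active inputs have grown together;
          # the silent input's weight is untouched
```

Run repeatedly on the same pattern, the weights of the co-active inputs strengthen together, which is Hebb's "reverberation" in miniature.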

There is considerable overlap between the work of the Cognitivists and the Associationists; this is easily traced back to the works of Locke and Hume, both of whom considered not only the way the human mind perceives the world but also how those perceptions become the sensations we experience every day. We can also think of Wundt's and Ebbinghaus's work on memory and experience as fitting into the category of cognition and experience. The end of pure behaviourism began with the work of Noam Chomsky (1928- ), who challenged the idea that language is either wholly learned or wholly innate, arguing instead that it is a fusion of both. He argues that "…behavioural science has been much preoccupied with data and organisation of data…" (Chomsky, 2003), and that this is at the expense of a satisfactory account of how language skills develop in young children. Young children are able to create "…novel sentences…", and since this is the case, learning theory cannot be framed in purely behaviouristic terms, for where would these novel patterns have originated? It is not difficult to see how cognitivism emerged from the ashes of pure behaviourism to offer new solutions in learning theory.

By the mid-twentieth century, mathematicians and psychologists had become interested in the new field of computing. Psychologists were interested in how mathematical models, realised in computers, might map the cognitive functions of the brain. A new paradigm was proposed: "Can machines think?" (Turing, 1950). Turing considered a game, played between two individuals, where one individual was actually a computer: can the computer fool the other individual into thinking they are having a conversation with another human being? The test was heavily criticised at the time as being "weighted in favour of the human" (Turing, 1999), but it brought a new way of thinking about the human mind and the analogous computer-processing model. The term Artificial Intelligence was coined by John McCarthy in 1956 and has since found its way into the English language to describe situations where computers are able to perform tasks that make them "indistinguishable" from a human being (Cordeschi, 2007). Turing and others like him were interested in whether human intelligence could be investigated by modelling its functions in a computer. The so-called "mundane tasks" were the subject of much debate amongst Cognitivists as to the possibility of this modelling (Cawsey, 1998). These tasks are planning (deciding on a sequence of actions), vision (making sense of what we see), robotics (moving about in three-dimensional space) and natural language (handling the way humans actually speak and write), and it is easy to see why they are so fundamental to AI.

When human beings reason, they do so with acquired knowledge and with both deduction and induction; the question is how we might model these traits, in a meaningful way, within a computing environment. To do this, the facts have to be expressed in a "formal representational model" that can be manipulated by a computing algorithm (Turing, 1950). How, also, could we model the ambiguities of language so that they could be understood in an AI environment? From this question we can see the difference between a computer simply using a program to carry out a set of functions based on the inputs from an operator, and the same computer being able to learn that, whilst there are hard and fast rules of language processing, these are not adhered to all the time. Can we get a computer to understand the vernacular, and to fool a human into believing, with certainty, that they are in conversation with another human being? One difficulty lies in linking cognition with our emotional response, which raises questions of both free will and human consciousness. We humans are alive to ourselves; we have a view of ourselves within the world that recognises that while we are a fundamental part of it, we are also separate from it, and we can close our eyes and live within our own thoughts pretty much on demand. How, then, would we even begin to model this behaviour in capacitors, resistors and inductors? And perhaps more importantly, why would we want to, and what would happen if we succeeded?
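To make the idea of a "formal representational model" concrete, here is a minimal sketch: a handful of facts and a single if-then rule, over which a trivial forward-chaining routine performs deduction. The predicates and the rule are invented purely for illustration; real knowledge-representation systems are far richer than this.

```python
# A toy "formal representational model": a few facts and one if-then rule,
# over which a trivial forward-chaining routine performs deduction.
facts = {("human", "socrates"), ("robot", "k9")}
rules = [
    (("human", "X"), ("mortal", "X")),   # if X is human then X is mortal
]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new facts can be deduced."""
    changed = True
    while changed:
        changed = False
        for (premise_pred, _), (conclusion_pred, _) in rules:
            for fact_pred, fact_arg in list(facts):
                new_fact = (conclusion_pred, fact_arg)
                if fact_pred == premise_pred and new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

print(forward_chain(facts, rules))
# deduces ('mortal', 'socrates') but, correctly, not ('mortal', 'k9')
```

Once knowledge is held in a form like this, an algorithm can derive conclusions mechanically; the hard part, as the paragraph above suggests, is capturing the ambiguity and vernacular of real language in such a rigid scheme.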

At the Dartmouth conference in 1956, AI theorists asked two questions of each other: 1) how do we embody knowledge in computers, and what would this "life-simulating" knowledge be like? And 2) how do we relate embodied knowledge to more complex and ill-structured problems? Many contributors opposed the idea of hierarchic systems in heuristic (learning) programming, feeling that these could only ever, by definition, mimic the complex embedded algorithms of the human mind (Cordeschi, 2007). Most contributors felt that the way forward for AI was networks, a novel idea at the time given how new computing was as a discipline. In this they were theorising that complex networks might one day simulate the workings of the mind, since they were already aware of the "networked" structure of the brain's neurons.

On the 1st of May 2008, a paper was published in Nature that may in future provide the basis for complex artificial neuronal networks. In 1971, Leon Chua "…reasoned from symmetry arguments within Physics…" that a fourth passive circuit element should exist alongside the resistor, capacitor and inductor. He called it the memristor. Whilst the mathematics was well understood, until now no one had been able to show that such a device could exist. The new paper details how memristance arises naturally in simple nanoscale systems, in this case the motion of charged atomic particles (dopants) in titanium dioxide switches (Strukov, Snider, Stewart, & Williams, 2008). Putting the complex mathematics aside, what is the application of this technology for AI? One of the issues in AI circuits is the lack of persistent electrical memory in parts of a circuit with no applied voltage; what Strukov and colleagues have shown is that memristor technology is applicable in a neural network situation "…where synapse-like activity is required…". The artificial synapses would be able to hold their state with no applied voltage and change it upon receiving the right signals from neighbouring synapses. When realised in technology this will be a massive breakthrough, and the authors are assured of their places when the history books of AI come to be written.
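A feel for how memristance works can be had from the linear ion-drift picture described by Strukov and colleagues, in which the device's resistance depends on how far a doped region has been pushed through the film by the charge that has already flowed. The sketch below simulates that idea; the parameter values and the write/rest procedure are illustrative assumptions, not figures taken from the paper.

```python
# Linear ion-drift sketch of a memristor in the spirit of Strukov et al. (2008).
# All parameter values and the write/rest procedure are illustrative assumptions.
R_ON, R_OFF = 100.0, 16_000.0   # ohms: resistance when fully doped / undoped
D = 10e-9                        # m: thickness of the titanium-dioxide film
MU_V = 1e-14                     # m^2 s^-1 V^-1: mobility of the charged dopants
DT = 1e-4                        # s: integration time step

def resistance(w):
    """Memristance depends on w, the current width of the doped region."""
    return R_ON * (w / D) + R_OFF * (1.0 - w / D)

def step(w, voltage):
    """Advance the dopant boundary by one time step under the given voltage."""
    current = voltage / resistance(w)
    w += MU_V * (R_ON / D) * current * DT     # boundary drifts with the charge
    return min(max(w, 0.0), D)                # and stays inside the device

w = 0.1 * D                                   # doped region starts at 10% of the film

for _ in range(int(1.0 / DT)):                # "write": hold +1 V for one second
    w = step(w, 1.0)
print(f"after write pulse:       {resistance(w):8.0f} ohms")

for _ in range(int(1.0 / DT)):                # "rest": no applied voltage
    w = step(w, 0.0)
print(f"after resting unpowered: {resistance(w):8.0f} ohms")
# The resistance drops during the write and then simply stays where the last
# current left it: the device remembers the charge that has flowed through it.
```

It is this ability to retain a state without power that makes the memristor attractive as an artificial synapse.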

Singularity is a technological term for the point at which machine intelligence becomes equal to that of humankind (Schmidhuber, 2006). Algorithmic advances are, for the first time, keeping pace with hardware development, and Schmidhuber argues that the convergence of these technologies at their current pace of development will occur around 2040: in human terms, a very short time away indeed. It is argued that at, and past, this point AI will have achieved and then surpassed the complexity of the human brain; in essence, AI will have become conscious. It is not difficult to see how this could be. Consider the simple transfer of information between Edinburgh and London: in pre-industrial times a letter might have contained 10,000 bytes and the journey could have taken a month, whereas today fibre-optic cables carry billions of bits every second. In roughly 200 years, the rate of information transfer has increased some hundred-billion-fold (of the order of 10^11 times). We have also seen this explosion in the power of computing, as noted by Gordon Moore in the now ubiquitous Moore's Law: crudely put, speed doubles and price halves every couple of years. Modern philosophers call this process ephemeralization (Heylighen, 2008).
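The comparison is easy to check on the back of an envelope, as in the rough calculation below; the letter size, journey time and fibre bandwidth are all assumed round numbers rather than measured figures.

```python
# Rough check of the Edinburgh-to-London comparison, using assumed round numbers.
letter_bits = 10_000 * 8              # a ~10,000-byte handwritten letter
letter_seconds = 30 * 24 * 3600       # roughly a month in transit
letter_rate = letter_bits / letter_seconds   # ~0.03 bits per second

fibre_rate = 10e9                     # ~10 gigabits per second on modern fibre

print(f"letter: {letter_rate:.3f} bit/s  fibre: {fibre_rate:.0e} bit/s")
print(f"speed-up: roughly {fibre_rate / letter_rate:.1e} times")   # ~3 x 10^11
```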

There are significant practical, social and ethical concerns here, and arguably good reasons to prevent this singularity from ever occurring. What is the future for mankind in a world populated by machines that are more intelligent than we are? What will happen if they decide that we are surplus to requirements? There are other ethical considerations too. If a machine becomes conscious, do we have the right to make it work for us? Does it become little more than a captive slave to our whims? If consciousness were to develop in artificial networks, what of emotions: love, fear, anger, lust, jealousy? We will be required to make huge steps in our thinking about what differentiates us as a species from the inorganic. If machines can exhibit consciousness, and thus emotion, what then for our concept of reproduction, of biological pairings? We already augment the human body in myriad ways with inorganic technologies: the hip replacement, the pacemaker. Are we simply heading toward the next stage of human evolution, where the differentiation between man and machine becomes little more than a quaint notion of the past, and what will unite us in the future is perhaps what divides us now: consciousness?

A hundred years ago this was the realm of science fiction, but the development of psychology in this area has been explosive, and now we must consider these questions not only as a philosophical exercise but as a necessary condition for the continuation of the human race as the dominant species on the planet. These questions must not remain unanswered if we are to progress this technology in a way that serves humanity within an ethical framework. Moreover, it would seem only right, given the nature of sentience, that we also consider the needs of any machine that thinks, perhaps even feels, the way we do. We are moving into the realm of I, Robot; what we need now are thinkers like the late Isaac Asimov to guide us. Any takers?


Bibliography

Bernstein, D. A., Penner, L. A., Clarke-Stewart, A., & Roy, E. J. (2006). Psychology. Boston: Houghton Mifflin.

Cawsey, A. (1998). The Essence of Artificial Intelligence. Essex: Prentice Hall Europe.

Chomsky, N. (2003). Language and Mind (3rd ed.). Cambridge: Cambridge University Press.

Cordeschi, R. (2007). AI turns fifty: Revisiting its origins. Applied Artificial Intelligence, 21, 259-279.

Heylighen, F. (2008). Accelerating socio-technological evolution: From ephemeralization and stigmergy to the global brain. Philosophy and Sociological Studies. Brussels: Vrije Universiteit Brussel.

Mill, J. S. (1868). A System of Logic. Retrieved May 1, 2008, from Questia: http://www.questia.com/PM.qst?a=o&d=5774540

Rosenberg, A. (2000). Philosophy of Science. New York: Routledge.

Schmidhuber, J. (2006). New Millennium AI and the Convergence of History. arXiv:cs/0606081 (3), 1-15.

Strukov, D. B., Snider, G. S., Stewart, D. R., & Williams, R. S. (2008). The missing memristor found. Nature, 453, 80-83.

Thorne, B. M., & Henley, T. B. (2005). Connections in the History and Systems of Psychology. Boston: Houghton Mifflin.

Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59, 433-460.

Turing, A. M. (1999). Computing Machinery and Intelligence. In R. Cummins & D. D. Cummins (Eds.), Minds, Brains and Computers (pp. 153-160). Oxford: Blackwell.
