Cognitive Science – An Overview

I wrote a very broadly-defined paper for an Intro to Cognitive Science class. It shows the chronological development of competing theories of mind and ends with a series of predictions regarding where I expect the discipline to go. Yes, it does feel weird putting this on the internet.

It’s very likely that this doesn’t interest you. That’s fine too. Just click here.

Cog Sci in 2025

I think, therefore I must learn. Up until only two months ago, professionals in neuroscience, semantics, cognition, and artificial intelligence were bogged down with systems and theories too complex to master in even ten years. There was rarely a consensus among these professionals, and when there was, it was because the dominant approach was the only possible flailing towards an explanation of an extremely complex system. Twenty years ago, these infant fields of study were just learning to stand; today they are running. Before discussing the New Synthesis, the unified program that has facilitated several generational leaps in the study of mind, we will discuss the theories that led to the unification.

The study of cognitive science began with the exploration of the nature of the mind’s connection to the body. During the scientific revolution especially, the material and rule-based foundation of the world was being rapidly defined. As the great minds of the time looked towards the stars to discover irreducible laws, others looked introspectively (or anatomically) to try to answer questions of the mind. Though concepts and theories of what governs the mind and body go back to Plato’s time, Descartes’ ‘wind of animal spirits’ connecting the pineal gland to the immaterial soul began a more serious discourse on the nature of mind. For the most part, the scientific community has thrown out strict dualism, but some maintain that epiphenomenalism addresses certain irreducible non-physical elements of mind.

Along with the unveiling of once-mysterious aspects of the physical universe, the mind has also become a subject of meticulous scientific discovery. Reductionism, the belief that mind reduces to observable physical processes, hoped to address the lackadaisical soft science of the belief-desire psychology popularized by Freud. Although the convergence of highly specialized technologies has made the study of the physical brain far easier, the search for reductionist frameworks is ironically just as correlational and ‘soft’ as the belief-desire framework it hopes to replace.

Because both the spiritual aspects of dualism and the purely physical aspirations of reductionism fail to completely address the multitude of feelings, skills, dreams, and sensory complexities that make up mental capacity, cognitive scientists have probed for complete reinventions of mind representation. Stemming from respect for formal systems like calculus and physics, the Computational Representational Theory of Mind rose to prominence, and it took several decades for its limitations to cost it followers. From there, connectionism provided a vastly improved method of representing information and a more realist approach to explaining connections between ideas and feelings.

The roles of language and semantics are also major issues when trying to get a hold on exactly what the mind does and how we can understand it. The discourse on epistemology has solved only so many problems concerning the nature of knowledge. The discussion of mental states, the functions and limits of language, and the problem of encoding meaning have all reached uncomfortable barriers.

The next major leap for cognitive science was the development of the Computational Representational Theory of Mind (CRTM). Interest in using a computer-based formal system to understand and emulate the human mind was generated by the Church-Turing thesis, published in 1936. Alan Turing and Alonzo Church’s thesis showed that nearly any problem expressible in algorithms and symbols can be solved by a mechanical system, a machine Turing named the Turing machine. The CRTM approach is rooted in the Turing machine, a system of states, inputs, and recursive processing that is able to solve complex problems on a relatively simple framework. Specifically, simple instructions drive one input symbol at a time through a set of predefined symbols in the Turing machine. These symbols have meaning-preserving rules that maintain their representation across multiple internal processing systems. As the state of the machine advances in correlation with the input, the output is generated once the problem is solved.
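To make this concrete, here is a minimal sketch of a Turing machine in Python. The rule table, state names, and the binary-increment task are my own illustrative choices; only the state/symbol/move scheme comes from Turing’s formalism.

    # A minimal Turing machine sketch (illustrative; the task and names are
    # my own). Each rule maps (state, symbol read) to (next state, symbol to
    # write, head movement). This machine adds one to a binary number.
    RULES = {
        ("scan", "0"): ("scan", "0", +1),   # move right over the input
        ("scan", "1"): ("scan", "1", +1),
        ("scan", "_"): ("add", "_", -1),    # hit the blank; back up and add
        ("add", "1"): ("add", "0", -1),     # 1 plus carry is 0; carry moves left
        ("add", "0"): ("halt", "1", 0),     # 0 plus carry is 1; finished
        ("add", "_"): ("halt", "1", 0),     # carried past the leftmost digit
    }

    def run(tape_string):
        tape = dict(enumerate(tape_string))      # sparse tape; "_" is blank
        state, head = "scan", 0
        while state != "halt":                   # advance one rule at a time
            symbol = tape.get(head, "_")
            state, write, move = RULES[(state, symbol)]
            tape[head] = write
            head += move
        cells = sorted(tape)
        return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip("_")

    print(run("1011"))   # prints "1100": binary 11 plus 1 is 12

Even this toy machine shows the CRTM’s flavor: everything it “knows” lives in a fixed table of rules written by someone else.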

It was believed that reducing logic to the simplest processing vehicle possible would provide a window into how the brain works. Using sets of symbols to reduce complex problems to a simple mechanical system had some appeal. After all, the brain is a physical system with the power to resolve problems; why should it be any more difficult than an equation? The problem lies in what the embodied Turing machine knows. An equation is only a rule, and rules don’t add to the associative aspects of knowledge; not everything that passes through a human brain is checked by a structure of rules. A system of algorithms and static symbols could not encode all that we know to be cognition or intelligence.

The particularly vexing problem of the rule-based awkwardness inherent in CRTM systems became formalized with John Searle’s Chinese Room thought experiment. The trouble comes down to an issue of semantics: a machine that correctly translates from one language to another has no semantic understanding of what it’s doing. It may turn “duck” correctly into the Chinese word for “duck,” but there’s no embedded understanding of what it’s actually up to. Feeding the machine an English and Chinese dictionary gives it the databank for perfectly accurate translation, but it’s only following rules provided to it by a human. Similarly, you can input an entire encyclopedia into a machine run by a formal system based on symbols and mechanical rules, but “the background knowledge is precisely what is not in an encyclopedia.” (Dreyfus, 76) Though this thought experiment primarily refutes artificial intelligence and the concept of a semantics-savvy machine language, it also refutes symbolic computation as a viable framework for generating intelligence. Algorithmic systems are great at things we make them great at. Their intelligence is directly proportional to our gift for composing algorithms. This problem asks whether a program-based system of mind could be possible at all. Dreyfus finds the field of program-based cognitivism stuck: “Nobody has ever made a program that deals with anything in the everyday world successfully.” (Dreyfus, 77) He also adds that this theory of “rules and representations…has been stuck for fifteen years over the commonsense knowledge problem.” Because these machines are unable to learn, they have no self-referential common sense that allows them to truly know the data they are given.

For our purposes, we’ll place this theory of mind into a machine with sensory inputs and motor outputs of reasonable complexity. Because this system arises from logic and mathematics, it’s naturally confused whenever words like hope, fear, or love enter its nodal structure. It can be programmed to open an umbrella when it senses rain, but its train of thought would be unlikely to involve a Gene Kelly movie; that would be counter-intuitive to its algorithmic rubric. To get to the root of the problem with the CRTM, we must look further at the limits of the formal system it’s based on. Systems using computational methods of output production have considerable trouble understanding their frame of existence. Provide a CRTM-based machine with an algorithm for saying “Hello” to all people with glasses and then waiting for a friendly response, and nothing would keep it from greeting a poster of Einstein without ever recognizing its mistake. It may know what glasses look like, but there’s no general approach to teaching cross-situational awareness in these machines. This situatedness problem is at the root of “specific behavioral failures of the classical [symbolic logic] machines.” (Churchland 1990, 37)

Another broad problem of the CRTM is how strictly it formalizes the world around it. The basis for intelligence is learning, and this approach to modeling cognition depends on an external injection of algorithms. While the machine is busy squeezing dynamic sensory inputs into exceedingly simple units for processing through a rigid formula, its computationally intensive methods leave it unable to do anything else. Because algorithms have a specified beginning and solution, the time spent solving one problem takes time away from attending to the concurrent dynamics of the surrounding world. A topic discussed later in this paper, Parallel Distributed Processing (PDP), will attempt to address this problem.

Attempting to structure a set of information to guide this machine’s actions must take into account the earlier-mentioned Chinese Room problem: this machine is unable to understand information in a manner that shows intelligence. Moreover, symbolic logic machines have immense difficulty accessing this vast and abstract databank. It may be useful to represent “this is an instrument” in a CRTM-based machine when it hears “piano,” but in these strict machines it’s difficult to “access just the contextually relevant parts of that knowledge base in real time.” (Churchland 1990, 33) Establishing an efficient system of data storage and retrieval without predefining relationships is not an easy task in the CRTM. Data is most often represented as sentences that must be dissected into structure, meaning, and syntax. Searching in the most basic of these systems requires high-level processing, and every piece of data must be searched. Data has little or no interconnectedness with other data.
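As a rough sketch of that retrieval problem (the facts and function names here are invented for illustration, not taken from any particular system): a symbolic databank of isolated sentences forces a full scan-and-parse for every query.

    # Toy symbolic databank: isolated sentences with no links between them.
    FACTS = [
        "piano is an instrument",
        "duck is a bird",
        "umbrella blocks rain",
        # ...imagine thousands more unconnected sentences...
    ]

    def lookup(subject):
        """Answer a query by scanning and re-parsing every stored sentence."""
        results = []
        for sentence in FACTS:           # no index and no relevance ordering,
            words = sentence.split()     # so every sentence must be dissected
            if words[0] == subject:
                results.append(sentence)
        return results

    print(lookup("piano"))   # ['piano is an instrument']

Nothing in such a store relates “piano” to “instrument” except the rule we wrote; the contextually relevant facts are found only by brute force.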

Though representation through computation leads cognitive science in the direction of understanding methods of logic and deduction, it only goes so far. CRTM-based machines provide situated “expert systems” with often extraordinary capacities for problem solving, but they are ultimately limited in real cognitive ability by their formal structure. When applied to real cognition or human intelligence, this system lacks open-endedness, a structure for learning, and rapid information retrieval, and it inevitably suffers from behavioral and situational awkwardness. Though the CRTM has many problems, some of which will be resolved by the theory of connectionism, certain elements of it remain useful and will be discussed later.

Connectionism more closely resembles the known structure of the brain in both physical and computational terms. In this system, mind is represented by a vast interconnected network of simple nodes. When a node in the network becomes activated, its activation value is communicated to all its connected nodes. From this we can produce a massive associative structure that bears a resemblance to the interconnected parts of the brain. Whereas the CRTM made a simple mechanical model of merely the mind’s processing, connectionism produces a similarly material framework, but one that processes information by connecting large groups of interconnected nodes with other processing groups in a Parallel Distributed Processing (PDP) system. Networks in different parts of the brain work in collaboration as separate modules, sharing activation values to efficiently represent a mental state.

Between these nodes, connection weights determine how strongly one node communicates with another. In numerical terms, the total input to a node in the next layer is the sum, over all incoming connections, of each connection weight multiplied by the activation value of the sending node. So whether there are a hundred inputs or ten, the output is only as strong as the connection weights that the values pass through.
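In code, this weighted-sum rule is only a few lines (a minimal sketch; the activation values and weights are illustrative numbers of my own choosing):

    # Total input to one downstream node: the sum of each upstream node's
    # activation value multiplied by its connection weight.
    def node_input(activations, weights):
        return sum(w * a for w, a in zip(weights, activations))

    upstream = [0.9, 0.1, 0.5]    # activation values of three upstream nodes
    weights = [0.8, -0.3, 0.4]    # connection weights into one downstream node
    print(node_input(upstream, weights))   # 0.9*0.8 + 0.1*(-0.3) + 0.5*0.4 = 0.89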

In the case of object perception, a set of elements is instantaneously represented by activation on the appropriate network. Upon perceiving an object for the first time, a set of relational ideas corresponding to that object’s immediate surroundings, such as speed, color, and form, is encoded by the formation of connections. When that object is perceived again, connections are further strengthened to more strongly relate the object to its relevant surroundings. In learning, positive reinforcement of a result creates stronger connections through a learning-rate mechanism that employs what’s known as the delta rule. This rule adjusts connection weights to more accurately match inputs with desired outputs. This adaptability offers much more than the CRTM approach to mind.
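A toy run of that adjustment, assuming the textbook form of the delta rule (weight change = learning rate × error × input); the node, inputs, and target below are my own illustration:

    # Delta rule on a single linear node: nudge each weight in proportion to
    # the error and to the input that carried it.
    inputs = [0.5, 1.0]
    weights = [0.0, 0.0]
    target, rate = 1.0, 0.1

    for step in range(50):
        output = sum(w * x for w, x in zip(weights, inputs))
        error = target - output                  # desired minus actual
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]

    print(weights)   # the weights have grown until the output is near 1.0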

Within the connectionist system there are different types of network structures. Feed-forward nets communicate activation values in one direction through a system of input layers, hidden layers that process information, and output layers that elicit a response. This structure employs the earlier-mentioned delta rule. Back-propagation networks connect the layers in a more interconnected system. This mode is the most widely accepted because it facilitates what’s known as ‘training up.’ Errors decrease at a faster rate because we are observing gradient descent. (Churchland 2001, 161) So as we train these networks, pipelines between nodes increase their bandwidth and strength because the error at the output units is routed back toward the input units for correction. Back propagation instructs hidden nodes within networks much as the delta rule instructs output nodes. (Marcus, 18)
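The sketch below shows both passes at work under the usual textbook construction (a small sigmoid net; the architecture, learning rate, and XOR task are my own illustrative choices): activation flows forward through the hidden layer, and error flows backward to retrain the hidden connections.

    # A 2-4-1 feed-forward net trained by back propagation on XOR.
    # (Illustrative sketch; sizes and constants are assumptions of mine.)
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(20000):
        hidden = sigmoid(X @ W1 + b1)                # forward pass
        out = sigmoid(hidden @ W2 + b2)
        out_delta = (out - y) * out * (1 - out)      # error at output nodes
        hid_delta = (out_delta @ W2.T) * hidden * (1 - hidden)  # error sent back
        W2 -= 0.5 * hidden.T @ out_delta             # gradient descent steps
        b2 -= 0.5 * out_delta.sum(axis=0)
        W1 -= 0.5 * X.T @ hid_delta
        b1 -= 0.5 * hid_delta.sum(axis=0)

    print(np.round(out.ravel(), 2))   # approaches [0, 1, 1, 0]

Note that the output-layer update is just the delta rule again; the hidden-layer update is what back propagation adds.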

The reward for connectionist cognitive scientists came when they could create artificial systems that, like the brain, could model a distributed network of simultaneous computations. In 1984, it was found that this system could multitask across multiple systems: “there was no implicit separation of data and computation—they were both distributed over the same network of elements.” This “intertwining of aspects” (Brooks, 1229) granted the connectionists an appropriately fluid and constantly adapting model of mind that imitates the mind’s structure.

With this interconnected system of parallel connectionist processing, there’s a leaning towards the consensus that there are “no theoretical limits” (Churchland 2001, 163) to what these learning networks can accomplish. Doing away with the rule-based Turing model eliminated a static computational system that was running on one cylinder. Nothing is lost in this new approach, but it’s still not perfect. The capacity to embody this system in a machine is far from being a reality: one neuron may have between 1,000 and 5,000 synapses connecting it to others, and though hundreds of teraflops of processing power approach this level of distributed processing, the human mind is far beyond that speed. This system doesn’t require symbols with specific meanings to solve problems, and its versatility allows it to do all that the CRTM model could accomplish. In fact, “symbol manipulation appears to be just one of the many cognitive skills that a network may or may not learn to display.” (Churchland 1990, 36) From these developments we see a history of increasingly complex systems. From animal spirits to activation values in the 90s, we’ve come a long way. To introduce the New Synthesis, keep the structure of connectionist nets in mind.

The major breakthrough experienced in the field of cognition was the production of a simple system that could learn how to learn. Using a programming language conceived in 2023, a flexible rubric for the concepts of trial and error, failure avoidance, and perception utilization for goal attainment was coded into a system with motor, perceptual, and data storage capabilities. From simple interactions with the surrounding world, experiments with this machine showed that it could quickly learn anything it had a reason to know. Getting the first prototypes of these machines to do something was tricky, but by offering batteries, the battery-holders could get these machines to follow them with increasing dexterity and maneuverability so as to acquire their one simple newborn goal. I remember one story about a sprightly four-wheeled robot that got too far from home and depleted its batteries. From that point on, it became a lazy and introverted creature that never strayed more than ten feet from its charging cradle, and it was eventually recycled. Just like a child learning a system of rewards and punishments, these robots were similarly affected by pleasurable and painful external inputs. This system not only has implications for artificial intelligence, but for understanding the semantic structure of information and problem processing in the brain.

Because of the limited motor capacities of the initial models in 2023, we could only take joy in their ability to tightrope walk and play dodge ball with us, but these past few years have been most exciting. Just as we’ve seen them quickly learn to move, they’ve also learned to communicate. Twenty years ago, mainframe distributed computing was capable of 135 trillion teraflops of processing power; advances in parallel quantum computing have since made the learning capacity and rapid data processing possible. The data storage is maintained on a quantum dot system within a small crystal capable of storing several years of complete experiential and learning-based data.

This development has provided a fascinating new system to ponder. Instead of building heuristic levels and argument modules atop a massive and difficult-to-manage knowledge base, this machine takes in knowledge when it’s ready to know it and has a framework for understanding it. Thinking machines are no longer merely automatic formal systems (Haugeland, 1996, 11) because they are thinking for themselves. We have created a self-taught, organically learning agent of perception and action.

This has all emerged from the inadvertent development of what Fodor calls a “brain language” (Fodor) for the agent. It took many years to understand this, but when an agent can make up its own brain language and absorb information in a meaningful way, it knows its way around that stored data much better. These developments have even halted work on producing expert systems; today doctors can just tell machines what to tell patients when they seek guidance. This has taught the field of neuroscience that particular mental states can have drastically different physiological elements. Brain languages and brain structures have too much variance for any ‘mental state’ investigation across multiple brains.

Just as this robust processing robot has learned to walk, it has also learned to juggle multiple processes at once. Its exposure to the world has taught it to give computational precedence to the task at hand, but to keep a mindfulness of surroundings, moving objects, and conceptions of time. While maintaining a parallel processing system, it has also learned to create flexible categories of things in a connected way. In this way it has built its own connectionist neural net, just as it has written its own brain language. As teams across the world have analyzed the data structures of these machines as they learn and acquire experience, they’ve observed a strikingly connection-based system of learning. When one of these robots is taught about a new thing like a tomato, it instantly tries to relate similarly shaped objects to it. When the robot then learns of other vegetables, it categorically nestles that form and color into a connected system that facilitates future recognition. But this system does not maintain lists of relationships. This fact is better explained in McClelland and Rumelhart’s paper on distributed memory connectionist systems: “The model is not simply a categorizer or a prototyping device; rather, it captures the structure inherent in a set of patterns, whether it be characterizable by description in terms of prototypes or not.” (McClelland)
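A rough sketch of that idea (a simplified auto-associator in the spirit of McClelland and Rumelhart’s model, not their exact formulation; the feature patterns are invented): every experience is stored only as small adjustments to one shared set of connection weights, and the regularities across patterns, rather than any stored list, let the net fill in missing features.

    # Simplified distributed-memory auto-associator. Each pattern is stored
    # only as delta-rule adjustments to one shared weight matrix.
    import numpy as np

    def train(patterns, epochs=200, rate=0.05):
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for _ in range(epochs):
            for p in patterns:
                W += rate * np.outer(p - W @ p, p)   # delta rule per pattern
                np.fill_diagonal(W, 0.0)             # no self-connections
        return W

    # Three overlapping feature patterns; feature 3 always matches feature 1.
    patterns = np.array([[1, 1, 1, -1],
                         [1, -1, 1, 1],
                         [-1, 1, -1, 1]], dtype=float)
    W = train(patterns)

    probe = np.array([1.0, 1.0, 0.0, -1.0])   # feature 3 missing from input
    print(round((W @ probe)[2], 2))           # near 1.0: the net restores it

No pattern is stored anywhere as an entry to be looked up; the completion falls out of the structure the weights have absorbed.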

What cognitive scientists have learned from studying this machine’s method of discovery is this: evolution must have ingrained a connectionist system into our method of learning. It must also have ingrained a propensity for the emotions and mental states we have so much trouble explaining in cognitive terms. Hope, fear, and love have been reduced to biological concerns of the organism in question. Machines using the new learning language show primitive emotive states like frustration and pain when learning to climb stairs or upon falling. Describing these mental states and emotions in terms of neuroscience may come with a revolution in brain imaging, but that may be a long way off; we are a long way from getting any meaning out of specifically neuroscientific methods. For now, the creation of learning systems similar to us has proven very beneficial to our understanding of how our brains and emotions are manifested.

Works Cited:

Brooks, Rodney A. (1991) New Approaches to Robotics. Science, New Series, Vol. 253, No. 5025 (September 13, 1991), pp. 1227-1232.

Churchland, Paul M. and Churchland, Patricia S. (1990) Could a Machine Think? Scientific American, 262, pp. 32-37.

Churchland, Paul M. (2001) Matter and Consciousness. Cambridge, MA: MIT Press.

Dreyfus, Hubert (1995) Cognitivism Abandoned. In Peter Baumgartner and Sabine Payr (Eds.), Speaking Minds: Interviews with Twenty Eminent Cognitive Scientists. Princeton, NJ: Princeton University Press, pp. 71-84.

Fodor, Jerry A. (1975) The Language of Thought. Cambridge, MA: MIT Press.

Kosslyn, Stephen M. and Koenig, Olivier (1995) Wet Mind: The New Cognitive Neuroscience. New York: Free Press.

Laird, John, Newell, Allen, and Rosenbloom, Paul (1987) SOAR: An Architecture for General Intelligence. Artificial Intelligence, 33.

Marcus, Gary (2001) The Algebraic Mind. Cambridge, MA: MIT Press. Chapter 2, section 2.1.

McClelland, James L. and Rumelhart, David E. (1985) Distributed Memory and the Representation of General and Specific Information. Journal of Experimental Psychology: General, Vol. 114, No. 2, pp. 159-188.

Searle, John (1990) Is the Brain’s Mind a Computer Program? Scientific American, 262, pp. 26-31.
