The world of our experience consists at all times of two parts, an objective and a subjective part . . . The objective part is the sum total of whatsoever at any given time we may be thinking of, the subjective part is the inner 'state' in which the thinking comes to pass.
William James, Varieties of Religious Experience: A Study in Human Nature, p. 499
Abstract: Why is the problem of subjectivity so hard, as David Chalmers claims? This essay suggests that it becomes hard when we adopt an implausible, perfectionistic standard. In the last two decades the standard has come to be 'observer empathy' -- the ability to know what it is like to be a bat or another human. That makes understanding consciousness difficult indeed. Far more practical criteria are used every day in medicine and in scientific studies of consciousness, and traditional philosophy from Kant to James took a much more relaxed view of subjectivity. Once we adopt these more workable standards, subjectivity is revealed to involve a familiar concept, namely 'the self as observer' of conscious experiences. Contrary to some, this sense of self is conceptually coherent and well supported by hard evidence. For example, the 'left-hemisphere interpreter' in split-brain patients behaves as one such self. Given a modest and practical approach, we can expect to make progress toward understanding subjectivity.
Can human beings learn to understand conscious experience, even in its subjective aspect? Many analytic philosophers in this century have said no. David Chalmers is more optimistic, believing that human consciousness is understandable but that subjectivity presents a particularly hard problem. Chalmers takes Global Workspace theory as a prototype of a cognitive theory of consciousness, but raises the question whether such a theory can deal with subjectivity. (See Baars, 1983; 1988; and 1996) GW theory gives the most complete account to date of the interplay of conscious and unconscious processes in perception, imagery, action control, learning, attention, emotion, thought and motivation, problem-solving and language. These topics can all be usefully treated as types of information processing, and today we are discovering many of their brain correlates as well. Indeed, GW theory shows many striking points of convergence between brain, behavioural and experiential evidence.
David Chalmers endorses the central hypothesis of GW theory, namely that conscious contents become 'globally available' to many unconscious systems. The reader's consciousness of this phrase, for example, makes it available to interpretive systems that analyze its syntax and meaning, its emotional and motivational import, and its implications for thought and action. It appears therefore that even single conscious experiences have global consequences. Global availability is an information-processing claim about consciousness -- what Chalmers considers to be part of the 'easy' problem. Whether a scientific theory like this can deal with subjectivity is the central point at issue.
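To make the global-availability claim concrete, here is a deliberately minimal sketch in Python. It is not part of GW theory's formal apparatus; the specialist names and the broadcast function are illustrative inventions of mine. It shows nothing more than a single conscious content being handed to several otherwise independent specialist systems.

```python
# A minimal sketch of 'global availability': one content is broadcast to
# many independent specialist processors. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Specialist:
    """An unconscious specialist system that receives the broadcast."""
    name: str
    handle: Callable[[str], str]


def broadcast(content: str, specialists: List[Specialist]) -> None:
    """Make one conscious content globally available to every specialist."""
    for s in specialists:
        print(f"{s.name}: {s.handle(content)}")


if __name__ == "__main__":
    systems = [
        Specialist("syntax", lambda c: f"parsed {len(c.split())} words"),
        Specialist("meaning", lambda c: "interpreted the phrase"),
        Specialist("emotion", lambda c: "appraised its motivational import"),
        Specialist("action", lambda c: "weighed implications for action"),
    ]
    broadcast("conscious contents become globally available", systems)
```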
I would suggest that GW theory has a number of plausible implications for understanding subjectivity. The really significant distinction is not between inherently hard vs. easy problems, but between the contents of consciousness and what we intuitively think of as an observing self. 'Subjectivity' from this point of view corresponds to the sense of an observing self.
In the last ten years I have presented evidence for the proposition that we need a concept of self to fully understand consciousness (e.g. Baars, 1988; 1996). The notion of self has been criticized mercilessly in analytic philosophy, yet a great body of evidence from brain and behaviour speaks in its favour. I believe that we must deal with the self in an intellectually rigorous fashion if we are ever to understand the meaning of such humanly vital terms as subjectivity. This quest for understanding can be seen as a deeply humanizing enterprise, promising to bring us far beyond the mechanistic tendencies that have so vitiated life in this century (Baars, unpublished).
What makes the hard problem so hard, I suggest, is the criterion we adopt for subjectivity. By shifting criteria we can make the problem either easy or hard.
In traditional philosophy 'subjectivity' was a fairly well-understood idea. One can imagine a conversation between Immanuel Kant, William James, and Aristotle, for example, with a good deal of mutual agreement. Traditionally, subjectivity concerns the experiencing self. Thus Kant writes that 'It must be possible for the "I think" to accompany all of my (conscious) representations, for otherwise ... (they) would be nothing to me.' (italics added). And James, in the epigraph to this paper, tells us that 'the subjective part (of consciousness) is the inner "state" in which the thinking comes to pass.' That state, he writes, is what we generally mean by self. From Kant to James, subjectivity was not viewed as impossible to understand.
It is only in this century that subjectivity was first expelled from Anglo-American philosophy and then, decades later, reintroduced as a hard or even impossible problem. For that reason we need first to untangle the current philosophical sense of subjectivity and show that at bottom it has not changed. In recent years 'subjectivity' has come to be identified with Thomas Nagel's question, 'What is it like to be a bat?' In Nagel's view, the key to understanding whether bats or humans are conscious is to know what it is like to be a bat or a human. To know whether you, the reader, are conscious, I must know what it is like to be you. In more traditional language, Nagel demands 'observer empathy' as the criterion for consciousness. Chalmers writes, 'As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience.'
For empirical studies of consciousness, however, the empathy criterion is not helpful. In practice we use a much simpler standard. Neurologists who routinely evaluate patients with head injuries define consciousness in terms of waking EEG, the ability to answer questions, report perceptual events, show alertness to sudden changes in the environment, exercise normal voluntary control over speech and action, use memory, and maintain orientation to time, place, and self. These practical criteria are used all over the world to evaluate mental status in head injury cases. Physicians make life-or-death decisions on the basis of these observable events, and in practice this works very well. Very similar criteria are used in psychological and brain research. Thus medicine and science seem to agree with traditional philosophy that consciousness and subjectivity can be identified in practical ways.
"Pseudo-functionalism!"
The traditional philosophical concept of subjectivity is much more plausible, as shown by the words of William James and Immanuel Kant quoted above. The Oxford English Dictionary points out that 'subjectivity' originated in the concept of being a subject, including being the subject of a reigning king. Over time it evolved into a more general sense of being a person with certain traits. In psychological terms, subjectivity in this sense has to do with a sense of self. This meaning, I would suggest, is a theoretically deep yet workable sense of the term. It has a long and distinguished history in philosophy and psychology. And as usual, once we find a workable criterion, the way toward genuine insight becomes much clearer.
For example, there seems to be a close connection between the sense of subjectivity and what Michael Gazzaniga has called the 'left-brain interpreter', the part of the brain that maintains a running commentary about our experience. In split-brain patients, where transfer of information between the two hemispheres is blocked, the left side can be shown to maintain a narrative account of its reality that can be quite different from the right side's story. But the left-hemisphere system is clearly not the only 'self-system' in the brain. There is good evidence for a sensorimotor self, an emotional and motivational self probably represented in the right hemisphere, a social self-system, and perhaps an appetitive self. All these self-systems ordinarily work in reasonable coordination with each other, though they can be in conflict at times.
Notice that we need not know 'what it is like to be a split-brain patient' in order to come to reasonable conclusions about the left-side self system. We can simply ask the patient's left cortex, and it seems to give sensible answers. Given practical criteria for consciousness and subjectivity therefore, we can increase our understanding significantly.
The following points can be made on the basis of the evidence we have to date.
You are the perceiver, the actor and narrator of your experience. Every statement of personal experience in English refers to a personal pronoun, an I, as in 'I saw a pussycat', 'She believes murder is wrong', and 'He smelled a rat'. Unconscious and involuntary activities do not mandate such a connection with a self. 'We' do not acknowledge permanently unconscious knowledge as our own, and 'we' disavow responsibility for slips and unintended errors. They are not ours. Conscious events are invariably attributed to yourself. People routinely report having some definite but hard-to-specify sense of themselves in connection with conscious experiences. All this suggests that consciousness is generally accompanied by subjectivity.
Let us take an 'easy' claim about consciousness, one that is understandable in information-processing terms. For example, we know that if you can experience the letter 'p' you will be able to discriminate it from 'q', 'b' and 'd'. The ability to discriminate is taken by David Chalmers to be an easy problem, because we can easily imagine a robot that can do the task. Robots can evidently do it without consciousness, leading many philosophers to conclude that consciousness is not a necessary condition for discrimination between perceptual events. However, scientifically this is an odd argument indeed, because empirically we know that many things we do to decrease conscious access to the letter 'p' will also change the ability to discriminate between it and other letters. We can decrease your conscious access to 'p' by means of distraction, overloading immediate memory, boredom, fatigue and a dozen other factors. As you become less clearly conscious of 'p', your ability to discriminate between it and other letters tends to decline precipitously. Empirically, therefore, consciousness appears to be a necessary condition for discrimination, at least in creatures we believe to be conscious.
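As a purely illustrative toy, and not an empirical model, the following Python sketch captures the logic of that observation: if we crudely model reduced conscious access as noise on letter features, discrimination of 'p' from 'q', 'b' and 'd' degrades accordingly. The feature codes and noise levels are arbitrary assumptions of mine.

```python
# Toy demonstration: degrading 'conscious access' (modelled here as feature
# noise) degrades letter discrimination. Entirely illustrative.
import random

# Hypothetical feature codes: each letter is a pair of binary features.
LETTERS = {"p": (0, 1), "q": (1, 1), "b": (0, 0), "d": (1, 0)}


def discriminate(target: str, noise: float, trials: int = 1000) -> float:
    """Proportion of trials on which the target letter is correctly identified."""
    correct = 0
    for _ in range(trials):
        # Each feature is misread with probability `noise` (our crude stand-in
        # for distraction, memory load, fatigue, and so on).
        seen = tuple(f if random.random() > noise else 1 - f
                     for f in LETTERS[target])
        guess = [k for k, v in LETTERS.items() if v == seen]
        correct += guess == [target]
    return correct / trials


if __name__ == "__main__":
    random.seed(0)
    for noise in (0.0, 0.2, 0.4):  # more 'distraction', less conscious access
        print(f"noise {noise:.1f}: accuracy {discriminate('p', noise):.2f}")
```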
But as we pointed out above, consciousness is always accompanied by subjectivity. It appears therefore that far from being separate from information-processing functions, the 'hard' problem interpenetrates what are said to be easy problems!
Take the phenomenon of limited conscious capacity, the fact that we can only be conscious of one consistent percept or concept at any given moment. Consider the ambiguous word 'focus': try, for example, to be aware of two separate meanings of that word at the same time. The evidence is strong that humans cannot keep two inconsistent ideas in mind at the same time, though we can often find metaphors and images that unify the two meanings. Nor can we see two perceptual interpretations of an ambiguous figure at once, nor hear two streams of conversation at a cocktail party.
Limited conscious capacity implies that different conscious contents will interfere with each other. Try, for example, to read the following few sentences while keeping in mind three numbers such as 92, 14, and 6. Interference is understandable in Global Workspace theory as competition for a small working memory, the stage of the theater of the mind, called the global workspace. It is rather well understood in modern information-processing theories. But it also happens to correspond with your personal experience! That is, as soon as you try to keep the numbers listed above in immediate memory while reading, you also lose conscious access to the meaning of any sentence you try to read at the same time. There is clearly some sort of causal interaction between your personal experience and our information-processing account of limited-capacity interference. Further, the self-systems described above, like the left-brain narrative interpreter, clearly respond to conscious information. It seems therefore that there must be causal interactions between the 'hard' and the 'easy' problems. But how can that be, if the hard problem is so different from the information-processing account?
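A minimal sketch of this competition, assuming nothing beyond a capacity-one workspace, might look as follows in Python; the class and the candidate contents are illustrative placeholders of my own, not the theory's formal machinery. Whenever one content wins access, the other is necessarily excluded at that moment, which is the essence of limited-capacity interference.

```python
# Illustrative capacity-one workspace: candidate contents compete, and only
# one can be conscious at a time. All names are hypothetical.
import random


class GlobalWorkspace:
    """A capacity-one workspace: a new winning content displaces the old one."""

    def __init__(self) -> None:
        self.content = None

    def compete(self, candidates):
        # Only one coherent content gains access at any moment;
        # the losing candidates remain unconscious for now.
        self.content = random.choice(candidates)
        return self.content


if __name__ == "__main__":
    random.seed(1)
    workspace = GlobalWorkspace()
    for moment in range(5):
        winner = workspace.compete(["rehearse 92, 14, 6", "read the sentence"])
        print(f"moment {moment}: conscious content -> {winner}")
```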
While conscious contents and a sense of self generally go together, that does not mean that they are identical. We can maintain what seems to be a pretty stable sense of self while shopping in the supermarket or reading this sentence, even though those are different conscious experiences. But we can also keep conscious contents stable and change our sense of self. That is what seems to happen when we become absorbed in a fairy tale as children, actually identifying with the characters. Years later we may read the same story again without such identification, though the conscious stimulus is the same. There are numerous other examples of such changes in self independent of conscious contents (Baars, 1988, and in press). In technical jargon, conscious contents and self may be orthogonal constructs, which always coexist but do not necessarily covary. In this same sense all objects have size and shape, but size and shape do not necessarily covary.
Daniel Dennett has phrased our common intuition about self and consciousness as follows: 'That of which I am conscious is that to which I have access, or (to put the emphasis where it belongs), that to which I have access' (Dennett, 1978).
'I' have access to perception, thought, memory, and body control. Each of us would be mightily surprised if we were unable to gain conscious access to some vivid recent memory, some sight, smell or taste in the immediate world, or some well-known fact about our own lives such as our own name. The 'self' involved in conscious access is sometimes referred to as the self as observer. William James called it the knower, the 'I'.
One way to think of 'self' is as a framework that remains largely stable across many different life situations (Baars, 1988, and in press). The evidence for 'self as stable context' comes from many sources, but especially from the effects of deep disruptions of life goals. Contextual frameworks are after all largely unconscious intentions and expectations that have been stable so long that they have faded into the background of our lives. We take them for granted, just as we take our health and limbs for granted. It is only when those assumptive entitlements are lost, even for a moment, that the structure of the self seems to come into question. Losing a loved friend may be experienced as a great gap in oneself. 'A part of me seems to be gone', is a common way of expressing such a gap. It helps to take this common tragedy seriously as a basic statement about the self in human psychology.
Oxford philosopher Gilbert Ryle famously pointed out an apparent contradiction in the everyday notion of 'the self as observer'. He thought it made no sense to postulate an observing self because it does not explain anything at all; it merely moves the job of explanation to another level. If we had an observing self contemplating the contents of consciousness, he argued, how would we explain the self itself? By another observer inside the inner self? That would lead to an infinite regress of observing selves, each looking into the mind of the preceding one, little imaginary men sitting inside our heads observing each other's observations. The observing self -- the homunculus or little human -- was said to be a fallacy of common sense. Ryle's arguments against the 'ghost in the machine' persuaded countless scientists and philosophers that 'the self' is a snare and a delusion.
The only trouble with Ryle's impossibility proof is that some notion of self is indispensable and not noticeably problematic in daily life, and indeed in much contemporary psychology and brain science. Ryle's impossibility proof applies only if the concept of self is not decomposed into cognitive or brain entities that are better understood than the word 'self'. As Daniel Dennett has written, 'Homunculi are bogeymen only if they duplicate entire the talents they are rung in to explain. If one can get a team or committee of relatively ignorant, narrow-minded, blind homunculi to produce the intelligent behaviour of the whole, this is progress.' (Dennett, 1978, p. 123).
Consider William James' 'self as observer'. It is hard to see anything impossible about it if we think of observers as pattern recognizers. Many brain systems 'observe' the output of another, and we now know a great deal about pattern recognizers in the brain. There seems to be plentiful brain and psychological evidence regarding self-systems.
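In the spirit of Dennett's committee remark, here is a small Python illustration, entirely hypothetical in its details, of an 'observer' decomposed into narrow pattern recognizers, none of which duplicates the talent it is brought in to explain.

```python
# Toy 'committee' of narrow pattern recognizers standing in for an observer.
# The detectors and their names are illustrative inventions.
from typing import Dict, List


def rise_detector(signal: List[int]) -> bool:
    # 'Recognizes' an increase somewhere in the signal, and nothing else.
    return any(b > a for a, b in zip(signal, signal[1:]))


def repetition_detector(signal: List[int]) -> bool:
    # 'Recognizes' an immediate repetition, and nothing else.
    return any(a == b for a, b in zip(signal, signal[1:]))


def observing_committee(signal: List[int]) -> Dict[str, bool]:
    """The 'observer' is nothing over and above the committee's narrow reports."""
    return {
        "rise": rise_detector(signal),
        "repetition": repetition_detector(signal),
    }


if __name__ == "__main__":
    print(observing_committee([1, 1, 3, 2]))  # {'rise': True, 'repetition': True}
```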
All that is not to deny the existence of genuine mysteries about self. But there seem to be aspects of self that are not beyond human understanding. If they were, we would have an awfully difficult time dealing with ourselves or other people. As we understand more of the details of the cortical self system, Rylean doubts may begin to sound more and more dated.
Oddly enough, in the sensorimotor area on top of the cortex there are four maps of a little upside-down person, distorted in shape, with every bit of skin and muscle represented in detail. This upside-down map is called the sensorimotor homunculus, the little human. The nervous system abounds in such maps, some of which appear to serve as 'self systems', organizing and integrating vast amounts of local bits of information. The anatomy of the brain looks like a physical refutation of Ryle's position.
Contrary to widespread belief, many aspects of consciousness are quite knowable, witness productive research in selective attention, perception, psychophysics, protocol analysis, spontaneous thought monitoring, imagery, neuropathology of coma and stupor, and so on. All these efforts meet the most widely used operational criterion of conscious experience, namely verifiable voluntary report of some event described as conscious by a human observer: statements like 'Mommy, airplane!', 'She smelled a rat,' or 'My stomach hurts' fit the bill, but also more abstract sentences like 'I was conscious of her painful dilemma,' and 'I just realized how to solve Fermat's last theorem.'
The reader can consult his or her own experience to see whether these conscious events are accompanied by a sense of subjectivity, of selfhood. But is it real consciousness, with real subjectivity? What else would it be? A clever imitation? Nature is not in the habit of creating two mirror-image phenomena, one for real functioning, the other just for a private show. The 'easy' and 'hard' parts of mental functioning are merely two different aspects of the same thing.
References
Baars, B.J. (1996a), In the Theater of Consciousness: The Workspace of the Mind (NY: Oxford University Press).
Baars, B.J. (1996b), 'A thoroughly empirical approach to consciousness: contrastive analysis', in Consciousness in Science and Philosophy, ed. N. Block, O. Flanagan and G. Guzeldere (Cambridge, MA: MIT Press).
Baars, B.J. (unpublished), 'Consciousness is humanizing: a scientific program for the new millennium'.
Baars, B.J. (1994a), 'A thoroughly empirical approach to consciousness', Psyche: An International Journal of Consciousness Research, 1 (2).
Baars, B.J. (1993b), 'How does a serial, integrated and very limited stream of consciousness emerge from a nervous system that is mostly unconscious, distributed, and of enormous capacity?', in CIBA Symposium on Experimental and Theoretical Studies of Consciousness, ed. G. Bock & J. Marsh (London: John Wiley and Sons), pp. 282-90.
Baars, B.J. (1988), A Cognitive Theory of Consciousness (NY: Cambridge University Press).
Baars, B.J. & McGovern, K. A. (1994b), 'Consciousness', in Encyclopedia of Behavior, ed. V.S. Ramachandran (NY: Academic Press).
Chalmers, D. (1995), 'Facing up to the problem of consciousness', Journal of Consciousness Studies, 2 (3), pp. 200-19.
Dennett, D. C. (1978), Brainstorms (NY: Bradford Books).
Nagel, T. (1974), 'What is it like to be a bat?', Philosophical Review, 83 (4), pp. 435-50.
Spitzer, R. L. (Ed.) (1979), Diagnostic and Statistical Manual of Mental Disorders, DSM III (Washington, DC: American Psychiatric Association).