ABSTRACT: Can phenomenal consciousness be given a reductive natural explanation? Many people argue not. They claim that there is an ‘explanatory gap’ between physical and/or intentional states and processes, on the one hand, and phenomenal consciousness, on the other. I reply that, since we have purely recognitional concepts of experience, there is indeed a sort of gap at the level of concepts; but this need not mean that the properties picked out by those concepts are inexplicable. I show how dispositionalist higher-order thought (HOT) theory can reductively explain the subjective feel of experience by deploying a form of ‘consumer semantics’. First-order perceptual contents become transformed, acquiring a dimension of subjectivity, by virtue of their availability to a mind-reading (HOT generating) consumer system.
My topic in this chapter is whether phenomenal consciousness can be given a reductive natural explanation. I shall first say something about phenomenal – as opposed to other forms of – consciousness, and highlight what needs explaining. I shall then turn to issues concerning explanation in general, and the explanation of phenomenal consciousness in particular.
1 Phenomenal consciousness
Phenomenal consciousness is a form of state-consciousness: it is a property which some, but not other, mental states possess. More specifically, it is the property which mental states have when it is like something to undergo them (Nagel’s famous phrase, 1974). Put differently, phenomenally conscious states have distinctive subjective feels; and some would say: they have qualia (I shall return to this terminology in a moment). Phenomenal consciousness is to be distinguished from creature-consciousness, on the one hand (this is the property which creatures have when they are awake, or when they are aware of properties of their environment or body); and also from other varieties of state-consciousness, on the other hand (including a number of forms of functionally definable access-consciousness, where states may be said to be conscious by virtue of their accessibility to reasoning, or to verbal report, say).
Most people think that the notion of phenomenal consciousness can only really be explained by example. So we might be asked to reflect on the unique quality of the experience we enjoy when we hear the timbre of a trumpet-blast, or drink in the pink and orange hues of a sunset, or sniff the heady sweet smell of a rose. In all of these cases there is something distinctive which it is like to undergo the experience in question; and these are all cases of states which are phenomenally conscious. As Block (1995) puts it: phenomenal consciousness is experience.
Explanations by example look somewhat less satisfactory, however, once it is allowed – as I think it has to be – that there are a good many types of non-conscious experience. A variety of kinds of neuropsychological and psychological evidence suggests that we possess at least two (perhaps more) functionally distinct visual systems, for example – a conceptualising system whose contents are conscious (realised in the temporal lobes of the brain), and a sensorimotor system charged with the detailed on-line control of movement, whose contents are not conscious (and which is realised in the parietal lobes). Thus movement control is possible for those who are blindsighted, or whose temporal lobes are damaged and who are otherwise blind; such control can take place more swiftly than conscious experience in the normally sighted; and sensorimotor control is not subject to the various distinctive illusions which affect our conscious experience. And while it might be possible for someone to claim that both sets of experiences are phenomenally conscious (although only the products of the conceptualising system are access-conscious), this is a very unattractive option. It is very hard to believe that within blindsight subjects, for example, there are phenomenally conscious visual experiences which the subject cannot be aware of. At any rate, for the purposes of the discussion here I shall assume that only states which are access-conscious can be phenomenally conscious.
If there can be experiences which are not conscious ones, then plainly we cannot explain the idea of phenomenal consciousness by identifying it with experience. Perhaps what we can say, however, is that phenomenally conscious events are those for whose properties we can possess introspective recognitional capacities. And then the citing of examples should best be understood as drawing our attention, introspectively, to these properties. Phenomenally conscious states and events are ones which we can recognise in ourselves, non-inferentially or ‘straight off’, in virtue of the ways in which they feel to us, or the ways in which they present themselves to us subjectively.
Note that this proposal need not be construed in such a way as to imply that phenomenally conscious properties depend for their existence upon our recognitional capacities for them – that is, it need not imply any form of higher-order thought (HOT) account of phenomenal consciousness. For it is the properties recognised which are phenomenally conscious; and these need not be thought to depend upon HOT. So this characterisation of the nature of feel does not beg any questions in favour of the sort of dispositionalist HOT theory to be sketched here and defended in detail in my 2000. First-order theorists such as Dretske (1995) and Tye (1995), as well as mysterians like McGinn (1991) and Chalmers (1996), can equally say that phenomenally conscious properties (feels) include those properties for which we possess introspective (second-order) recognitional capacities. For they can maintain that, although we do in fact possess recognitional concepts for these properties, the properties in question can exist in the absence of those concepts and are not in any sense created or constituted by them, in the way that (as we shall see later) dispositionalist HOT theory maintains.
Note, too, that this talk of what an experience is like is not really intended to imply anything relational or comparative. Knowing what a sensation of red is like is not supposed to mean knowing that it is like, or resembles, some other experience or property X. Rather, what the experience is like is supposed to be an intrinsic property of it – or at least, it is a property which strikes us as intrinsic (see below), for which we possess an immediate recognitional capacity. Here the point converges with that made in the previous paragraph: the non-metaphorical substance behind the claim that our phenomenally conscious states are ones which are like something to possess, is that such states possess properties for which we can have recognitional concepts.
An important word about terminology before we proceed any further: many philosophers use the term ‘qualia’ liberally, to refer to those properties of mental states (whatever they may be) in virtue of which the states in question are phenomenally conscious. On this usage ‘qualia’, ‘subjective feel’ and ‘what-it-is-likeness’ are all just notational variants of one another. And on this usage, it is beyond dispute that there are such things as qualia. I propose, myself, to use the term ‘qualia’ much more restrictedly (as some other writers use it), to refer to those putative intrinsic and non-representational properties of mental states in virtue of which the latter are phenomenally conscious. On this usage, it is not beyond dispute that there are such things as qualia. On the contrary, it will be possible to be a qualia-irrealist (denying that there exist any intrinsic and non-representational properties of phenomenally conscious states) without, of course, denying that there is something which it is like to smell a rose, or to undergo a sensation of red or of pain.
What primarily needs to be explained, then, are the subjective feels of phenomenally conscious states. Indeed, this is the defining feature of phenomenal consciousness. Since there is, by definition, something which it is like to undergo a phenomenally conscious state, anything which doesn’t explain this isn’t really an explanation of phenomenal consciousness, but will be (at best) an explanation of something else – some form of access-consciousness, perhaps. Second, phenomenally conscious states at least seem to their subjects to possess qualia – that is, they seem to have properties which are intrinsic and non-relational. I shall argue that this feature of phenomenal consciousness is best explained away by a would-be naturaliser. We should – if we can – claim that there are no qualia, while explaining how people can easily come to believe that there are.
The reason why anyone wishing to provide a naturalistic explanation of phenomenal consciousness should seek to explain away our temptation to believe in qualia, rather than accepting and directly explaining their existence, is that otherwise we shall be forced to look for some sort of neural identity, or neural realisation, by way of an explanation. For there is no question but that intentional contents, causal roles, and computational processes are all relational in nature. So if there were any intrinsic, non-relational, properties of conscious experience, our only option would be to seek an explanation at the level of neurology. But this would be to make the problem of phenomenal consciousness well nigh insoluble. For it is very hard to see how, even in principle, further facts about the neurological events in our brains could explain why those very events have a subjective feel or a what-it-is-likeness to them (McGinn, 1991).
Unfortunately, word has got around the philosophical and scientific communities that the problem of phenomenal consciousness is the problem of explaining how subjective feel is instantiated in the brain. And most people assume that there is a race to find a neurological explanation of the subjective properties of experience. But this common perception is actually impeding the search for a solution. For it is bad scientific method to try to jump over too many explanatory levels at once. And between phenomenal consciousness and neuroscience there are a number of distinct levels of scientific enquiry, including at least the intentional and the computational levels. It is rather as if the problem of life had come to be seen as the problem of explaining the distinctive properties of living things in terms of the indeterministic principles governing sub-atomic wave-particles. Cast in those terms, the problem of life, too, would look well nigh insoluble! And in this case, of course, the correct approach – now more or less successful – is to seek an explanation in terms of biochemistry. Similarly, then, in the case of phenomenal consciousness: our best strategy is to go for an explanation in terms of some combination of causal role and intentional content, if we can. And then that means denying the real existence of qualia.
2 Reductive explanation and the ‘explanatory gap’
What is it for something, in general, to be given a reductive natural explanation? A property or event is explained when we can show how suitable arrangements or sequences of lower-level properties or events (which do not themselves involve or presuppose the target phenomena) would constitute just such a property or event. So life is explained when we can see how the right sequences of biochemical events would give rise to such phenomena as reproduction, metabolism, and other distinctive properties of living things. Similarly for speciation when we provide an account in terms of natural selection. And so on. In ontological terms, reductive explanation requires at least metaphysical supervenience of the explained on the explaining. There must be no possible world where the lower-level facts together with all relevant laws of nature remain just as they are, but the explained facts are different or absent. Otherwise something would remain unexplained – namely, why in the actual circumstances the higher-level phenomena are not different or absent.
Chalmers (1996) argues that we can see in advance that phenomenal consciousness cannot be reductively explained – not even into intentional and/or functional terms. According to Chalmers, our concept of any given higher-level process, state, or event, specifies the conditions which any reductive explanation of that phenomenon must meet. For example, our concept life contains such notions as reproduction and energy production by metabolic processes, which are amongst the functions which any living thing must be able to perform. And then a reductive explanation of life will demonstrate how appropriate biochemical changes and processes can constitute the performance of just those functions. The phenomenon of life is explained when we see just how those lower-level biochemical events, suitably arranged and sequenced, will instantiate just those functions which form part of our concept living thing.
According to Chalmers, our concepts of chemical, geological, biological, psychofunctional, intentional, etc. facts are broadly functional ones. Reductive explanations can then show how suitable arrangements of lower-level facts would constitute the execution of just those functions. But our concepts of phenomenally conscious states are not functional, but purely recognitional – we can just recognise the feel of pain, or of an experience of red, whenever we have it. And the conceivability of zombie worlds and/or inverted feel worlds shows that phenomenal consciousness does not supervene metaphysically on lower-level facts. If we can conceive of states which are functionally and intentionally identical to our conscious experiences while being phenomenally distinct, then we cannot be conceptualising the felt aspect of those experiences in terms of functions and intentional contents. Rather our concepts, here, are presumably bare recognitional ones, consisting in our possession of immediate recognitional capacities for phenomenal states of various kinds. And if there can be worlds microphysically identical to this but in which phenomenal consciousness is absent or inverted, then phenomenal consciousness does not supervene on the lower-level phenomena in the way necessary for reductive explanation.
It is this which sets up the ‘explanatory gap’ between neurological or cognitive functions, on the one hand, and phenomenal consciousness on the other. Chalmers claims, indeed, that we can see in advance that any proposed reductive explanation of phenomenal consciousness into neurological, or computational, or intentional terms is doomed to failure. For what such ‘explanations’ provide are mechanisms for instantiating certain functions, which must fall short of the feel possessed by many types of conscious state. Since we do not conceptualise our conscious states in terms of function, but rather in terms of feel, no explanations of function can explain them. Hence the existence of the ‘hard problem’ of phenomenal consciousness, rendering the latter irredeemably mysterious.
Now, I agree that reductive explanations normally work by specifying a lower-level mechanism for fulfilling some higher-level function. And I agree that we have available purely recognitional concepts of phenomenally conscious states. So no explanation of phenomenal consciousness can be immediately cognitively satisfying, in the sense of meshing with the way in which phenomenally conscious states are conceptualised. But explanation should be about properties, facts, and events as worldly phenomena, not about the way in which we conceptualise those things. While the ‘explanatory gap’ is of some cognitive significance, revealing something about the manner in which we conceptualise our experiences, it shows nothing about the nature of those experiences themselves. Or so, at any rate, I maintain.
Naturalists should have a ‘thick’ conception of facts, properties and events as worldly, concept-independent, entities. From the perspective of naturalism we should believe, both that there are real properties belonging to the natural world, and that which properties there are in the world is an open question, which cannot be read directly off the set of concepts which we happen to employ. And the question of which properties are immanent in the natural world is a question for science to answer. Moreover, we should hold that the nature of the properties picked out by our concepts is a matter for discovery (just as we discovered that our concept water picks out a property which is none other than H2O); and that two or more concepts may turn out to pick out one and the same property.
If we are scientific realists then we think, not only that there is a mind-independent reality whose nature and causal operations science attempts to uncover, but also that science is gradually uncovering (or at least getting closer to) the truth about that reality (Kitcher, 1993). So it is to science that we should look to discover the set of naturally existing properties. If we idealise to the point at which we achieve a completed science, then we can say that the set of natural properties are the ones referred to by the predicate-terms in the various laws of that science. Or putting the point epistemically, we can say that whenever we have reason to believe in the truth or approximate truth of a scientific theory, then we also have reason to believe in the existence of the properties picked out by the property-terms employed by that theory.
We can, then, allow that we have purely recognitional concepts for some of our phenomenally conscious mental states. But it is not the existence of such concepts which particularly needs explaining (although this is worth explaining, and can be explained – see below). Rather, our target should be the properties which those concepts pick out. We may be able to specify the nature of those properties in such a way as to make it clear, not only how they can be available to immediate recognition, but also why they should involve the characteristic properties of subjectivity. A reductive explanation of those properties may still be possible, even though the concepts which we use to pick out those properties may not be functionally defined.
Consider, for comparison, some other domain in which people can come to possess purely recognitional concepts (or at least concepts which are nearly so – see the paragraphs which follow). It is said, for example, that people can be trained to sex very young chicks entirely intuitively by handling them, without having any idea of what they are doing, or of the basis on which they effect their classifications. So suppose that Mary is someone who has been trained to classify chicks into As and Bs – where the As are in fact male, and the Bs are in fact female – but without Mary knowing that this is what she is doing, and without her having any idea of what it is about the As which underpins recognition.
Then we ask Mary, ‘Can you conceive of a world which is micro-physically identical with our own, except that the chicks which are As in this world are Bs in that, and vice versa?’ If A really does express a purely recognitional concept for Mary – if she really has no beliefs at all about the nature of A-hood beyond the fact that some chicks have it – then she should answer ‘Yes’. For all she then has to imagine, is that she is confronted with a chick exactly like this A-chick in all micro-physical respects, but that it is one which evokes a recognitional application of the concept B. Plainly Mary should not – if she is sensible – conclude from this thought-experiment that A-hood is not a physical or functional property of the chicks. And if she did, she would reason fallaciously. For as we know, the property picked out by her recognitional concept is in fact the property of being male.
It is unlikely, of course, that Mary will have no beliefs at all about the nature of A-hood. She will probably at least believe that A-hood is a perceptible property of the chicks. And if, like us, she believes that perception is a causal process, then she must believe that instances of A-hood can have some sort of causal impact upon her sense-organs. These beliefs may well lead her to believe that the property of A-hood is somehow or other constituted by physical facts about the chicks, and so to reject the possibility of a world where all micro-physical facts remain the same but A-hood and B-hood are reversed. But then the only differences here from recognitional concepts of feel, are (first) that many of us may have no beliefs about the causal nature of introspective recognition. And (second) even if we do believe that introspection is causally mediated, we may lack any beliefs about the nature of the introspective process which imply physicality, in the way that we do believe that outer perception of the properties of physical objects requires those properties to have physical effects upon our sense-organs.
The morals of this example for phenomenal consciousness should be clear. Possessing purely recognitional concepts of feel, we can deploy those concepts in thought experiments in ways which are unconstrained by the physical or functional facts. But nothing follows about the non-physical, non-functional, nature of the properties which those concepts pick out. So although we can conceive of worlds in which all the micro-physical facts remain as they are, but in which phenomenal consciousness is different or absent, it may be that there are really no such worlds. For it may be that phenomenal consciousness is constituted by some physical or functional fact, in which case there are no possible worlds where the facts of consciousness can be different while the constituting facts remain the same.
So much by way of ground-clearing: phenomenal consciousness consists in introspectively-recognisable properties of subjective feel and what-it-is-likeness, which many are tempted to think are intrinsic and non-relational; but there is no reason of principle why such properties should not be reductively explained. I turn, now, to provide just such an explanation. My thesis is that the properties involved in phenomenal consciousness are successfully reductively explained by dispositionalist higher-order thought (HOT) theory. I shall make no attempt to review the alternatives, or to contrast my proposal with others on the market. In the space available to me here, I shall concentrate just on explaining my own positive proposal, and on displaying some of its virtues.
3 Dispositionalist higher-order thought theory
According to dispositionalist HOT theory, phenomenally conscious states consist in analog intentional contents held in a special-purpose short-term memory store in such a way as to be available to a variety of down-stream conceptual systems (including various systems for belief-formation, and for practical reasoning), crucially including a ‘mind-reading’ or ‘theory of mind’ system capable of HOTs about those very contents. The architecture of the theory is represented in figure 1. The remainder of this section will be devoted to elucidating and commenting on this architecture, before we turn to consider the explanatory potential of the theory in the sections which follow.
Figure 1: dispositionalist HOT theory – inserted about here
On this account, perceptual contents are regularly passed to two or more short-term memory stores, C (conscious) and N (non-conscious), to be integrated with the subject’s goals in the control of action. C itself is defined, inter alia, by its relation to HOTs – any of the contents of C being apt to give rise to a HOT about itself, should circumstances (and what is going on elsewhere in the system) demand. This allows us to retain our belief in the richness of conscious experience (contra Dennett, 1991) without making outrageous demands on the mind-reading system which generates HOTs, since the account imposes no particular limit on the amount of information held in C at any one time. Certainly the contents held there can have a degree of richness and detail which far outstrips our powers of conceptualisation and description, just as intuition suggests.
The model is consistent with, and partly motivated by, the evidence that our perceptual faculties sub-divide into a number of functionally distinct perceptual sub-systems, one of which provides a set of representations for conceptualisation and decision making, and the other of which feeds a different set of representations to guide our detailed movements (e.g. Milner and Goodale, 1993, 1995). So one set of percepts is available to be integrated with a variety of action schemas to guide movement, but is neither conscious nor available to conceptual thought; whereas the other set of percepts is available to a variety of belief-forming and practical reasoning systems, and is conscious, but these are not the percepts which guide the details of our movements on-line. Just such a bifurcation of cognitive and sensorimotor perceptual systems is found in many other creatures besides ourselves. But according to dispositionalist HOT theory it had to wait on the evolution of a HOT-wielding, mind-reading module – as one of the down-stream consumer systems for the contents of the conceptualising system – in order for the contents of C to become phenomenally conscious.
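The functional architecture just described can be rendered as a toy computational sketch. Everything below is illustrative scaffolding of my own: the store labels C and N come from the theory, but the class names (`Percept`, `ShortTermStore`, `MindReadingSystem`, `MotorControl`) are hypothetical, and nothing here is offered as a claim about how such an architecture would actually be implemented in a mind or brain. The one substantive point the sketch encodes is the dispositionalist one: what matters is a content's *availability* to a HOT-generating consumer, not the actual production of any HOT.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """An analog perceptual content, crudely modelled as a fine-grained
    magnitude plus any concepts the content is imbued with."""
    magnitude: float
    concepts: tuple = ()

class ShortTermStore:
    """A short-term memory store whose contents are available to
    whatever consumer systems are wired to it down-stream."""
    def __init__(self, consumers):
        self.consumers = consumers
        self.contents = []

    def receive(self, percept):
        self.contents.append(percept)

class MindReadingSystem:
    """A consumer system capable of higher-order thoughts (HOTs)
    about the perceptual contents it consumes."""
    def hot_about(self, percept):
        # A HOT is a thought about the experience itself: 'it seems F to me'.
        return f"I am having an experience which seems {percept.concepts or percept.magnitude}"

class MotorControl:
    """A consumer for the non-conscious sensorimotor store N,
    using the fine-grained content for on-line movement guidance."""
    def guide_movement(self, percept):
        return percept.magnitude

mind_reader = MindReadingSystem()
C = ShortTermStore(consumers=[mind_reader])     # conscious store
N = ShortTermStore(consumers=[MotorControl()])  # non-conscious store

def is_phenomenally_conscious(store):
    # Dispositionalist criterion: contents count as phenomenally conscious
    # by virtue of their availability to a HOT-generating consumer,
    # whether or not any HOT is ever actually tokened.
    return any(isinstance(c, MindReadingSystem) for c in store.consumers)
```

On this toy criterion `is_phenomenally_conscious(C)` holds and `is_phenomenally_conscious(N)` does not, even if the mind-reading system never in fact generates a HOT about any particular content; and no limit is placed on how many percepts C may hold at once, matching the point about richness made above.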
The contents of C, while perhaps being imbued with concepts (often or always), also involve representations more fine-grained than any concept. These representations are analog ones (or at least, they are analog in relation to the containing concepts – see below). To see the intended contrast here, think of the difference between a digital clock, on the one hand, and the traditional analog ‘handed’ variety, on the other. On the face of the former, time is represented in discrete steps (one for each minute which elapses, say); on the face of the latter, the passing minutes are represented continuously, without discrete steps: the hands just move continuously around. Now strictly speaking, properties are only analog if – like length or movement – they admit of continuous variation; so that between any two such properties there is always a third. This might seem to present a problem for the account, since the processes subserving perception are almost certainly not continuous but discrete – after all, any given brain-cell is either firing or at rest at any given moment. But we can in fact introduce a relativised variant of the same notion, saying that representations are analog relative to a certain conceptual repertoire if they admit of significantly more variations than there are concepts to classify them.
Some people insist that perceptual content is non-conceptual (e.g. Tye, 1995). In this connection Peacocke (1992) introduces the idea of what he calls ‘scenario content’, which is to consist of analog representations of the ways in which the space immediately surrounding the perceiver is filled, but without these filled spaces being categorised into objects or kinds. Now, I suspect that one aspect of this proposal, at least, is mistaken – namely, the claim that perception does not represent discrete objects. Data from infancy studies suggest that quite young infants have a firm grip on simple physical/mechanical causal principles, with expectations about what can move, and how (Sperber et al., 1995). So I suspect that from the start these ways-of-filling-space are seen as dividing into objects which can move in relation to one another. But I am happy to allow that there may be a stage early in development (and perhaps also in evolution) when no concepts are yet applied to the represented fillings-of-space. Indeed, adults, too, may sometimes have perceptions which are almost wholly non-conceptual in content. Think, for example, of someone from a hunter-gatherer tribe who is introduced into some hi-tech scientific laboratory. She may literally have no idea what she is seeing – just surfaces and filled shapes and potentially moveable objects.
Normal adult perceptions are not like this, however. I do not first experience a distribution of filled spaces around me, and then come to believe that these are tables, chairs, people, and television-sets. Rather, I see myself as surrounded by such familiar objects. Indeed, a variety of considerations suggest that perceptual states are normally imbued with concepts. To mention just one: perceived similarity spaces can undergo a dramatic shift as a result of concept learning. This has been demonstrated experimentally by psychologists (Lucy, 1992; Goldstone, 1994; Andrews et al., 1999). But the same point is also familiar to common sense. When I had my first job in the wilds of Scotland, for example, there was little else to do but take up bird-watching on the estuary where we lived. At first I just saw crowds of little grey birds on the beach, but I later came to see the beach as populated by plovers, knots, dunlins and redshanks. As a result of concept-learning, the differences between the birds came to leap out at me in a phenomenologically salient way; I saw them as distinct. It soon became barely intelligible to me how I could ever have confused a plover with a dunlin, they looked so different.
So I say that the contents of C are analog, but normally imbued with concepts; whereas beliefs are wholly conceptual, or digitally ‘chunked’. This way of drawing the percept/belief distinction seems to fit the phenomenology quite well. What I perceive is presented to me under concepts (I see a car, or a person, or Mary), but I am always aware of more subtle variations than I have concepts for. For example, imagine you are looking at a tree whose leaves are being shifted in the breeze. What you see comes to you imbued with the concepts tree and leaf; but the subtly shifting pattern of motion, and the precise shape which the tree outlines against the sky, are things for which you have no concepts. Nevertheless they are part of what is represented, and you can distinguish subtle variations in them.
Note that this already puts us in position to explain one of the puzzling features of phenomenal consciousness, namely its supposed ineffability. For any analog representation will be ineffable – in a sense – in relation to the concepts used to describe its content. For example, my visual system delivers representations of colour which are analog in the sense that they allow a seemingly-smooth spectrum of only-just-distinguishable shades of colour to be represented. My colour concepts are relatively few by comparison. Then any particular shade will be discriminable from its nearest neighbours; but the difference will be indescribable – it is a difference which will slip through the mesh of my conceptual net. The only way of describing the difference will be by means of an example, saying ‘It is the shade of that object there as opposed to this object here’.
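This relative notion of ineffability can be made concrete with a small sketch. The numbers and the five-concept repertoire below are invented purely for illustration; all the sketch assumes from the text is that discrimination operates at the fine analog grain, while classification operates at the far coarser conceptual grain, so that two discriminably different shades can fall under the very same concept.

```python
# Suppose the visual system delivers 1,000 just-distinguishable shades,
# while the conceptual repertoire contains only a handful of colour concepts.
SHADE_COUNT = 1000
shades = [i / SHADE_COUNT for i in range(SHADE_COUNT)]

COLOUR_CONCEPTS = ["red", "orange", "yellow", "green", "blue"]

def classify(shade):
    """Map a fine-grained shade onto the coarse conceptual repertoire."""
    index = min(int(shade * len(COLOUR_CONCEPTS)), len(COLOUR_CONCEPTS) - 1)
    return COLOUR_CONCEPTS[index]

def discriminable(a, b, threshold=1 / SHADE_COUNT):
    """Perceptual discrimination operates at the analog grain."""
    return abs(a - b) >= threshold

# The representations are analog *relative to* the conceptual repertoire:
# far more discriminable variations than concepts to classify them.
assert len(shades) > len(COLOUR_CONCEPTS)

# Two neighbouring shades are discriminable in experience...
a, b = shades[100], shades[101]
assert discriminable(a, b)
# ...but the difference slips through the mesh of the conceptual net:
assert classify(a) == classify(b)
```

The only way left to express the difference between `a` and `b` is demonstrative, as in the text: ‘the shade of that object there as opposed to this object here’.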
What properties does the mind-reading system need to have, in order for the contents of C to be phenomenally conscious, on the present account? I maintain that it needs to be sophisticated enough to understand the appearance/reality or is/seems distinction; and it should contain some recognitional concepts of experience – e.g. seems red, seems green, seems smooth, and so on. If the conceptual systems already contain first-order recognitional concepts of red, green, smooth, and such like, then it would be a trivial matter to turn these into higher-order recognitional concepts once the mind-reading system is armed with the is/seems distinction. This distinction is thought to emerge in children, together with an understanding of belief as a representational (and possibly-false) state of the subject, somewhere between the ages of 3 and 4. (See Flavell et al., 1987; Gopnik and Astington, 1988; Gopnik, 1993; Baron-Cohen, 1989; Clements and Perner, 1994 – but see Fodor, 1992, and Leslie, 1994, for the claim that these capacities are available to children from the age of 2 or earlier.)
4 Explaining away qualia
Given the correctness of dispositionalist HOT theory, it is easy to explain why it so naturally seems to people that phenomenally conscious states possess intrinsic, non-relational, properties (qualia). For phenomenally conscious states result from the availability of experience to purely recognitional higher-order concepts. These concepts have no relational components – they are not relationally defined. Someone deploying these concepts will then easily be able to conceive of worlds in which the corresponding feels are either absent or inverted while all relational facts remain the same. And then by eliding the distinction between concept and property, or by confusing conceptual with metaphysical possibility, it will be easy to think that the properties which our phenomenal concepts pick out are similarly intrinsic and non-relational.
Moreover, notice that our first-order perceptual states present many properties in the world as intrinsic. Our perception of a surface as red, for example, represents that surface as covered by a certain intrinsic – non-relationally individuated – property; namely, redness, of some or other shade. (It certainly does not present that surface as having the power to cause experiences of a certain sort in normal perceivers in normal circumstances, in the way that dispositionalist analyses of colour concepts maintain!) And then higher-order analog states – states of seeming red, seeming green and the rest – will be seemings of the presence of a certain intrinsic property. Small wonder, then, that we might naturally come to think of those states as possessing intrinsic properties (qualia).
But of course, as we noted in section 2 above, the property of phenomenal consciousness can consist in analog intentional content available to HOT – and so be relational and non-intrinsic – while our concepts of phenomenal consciousness are non-relational. So there is no special problem in explaining why people are so naturally tempted to believe in qualia. The harder problem is to explain the defining feature of phenomenal consciousness – its what-it-is-likeness or feel.
5 Explaining subjective feel
How can dispositionalist HOT theory explain the subjective feel of experience? In particular, how can mere dispositions to deploy higher-order recognitional concepts confer on our experiences the property of feel or what-it-is-likeness? Remember that what is to be explained is how a non-phenomenally-conscious perceptual state comes to acquire the properties of subjectivity and what-it-is-likeness distinctive of phenomenal consciousness. Yet it might seem puzzling how mere availability to HOT could confer these additional properties on a perceptual state. How can something which hasn’t actually happened to a perceptual state (namely, being targeted by a HOT) confer on it – categorically – the dimension of subjectivity? For subjectivity surely is a categorical property of a phenomenally conscious experience. Worse still, indeed: when I do actually entertain a HOT about my experience – thinking, say, ‘What a vivid experience!’ – it is surely because the experience already has the distinctive subjective properties of phenomenal consciousness that I am able to think what I do about it (Robb, 1998). So, once again, how can we legitimately appeal to HOTs in the explanation of those very properties? The answer consists in a dose of ‘consumer semantics’.
Given the truth of some form of consumer semantics (e.g. teleosemantics, or functional or inferential role semantics), the contents of C will depend, in part, on what the down-stream consumer systems can do with those contents. And the attachment of a HOT consumer system to the outputs of an otherwise first-order conceptualising perceptual system will transform the intentional contents of the events in C. Where before these were first-order analog representations of the environment (and body), following the attachment of a HOT system these events take on an enriched dual content – each experience of the world/body is at the same time a representation that just such an experience is taking place; each experience with the analog content red, say, is at the same time an event with the analog content seems red or experience of red. And the events in C have these contents categorically, by virtue of the powers of the HOT consumer system, in advance of any HOT actually being tokened.
My claim is that the very same perceptual states which represent the world to us (or the conditions of our own bodies) can at the same time represent the fact that those aspects of the world (or of our bodies) are being perceived. It is the fact that the faculties of thinking to which experiences are made available can make use of them in dual mode which turns those experiences into dual-mode representations. This is because, in general, the intentional content of a state will depend upon the nature and powers of the ‘consumer-systems’, as Millikan (1984) would put it. The content possessed by a given state depends, in part, upon the uses which can be made of that state by the systems which can consume it or draw inferences from it. And similarly, then, in the case of perceptual representations: it is the fact that perceptual contents are present to a system which is capable of discriminating between, and making judgements about, those perceptual states as such which constitutes those states as second-order representations of experience, as well as first-order representations of the world (or of states of the body). If the content of a state depends partly on what the down-stream systems which consume, or make use of, that state can do with it, then the attachment of a mind-reading consumer system to first-order analog perceptual states may be expected to transform the intentional contents of the latter. In virtue of the availability of the analog content red to a consumer system with a recognitional concept of seems red, each perceptual state with the content red is already a state with a higher-order analog content seems red or experience of red. Each such state then has, as part of its content, a dimension of seeming or subjectivity.
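The structure of this claim can be sketched as a toy computational model (my own illustration; the class names and the representation of contents as strings are invented assumptions, and nothing here is offered as an implementation of a real cognitive architecture). The point modelled is just this: a state's content is fixed by what its consumer systems are *capable* of doing with it, so attaching a mind-reading consumer confers the higher-order content categorically, in advance of any HOT being tokened:

```python
# Toy model of consumer semantics: a state's content depends partly on
# what its down-stream consumer systems are able to do with it, not on
# what they actually do on any occasion.

class Consumer:
    """A down-stream system that can make use of perceptual states."""
    def contents_conferred(self, first_order):
        return set()

class FirstOrderPlanner(Consumer):
    """Uses percepts only to guide action: confers no extra content."""
    pass

class MindReadingConsumer(Consumer):
    """Grasps the is/seems distinction: disposed to form HOTs about
    any percept made available to it."""
    def contents_conferred(self, first_order):
        # Capable of judging 'seems F' for each first-order content F,
        # whether or not any such judgement is ever actually tokened.
        return {f"seems {f}" for f in first_order}

class PerceptualState:
    def __init__(self, first_order, consumers):
        self.first_order = set(first_order)
        self.consumers = consumers

    def content(self):
        # Content is fixed categorically by consumer *dispositions*.
        c = set(self.first_order)
        for consumer in self.consumers:
            c |= consumer.contents_conferred(self.first_order)
        return c

percept = PerceptualState({"red"}, [FirstOrderPlanner()])
assert percept.content() == {"red"}              # first-order content only

percept.consumers.append(MindReadingConsumer())  # attach mind-reading system
assert percept.content() == {"red", "seems red"} # dual content, no HOT tokened
```

Note that nothing in the model requires `MindReadingConsumer` ever to run: its mere attachment transforms the state's content, which is the dispositionalist point at issue.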
There are a variety of different forms of consumer semantics: for example, teleosemantics, functional role semantics, and inferential role semantics; and each can be construed more or less holistically or locally. In fact I favour some form of inferential role semantics, where the immediate inferential connections of a state are particularly important determiners of content (see my 1996, ch.5, and Botterill and Carruthers, 1999, ch.7). Thus for a state to have the content P&Q, the subject must be disposed to infer P from it – but not necessarily to infer from it ~(~P v ~Q). There are powerful reasons for preferring some form of consumer semantics to any kind of pure causal co-variance semantics (Botterill and Carruthers, 1999, ch.7). And there is independent reason to think that changes in consumer-systems can transform perceptual contents, and with them phenomenal consciousness (Hurley, 1998). Consider the effects of spatially-inverting lenses, for example (Welch, 1978). Initially, subjects wearing such lenses see everything upside-down, and their attempts at action are halting and confused. But in time – provided that they are allowed to move around and act while wearing their spectacles – the visual field rights itself. Here everything on the input side remains the same as it was when they first put on the spectacles; but the planning and action-controlling systems have learned to interpret those states inversely. And as a result, intentional perceptual contents become re-reversed.
If consumer semantics is assumed, then it is easy to see how mere dispositions can transform contents in the way that dispositionalist HOT theory supposes. For notice that the consumer-system for a given state does not actually have to be making use of that state in order for the latter to carry the appropriate content – it just has to be disposed to make use of it should circumstances (and what is going on elsewhere in the cognitive system) demand. So someone normalised to inverting spectacles does not actually have to be acting on the environment in order to see things right-side-up. She can be sitting quietly and thinking about something else entirely. But still the spatial content of her perceptual states is fixed, in part, by her dispositions to think and move in relation to the spatial environment.
Consider, here, the implications of some form of inferential role semantics, for the sake of concreteness. What is it that confers the content P&Q on some complex belief-state of the form ‘P#Q’? (The sign ‘#’ here is meant as a dummy connective, not yet interpreted.) In part, plainly, it is that one is disposed to infer ‘P’ from ‘P#Q’ and ‘Q’ from ‘P#Q’ (Peacocke, 1992). It is constitutive of a state with a conjunctive content that one should be disposed to deduce either one of the conjuncts from it. But of course this disposition can remain un-activated on some occasions on which a conjunctive thought is entertained. For example, suppose that I hear the weather-forecaster say, ‘It will be windy and it will be cold’, and that I believe her. Then I have a belief with a conjunctive content even if I do nothing else with it. Whether I ever form the belief that it will be windy, in particular, will depend on my interests and background concerns, and on the other demands made on my cognitive resources at the time. But my belief still actually – and categorically – has a conjunctive content in virtue of my inferential dispositions.
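The natural-deduction point in play here can be stated formally (a minimal rendering using Lean's standard library, offered only as an illustration of the rules in question, not as part of the original argument). Conjunction is individuated jointly by its introduction and elimination rules, and a thinker's standing dispositions to make these inferences are what constitute the conjunctive content:

```lean
-- ∧-elimination: from P ∧ Q one may infer each conjunct.
example (P Q : Prop) (h : P ∧ Q) : P := h.left
example (P Q : Prop) (h : P ∧ Q) : Q := h.right

-- ∧-introduction: from P and Q together one may infer P ∧ Q.
example (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q := ⟨hp, hq⟩
```

As the text goes on to note, these dispositions can remain un-activated on a given occasion while still being partly constitutive of the content of the belief entertained.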
So a dose of consumer semantics is just what dispositionalist HOT theory needs to solve the categoricity problem. Indeed, notice from the example above that in any particular instance where I do exercise my inferential dispositions, and arrive at a belief in one of the conjuncts, we can cite my prior conjunctive belief as its cause. So it is because I already believed that it will be windy and cold that I came to believe that it will be windy in particular. But for all that, my preparedness to engage in just such an inference is partly constitutive of the conjunctive content of my prior belief. So, too, then, in the case of phenomenal experience: if I think, ‘What an interesting experience!’ of some perceptual state of mine, it can be because that state is already phenomenally conscious that I come to entertain that higher-order thought; but it can also be by virtue of my disposition to entertain HOTs of just that sort that my perceptual state has the kind of content which is constitutive of phenomenal consciousness in the first place.
We can easily explain, too, how our higher-order recognitional concepts of experience can ‘break free’ of their first-order counterparts, in such a way as to permit thoughts about the possibility of experiential inversion and such like. Here is how the story should go. We begin – in both evolutionary terms and in normal child development – with a set of first-order analog contents available to a variety of down-stream consumer systems. These systems will include a number of dedicated belief-forming modules, as well as a practical reasoning faculty for figuring out what to do in the light of the perceived environment together with background beliefs and desires. One of these systems will be a developing mind-reading module. When the latter has reached the stage of understanding the subjective nature of experience and has grasped the is/seems distinction, it will easily – indeed, trivially – become capable of second-order recognitional judgements of experience, riding piggy-back on the subject’s first-order recognitional concepts. So if the subject has a recognitional concept red, she will now acquire the concept seems red, knowing that whenever a judgement ‘red’ is evoked by experience, a judgement of ‘seems red’ is also appropriate on the very same grounds.
This change in the down-stream mind-reading consumer system is sufficient to transform all of the contents of experience (and imagination), rendering them at the same time higher-order ones. So our perceptual states will not only have the first-order analog contents red, green, loud, smooth, and so on, but also and at the same time the higher-order analog contents seems red, seems green, seems loud, seems smooth, and so on. The subject will then be in a position to form recognitional concepts targeted just on these higher-order contents, free of any conceptual ties with worldly redness, greenness, loudness, and smoothness. (This can either be done by fiat – dropping or cancelling any conceptual connection with redness from the recognitional concept seems red – or by introducing new concepts of the form this experience.) And once possessed of such concepts, it is possible for the subject to wonder whether other people have experiences of seems red or of this sort when they look at a ripe tomato, and so on.
Notice that this account of the subjectivity of phenomenally conscious experience makes essential appeal to analog higher-order representations. So in one sense it is quite right of Browne (1999) to accuse me of being a closet higher-order experience (or ‘inner sense’) theorist. Like such theorists (e.g. Lycan, 1996) I believe that phenomenal consciousness constitutively involves higher-order analog (non-conceptual or only partly conceptual) contents. But I get these for free from dispositionalist HOT theory by appeal to some or other form of consumer semantics, as outlined above. No ‘inner scanners’, nor any special faculty of ‘inner sense’, need to be postulated; nor are the states which realise the higher-order analog contents distinct from those which realise the corresponding first-order contents, in the way that higher-order experience theorists normally suppose. If this makes me a ‘closet introspectionist’ (Browne) then I am happy to concur; but it is introspectionism without costs.
What we have here is, I claim, a good and sufficient explanation of the defining feature of phenomenal consciousness – its subjectivity, or what-it-is-likeness; and it is, moreover, an explanation which is fully acceptable from a naturalistic perspective. That feature gets explained in terms of the dual representational content possessed by all phenomenally conscious states (they have both ‘objective’, or world/body-representing content and ‘subjective’, or experience-representing content), by virtue of their availability to both first-order and higher-order consumer systems.
6 Dormative virtues
Someone might object that the account provided here has all of the hallmarks of an explanation in terms of dormative virtue – that is to say, all the hallmarks of no explanation at all. For recall the line just taken: it is because my experience already has a given higher-order analog content that I think, ‘What an interesting experience!’; but it can also be because that state is of a kind which is disposed to cause HOTs of just this sort that it possesses a higher-order content in the first place. The account then seems formally analogous to this: if I fall asleep after drinking a soporific cocktail, it can be because that drink is already a soporific that I come to lose consciousness; but it can also be by virtue of my disposition to lose consciousness in just this way that the cocktail is a soporific in the first place.
The first point to make by way of reply is that explanations of the ‘dormative virtue’ sort are perfectly appropriate in their place. It can be both true and explanatory to say that I fell asleep because I drank a liquid containing a soporific. This is to explain one particular event (me falling asleep) in terms of another which is its cause, and to indicate that there is some property (not further specified) of the cause such that events of that kind are correlated with sleep in a law-like way. And it can be both true and explanatory to say of the liquid in question – opium, as it might be – that it is a soporific. This is to provide a partial functional specification of its properties. Where dormative virtues definitely become non-explanatory is if we appeal to them in trying to answer the question, ‘Why does opium put people to sleep?’ (Bad answer: ‘Because it is a soporific’.) For this question is a request to specify the underlying mechanism, not just to be told that some such mechanism exists. (That is, we don’t just want to be told, ‘Because it has some property which tends to cause sleep’ – we already knew that.)
In the same way, it can be both true and explanatory to say that I came to have a belief with the content that it will be windy because I already had a belief with the content that it will be windy and cold. This is to explain one event in terms of another with which it is connected in a law-like manner. And it can be both true and explanatory to say that and-beliefs tend to cause beliefs in their individual conjuncts. This is to provide a partial functional specification of the nature of conjunctive content. Where explanation by content runs out is when we ask the question, ‘Why do people with conjunctive beliefs tend to believe the individual conjuncts?’ For this, too, is a request to specify the underlying mechanism, needing to be answered by appeal to some sort of computational account, for example, and not by an appeal to content. Likewise, then, for the relations between higher-order analog contents and higher-order recognitional judgements: appeals to them are only non-explanatory if our question is why such contents give rise to such judgements at all.
Notice, too, that in one respect saying that I came to believe that P because I already believed that P&Q is quite unlike saying that I fell asleep because I took a soporific. For to say the latter is just to say that I fell asleep because I drank something which tends to make people sleep, since a soporific is nothing other than a substance which causes sleep. Conjunctive beliefs, in contrast, aren’t identical with beliefs which cause belief in the individual conjuncts, since introduction-rules are just as important as elimination-rules in specifying the contents of the logical connectives. The functional specification of conjunction by its elimination-rule is only a partial one. So to explain my belief that P in terms of my belief that P&Q is to give a good deal more information about the cause, of a functional sort, than merely to say that it has some property which tends to cause P-beliefs.
Likewise for higher-order analog contents; only more so. To say that someone is in a perceptual state with the analog higher-order content seems red is not just to say that they are in a state which tends to make them judge that they are experiencing red. This may be a partial characterisation of the content of the state, but it is only partial. In addition we need to say that the state has an analog content, that it is also an analog representation of red, normally caused by exposure to red, and so on. So here, too, the explanation of my higher-order judgement is a good deal more informative than a mere ‘dormative virtue’ one. Indeed, it is particularly important to stress the analog nature of the higher-order contents in question. For this means that there is no end of possible higher-order judgements, each employing one of an unlimited range of potentially-available higher-order recognitional concepts, to which those contents could give rise. On the present account, it only requires the subject to have an understanding of the is/seems distinction in general, and to possess some higher-order recognitional concepts, for all of the subject’s perceptual (and imagistic) states which are available to such concepts (i.e. which are contained in C) to acquire a dimension of seeming. This means that there is a richness of content to higher-order experience which goes far beyond a mere disposition to make a few types of higher-order judgement.
In general, then, my answer to the ‘dormative virtue’ challenge is this: higher-order analog contents are just as real, and just as categorical in nature, as are any other species of intentional content; and causal explanations by appeal to them can be explanatory. But just as with other types of content, their nature is determined, in part, by their effects on the down-stream consumer systems – in this case subjects’ capacities to make higher-order recognitional judgements about their experiences. So the one question which this account cannot (and is not designed to) answer, is why people tend to make such higher-order judgements at all. Here the answer, ‘Because they undergo higher-order analog contents’ – although it does give a good deal of additional information – is not really an explanatory one.
The ‘hard problem’ of phenomenal consciousness (Chalmers, 1996) is not so very hard after all. Common-sense notions of cause and intentional content give us all that we need for a solution. Phenomenally conscious states are analog intentional states available to a faculty of higher-order recognition. In virtue of such availability, those states have a dual content – world/body representing, and also experience/seeming representing. So it is by virtue of such availability that phenomenally conscious states acquire a dimension of seeming or subjectivity. Put differently: phenomenally conscious states are analog intentional states with dual content (both first and second order); where such contents result from availability to both first and second-order consumer systems.
References
Aglioti, S., DeSouza, J. and Goodale, M. 1995. Size-contrast illusions deceive the eye but not the hand. Current Biology, 5.
Andrews, J., Livingston, K. and Harnad, S. 1999. Categorical perception effects induced by category learning. ???
Baars, B. 1988. A Cognitive Theory of Consciousness. Cambridge University Press.
Baars, B. 1997. In the Theatre of Consciousness. Oxford University Press.
Baron-Cohen, S. 1989. Are autistic children behaviourists? An examination of their mental-physical and appearance-reality distinctions. Journal of Autism and Developmental Disorders, 19.
Block, N. 1995. A confusion about a function of consciousness. Behavioural and Brain Sciences, 18.
Botterill, G. and Carruthers, P. 1999. The Philosophy of Psychology. Cambridge University Press.
Bridgeman, B. 1991. Complementary cognitive and motor image processing. In G. Obrecht and L. Stark, eds., Presbyopia Research. Plenum Press.
Bridgeman, B., Peery, S. and Anand, S. 1997. Interaction of cognitive and sensorimotor maps of visual space. Perception and Psychophysics, 59.
Browne, D. 1999. Carruthers on the deficits of animals. Psyche, 5. <http://psyche.cs.monash.edu.au/v5/>
Carruthers, P. 1979. The Place of the Private Language Argument in the Philosophy of Language. Oxford DPhil thesis. Unpublished.
Carruthers, P. 1986. Introducing Persons. Routledge.
Carruthers, P. 1996. Language, Thought and Consciousness. Cambridge University Press.
Carruthers, P. 2000. Phenomenal Consciousness: a naturalistic theory. Cambridge University Press.
Castiello, U., Paulignan, Y. and Jeannerod, M. 1991. Temporal dissociation of motor-responses and subjective awareness study in normal subjects. Brain, 114.
Chalmers, D. 1996. The Conscious Mind. Oxford University Press.
Clements, W. and Perner, J. 1994. Implicit understanding of belief. Cognitive Development, 9.
Dennett, D. 1991. Consciousness Explained. Penguin Press.
Dretske, F. 1995. Naturalizing the Mind. MIT Press.
Flavell, J., Flavell, E. and Green, F. 1987. Young children’s knowledge about the apparent-real and pretend-real distinctions. Developmental Psychology, 23.
Fodor, J. 1992. A theory of the child’s theory of mind. Cognition, 44.
Goldstone, R. 1994. Influences of categorisation on perceptual discrimination. Journal of Experimental Psychology: General, 123.
Gopnik, A. 1993. How we know our own minds. Behavioural and Brain Sciences, 16.
Gopnik, A. and Astington, J. 1988. Children’s understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction. Child Development, 59.
Hurley, S. 1998. Consciousness in Action. Harvard University Press.
Kitcher, P. 1993. The Advancement of Science. Oxford University Press.
Leslie, A. 1994. Pretending and believing. Cognition, 50.
Lucy, J. 1992. Grammatical Categories and Cognition. Cambridge University Press.
Lycan, W. 1996. Consciousness and Experience. MIT Press.
Marcel, A. 1983. Conscious and unconscious perception. Cognitive Psychology, 15.
Marcel, A. 1998. Blindsight and shape perception: deficit of visual consciousness or of visual function? Brain, 121.
McGinn, C. 1991. The Problem of Consciousness. Blackwell.
Millikan, R. 1984. Language, Thought, and Other Biological Categories. MIT Press.
Milner, D. and Goodale, M. 1993. Visual pathways to perception and action. Progress in Brain Research, 95.
Milner, D. and Goodale, M. 1995. The Visual Brain in Action. Oxford University Press.
Nagel, T. 1974. What is it like to be a bat? Philosophical Review, 83.
Peacocke, C. 1992. A Study of Concepts. MIT Press.
Robb, D. 1998. Recent work in the philosophy of mind. Philosophical Quarterly, 48.
Rosenthal, D. 1986. Two concepts of consciousness. Philosophical Studies, 49.
Sperber, D., Premack, D., and Premack, A. eds. 1995. Causal Cognition. Oxford University Press.
Tye, M. 1995. Ten Problems of Consciousness. MIT Press.
Weiskrantz, L. 1986. Blindsight. Oxford University Press.
Weiskrantz, L. 1997. Consciousness Lost and Found. Oxford University Press.
Welch, R. 1978. Perceptual Modification. Academic Press.
Wittgenstein, L. 1953. Philosophical Investigations. Blackwell.
Notes
See Rosenthal, 1986; Block, 1995; Lycan, 1996; and my 2000, ch.1 for elaboration of these and other distinctions.
See Marcel, 1983, 1998; Weiskrantz, 1986, 1997; Baars, 1988, 1997; Castiello et al., 1991; Bridgeman, 1991; Bridgeman et al., 1997; Milner and Goodale, 1993, 1995; Aglioti et al., 1995; and my 2000, ch.6, for a review.
 For defence of this assumption see my 2000, ch.6.
See Botterill and Carruthers, 1999, for defence of the scientific status of intentional psychology, and so also for vindication of the scientific reality of intentional content.
 In fact it is science’s track-record of success in providing such reductive explanations which warrants our belief that physics is closed in our world (that is, for thinking that physical processes cannot be altered or interfered with by higher-level processes – there is no top-down causation), and which provides the grounds for the claim that all natural phenomena supervene (naturally and/or metaphysically) on micro-physical facts.
 This isn’t meant to deny that explanation is a partly epistemic notion, such that whether or not something is an explanation of a phenomenon is relative to what you know already. My point is just that explanation is always world-directed. It is the worldly (that is, concept-independent) events and properties themselves which we seek to explain (relative to our background knowledge), not the way in which those events and properties are conceptualised by us.
This is the crucial premise needed for defence of the reality of intentional properties, provided that intentional psychology has the status of a science. See Botterill and Carruthers, 1999, chs. 6 and 7.
 Of course many arguments for an explanatory gap have been offered, by a variety of thinkers; whereas I have only (briefly) considered one. For discussion and disarmament of others, see my 2000, chs. 2-4.
 For detailed consideration of a range of alternatives to dispositionalist HOT theory – both first-order and higher-order – see my 2000, chs. 5-11.
Actually my belief in the modularity (and genetically channelled nature) of the mind-reading system is ancillary to the main story being told here. The principal explanatory claims of dispositionalist HOT theory can go through just the same even if our mind-reading capacities are socially acquired, constructed through childhood theorising, or result from processes of mental simulation, as some maintain. See Botterill and Carruthers, 1999, chs. 3 and 4, for discussion.
 The hunter-gatherer’s perceptions will still not be wholly non-conceptual, since she will be able to apply colour-concepts, like red, as well as concepts like rough, smooth, and so on to what she sees.
 For other arguments in its support, see my 2000, ch.5.
 This account applies most readily to outer perceptions of vision and touch, say, where the appearance/ reality distinction has a clear purchase, and less easily to bodily sensations like pain, where it does not. For a more rounded and slightly more complex account, see my 2000, chs. 5 and 9.
 See my 2000, ch.7, for more extensive development of the points made briefly in this section.
 For further development and elaboration of the explanation proposed in this section, see my 2000, ch.9.
 Of course Wittgenstein (1953) famously argued that the very idea of private concepts of experience of this sort is impossible or conceptually incoherent. In due deference to Wittgenstein, I spent a good many years of my life trying to find a viable version of the Private Language Argument, one manifestation of which was my Oxford DPhil thesis (1979; see also my 1986, ch.6). I ultimately came to the conclusion that there is no good argument in this area which doesn’t presuppose some form of verificationism about meaning or quasi-behaviourism about the mind. But I don’t need to argue for this here.
 Of course it would be informative to answer this question by saying, ‘Because there are intrinsic properties of people’s experiences of which they are aware’, if such properties existed. But (a) there are no good reasons to believe in the existence of any intrinsic, non-intentional, properties of experience (qualia), and (b) it is easy for a higher-order theorist to explain why people are so naturally tempted to believe in such properties. See my 2000, chs. 2, 3, 4 and 7.
For the most part this chapter weaves together material from my 2000, chs. 2, 3, 5, 8 and 9; reproduced here with the permission of Cambridge University Press. Thanks to Colin Allen for the objection which gave rise to section 6, to Dudley Knowles and Tim Schroeder for comments on an earlier draft, and to all those who participated in the discussion of my presentations of earlier versions of this chapter at the universities of Bolton and Glasgow, and at the Royal Institute of Philosophy conference, ‘Naturalism, Evolution and Mind’, held in Edinburgh in July 1999.