Kim’s Dilemma and Ecological Reductionism for the Mind




Anoop Gupta

Wir müssen wissen, wir werden wissen. (We must know, we will know.) David Hilbert (1862–1943; inscribed on his gravestone)

Unlike most metaphysical debates about truth, the central issue in the philosophy of mind has remained ontological: what is the mind? If it is not like tables and chairs, then what is it? In fact, according to Kim, contemporary accounts of the mind have addressed only one of two problems in trying to answer these questions.

We can get the lay of the land by considering the two solutions that have been on the market, namely, functionalism and materialism. First, functionalism is the view that “mental kinds and properties are functional kinds at a higher level of abstraction than physicochemical or biological kinds” (Kim, 1999, p. 3); it has been the favored stance of most cognitive scientists.[1] Yet functionalism does not escape the dilemma: as Kim (1998; 2006) pointed out, functionalism is no solution if we cannot explain mental causation; we must explain how the mind affects the brain.

Second, the materialist solution is to reduce the mind to the brain. This raises the alternative problem: as Kim anticipated, if we identify the mind with the brain, we stand to lose the subjective character of the mental. So functionalists and physical reductionists will flounder on one or the other horn of the dilemma. As Kim (1998) put it, we face a “profound dilemma” (p. 237), in the order in which I shall discuss matters, either:

1. We reject reductionism and cannot see how mental causation could be possible (functionalism).

2. We accept reductionism and explain mental causation, but lose the subjective character of the mental (reductive physicalism).

“Either way”, as Kim (1998) said, “we are in danger of losing the mental. That is the dilemma” (p. 237). Many recent solutions to the mind-body problem can be viewed as replies to Kim. To indicate the direction I wish to head: I propose an ecological reductionism, whereby mental states are semi-global brain states that are causally efficacious. More about this as my argument unfolds.

I proceed thus. First, I reconstruct how Kim (1999) said we arrived at the dilemma. Second, I shall scrutinize Kim’s discussion of functionalism, in view of several recent views that attempt to account for mental causation. Finally, I shall argue ecological reductionism offers the best prospects of solving Kim’s dilemma by updating what we mean by mind in the different contexts we employ the term.


Some caveats. I do not aim to present an exhaustive historical account of the origins of the contemporary debate about the ontological status of the mind. Nor shall I scrutinize Putnam’s (1998) argument against functionalism. I consider Kim’s historical retelling only to elucidate his claim that the dilemma must be solved. I shall only mention in passing the problems in using words like causation, physical, and so on, as they go beyond the scope of this paper. Finally, I also only touch upon debates about ontological status, since my purpose is to account for the subjective character of consciousness.

In my account, I will accept two things. First, I accept multiple realizability: one mental state may be realized by various brain states, at different times in the same brain. The reductive counterpart, what Kim (2005) called “multiple reducibility” (p. 53), also occurs: one mental state can be reduced to various physical states at different times in the same brain.[2] Second, I accept, in a way to be explained in this paper, bi-directional causality: brain states cause mind states and vice versa. Finally, the ecological reductionism I propose is only intended to be programmatic.

Kim’s Dilemma and its Historical Antecedents in the Philosophy of Science

It is useful to briefly trace how we got entrapped in the dilemma in the first place. Kim (1999) traced the contemporary problem of the relation of the mind to the body to a question in the philosophy of science that arose in the 1950s and 1960s. He identified Feigl’s (1958) “The Mental and the Physical” as well as J. J. C. Smart’s (1959) “Sensations and Brain Processes” as antecedents of the current debate. As he noted, the relation of mind to body is one example of how higher levels (e.g., economic laws) relate to basic physical ones (e.g., quantum laws).[3] Kim (2005) claimed that the mind-body problem arose for physicalists who aspired to take the mind seriously and accepted a layered model of the domains of science, like a “ladder” (p. 53). As Kim (2005) put it, we need to know how the mind can exert causal powers in a world that is fundamentally physical. The philosophical debate became focused, as Kim has said, with Putnam’s (1967/1975) argument for functionalism.

Kim (2005) defined irreducibility as the notion that mental properties are not identical with physical ones. Kim (1998) commented that Putnam’s (1967/1975) paper brought about the demise of identity theory, the notion that “mental state S is identical with physical state P”; gave birth to functionalism; and established anti-reductionism as the received view among cognitive scientists.

Kim (2006) noted that there are “three players” (p. 553): the mind, the body, and behavior. “Having a mind”, Kim (1998) said, “can be construed as a simple property, capacity, or characteristic that humans and other higher animals possess in contrast with things like pencils and rocks” (p. 5). He said that the mental includes things like sensations (seeing red), that-clauses (“I hope that”), and personality traits (honesty). As he explained, we can distinguish the reference of our intentions (what our thoughts are about) from their content (e.g., remembering a thunderstorm). Behavior, he held, includes not just reflexes and bodily movements, but also beliefs, thinking, judging, and so on. How, Kim (1998) asked, does a mental state come to have “meaning” (p. 185)?

According to Kim (1999), in his Townsend Lectures delivered at the University of California in March 1996, non-reductive physicalism came on the scene in the 1970s in the form of “the mind supervenes on the brain”, “x is realized by y”, and “x emerges from y”. Yet as Kim (1999) noted, non-reductive physicalism leaves the “harder problem untouched” (p. 19). We still have the “mystery” (Kim, 2005, p. 153) of how the mind causes physical states.

Kim (1999) noted that Putnam (1967/1975) proposed that the mind is a functional state of the brain, which has often been understood in terms of supervenience. Kim (1998) contended that supervenience is the view that the mind supervenes on the physical in that any two things (objects, events, organisms, persons) that are exactly alike physically cannot differ in “mental properties” (p. 10).[4] As Kim (2005) diagrammed it on page 45:

                                     M               M*

                                     ↑                 ↑

                                     P       →      P*

Figure 1. The mind supervenes on the physical.

According to supervenience advocates of the mind, physical states cause other physical states, and mental ones, too. As Kim (2006) said, however, the idea of “downward causation will loom large” (p. 548).

Supervenience, however, has lived on in a new guise. Emergence (sometimes called non-reductive physicalism), Kim (2006) noted, made a comeback in the 1990s. Kim (2006) defined emergence:

[A] purely physical system composed exclusively of bits of matter, when it reaches a certain degree of complexity in its structural organization, can begin to exhibit genuinely novel properties not possessed by simple constituents. (p. 448)

Kim (2006), however, did not think emergentism solves the problem of accounting for mental causation. He noted that Van Gulick (2001) distinguished various types of emergentism. According to Kim’s account of Van Gulick, there is (a) specific-value emergentism (a statue’s weighing 1 lb. is not a property of its parts); (b) modest emergentism (a piece of cloth is purple but its parts are not; a cell is alive, but its constituent elements are not); and (c) radical emergentism, whereby the whole is different from its parts. As Kim explained, emergent properties must be considered to have distinctive causal powers, irreducible to those of their bases.

According to an epiphenomenalist’s view—and this is the danger—the mind serves no causal role. Brain states produce mental states that are causally inert. Supervenience, however configured, may collapse into epiphenomenalism. Kim (2005) worried that supervenience yields the exclusion principle, the notion that “sufficient causal content at one level excludes causation at another level” (p. 52). 

As an alternative to supervenience, Kim (2005) considered reductive identity on page 53:

                                     M    →       M*

                                     ↑                 ↑

                                     P       →      P*

Figure 2. The mind is identical with the physical.

According to the identity theorist, mental and physical states are co-extensive. It has often been assumed that only physical objects, like neurons and electrical charges, are causally efficacious. Kim (2005) remained optimistic that “physicalism will vindicate mental causation” (p. 148). “If we have not”, Kim (2005) remarked, “identified the actual realizer—perhaps we never will—it would not make much difference philosophically” (p. 164).[5] Kim (2005) accepted physicalism, which he said there is “no credible alternative to” (p. 174). Kim (2005) maintained that we must choose between two unsavory options: “reductionism” (p. 70), losing the subjective character of consciousness; or “epiphenomenalism” (p. 70), failing to account for mental causation. In what follows, I want to consider some contemporary accounts in light of Kim’s dilemma, as stepping stones on which to suggest my way out of the thicket.

The First Horn: Emergent Systems

To avoid getting caught on the first horn of the dilemma, we must explain mental causation. For a hint at how mental causation is possible, I look back to the general problem of how levels of description relate to each other.

Bontly (2002), who considered the general problem of levels of description, attempted to show that we need not account for mental causation. He claimed, contra Kim, that the supervenience argument generalizes such that only fundamental physical properties are causally efficacious. Bontly (2002) wrote that we are left to think “all causation is found in nature’s basement, at the level of fundamental particles” (p. 90). Bontly’s way out of locating all causal powers at the quantum level is to question our folk theory of causation.

Bontly’s argument, as he noted, takes the form of a reductio ad absurdum. Since we do not accept that only fundamental particles are causally efficacious, we should not have to choose only one level of description as being so. For Bontly, embracing emergentism does not bring with it the problem of mental causation. According to him, we must accept that causality at one level does not preclude higher-order entities from being causally efficacious.[6]

The notion of causality based on the billiard-ball metaphor may require revision, yet Bontly does not elaborate what the alternative is. We can concur, for the sake of argument, that various levels of description must make reference to objects that are causally efficacious. Yet without an alternative theory of causality, the problem of mental efficacy remains.

Batterman (2000), writing from a physics point of view, told us that universality can be understood as “similarities in the behavior of diverse systems” (p. 120). Batterman (2000) argued that the concept of universality as employed by physicists can be used to explain macro-regularities that are realized by wildly different and heterogeneous lower-level mechanisms. He rephrased the problem of the relation of levels of description in terms of how we explain universality: identical behavior by distinct systems.

Batterman (2000) claimed that the mind-body problem has been set between two extremes. According to him, we have considered the mind and body as either distinct or identical. He claimed to take a moderate position. According to him, physical properties are often irrelevant when the realizers are wildly heterogeneous. For instance, he said that the structure of molecules in a fluid and their forces do not affect its critical behavior.[7]

The upshot of Batterman’s argument, in this context, is that since systems’ behaviors at various levels of description are not identical, the gap between them may save causal efficacy as we go up the rungs, as it were. Multiple realizability writ large, we are prodded to think, saves the causal efficacy of emergent systems. Yet even if we accept some co-variation between upper and lower levels, base properties still have an influence on what they realize, and not vice versa. The viscosity of a fluid will be influenced by variables like temperature and the nature of the particles that constitute it. The problem with analogies for the mind drawn from science is that they involve only the physical realm. The temperature of a liquid is dissimilar to the relationship between the mind and brain in a relevant respect: it requires only a discussion of physical causation. At best, Batterman provided an analogy from fluid dynamics for the mind, not an explanation of mental causation.

Newman (1996), drawing on his doctoral dissertation Chaos and Consciousness, buoyed up the contention that chaotic systems are illustrations of emergent properties. Newman’s (1996) argument depends upon his definition of emergentism, so it is useful to cite it:

[A] property P of an object O is emergent if P is the result of physical properties of the physical constituents of O while at the same time it is impossible to explain P in terms of the physical constituents of O. (p. 246)

Newman relied upon Broad’s (1929) criterion, which has been influential for philosophers of mind, that “the existence of an emergent property cannot be predicted on the basis of the best possible knowledge of the lower-level entities” (1996, p. 247). According to Newman (1996), a chaotic system has several features, of which only the last concerns us in this context: a system with three or more dimensions has a “particular kind of aperiodic long-term behavior that is characterized by the existence of a strange attractor in the system’s state space” (p. 252). “This means”, Newman (1996) said, “that the state of a chaotic system evolves toward the attractor in its state space, it will never be in the same state twice, and any two nearby points in the state space will diverge exponentially under the dynamical evolution of the system” (p. 254). Yet he also pointed out that the unpredictability of non-linear dynamical systems is a kind of “epistemic impossibility rather than a metaphysical impossibility” (pp. 254-255).
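The exponential divergence Newman described can be made concrete with a minimal sketch (mine, not Newman’s): a crude Euler integration of the Lorenz system, a textbook three-dimensional chaotic system with a strange attractor. Two trajectories that begin a billionth apart end up macroscopically separated, even though every step is fully deterministic.

```python
import math

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system (textbook parameter values)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def distance(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Two trajectories starting a billionth apart in one coordinate.
a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)
d0 = distance(a, b)

for _ in range(30000):  # integrate for 30 time units
    a = lorenz_step(a)
    b = lorenz_step(b)

# Deterministic throughout, yet the initial gap has grown by many
# orders of magnitude: nearby points diverge exponentially.
print(d0, "->", distance(a, b))
```

Shrinking the initial gap further only delays the divergence; it never prevents it. That is Newman’s point that the impossibility of prediction here is epistemic, not metaphysical: the dynamics remain fully determined by the physical state.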

Yet if we are dealing only with our epistemic limits, there is no reason to posit ontologically emergent properties. To speak of the mind as an emergent system is precisely to claim that there is an entity, the mind, above and beyond the brain in the strong sense in which Newman discussed emergentism. Suffice it to say, Gillett (2002b), whom I shall consider in more detail in this section, rejected Newman’s (1996) attempt to use chaos theory to account for emergent properties, claiming he cannot show they are “causally efficacious” (p. 104). If chaotic systems reflect our epistemic limits, as Newman held, then, I shall argue in considering Gillett, there is no reason to posit ontologically emergent properties, their causal efficacy not having been established. At best, non-linear dynamical systems may provide a model of the relationship between the state of the brain and the firing of any one neuron, but not between a physical and a mental system. Having not explained downward causation, we impale ourselves upon the first horn of Kim’s dilemma.

Another tack. Gillett (2002a, 2002b) has attempted to solve Kim’s dilemma by distinguishing causal and non-causal ontological determination. Gillett (2002b) relied upon Shoemaker’s (1980) causal theory of properties, according to which “a property is individuated by the causal powers it potentially contributes to the individuals in which it is instantiated” (as cited in 2002b, p. 98). As Gillett (2002a, 2002b) noted, the shape of a knife makes it appropriate for cutting flesh (a property) when the knife is made of, for instance, steel and is of the right size.

Ontologically emergent properties, like being appropriate for cutting flesh, argued Gillett (2002a), are examples of non-causal determination.  Gillett (2002a) called his view a “patchwork physicalism” (p. 114). On his view, a patchwork of different properties, both causal and non-causal, function in conjunction. As Gillett (2002b) explained, upward determination is not causal because it is “instantaneous” (p. 100) and involves “wholly distinct entities” (p. 100). He claimed the relationships between the two levels are like that between parts and a whole; they non-causally determine each other.

Gillett thought that we need not account for mental causation because such causation does not apply in physics anyway. On his view, we can jettison reductionism and are not required to account for mental causation, since he defended a non-causal version of determination.

Consider the property of being knife-shaped, which, as Gillett held, is not entailed by the object’s micro-properties; nor is its ability to cut flesh. The function we assign the object, namely to cut, is what makes it a knife. Yet merely designating an object as a knife only reflects how we use language and our social practices, not that some terms cannot be reduced to their physical properties appropriately combined. Gillett has only argued for multiple causal determinations, where the non-causal ontological component is, in the case of a knife, a fact about our conventional designation. It could be argued that the difference between a knife and the mind is that between an artifact and a natural kind. An example of a natural kind is a tomato, which can be defined by its deoxyribonucleic acid (DNA); this is not true of artifacts.

Though Gillett has attempted to evade the necessity of accounting for mental causation, he must nonetheless do so. On Gillett’s view, taken to an extreme, entities non-causally proliferate without any clear limits other than how we use language. At the least, in the case of the mind, accounting for its causal efficacy is a useful constraint on its ontological reification.

The ontological status of the mind is a litmus test for resolving debates about reductionism in the philosophy of science. We can take solace in knowing that the general problem of reductionism that spawned the mind-body problem may be solvable, because there is an explanatory commerce between levels that must involve causality. Yet we must explain how there could be mental causation. Kim’s dilemma cannot be evaded; it must be tackled.

The Second Horn: The Reductive Definition of Mind

The alternative tack, reductionism, we may wish to recall, brings with it the danger of impalement on the second horn: the loss of the subjective character of consciousness. One notion remains attractive in emergent accounts: the mind is a sum greater than its parts at any one time that also acts upon them. Learning that water is H2O does not eliminate our talk of water, but clarifies our definition of it. Much of the problem, I shall argue, requires a clarification of what we mean by mind.

Putnam (1967/1975) had already said that sense is to be understood in terms of “what there in fact is”. What there is, is brains and their constituent parts.

Marras (2005, 2006), of the Department of Philosophy at the University of Western Ontario, Canada, argued that the reduction of emergent properties to physical ones does not require identifying the two, only that the former can be reductively explained in terms of base properties. Marras (2006) conjectured that emergent properties like pain may not be “beyond the scope of reductive explanation altogether” (p. 367). Marras (2005) claimed, in fact, that whether the mental can be reduced to the physical is an “empirical question” (p. 359).

Resolution of Kim’s dilemma is both a conceptual and empirical issue. It is a conceptual issue because we need to define what we mean by mind. It is an empirical issue because what we think a mind is will be influenced by what we learn about the brain. The famous examples of neurological progress relate to studying lesions in the brain, and it is helpful to rely upon them.

As Kim had already realized, we cannot, on the first score, explain mental causation without making the identification between the physical and the mental. At issue is how we are to identify the physical and the mental. On the second score, empirical evidence does bear on the problem of mental causation. As is well known, damage to Broca’s area, in the frontal lobe, produces deficiencies in producing speech, while comprehension remains relatively intact. Damage to the prefrontal cortex can cause deficiencies in planning and making judgments, too. Damage to Wernicke’s area, on the left posterior section of the superior temporal gyrus encircling the auditory cortex, leaves speech fluent but impairs the understanding of the meanings of words. Of course, some brain damage causes memory loss. As brain damage accrues, we can lose a sense of self altogether. If we had no memory, I think, we would not even be conscious, as understood in colloquial terms.

It is highly probable that there is no one brain state that is always coextensive with exactly the same mental one. Yet certain mental states do depend on brain states in certain areas, like the visual cortex at the back of the head. Marras needs to explain what a mental state of pain is if it is given a reductive explanation. I have an answer. The mind is, I contend, semi-global states of the brain, since some parts of the brain must be active for one to be conscious, and we can leave to neuroscientists the task of determining which modules are so.

Van de Laar (2007), of the Faculty of Philosophy at Radboud University Nijmegen, the Netherlands, argued that some dynamical systems “exhibit a form of global to local determination or downward causation” (p. 308).[8] He claimed that the mathematical models of dynamical systems can be extended to natural phenomena. Though noting that all talk of higher and lower is metaphorical, Van de Laar (2007) also held, “The dynamical higher-order patterns—some of which are associated with (conscious) mental activity—constrain the behavior of their lower-level constituent (neurons)” (p. 320).[9] As Van de Laar (2007) pointed out, the problem is that we lack a “clear metaphysical embedding” (p. 321). Van de Laar (2007) claimed that Gillett (e.g., 2002a, 2002b) provided the required metaphysical clarification. For Van de Laar, micro-properties are necessary, and the explanation of the realized properties requires reference to them.

Yet I have already concluded that there are reasons to have deep reservations about Gillett’s (2002a, 2002b) account. Gillett, we may wish to recall, attempted to legitimate the ontological reification of the mind based on the notion of non-causal determination between various levels of description. We cannot, I argue, clarify the metaphysical issue of what the mind is without explaining mental causation. In fact, in lieu of explaining mental causation, Van de Laar has only modeled the relationship between brain states and the firing of individual neurons, not said what the mind is. Further conceptual clarification of semi-global brain states is needed, as well as of their relationship to mind.

Returning to the computer analogy helps us find our way through the thicket. The electrons moving in the circuits of computers allow the execution of various functions. If we are playing a video game, say Donkey Kong, the colors upon the screen, the music, the coordination with the joystick, and so on, are a function of the software. In the analogy between computers and talk of semi-global reductionism, the brain is the hardware-cum-software. The screen, to follow the analogy, is like the physical apparatus of the body (e.g., the face). There would be little use for a computer processor if it were not connected to the physical apparatus, like the screen, joystick, disk drive, printer, or keyboard, required for the installation of the software; and even though some software is built in, the system never gets off the ground without inputs. There is a moral.

The debate over the ontological status of consciousness has been one between proponents of mental states and proponents of brain states alone. The brain, however, is part of the nervous system. Students of brain development know that the senses—hearing, seeing, tasting, touching, and smelling—are indispensable to having a mind. It is fanciful to think of brains in vats, a science-fiction scenario worthy of Star Trek, but a vat is just a different way to feed information into the system. From an empirical point of view, there must be inputs in the first place. We need a mind because we have a body, as Aristotle already knew.

By a robust understanding of semi-global brain states I mean this: imagine a series of concentric circles, one including another. At the core, the mind is semi-global brain states. Around that, allowing the possibility of mind, we have the nervous system and the body; and, further out still, the entire socio-cultural world.

Since the idea of ecology is all the rage right now, I need to distinguish my view from more radical ones. Robbins and Aydede (2009) authored the introductory chapter of The Cambridge Handbook of Situated Cognition. They identified three theses often held by proponents of situated cognition: the embodied, embedded, and extended theses. First, the embodied thesis is that cognition depends on the body. For example, we can explain how symbols become meaningful by understanding the sensory-motor basis of cognition. Further, they distinguished between on-line cognition, when we are engaged in the world, and off-line cognition, when we are not (e.g., when we are reflecting).

Second, the embedded thesis is that cognitive activity exploits structure in the natural and social worlds. As Warneken and Tomasello (2009) said, writing about cognition and culture, we form “shared intentionality” (p. 469). For example, institutions like marriage, money, and government require social recognition to maintain their meaning and value. Another example is cognitive off-loading, the way in which we store information in our environment, which beckons us toward an ecological theory of mind; and this leads to their next and final point.

The extended thesis is the view that the boundaries of cognition extend beyond the individual. The extended thesis is radical because it entails an ontological claim about what the mind is. As they noted, the reason to take seriously the extended thesis has been interest in dynamical systems, whereby the entire system has properties that cannot be accounted for by merely understanding its parts. Rather, we have to understand the mind as part of the environment, as one entire system.

However, as they also noted, Adams and Aizawa (2008) have argued that the extended thesis conflates the important metaphysical distinction between causation and constitution. That is, just because something causes something else does not make it part of what that thing is. Suffice it to say, the extended thesis may be worth considering for political or methodological reasons, but we need not endorse it in the philosophy of mind. Whether we can take into account the role of the social context in cognition without ontologically distributing the mind outside our skulls, I shall leave for others. The ecological view I propose is merely intended to pay heed to the embodied and embedded theses, specifically, the way the mind draws upon chunks of brain functioning developed primarily through social interactions.

To solve Kim’s dilemma, we need to take stock of the benefits and costs of any one view in light of his desiderata. The challenge is to explain mental causation while maintaining the subjective character of consciousness. As Kim anticipated, emergent views of consciousness must fail, for they do not allow us to account for mental causation. Kim (2005) was on the right track, as indicated in the title of his book Physicalism, or Something Near Enough. I have pursued a reductive strategy to show the dilemma can be solved. Once we accept the mind as semi-global brain states, as I have suggested, we no longer have a problem of mental causation. Contending that the mind causes a brain state just means that certain collective brain states influence, say, the firing probability of a bunch of neurons in a certain area of the brain. The mind, I emphasize, has not been eliminated. It is still useful to talk of a mind, as it denotes a specific type of brain functioning in the robust sense discussed.

Talk in folk terms also allows us to save the subjective nature of the mental. The use of our folk language, however, has carried with it the metaphysical assumption that what is denoted by it is not real. For our present purposes, we need not determine whether the negative connotation of the word folk requires emendation, because that would take us into an ontological dispute and hence beyond the scope of this paper. Kim’s dilemma, recall, only requires that we respect consciousness. Suffice it to say that the use of folk language related to mental life is justified because it is consistent with current scientific practices. Identifying the mind with semi-global brain states does not hinder, or in any way invalidate, relying upon introspective methods to probe a person’s mental states in a folk language, as in qualitative studies. A folk language, in fact, is a natural way to access qualitative truths at that level of description, which, it is reasonable to think, would at least have to be taken into account in addition to any strictly reductive findings.

Kim’s desiderata for a successful resolution of his dilemma include explaining how the mind could be causally efficacious and respecting consciousness. Kim’s dilemma forces this. First, the only way to account for mental causation is to reduce the mind to the brain. Second, we obviously experience the world, so it would be a shame if all our experiences were themselves chimeras. I have attempted to show that, far from conflicting, both criteria function in tandem. From reductionism flows the solution to mental causation. I argued that brain states precipitate other brain states, and that semi-global ones, identified with consciousness, do so, too. There is both upward and downward causation within the brain: from the parts to larger chunks and vice versa. Ecological reductionism does not entail the loss of the mental. On the contrary, I have attempted to clarify what we mean when we assert “we have minds”.

There is a philosophical moral. Wittgensteinians have often been considered a threat to cognitive scientists. Traditional philosophical problems, on Wittgenstein’s view, merely involve linguistic confusions. For a Wittgensteinian, the mind-body problem must be rooted in confusion about the use of those words. Further, as Brook, former Director of the Institute of Cognitive Science at Carleton University, in Canada, pointed out, for Wittgensteinians, nothing goes on in the head (A. Brook, personal correspondence, 2002). Cognitive scientists, in an ironic twist, may concur in their own way. Meanings do not float freely in a netherworld. As I have argued, and by the lights of our best scientific accounts, meanings are brain states that arise from persistent and ongoing attempts to solve problems in an environment that includes an entire socio-cultural context. The best way to show the fly out of the fly-bottle is to help it see. We can get out together.

Kim identified a dilemma that reflects the desire to account for mental causation (the mind is good for something, roughly) while maintaining the subjective nature of consciousness. As I anticipated in the “Scope” of this essay, ecological reductionism is only a picture, but it has the virtue of fulfilling Kim’s desiderata where several other views I considered flounder on one or the other horn of the dilemma. I have argued from an ontological vantage point that ecological reductionism allows us to recast our definition of mind in reductive terms while retaining our folk semantics. After all, our meanings are nuanced, and we intend slightly different things by “mind” in different contexts.

Further research will be fruitful, I think, if we extricate ourselves from metaphysical disputes without dismissing them out of hand. We need to spell out how brain functioning allows for mental states and for the proliferation of ideas that we have often taken to be abstract objects, like numbers. We cannot escape the mind; we cannot escape the brain. And we cannot exist outside of the socio-cultural context that gives some shape to our perceptions of the world and our place in it. Is this not what Kim’s dilemma was all about?

In considering whether to accept or reject ecological reductionism, we learn something about ourselves: we desire a philosophical account of the mind that is scientifically cogent yet tallies with our everyday goings-on. A theory of mind’s acceptability must lie in its successful application, its explanatory power, and its usefulness to ordinary linguistic practices, because that is the level of description at which mind-talk occurs. There must be room in our ontology for the everyday, too.



University of Windsor 

Windsor, Ontario, Canada


Batterman, R. W. (2000). Multiple realizability and universality. British Journal for the Philosophy of Science, 51, 115-145.

Bontly, T. D. (2002). The supervenience argument generalizes. Philosophical Studies, 109, 75-96.

Broad, C. D. (1929). The mind and its place in nature. NY: Harcourt, Brace.

Brook, A., & Akins, K. (Eds.). (2005). Cognition and the brain: The philosophy and neuroscience movement. NY: Cambridge University Press.

Clayton, K. (1997). Basic concepts in nonlinear dynamics and chaos [Online document]. Available:

Feigl, H. (1958). The ‘mental’ and the ‘physical’. In H. Feigl, G. Maxwell, & M. Scriven (Eds.), Minnesota studies in the philosophy of science (2nd ed., Vol. 2). Minneapolis: University of Minnesota Press.

Fodor, J. A. (1981). Representations: Philosophical essays on the foundation of cognitive science. Cambridge, Mass.: MIT Press.

Friedenberg, J., & Silverman, G. (2006). Cognitive science: An introduction to the study of mind. London: Sage.

Gardner, H. (1985). The mind’s new science: A history of the cognitive science revolution. NY: Basic Books.

Gillett, C. (2002a). Strong emergence as defense of non-reductive physicalism: A physicalist metaphysics for ‘downward’ determination. Principia, 6, 89-120.

Gillett, C. (2002b). The varieties of emergence: Their purposes, obligations and importance. Grazer Philosophische Studien, 65, 95-121.

Hauser, M. (2000). Wild minds: What animals really think. NY: Henry Holt.

Khalidi, M. A. (2005). Against functional reductionism in cognitive science. International Studies in the Philosophy of Science, 19(3), 319-333.

Kim, J. (1996). Philosophy of mind. Oxford: Westview Press.

Kim, J. (1999). Mind in a physical world: An essay on the mind-body problem and mental causation. Cambridge, Mass.: MIT Press.

Kim, J. (2005). Physicalism or something near enough. Cambridge, Mass.: MIT Press.

Kim, J. (2006). Emergence: Core ideas and issues. Synthese, 151, 547-559.

Marras, A. (2005). Consciousness and reduction. British Journal for the Philosophy of Science, 56, 335-361.

Marras, A. (2006). Emergence and reductionism: A reply to Kim. Synthese, 151, 561-569.

Newman, D. V. (1996). Emergence and strange attractors. Philosophy of Science, 63, 245-261.

Poeppel, D., & Hickok, G. (2004). Towards a new functional anatomy of language. Cognition, 92, 1-12.

Posner, M. I. (Ed.).  (1990). Foundations of cognitive science. Cambridge, Mass.: MIT Press.

Putnam, H. (1975). The nature of mental states. In H. Putnam, Mind, language, and reality: Philosophical papers (Vol. 2, pp. 429-440). Cambridge: Cambridge University Press. (Original work [Psychological predicates] published 1967)

Putnam, H. (1990). Realism with a human face. Cambridge, Mass.: Harvard University Press.

Putnam, H. (1998). Representation and reality. Cambridge, Mass.: MIT Press.

Putnam, H. (2004). Ethics without ontology. Cambridge, Mass.: Harvard University Press.

Robbins, P., & Aydede, M. (2009). A short primer on situated cognition. In P. Robbins & M. Aydede (Eds.), The Cambridge handbook of situated cognition (pp. 3-10). Cambridge: Cambridge University Press.

Shoemaker, S. (1980). Causality and properties. In P. van Inwagen (Ed.), Time and cause. Dordrecht: Reidel.

Smart, J. J. C. (1959). Sensations and brain processes. Philosophical Review, 68, 141-156.

Tucker, D. M., Luu, P., & Derryberry, D. (2005). Love hurts: The evolution of empathic concern through the encephalization of nociceptive capacity. Development and Psychopathology, 17, 699–713. doi: 10.1017/S0954579405050339

Van de Larr, T. (2006). Dynamical systems theory as an approach to mental causation. Journal for General Philosophy of Science, 37, 307-332.

Van Gulick, R. (2001). Reduction, emergence and other recent options on the mind-body problem: a philosophical overview. Journal of Consciousness Studies, 8, 1-34.

Warneken, F., & Tomasello, M. (2009). Cognition for culture. In P. Robbins & M. Aydede (Eds.), The Cambridge handbook of situated cognition (pp. 467-479). Cambridge: Cambridge University Press.


[1] In Putnam’s (1967/1975) “The Nature of Mental States” he asked whether temperature could be identified with mean molecular kinetic energy, thinking it could be. Mental states, like pain, however, he held to be something different altogether; pain “is a functional state”, he claimed, “of the whole organism” (1967/1975, p. 433). Famously, however, in Putnam’s (1998) lectures at McMaster University, he held, “[F]unctionalism, constructed as the thesis that propositional attitudes are just computational states of the brain cannot be correct” (p. 74). As Putnam explained, the assertion “there are a lot of cats in the neighborhood” cannot be reduced to one computational state. Mental states are multiply realizable in physical ones.

[2] For the purpose of making the point only, we can put matters in logical terms. “(x)(y) Cxy”: it is possible that all brain states cause a single mental state. Also, “(y)(x) Cyx”: it is possible that all mental states are caused by a single brain state.
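The quantifier-scope contrast that footnote 2 trades on can be made explicit; the following formalization is my reconstruction of the intended point, reading Cxy as “brain state x causes mental state y”:

\[
\forall x\,\exists y\; Cxy \qquad \text{(every brain state causes some mental state or other)}
\]
\[
\exists y\,\forall x\; Cxy \qquad \text{(a single mental state is caused by every brain state)}
\]

The second, stronger reading does not follow from the first; keeping such quantifier scopes distinct is what allows one mental state to be multiply realized without collapsing the two levels of description.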

[3] Putnam (1998) noted that if we ask how many objects are in a room, the answer depends on our definition of object. In Putnam’s (1990) paper “Is Water H2O?” he noted that a table is the same if one molecule is missing. A table is not a mereological sum (like a set) of molecules, he said. The moral is that medium-sized objects merit their ontological reification, and we could think the same is true for the mind, too. Yet, at the same time, in his (2004) Ethics Without Ontology Putnam acknowledged that lower-level descriptions can help explain what happens at the higher levels.

[4] Supervenience understood along Kim’s lines forces us to reject multiple realizability. Suffice it to say, however, that we can discuss the mind-body problem, and Kim’s dilemma specifically, even if we embrace multiple realizability. The idea of supervenience, I think, need not stipulate the exact nature of the relationship between two levels of description, only that there is warrant for distinguishing them, namely, one being contingent upon the other.

[5] “To think”, Kim (1999) wrote, “that one can be a serious physicalist and at the same time enjoy the company of things and phenomena that are nonphysical, I [Kim] believe, is an idle dream” (p. 120).

[6] We may wish to note that attributing causal efficacy to only base properties holds the danger of the drainage problem. If we could keep on dividing matter, causality drains away. If only fundamental particles have causal efficacy, we are on a slippery slope as we divide them further at the subatomic level. 

[7] Batterman (2000) said, “Instead, in the asymptotic regime where many molecules find themselves correlated—that is, where correlation diverges—it is the collective or correlated behaviour that is salient and dominates the observed phenomenological behaviour” (p. 129). “Such asymptotic methods” said Batterman (2000), “often allow for the understanding of emergent structures which dominate observably repeatable behavior in the limiting domain between the theory pairs” (p. 137). 

[8] Citing Clayton (1997), Van de Larr (2006) said, “[A] dynamical system is a set of functions (rules, equations) that specify how variables change over time” (p. 309).
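Footnote 8’s definition can be made concrete with a toy example. The sketch below is my own illustration (neither Clayton nor Van de Larr discusses this particular system): it iterates the logistic map, a textbook one-variable dynamical system in which a single rule specifies how the variable changes over time.

```python
# A minimal illustration of the quoted definition: a dynamical system is a set
# of rules (here, one rule) specifying how variables change over time. The
# logistic map x_{t+1} = r * x_t * (1 - x_t) is a standard one-variable case;
# its use here is purely illustrative, not drawn from the cited sources.

def logistic_step(x, r=4.0):
    """One application of the rule: how the variable x changes per time step."""
    return r * x * (1 - x)

def trajectory(x0, steps, r=4.0):
    """Iterate the rule to obtain the system's trajectory over time."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1], r))
    return xs

# At r = 4.0 the map is chaotic: trajectories from nearby starting
# points quickly diverge, one hallmark of nonlinear dynamics.
path = trajectory(0.2, 5)
```

Nothing philosophical hangs on this particular choice of rule; it merely shows what “a set of functions that specify how variables change over time” looks like in practice.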

[9] Van de Larr (2006) wrote, “In the ideal case of an intentional action the intentional influence would be modeled by the current order parameter being the parameter that is dominant with regard to the trajectory of the system at that time” (p. 320).