What is the human soul?
I realize that most of you do not believe in the religious concept of a soul as some nonphysical/magical entity overlaid onto your body. By "soul," I do not mean anything that narrow. Rather, I mean the human identity, consciousness, personhood, the sense of "I" that you feel in your mind. "I think, therefore I am"—but what is the "I"? Certainly, non-human animals (and maybe even plants and other organisms) might be said to have a "soul" in this sense.
If the soul exists in the brain, then how exactly does it arise? Does the soul "grow" along with the brain as the zygote develops into an embryo, then an infant, then an adult? Is the soul a binary state, all-or-nothing—or are there levels of "soul-development"? For example, do people have more developed souls than animals? Along those same lines, do certain people have more developed souls than others (for example, adults vs. infants, or intelligent people vs. the severely mentally handicapped)?
Also: how does a soul arise in the brain? How is it that certain animals can have an acute awareness of their own personhood and consciousness, based upon nothing more than electrical signals in gray matter? Can a robot have a soul?
A lot of my ideas about the soul come from Douglas Hofstadter's I Am a Strange Loop, which posits that the soul is literally a "strange loop" that arises from the inherent complexity of the brain's organizational system—similar to the self-referential structure at the heart of Gödel's Incompleteness Theorem. I thought this book was the coolest thing ever, and it went a long way toward explaining the mystery of the soul in terms of materialistic, natural philosophy.
Maybe you should finish watching that episode first
I'll swallow your soul!
I'll swallow your soul!
I'll swallow your soul!
Your statement sounds like the homunculus fallacy, where you posit the existence of a tiny man inside your brain controlling your body's actions and emotions. But does this tiny man also have a tiny man inside of his brain? And so on.
Would that simulation become sentient?
Would it be murder to turn it off?
I'm not trying to say anything conclusive, just thinking about how your description of consciousness reminds me of that concept.
And are we talking about a brain in a vat? Or a brain hooked up to some apparatus that inputs sensory experiences and outputs physical actions? I wouldn't consider a brain in a vat "alive" necessarily. But a robot with a brain, sure, killing that would be murder.
That's obviously completely hypothetical at the moment - an accurate simulation would take some ridiculous processing power that's well out of our grasp currently - but I believe it would, yes.
Regardless of whether the simulation would or would not actually be conscious, the only evidence we would have to determine either way would be the exact same evidence that we have to determine whether other people are conscious. Since we assume that other humans are, indeed, sentient, it would be horrifically anthropocentric of us to assume that the simulation was not.
I hate to say it, but I'm kind of a dualist when it comes to talking about the soul. The idea that a collection of atoms could somehow lead to a subjective consciousness completely boggles my mind, and seems like just as big a gap in the theory of materialism as a dualist being unable to say how something that is not physical could interact with the physical world.
Well, if it became sentient, and if it's a computer model, I'd assume it'd still be stored when you turned the computer off. Turn it off one day, turn it back on the next, it'll be like nothing happened.
And if you just left it switched off?
Life-long coma?
Ever been unconscious?
I'd say that the vitals coming into the brain through the virtual bloodstream could stop right at the inputs, and perhaps we could put this brain in a simulated body & environment (including sensory organs) of much less complexity to test it. Or we could hook this simulated brain to a robot of some kind, and let it experience the natural world through the robot's senses.
I will attempt to explain it now. If there are any mathematicians here, please correct me because this shit is really hard to understand.
Gödel's Incompleteness Theorem states that any sufficiently powerful formal system (strictly, one rich enough to express basic arithmetic, though the book extends the idea loosely to language, philosophy, science, anything really) is necessarily incomplete; that is, it cannot prove every true statement it can express, and it cannot be perfect. For our present discussion, his conclusion is less interesting than the way he proved it.
Basically, Gödel noticed that the entire structure of mathematics is recursive. You start with a few axioms, apply them again and again in new configurations, and you end up with theorems—which are truths that are extensions of the original axioms. Then, you can apply the theorems again and again in new configurations, and end up with even more truths. And so on and so on.
Extended far enough, you end up with an extremely complex structure of mathematical true statements. The structure becomes so complex that you can even use the structure to refer to the structure itself! Gödel assigned numbers to formulas and theorems—essentially using math to talk about math, abstractly. This is the "loop" in the title of the book—like how you see an infinite hallway when you point a video camera at a TV screen.
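Here's a toy sketch of that numbering trick in Python, if it helps (the symbol table and encoding scheme are my own invention, not Gödel's actual 1931 construction):

```python
# Toy Gödel numbering: encode a formula (a string of symbols) as one natural
# number, so that statements ABOUT formulas become statements about numbers.

SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6, 'x': 7}

def next_prime(p: int) -> int:
    """Return the smallest prime greater than p (naive trial division)."""
    candidate = p + 1
    while any(candidate % d == 0 for d in range(2, int(candidate ** 0.5) + 1)):
        candidate += 1
    return candidate

def godel_number(formula: str) -> int:
    """Encode the formula as 2^c1 * 3^c2 * 5^c3 * ..., one prime per position."""
    n, p = 1, 2
    for ch in formula:
        n *= p ** SYMBOLS[ch]
        p = next_prime(p)
    return n

print(godel_number('0=0'))  # 2**1 * 3**3 * 5**1 = 270
```

Once every formula is just a number, a claim about formulas (like "this formula is provable") can be rephrased as a claim about numbers, and arithmetic suddenly has the vocabulary to talk about itself. That's the self-reference the book runs with.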
What makes it a "strange loop" is as follows: with increasing complexity comes increasing ambiguity—much in the same way that complex human languages (like English) have words with two meanings, or more than one word for the same thing. Gödel showed that the same thing arises spontaneously in complex mathematical structures. He also showed that these inevitable ambiguities let you construct paradoxical, self-undermining statements—hence the conclusion of his incompleteness theorem. So the "loop" is less of an infinite hallway, and more of an Escher drawing, with two hands drawing each other—a "strange loop."
The author of the book posits that the very essence of consciousness is one of these "strange loops" in the human brain. Animals evolved brains with increasing ability to organize and compartmentalize physical stimuli—stuff like "good to eat"/"not good to eat" or "good to sleep"/"not good to sleep". This increasing, evolved complexity is similar to the way theorems recursively build upon themselves to create more complex mathematical structures. Eventually, an animal evolves with a brain complex enough to compartmentalize the process of compartmentalization—in the same way Gödel showed that math can be used to describe math.
The trick is, with a certain level of complexity in any system comes ambiguity and strange-loopiness, which is what Gödel proved. This also applies to the brain. So consciousness, self-awareness, "I"-ness, the soul, can be thought of as a physical example of a strange loop.
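If you want a concrete taste of a system referring to itself, the classic programming example is a quine, a program whose output is exactly its own source code (this is just the standard Python quine, nothing from the book):

```python
# A quine: running this two-line program prints exactly these two lines.
# The string s is a template for the whole program, with a %r slot where a
# quoted copy of s itself gets re-inserted.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

It's obviously not conscious, but it shows how genuine self-reference can fall out of machinery that was only built to shuffle ordinary symbols around.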
And this isn't going to sound very logical or scientific, but this explanation just feels right to me. When I think about the essence of my most basic thought process, of this sense of identity that somehow doesn't feel fleeting at all like regular thoughts, it just feels like a feedback loop, like two mirrors facing each other. I definitely thought Hofstadter's explanation was enough to silence any qualms I had about materialism vs. dualism. Dualism is for losers.
Maybe it's possible (one day) to analyze the layout and electrical activity of someone's brain and build a computer program out of silicon that simulates the exact thought processes and accumulated experience in that brain. We already build robot programs that are roughly as complex as insect brains.
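For a sense of scale, here's the kind of building block such a simulation would need: a leaky integrate-and-fire neuron, a standard textbook model (the parameters below are illustrative defaults, not calibrated to any real neuron):

```python
# One leaky integrate-and-fire neuron: the membrane potential v decays toward
# rest, integrates input current, and emits a spike when it crosses threshold.

def simulate_neuron(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                    v_thresh=-50.0, v_reset=-70.0):
    """Return spike times (ms) for a neuron driven by input_current (a list)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + i_in) / tau   # leak toward rest + input drive
        v += dv * dt
        if v >= v_thresh:                   # threshold crossed: spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

print(simulate_neuron([20.0] * 2000))  # constant drive -> regular spiking
```

A whole-brain simulation would need something like 86 billion of these, densely interconnected and far more detailed, which is why this is still firmly hypothetical.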
I really have no idea whether destroying such a program would constitute murder. Especially if the program could easily be copied onto other hardware. Since brains depend on physical stimuli and feedback, and our souls "grow" accordingly based on this feedback, I would wonder how such a program would fundamentally change based on the "experiences" of the hardware in which it resides.
Lies.
I'm going to have to track down I Am a Strange Loop; it sounds like an interesting read. Having not read it, I obviously can't comment too much on it, but it does seem odd to me that this feedback loop would lead to consciousness and subjectivity.
I'll have to take a look at it, methinks.
That last bit there is a huge leap from any previous statements, and honestly I don't buy it; the whole argument seems so hand-wavy.
A lot of it has to do with the concept of symbols and abstractions, which is another section of the book.
Your brain doesn't just compartmentalize stimuli, it abstracts them into symbols. For example, humans don't see dozens of unique woody structures with branches and different-shaped leaves. They see "trees." The concept of "tree" is an abstract symbol, a pretty complex and powerful one actually (ant brains probably do not know what a "tree" is). It exists, literally, in your brain as an electrical configuration in your neurons.
It's important to note that the "tree" symbol is more than just a way to organize sensory inputs. It's an abstraction; it functions on a higher level than the base inputs. In fact, the brain can use this abstraction as an input of its own, in language, for example.
The more complex the physical brain is, the more abstractions it can build up from physical stimuli—and the further those abstractions are removed from both the physical matter that you are touching/seeing and the physical matter that your brain is made of. (Similarly, the further you recurse from base axioms, the more theorems you can derive.)
If the brain is complex enough, it can make abstractions for its own process of abstraction and everything it has compartmentalized and abstracted so far. Like in Gödel's incompleteness theorem, this creates a self-contained "loop." This self-contained loop, because it is so highly abstracted, is far removed from physical matter. But it is still ultimately based in physical matter, in the exact same way that an ant's instinct to "eat if sweet" is based in the physical matter of its brain—the same way that a thermostat's instruction to "turn on if lower than 68 degrees" is based on the physical matter of its circuits.
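To make the "levels" idea concrete, here's a toy sketch (all the names and thresholds are invented for illustration): level 0 is a thermostat-style rule grounded directly in a physical reading, level 1 collapses raw features into a symbol, and level 2 takes the symbol itself as an input.

```python
def heater_on(temp_f: float) -> bool:
    """Level 0: a rule grounded directly in a physical reading, like a thermostat."""
    return temp_f < 68.0

def classify(height_m: float, has_leaves: bool, woody: bool) -> str:
    """Level 1: collapse raw sensory features into an abstract symbol."""
    return "tree" if woody and has_leaves and height_m > 2.0 else "shrub"

def describe(symbol: str) -> str:
    """Level 2: operate on the symbol itself, not on the raw features."""
    return f"I see a {symbol}."  # language takes the abstraction as its input

print(heater_on(66.5))                       # True
print(describe(classify(12.0, True, True)))  # I see a tree.
```

The point isn't that brains run code like this; it's that each level only ever sees the outputs of the level below, which is how abstractions end up so far removed from raw matter while still being grounded in it.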
I remember we had a debate in high school about souls and ghosts. Most of my 11th grade classmates were convinced that there was a human soul within us. However, when we were polled (a blind poll), it came out that most of those who did believe in the soul were also religious in some form or another.
My argument (raised Catholic but overall a non-believer nowadays) was that the soul is basically the natural instinct to survive as a single entity as well as a large group (extended to the human race, I suppose). When someone has given up all hope and accepted death as an inevitable outcome despite their best efforts or given up on others, they essentially become a shell of their former selves.
Anyway, I think it made more sense when I was a kid. I type it out now and I get confused.
I guess we can broaden the definition of "soul" to include a base sort of survival instinct. But what I had in mind was narrower than that—like I said, it's the sensation that you are really you, that there is a "you" who is controlling your body and making decisions.
I have faith (i.e., not just belief) that there is a soul. The problem is that the soul, as Heidegger so aptly pointed out, has been completely misunderstood since Plato. This is because the debate over agency and identity has been played on the wrong field. The traditional Cartesian notion of the soul has pretty much permeated philosophy and science and basically how everyone thinks. I am not talking about Cartesian duality; I am talking about the notion of the soul as a viewer at a movie theater. Almost every major philosopher before Husserl, and many after him, views the soul in this way: the "ghost in the machine," the ego trapped in a sensory world. Locke described the soul as trapped in a cabinet where pictures are projected. Existence is thus our perceptions and interpretations of objects, which appear to us. This is obviously egocentric, and leads to solipsism. (I suppose we could have the solipsistic debate, but...let's not.)
Heidegger changed the playing field. Being is completely relational, and depends on our "intending" with objects. Being is broken down into the ontic and the ontological. The first is basically the difference between presence and the void, each of which is equally real. The ontological deals with our different levels of being. We can intend things religiously, mathematically, scientifically, etc. The human soul is what allows this to be possible. The human soul allows the ego to initiate the subject-object dialogue and noesis. The human soul literally allows for existence.
However, it is impossible to fully analyze this phenomenologically, which is its disadvantage, because we as humans love theories, which it cannot truly provide.
The human soul also may be public, existing through language, as opposed to our private souls, which exist through our genes.
Introduction:
Difficulties persist in reconciling physicalism with our intuitive notions of conscious experience, and where consciousness can be found in the physical realm is hotly debated. In this paper I’ll take a stab at this problem. I’ll begin by introducing two prominent theories which are compatible with physicalism: Identity Theory and Functionalism. There are other theories, like Eliminative Materialism and Behaviorism, but I’ll presume that the criticisms of them found elsewhere are valid, and hence won’t trouble myself with them. Once the players are introduced, I’ll go on to consider two prominent objections to the physicalist project in general, along with their replies. Then I’ll offer my own argument, which doesn’t dispute the truth of physicalism, but rather doubts our ability to discriminate among particular physicalist theories.
Identity Theory:
One prominent physicalist conception of the mind is the Identity Theory. Under this theory, our mental states, like hurting, hungering, and desiring, are identical to specific states of our brain. It’s not always necessary that there be an exact analogue for each state; believing, for example, is often thought to involve many different brain states. However, whenever a person has a mental state it’s also a state of his brain. For a man to be in pain is just for the C-Fibers in his brain to be excited (so the common example goes). Australian philosopher and Identity theorist J.J.C. Smart used the phrase ‘nothing over and above’ to indicate that pain does not merely correlate with C-Fibers firing, but rather, pain is C-Fibers Firing (hereafter abbreviated CFF). Pain and CFF don’t go hand in hand the way that a man and his footprints do, but rather, pain and CFF are like a lightning bolt and a burst of electrical energy—one and the same thing. Hence, the question of why pain accompanies any particular brain state doesn’t even make sense under Identity Theory. If pain is CFF, then there aren’t two separate things to puzzle over—of course whenever you have one you have the other. This definitive answer to, and rejection of, an otherwise difficult question is considered an advantage of the theory.
To illustrate the theory, let us consider a particularly good identification that we’ve made based on empirical investigation, one which is often touted as an example: the identification of lightning with an electrical discharge. A lightning bolt is nothing over and above an electrical discharge that occurs in stormy weather: it’s not the case that there are two things, a bolt of lightning and a group of electrons. Lightning is not causally interconnected with the electrons, nor do the two merely appear together. Rather, the electrons are the lightning, just described in a different way.
One curious tenet of Identity Theory is that my cat can’t feel pain. Pain is C-Fibers Firing, and C-Fibers are human anatomy. My cat might feel something when she meows and runs away, but it’s not pain—it may be that my cat feels something unpleasant, but again, it’s not the same thing I feel. Identity Theory doesn’t allow for the same mental state to be realized in disparate physical systems, which can be viewed as a serious disadvantage. After all, it seems like Martians, robots, and my cat could all feel pain--even the same pain I feel. Consider the android Data, of Star Trek fame: he seemed awfully human, for all his difficulties comprehending emotion. Take this exchange from "The Naked Now," which may capture our intuitions on the subject:
Captain Picard: "Data, intoxication is a human condition. Your brain is different. It's not the same as..."
Commander Data: "We are more alike than unlike, my dear Captain. I have pores, humans have pores. I have fingerprints, humans have fingerprints. My chemical nutrients are like your blood. If you prick me, do I not ... leak?"
Functionalism:
Functionalism is a theory of the mind which accounts for the possibility of disparate physical systems having the same mentality. It does so by identifying mental states with the functional role they play, rather than their particular physical instantiation. To the functionalist, being in pain is not C-Fibers Firing: CFF is merely how humans instantiate pain. Instead, being in pain is being in the state caused by bodily harm, leading to distress, and causing avoidance behavior. Thus, in the functionalist picture, mental states are functional states of a system, characterized by their inputs, outputs, and the other functional states they’re caused by and lead to.
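As a rough illustration (a toy construction of my own, not anything from the functionalist literature), we can model this picture as a small state machine, where a state is individuated entirely by its causal profile: what inputs produce it, what outputs it produces, and which states it leads to.

```python
# A toy functional profile of "pain": caused by bodily harm, leading to
# distress, and causing avoidance behavior. All the names are invented here.

class Creature:
    def __init__(self):
        self.state = "calm"

    def receive(self, stimulus: str) -> str:
        if stimulus == "bodily_harm":
            self.state = "pain"        # pain is the state caused by harm...
        elif self.state == "pain":
            self.state = "distress"    # ...and it leads on to distress
        return self.act()

    def act(self) -> str:
        # ...and it causes avoidance behavior as output.
        return "avoidance" if self.state in ("pain", "distress") else "rest"

c = Creature()
print(c.receive("bodily_harm"))  # avoidance
```

On this picture, any substrate that realizes the same input/output/state profile, whether neurons, silicon, or Data’s positronic net, would count as being in pain.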
The identification of mental and functional states also has the advantage of feeling natural, at least in one direction. While it is often puzzling to think how a mental state could be identical with a physical state, it seems easy to identify pain with its functional role: the input of being harmed, the output of avoidance and distress, and the other functional states which lead to pain and to which pain leads. After all, it seems that the sensation of pain is inherently unpleasant, and that one just couldn’t be in pain without also being in distress and engaging in avoidance behavior.
Another advantage of functionalism is that it’s neutral with regard to physicalism: functional states may be states of physical matter, dualist substance, or both, depending on whether such a thing as dualist substance exists. However, this paper is concerned with physicalism, and hence will consider the functional states in question to be physical.
Despite these plusses, there are difficulties with the functionalist account. The most serious worry seems to be the lack of a principled explanation for what functional states get to feel like something. Though it seems easy to associate some of the feelings we know from personal experience (pain, etc.) with corresponding functional states, it doesn’t seem easy at all to go in the other direction. To wit: there are many more things in the world that could be given a functional characterization than there are things with consciousness: my computer, for example, has a printing state. The printing state occurs upon the input Ctrl-P, leads to a signal being sent along the printer cord, and can both be the cause and result of other functional states of the computer—does that signify that it feels like something for my computer to print, in the way that it feels like something for me when I’m in pain? We generally think that my computer isn’t conscious, and that nothing feels like anything for my computer. But if that’s the case, then what’s the principled distinction between its functional description and mine, such that I feel things by virtue of my functional description but it feels nothing by virtue of its?
This worry becomes even more pressing when we consider certain natural and artificial systems that not only possess functional states, but also self-perpetuate, propagate, and otherwise behave in ways that practically beg comparison to us. It still seems highly unintuitive to say that a weather front is conscious, or that rivers, or trees, or rock formations are. Yet a river may ‘repair’ itself after a flood by rebuilding its sandbars, or ‘reach’ for the sea when it’s blocked. A tree may interact with the environment by growing around obstructions and towards the sun. A corporation seems like an even more difficult example for the functionalist, as the arrangements of its memos and e-mails in various filing cabinets and servers can be seen to play functional roles analogous to, say, the chemical configurations that allow for memory in our brains. Corporations behave quite similarly to people in certain functional respects: they seek to grow and survive, spin off copies of themselves, and so on. If there is a principled way to allot consciousness to humans based on functional properties, but not to this plethora of entities, then I am not aware of it.
Such worries are a grim lot, indeed—but don’t worry too much; they will get better later on, after a fashion (in that everything else will begin to look worse, so this sort of worry will be par for the course). Now that we’ve been introduced to our principal players, though, let us move on to a pair of famous arguments which seek to discredit the possibility of physicalism entirely.
The Zombie Argument:
Let us begin with Chalmers’ zombie argument. Chalmers asks us to consider the logical possibility of zombies. Specifically, we are asked to imagine our zombie twin, who is exactly like us in terms of his basic physical properties, but unlike us, he has no phenomenal properties. Hence why we call him a zombie—he doesn’t shamble around eating brains, but there’s nothing that it’s like to be him. Similarly, his world is physically identical to ours in terms of basic physical properties, but populated entirely by zombies. The lights are out, so to speak.
We can imagine that I am sitting, eating a chocolate bar, and feeling a dull ache in my shoulder. My zombie twin and his world are physically, and hence causally, identical to me and my world, so we deduce that he must be interacting with his environment in the same causal ways that I do. Hence, he is tasting the chocolate and feeling the ache in his shoulder insofar as he is processing information about shoulders and chocolate and will act accordingly—but he is not feeling or tasting phenomenally.
If this zombie world is possible, then it follows that physicalism is false. I have a dull ache in my shoulder, and hence CFF. My zombie twin is physically identical to me, and hence also has CFF. But, being that he is a zombie, my twin does not have phenomenal pain. If CFF can occur without pain in some possible world, then it follows that they aren’t identical. Similarly, if pain is a functional state, then the zombie world shows that it can’t be a functional state of a purely physical system, because my zombie twin has a physical makeup identical to mine but never feels phenomenal pain.
However, we can avoid this problem with a response due to Perry. Perry argues that the zombie argument isn’t really about dualism so much as the causal efficacy of the mental. If you believe that the mental genuinely causes physical states, then the zombie world is impossible. For example, suppose you believe that my desire to pass my classes causes me to turn in my papers on time, or that my fear of failing causes me to do the assigned readings. Removing those mental states from the world without changing anything else would then change the subsequent course of events, and I wouldn’t turn in papers or do readings. If the mental is causally efficacious, then it’s impossible to conceive of a world in which the mental is missing, the physical relations and laws are all the same as ours, and everything continues to proceed along its regular course. If the zombie world isn’t actually coherent, then it poses no threat to physicalism.
The Knowledge Argument:
This next argument, due to Frank Jackson, attacks physicalism broadly. In the Knowledge Argument, Jackson asks us to imagine a super-scientist, Mary. Mary knows all of the statements of a complete theory of physics (she is, indeed, that well educated). She knows everything about the physical characteristics of light rays, retinas, and the agitations of grey goo in our heads—all of the physical facts of the world are at her disposal. However, in the pursuit of this knowledge she has spent her entire life in a plain black and white room, and has never actually seen the color red.
Eventually Mary emerges, and upon exit she sees a ripe apple. Jackson maintains that when she does, she learns a new fact about the world: she learns what seeing red is like. Suddenly, she realizes that her previous knowledge about the world was impoverished—there was something about color, and specifically about what other people’s experiences were like, that she didn’t understand. But by hypothesis she knew all the physical facts, so if she learns a new fact upon release it must not be a physical fact. Hence there are non-physical facts about the world, and physicalism is false.
This time we can respond following David Lewis and Laurence Nemirow. Mary doesn’t learn a new fact about the world upon her release: instead she gains a representational ability, an ability to imagine what red is like, and to tell herself that seeing red is ‘like this’ while conjuring up a mental picture of red. Mary already knew all the physical facts before her release, and indeed those were all the facts: she doesn’t learn any new ones afterwards. What Mary gains is a conceptual, cognitive capability. Hence, physicalism can account for the change that occurs in Mary without admitting some species of non-physical fact that she apprehends only upon her release.
Hey Mr^2, is my reading of Heidegger correct? Because I think I'm starting to grasp it, but I don't want to really start formulating my beliefs until I'm on the right track.
We have seen how physicalism can survive the zombie and knowledge arguments. However, regardless of whether they successfully show physicalism is false, I think that they do both point to serious difficulties in our ability to find the mental in the physical. What exactly I mean by that should become clearer as we go along. To begin, allow me to present my own variation on the knowledge argument, and explain how I think it demonstrates a difficulty with physicalism.
To start, imagine that I pick up a book of Burundian history at the bookstore. Furthermore, imagine that I’m ignorant of the fact that Burundi is a real country. By the time I finish the book I know all sorts of facts about Burundi; however, I still don’t realize that the facts I’ve picked up are actually true of a real country. I think that my knowledge is merely of a fiction invented by a particularly creative author.
Now, imagine Meredith, another super-scientist who’s spent her life locked in a room. Meredith knows all the physical laws, and has a complete theory of physics at her disposal. She also has a complete physical description of the universe, which describes to her the location of each physical particle and all of its physical properties, and as such she should be able to figure out everything that is a consequence of the physical. But there is a gap in Meredith’s knowledge--no one ever told Meredith that the physics she’s read so much about describes her own world, instead of just a possible one: Meredith’s relation to her knowledge of physics is analogous to my relation to my knowledge of Burundi in the previous example. Now, absent that information, would Meredith be able to figure out that the universe she had studied in the abstract would not only give rise to conscious experience, but that it would be phenomenally no different from her own?
Before answering, let’s first review what Meredith does know: she knows how things feel. She’s had experiences, and unlike Mary, she’s seen color. She has a whole bagful of phenomenal ‘feels’ that she’s acquainted with. She also knows all about the neurological and functional roles of masses of atoms. So, in turn, she has a whole bagful of matter-states she can identify by their biological and causal roles. In that respect, she’s not unlike a modern neurologist who has some knowledge of his phenomenal life and some knowledge of the brain. But unlike a modern day neurologist, she doesn’t start from the position of knowing that these phenomenal and physical occurrences are happening in the very same world, and she has no particular reason to try to accommodate one in terms of the other.
Let’s also establish that there are a couple of easy, but irrelevant, ways Meredith could connect her conscious experience with the atoms and particulate masses she’s studied. She might, for example, notice that they form into person-shaped aggregates which behave in much the ways she does, and hence she might start trying to form an argument from analogy based on their behavior. But the entire point of this hypothetical is to avoid that reasoning. We are seeing where we can get without any sort of argument from analogy at our disposal, merely a thoroughgoing knowledge of the base physical properties of the universe.
My intuition is that Meredith wouldn’t realize that the states of matter she studied had any associated phenomenal feels, let alone the exact feels she was already acquainted with through personal experience. Furthermore, it doesn’t seem that her theory of physics being complete really does much to help her there: the march towards smaller and smaller particles and more and more subtle forces doesn’t seem to be getting us closer to the glittery mental stuff. The point this hypothetical makes goes as follows: we identify pain with C-Fibers not because our increasingly staggering knowledge of the brain demands we attribute phenomenal feels to its physical states (or because any such future knowledge would), but because our phenomenal experience of pain demands accounting for, and we have nowhere else to put it. When Meredith talks about atoms, she doesn’t have a reason to feel compelled to fit in consciousness, and as such it doesn’t look like she’d find it there.
At this point the reader might be wondering: isn’t this just the knowledge argument all over again? Meredith knows all the physical facts, but not all of the facts—doesn’t that commit us to dualism?
We needn’t conclude that; it certainly isn’t the goal of the argument. Instead we can say that Meredith does know all the facts about pain in the world. If we accept Identity Theory, for example, then all the facts she knows about C-Fibers firing are also all the facts about pain, just under a different mode of presentation. What Meredith doesn’t know is that some of the facts she knows through physics are the very same facts that she’s aware of through phenomenal experience. The same goes for functionalism, facts about functional states, and facts about phenomenal experience. The upshot of this argument is not a point about whether the mental is physical, but rather about our ability to connect our knowledge under a physical guise with knowledge under a phenomenal guise. Meredith demonstrates that no matter how many physical facts we know about some situation, that information alone won’t be enough to determine which of those facts, if any, are also phenomenal.
The reader may object at this point that this is farfetched. What could possibly motivate the preceding account of Meredith’s knowledge? Perhaps this exercise should lead us to consider dualism as an explanation of mentality. After all, even if we can demonstrate that Meredith’s case doesn’t force dualism, the option is still open. I’ll respond to this train of thought in the next section.
The Trouble with Dualism:
The problem with dualism, simply speaking, is that it doesn’t help us at all. Take Frank Jackson’s original knowledge argument. Churchland rightly objected that if it proves anything, it proves too much: Mary could just as well study up and learn all the nonphysical, dualistic facts of the world and of seeing red. Regardless, the argument would still hold that when she finally saw red for the first time she would learn something new. Hence, the conclusion would go, there must be a non-dualistic fact about the world, because by hypothesis she knew all of the dualistic facts. Perhaps the conclusion would be Treblism, the belief in three types of thing. But of course the argument can be repeated against itself ad nauseam.
Jackson replies that there’s no reason to believe that the full story, according to dualism, can be taught in black and white, and that there’s no reason to suppose that Mary can learn everything there is to know about phenomenal feels solely through study. But if this is the case, then why assume that Mary can learn everything there is to know about physicalism through study? The problem of the knowledge argument is one of being able to explain mentality, period, not just an issue specific to physicalism.
Take a theory which admits souls as the source of mentality. We are still left with all of the problems that we’ve always had—how could some arrangement of soul-stuff, or some functional relation of the soul, be identical to my mentality? Couldn’t I know everything there is to know about souls without knowing about the specifics of phenomenal consciousness? Why is one configuration of soul-stuff pain and the other pleasure, instead of vice versa? The addition of the soul to our picture doesn’t serve any purpose when it comes to making sense of our phenomenal lives.
The problem of our mental lives is a problem regardless of the stuff we’re made of. Dualism is just a red herring, and it doesn’t pull any weight in our account. There’s no upside to it, only an additional mystery about what this other non-physical side of things is—and it seems, at this point, that the last thing we need is another mystery.
The Indeterminacy of Physicalism:
Unlike Meredith, we know that consciousness exists, somehow, in the physical system of quarks, gluons, leptons, and so forth. As such, we have to account for it. But the question remains: where does consciousness sit in the physical scheme? On my understanding, that’s the heart of the dispute between Functionalism and Identity Theory, as I’ll seek to elucidate in this section.
We know that humans are conscious beings. We also know that we have a laundry list of physical descriptions which are true of us: we have various states, as characterized under various classificatory schemes; we have relational properties with the rest of the world; we fulfill causal roles; and we’re made of certain stuff. Furthermore, since we’re committed to physicalism, we have to conclude that our consciousness must be a consequence of one or more of the physical descriptions which happen to be true of us. Perhaps, as Identity Theory asserts, to have consciousness is to have a certain brain composition. Perhaps, as Functionalism asserts, to have consciousness is to fit a certain sort of functional description.
Allow me to introduce a pair of terms to the debate: accidental, and essential. To illustrate their use, take the example of an apple, its reddish color, its roundish shape, and the specific pigmentation of its skin. We can say of the apple that its roundness is merely accidental to its reddish color: all other things being equal, were the apple’s shape to change, that change wouldn’t affect its color. The particular pigmentation of the apple’s skin, by contrast, is essential to its reddish color: all other things being equal, if the pigmentation of the apple’s skin were to change, the apple would be a different color.
When we apply this language to ourselves, our physical descriptions, and our consciousness, we can see how the disagreement between Functionalism and Identity Theory can be understood as a disagreement over what is essential to our consciousness, and what is merely accidental. To the functionalist, it’s essential to our feeling a pain that we have a certain functional description, and merely accidental that we have CFF. To the identity theorist, the converse is true.
So which is correct? If the argument I made earlier about Meredith goes through, then we can’t know. We may be committed to physicalism by lack of alternative, but even if consciousness is a physical thing it doesn’t follow that we’ll be able to find out which physical thing it is. Meredith knew all the physical facts, but she didn’t know which of those were the very same facts she was also acquainted with phenomenally. Even if we eventually assemble the same sort of godlike physical knowledge that Meredith had, we’ll have to deal with the same sort of impasse.
We may eventually know all the facts about pain, for example, under two guises—the physical descriptions and the phenomenal acquaintances. However, none of those facts, known under its physical guise, will be able to tell us that it’s also one of the phenomenal facts. Hence, if there is more than one physical description that’s been true of me at every time I was in pain and no times when I wasn’t, then I won’t be able to tell which of those descriptions is essential to my pain and which is accidental: I won’t be able to tell from the physical descriptions themselves (as Meredith demonstrates) and I won’t be able to tell by seeing which has been true iff I was in pain (both have been true iff I was in pain). This applies directly to Functionalism and Identity Theory: it seems highly likely that both theories could, after sufficient investigation, come up with precise descriptions such that, in humans, they are true of a person every time that person is in pain, and no other times. If so, we are left without a way of determining which is accidental and which is essential to pain. I am left knowing facts about my pains, about my functional description, and about my C-Fibers, but not knowing whether the facts about my pain are the very same as the ones about my functional description or whether they are the very same as the ones about my C-Fibers.
At the bottom of it, we are left without a method for eliminating any theory of the mind which can pick out a physical description that appears to coincide with pain, and which is neither self-contradictory nor patently absurd—and, unfortunately, what counts as patently absurd comes down largely to faith. This takes its toll on our ability to determine which animals, robots, or entities aside from us can have pain.
For example, consider the case of an anthill. It’s intuitive for some that a group of conscious creatures cannot possibly form a collective consciousness over and above its members. A hive mind, on this view, is just absurd. I, however, do not see the problem; we are conscious, and we are made up of living cells. If our cells were to be conscious, would we stop being so? Would it really have been impossible for us to have discovered that our bodies were operated by little colonies of gnomes? If colonies of ants exhibit the sort of behavior we expect from sentient creatures, and they have some similarities to us that we could reasonably call essential to consciousness, then it seems that ant colonies could be conscious. And yet it still seems absurd to seriously entertain the notion that a corporation or business is, actually, a conscious entity. Where’s the principled distinction there?
Consider, again, functionalism: some, like Searle, think that it’s just absurd to say a machine could be conscious. However, the multiple realizability that many others find attractive means that functionalism not only allows machines to have consciousness, it allows machines to have consciousness in exactly the same way we do. We appear to be in murky waters, where intuitions fly left and right.
In Conclusion:
The upshot of all of this is that, when forced to choose among competing theories of the mind, we are not only at the mercy of our intuitions, but those intuitions are not particularly consistent from person to person. It doesn’t even seem that any given person should necessarily have a consistent set. Physicalism, while it may be true, is not fully accessible to our knowledge.
To put this position into full perspective, I invite the reader to imagine the ethical consequences. When the argument is taken in conjunction with the highly intuitive notion that we have ethical obligations towards an entity if and only if that entity has consciousness, things begin to get very difficult very quickly. This is an issue of more than academic interest, and furthermore, the current conclusion is not a happy one. It is the conclusion of my thought nonetheless.
Take that, thread.
Can't really help you on that one--I generally only study analytic philosophy.
I actually thought of you when I was assigned Quine's "Identity, Ostension, and Hypostasis"
I was like, oh man Mr. Mr. will be so proud of me!
I agree with you that the Cartesian soul is bullshit (see my homunculus comment earlier). However:
This is question-begging: what is the "our" that is intending with objects? You are defining the soul as a relationship between an identity (soul?) and objects. What is the identity?
Again, you are defining the soul as separate from an "ego," (or in combination or resulting from an ego) but what is the ego?
This is quite a statement, probably more poetic than literal, but I take it you don't mean that the human soul allows for chimpanzee existence?
Does a chimpanzee soul allow for chimpanzee existence?
What about an ant soul?
Sponge soul?
Eh. We can examine it logically and with empirical knowledge we've derived through evolution and psychology. I disagree that the soul is ineffable; there is a great deal that we can discover about it, much more so today than at any other point in history.
This doesn't make sense to me, though maybe it's because I'm still mixed up on your weird definition of "soul". Do you believe that identical twins separated at birth have identical private souls? In other words, is your private soul immutable and completely resistant to change based on input and experience? Or does it grow along with the brain?