
Split-Brain Patients, Confabulation, and the Nature of Consciousness


Posts

  • Grey PaladinGrey Paladin Registered User regular
    edited December 2011
    Free will is a term empty of meaning if you just consider it for a few moments. We are born into a certain world, into a certain environment which shapes our personality. Every decision we make is ultimately a reaction to the world around us - that is, we would have nothing to decide on if there were no outside world. We are not floating intellects in a dark void - most everything we think and feel depends on some sort of outside input.

    Having established that all we do is react to the outside world, there are two possibilities:
    1) Our decisions are not based on pre-established facts. Despite being the same person we make inconsistent choices. The outcome is then essentially probabilistic.
    2) We react in a pre-determined manner, based on our personality. We consistently react in the same way because we are the same person and thus ultimately arrive at the same decision.

    Since I find the first option both terrifying and at odds with some of what we think we know about the world, I am personally leaning towards the second.

    Ultimately, even if we make the same choice every time, it is still us who make that choice. The choice remains ours. Separating a person from their subconsciousness is a dangerous prospect. A person is the whole, and every piece of research suggests that both the conscious mind and the subconsciousness work together to provide the final output, each both altering and relying on the other.

    EDIT: Rereading the post, the last point seems not to be entirely clear. What I was trying to say is that the most scientifically-minded of individuals have a worrying tendency to hold to some sort of 'dualist' view wherein the 'soul' is the conscious mind, while the subconsciousness, along with the body, is somehow 'not truly part of the person' and belongs to the 'earthly prison'. A person's self is composed both of what is on the surface and of what lies beneath.

    EDIT2: I cannot at english today, it seems.

    Grey Paladin on
    "All men dream, but not equally. Those who dream by night in the dusty recesses of their minds wake in the day to find that it was vanity; but the dreamers of the day are dangerous men, for they may act their dream with open eyes to make it possible." - T.E. Lawrence
  • WinkyWinky Registered User regular
    Free will is a term empty of meaning if you just consider it for a few moments. We are born into a certain world, into a certain environment which shapes our personality. Every decision we make is ultimately a reaction to the world around us - that is, we would have nothing to decide on if there were no outside world. We are not floating intellects in a dark void - most everything we think and feel depends on some sort of outside input.

    Having established that all we do is react to the outside world, there are two possibilities:
    1) Our decisions are not based on pre-established facts. Despite being the same person we make inconsistent choices. The outcome is then essentially probabilistic.
    2) We react in a pre-determined manner, based on our personality. We consistently react in the same way because we are the same person and thus ultimately arrive at the same decision.

    Since I find the first option both terrifying and at odds with some of what we think we know about the world, I am personally leaning towards the second.

    Ultimately, even if we make the same choice every time, it is still us who make that choice. The choice remains ours. Separating a person from their subconsciousness is a dangerous prospect. A person is the whole, and every piece of research suggests that both the conscious mind and the subconsciousness work together to provide the final output, each both altering and relying on the other.

    EDIT: Rereading the post, the last point seems not to be entirely clear. What I was trying to say is that the most scientifically-minded of individuals have a worrying tendency to hold to some sort of 'dualist' view wherein the 'soul' is the conscious mind, while the subconsciousness, along with the body, is somehow 'not truly part of the person' and belongs to the 'earthly prison'. A person's self is composed both of what is on the surface and of what lies beneath.

    I wouldn't say that's true of scientists at all. Rather, society at large is still commanded by a Cartesian point of view.

    Anyway, you would probably enjoy Dennett and his views on free will.

  • durandal4532durandal4532 Registered User regular
    bowen wrote:
    I could see our eventual transformation into a collective though. That is basically what the internet is, except it's voluntary and not on all the time.

    Oh totally, but that's Clark's point: we act as though the skin barrier is what allows us to interface with machines. But it's kind of obviously not! I mean, if a person uses a phone, no big. If a person IMPLANTS a phone then woah freaky cyborg. He makes the argument that so long as utility and transfer of information is the same, whether or not something penetrates your skin has nothing at all to do with whether or not you're a cyborg. Being a cyborg involves altering yourself via interface with tools, but not necessarily in a physical manner.

    Transhumanists have a total tech fetish, everything gotta be robot parts, and it makes no sense.

    Plus I like the general idea Clark puts forth of us being a collection of tools, rather than a consciousness with a "seat".

  • Grey PaladinGrey Paladin Registered User regular
    Due to a (hopefully temporary) condition called stupidity I forgot to add an 'even' before 'the most scientifically-minded of individuals...'.

  • WinkyWinky Registered User regular
    Due to a (hopefully temporary) condition called stupidity I forgot to add an 'even' before 'the most scientifically-minded of individuals...'.

    Ahhhh.

    Well then I agree :P.

  • electricitylikesmeelectricitylikesme Registered User regular
    bowen wrote:
    I could see our eventual transformation into a collective though. That is basically what the internet is, except it's voluntary and not on all the time.

    Oh totally, but that's Clark's point: we act as though the skin barrier is what allows us to interface with machines. But it's kind of obviously not! I mean, if a person uses a phone, no big. If a person IMPLANTS a phone then woah freaky cyborg. He makes the argument that so long as utility and transfer of information is the same, whether or not something penetrates your skin has nothing at all to do with whether or not you're a cyborg. Being a cyborg involves altering yourself via interface with tools, but not necessarily in a physical manner.

    Transhumanists have a total tech fetish, everything gotta be robot parts, and it makes no sense.

    Plus I like the general idea Clark puts forth of us being a collection of tools, rather than a consciousness with a "seat".

    No transhumanists generally recognize that having to carry a phone around sucks, and it would be better if it was more like a natural function of my body.

    I don't think anyone would object to say, a phone which was just a thin patch you could wear on the back of your head.

    But then we get into other issues, like how great it would be if I could lock all the joints in my arm together, and then use my fingertip as the cantilever of an AFM on an arbitrary basis.

  • bowenbowen Registered User regular
    That would be pretty amazing. Fucking borg.

  • redxredx East Bumblefuck, PARegistered User regular
    I mostly just want my body to stop doing stupid shit that seemed like a good idea millions of years ago. Access to food isn't really an issue; most folks are not going to starve to death if they waste energy. There isn't really a huge reason why I should have to make my body move around just so I have a reasonably active metabolism. There's no real reason why my brain can't just produce enough neurotransmitters that I don't feel like curling up in a ball and hiding under the blankets all day. There's no reason for a link between how much light I get exposed to and how active I am going to be (seasonal affective disorder is pretty ridiculous). There's a whole class of medical issues that deal with nothing other than the body's overreaction to a stimulus being more dangerous than the stimulus itself.

    I don't really want anything all that special. I just want a body that works decently in the world I actually live in. One that doesn't make me miserable and harm itself just because it saved a few lives millions of years ago. I have this big huge logical brain; maybe it could actually have control over the functions of the body it's been strapped to.

    All I've got is a snuggle hammer.
  • electricitylikesmeelectricitylikesme Registered User regular
    redx wrote:
    I mostly just want my body to stop doing stupid shit that seemed like a good idea millions of years ago. Access to food isn't really an issue; most folks are not going to starve to death if they waste energy. There isn't really a huge reason why I should have to make my body move around just so I have a reasonably active metabolism. There's no real reason why my brain can't just produce enough neurotransmitters that I don't feel like curling up in a ball and hiding under the blankets all day. There's no reason for a link between how much light I get exposed to and how active I am going to be (seasonal affective disorder is pretty ridiculous). There's a whole class of medical issues that deal with nothing other than the body's overreaction to a stimulus being more dangerous than the stimulus itself.

    I don't really want anything all that special. I just want a body that works decently in the world I actually live in. One that doesn't make me miserable and harm itself just because it saved a few lives millions of years ago. I have this big huge logical brain; maybe it could actually have control over the functions of the body it's been strapped to.

    This thread has convinced me that brain augmentations are probably a necessity really.

    I mean it is batshit fucking insane that my conscious response to losing some data input is to wildly speculate as to what just happened, and then assume that's absolutely true.

    No, do not do that. Calmly observe that something is wrong. I mean, we have computer intrusion detection systems which do things better than our brain does.

    Of course that does make me think that the consciousness of, say, an AI or an augmented human would in fact be radically different if we engineered out weird processes like that. What would the world "feel" like if your consciousness was engineered to doubt its sensory data, rather than doing what it seems to do (which is consider all input from the brain to the "consciousness circuit" as unquestioningly accurate)?

    It also seems like it has "brain in a box" ramifications: if you'll invent a reason as to why you can't move your arms, what else does that lead to? How degraded could, say, data from your visual processing centers be before you'd notice that it was low-resolution virtual reality? (So say, rather than your optical system, you instead feed signals to face-recognition, pattern recognition etc. - would you even be able to notice you weren't studying crisp images from your eyes?)

  • durandal4532durandal4532 Registered User regular
    edited December 2011
    I mean, agreed, but I think there's undeniably a skin focus. More natural more often than not means "in my body".

    I mean, I think his best example is speech and writing. You can consider them external tools that allow us to interact with the world and change it or ourselves. But we don't consider them artificial constructs because we access them easily.

    That's not to say things can't be improved by implanting them, but that shouldn't be given precedence. Being next to neurons won't necessarily make information transfer more efficient. Storing your magic memory implant in your skull is only different from using a paper journal in that it is easier to keep with you.

    That seems to be the major focus of a lot of the problem solving transhumanism does: stick the screwdriver to your hand! I mean I get it from an always be prepared perspective, but the difference between internalized and external tools doesn't usually seem to go beyond mild convenience. I could implant an ALU or use a calculator, the difference is mostly that in one case I have to go through neurosurgery.

    But! I'm responding to you as though you'd said something different. I honestly had never heard a pure HCI proposal from the perspective of transhumanism; I've read more 'put your brain in a robot body!' proposals. I'm a big fan of focusing on making tools transparent first, and only then considering literal 'I would like adamantium bones BECAUSE that provides me with a property only adamantium bones can' proposals, or giving yourself a permanent third arm or whatnot. I'm actually much more interested in large-scale body modification if you're going surgical, because at least you can't replicate that via efficient information transfer.

    Edit: I'm totally in favor of brain implants, but I think it'll be forever and a day before there's anything decent. I mean right now we're still arguing some incredibly basic aspects of the brain. Like. "Does it realtime process information?" basic.

    Also, I think you're presupposing some things about why consciousness is the way it is. I mean, it isn't necessarily true that completely accurate analysis of sense data is preferable. We have some evidence, for instance, that perfect memory is possible. But we don't have it! Which is weird. It is possible that's due to the blind idiocy of evolution, but it's also possible that this confers some advantage.

    Now obviously I want the ability to TEST that, and fuck if I'm not in favor of being a test subject for the memory-perfector, but I don't know that things like confabulation point to any kind of systemic error.

    durandal4532 on
  • WinkyWinky Registered User regular
    Personally, though I am certainly an optimist when it comes to this sort of technology, I don't believe that brain implants are that far away.

    Take recent research like this.

    I basically peed my pants when I read this quote as part of the paper (which was published in Nature, by the way):
    Our BMBI [brain-machine-brain interface] demonstrated direct bidirectional communication between a primate brain and an external actuator. Because both the afferent and efferent channels bypassed the subject’s body, we propose that BMBIs can effectively liberate a brain from the physical constraints of the body.

    For good or ill, we will probably be directly mucking around in the brain for a long time before we actually come to fully understand it, and likely that mucking about will lead to a lot of incidental understanding!

  • YarYar Registered User regular
    edited December 2011
    I have long believed that people subconsciously reason their way to decisions and conclusions which they cannot consciously perceive or fully explain, though they still act upon these conclusions, and may even consciously perceive their ramifications.

    I've never really seen it explained or explored the way Winky did, and I think it's great. I absolutely believe that most or many or all people have a thorough logical reasoning capacity, regardless of how conscious they are of it. I think that a lot of the most profound advances in ethics and philosophy came when someone was able to discover and articulate a thought process that was already going on subconsciously in millions of people.

    Free will is a term empty of meaning if you just consider it for a few moments.

    No, no, no, don't start this. Free will has a perfectly usable meaning. It's only empty when you try to force the term to be some broader useless concept of non-determinism that doesn't even make sense.

    Yar on
  • MrMisterMrMister Valuing scholarship above all elseRegistered User regular
    ronya wrote:
    This is not a point about determinism; rather the point is whether awareness is the causal intermediary or the causal effect (so to speak) within the temporal sequence of making decisions

    I see what you're saying (and it's true--I was not adequately addressing it earlier). From what I understand, though, the Libet experiments have not established anything so robust.
    Winky wrote:
    I mean, isn't the obvious conclusion to draw that a decision never happens?
    Winky wrote:
    you have a conscious process that has the job of going through all the things that it observes concerning your perceptions, feelings, and actions and comes up with a story regarding what "decisions" you made and why.

    A story that, ultimately, is important for informing your future behavior as well as for communicating with other humans.

    It is something of a convenient fiction, in much the same way as color constancy or other similar optical illusions.

    This story is plausible for routine and low-consequence decisions. It is also plausible for certain cases involving brain damage. I think it is unwarranted to generalize from those cases to the totality of human action, however, including paradigm cases of reasoning and deliberation wherein, for instance, a person thinks for months (over a job offer, say), consults friends and family, and so on.

  • WinkyWinky Registered User regular
    MrMister wrote:
    ronya wrote:
    This is not a point about determinism; rather the point is whether awareness is the causal intermediary or the causal effect (so to speak) within the temporal sequence of making decisions

    I see what you're saying (and it's true--I was not adequately addressing it earlier). From what I understand, though, the Libet experiments have not established anything so robust.
    Winky wrote:
    I mean, isn't the obvious conclusion to draw that a decision never happens?
    Winky wrote:
    you have a conscious process that has the job of going through all the things that it observes concerning your perceptions, feelings, and actions and comes up with a story regarding what "decisions" you made and why.

    A story that, ultimately, is important for informing your future behavior as well as for communicating with other humans.

    It is something of a convenient fiction, in much the same way as color constancy or other similar optical illusions.

    This story is plausible for routine and low-consequence decisions. It is also plausible for certain cases involving brain damage. I think it is unwarranted to generalize from those cases to the totality of human action, however, including paradigm cases of reasoning and deliberation wherein, for instance, a person thinks for months (over a job offer, say), consults friends and family, and so on.

    Like I said previously, however, it's not as though the conscious account doesn't influence decision making. Rather, I think it's a crucial tool, as it updates our self-narrative in a manner that our behavior-forming processes can work with. The notion is more that the conscious account of your mental processes, while incomplete, is the only real record you can store (because of some difficulty in translating the processes that we actually use for cognition into information that we can store). Consciousness is how we update our mental profile of ourselves, and we work off of it in a sort of feedback loop.

    This is to say that a lack of consciousness causes a very different sort of behavior, and it can be demonstrated that there are individuals who go about performing behavior without consciousness (there are literal examples of patients who suffer from temporary 'zombieism'). One example was of a man who left work, got on and then off a train at a different station, all the while having a deficit of consciousness (like sleepwalking, kind of). He was making somewhat complicated decisions and actions, certainly more than what is merely reflexive, but he was doing so without any sort of thought or direction, because he was lacking any sort of self-narrative to tell him what he should be doing.

  • jothkijothki Registered User regular
    Winky wrote:
    MrMister wrote:
    ronya wrote:
    This is not a point about determinism; rather the point is whether awareness is the causal intermediary or the causal effect (so to speak) within the temporal sequence of making decisions

    I see what you're saying (and it's true--I was not adequately addressing it earlier). From what I understand, though, the Libet experiments have not established anything so robust.
    Winky wrote:
    I mean, isn't the obvious conclusion to draw that a decision never happens?
    Winky wrote:
    you have a conscious process that has the job of going through all the things that it observes concerning your perceptions, feelings, and actions and comes up with a story regarding what "decisions" you made and why.

    A story that, ultimately, is important for informing your future behavior as well as for communicating with other humans.

    It is something of a convenient fiction, in much the same way as color constancy or other similar optical illusions.

    This story is plausible for routine and low-consequence decisions. It is also plausible for certain cases involving brain damage. I think it is unwarranted to generalize from those cases to the totality of human action, however, including paradigm cases of reasoning and deliberation wherein, for instance, a person thinks for months (over a job offer, say), consults friends and family, and so on.

    Like I said previously, however, it's not as though the conscious account doesn't influence decision making. Rather, I think it's a crucial tool, as it updates our self-narrative in a manner that our behavior-forming processes can work with. The notion is more that the conscious account of your mental processes, while incomplete, is the only real record you can store (because of some difficulty in translating the processes that we actually use for cognition into information that we can store). Consciousness is how we update our mental profile of ourselves, and we work off of it in a sort of feedback loop.

    This is to say that a lack of consciousness causes a very different sort of behavior, and it can be demonstrated that there are individuals who go about performing behavior without consciousness (there are literal examples of patients who suffer from temporary 'zombieism'). One example was of a man who left work, got on and then off a train at a different station, all the while having a deficit of consciousness (like sleepwalking, kind of). He was making somewhat complicated decisions and actions, certainly more than what is merely reflexive, but he was doing so without any sort of thought or direction, because he was lacking any sort of self-narrative to tell him what he should be doing.

    Also, spending months coming to a conclusion isn't actually a single decision, but a long series of individual decisions that you made, analyzed, and then filed back away. You seem to be consciously making the decision because you're constantly performing feedback on it, meaning that the feedback becomes a significant cause of the conclusion. You're basically consciously reshaping your brain until the point where you can subconsciously make the decision without consciously balking at it.

  • Grey PaladinGrey Paladin Registered User regular
    Yar, did you read the rest of my post? Because criticizing the common conception of it is exactly what it does.

  • ronyaronya Arrrrrrf. the ivory tower's basementRegistered User regular
    MrMister wrote:
    This story is plausible for routine and low-consequence decisions. It is also plausible for certain cases involving brain damage. I think it is unwarranted to generalize from those cases to the totality of human action, however, including paradigm cases of reasoning and deliberation wherein, for instance, a person thinks for months (over a job offer, say), consults friends and family, and so on.

    Well, Libet himself argued for the conscious veto, the "free won't", so to speak. No need to wait for months. The conscious mind gets consulted almost immediately after an action is initiated.

    I view confabulation and such to be more troubling for the conscious mind in the long term. Neurons seem to fire relatively slowly for a realtime process, and it is, given what happens when bits of the neural machine break down, much more persuasive to think of consciousness as only feeling realtime, whilst mostly actually being heavily cached and only periodically fully engaged to grapple with required decisionmaking. Too energetically expensive, perhaps.
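    That caching picture can be sketched with a memoized function - a loose Python illustration, not a claim about how neurons actually work, and all the names here are made up: repeated situations get an instant cached answer, and the expensive "fully engaged" evaluation only fires on genuinely novel input.

```python
from functools import lru_cache

calls = {"slow": 0}

@lru_cache(maxsize=None)
def deliberate(situation):
    # The "fully engaged" path: expensive, and only runs for novel input.
    calls["slow"] += 1
    return f"response to {situation}"

# Five situations arise, but only two are novel; the rest hit the cache.
for s in ["commute", "commute", "commute", "job offer", "commute"]:
    deliberate(s)

print(calls["slow"])  # 2 -- only the two novel situations were processed slowly
```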

  • Grey PaladinGrey Paladin Registered User regular
    edited December 2011
    I basically agree but think some of the words used by certain people are (perhaps not intentionally) misleading. You don't merely 'seem' to be making a choice consciously - you are making it (partially) consciously. The difference is that we are now realizing that you are not making it wholly consciously. This seems obvious in retrospect if we consider how much of what we do we actually mentally narrate and what we think of 'silently', but so do most breakthroughs.

    Personally I do not think conscious thought merely serves as a chronicler, but also (sometimes) takes part of the decision making process. I do not know if I am alone here but sometimes when I struggle with a difficult logical problem I have to externalize the thought process, thinking of it 'audibly', and 'walking my subconsciousness through it'. This leads me to believe that conscious thought serves a greater role than just feedback (at the least part of the time). The veto hypothesis similarly makes sense to me, as I often begin doing something then actually think of it 'audibly' and abort the action.

    Grey Paladin on
  • surrealitychecksurrealitycheck NONSTOP INFINITE CLIMAX POSTING you must go on i cant go on ill go onRegistered User regular
    edited December 2011
    I view confabulation and such to be more troubling for the conscious mind in the long term. Neurons seem to fire relatively slowly for a realtime process, and it is, given what happens when bits of the neural machine break down, much more persuasive to think of consciousness as only feeling realtime, whilst mostly actually being heavily cached and only periodically fully engaged to grapple with required decisionmaking. Too energetically expensive, perhaps.

    also unnecessary

    for the vast majority of decisions you make, your consciousness would be either superfluous or an active detriment

    consider how bad the conscious mind is at almost any kind of information processing

    To grey paladin:

    the best way i find to think about it is as follows;

    your consciousness is like the highest level of processing in the brain and the most general - it can deal with any problem or category of problems, but it deals with it badly. when you first start to do anything it is this level that applies to it first, eg riding a bike, and it works through forming an initial algorithm that then gets passed along to other areas. gradually as you get better at whatever it is, it bows out entirely leaving your newly formed algorithm to be dealt with by another, non-conscious part such as the cerebellum.

    this is true even of things such as chess, where an experienced player will be doing huge amounts of his analysis and decision making subconsciously - and it is interesting that it is this "emotional" thinking that informs him. if anybody has ever played an fps they can describe that feeling where you suddenly "feel" like you are in a really bad position and have to get in cover - thats a subconscious routine reminding you that you are out of cover and using emotional tools to get you to move.

    it is a troubleshooter. it is the supra-algorithm editor. but it is not necessary or needed for most things. but it still gets involved in stuff in a non-trivial way, and i am almost certain that you can force yourself to run through things analytically using it if you choose to; but its not necessary in most cases.
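    that hand-off can be sketched as a toy loop (all names hypothetical, in Python): a slow general-purpose "conscious" solver handles a situation until enough repetitions have formed a cached routine, after which the conscious layer bows out.

```python
class Skill:
    def __init__(self, solve, reps_to_learn=3):
        self.solve = solve              # slow, general-purpose "conscious" routine
        self.reps_to_learn = reps_to_learn
        self.cache = {}                 # the "cerebellum": learned responses
        self.seen = {}                  # how often each situation has come up
    def perform(self, situation):
        if situation in self.cache:     # the habit fires; no conscious work needed
            return self.cache[situation], "subconscious"
        answer = self.solve(situation)  # conscious, effortful pass
        self.seen[situation] = self.seen.get(situation, 0) + 1
        if self.seen[situation] >= self.reps_to_learn:
            self.cache[situation] = answer  # hand the routine off
        return answer, "conscious"

riding = Skill(solve=lambda s: f"balance while {s}", reps_to_learn=2)
for _ in range(2):
    riding.perform("turning left")      # consciously worked through, twice
print(riding.perform("turning left"))   # ('balance while turning left', 'subconscious')
```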

    surrealitycheck on
  • Grey PaladinGrey Paladin Registered User regular
    edited December 2011
    I mostly agree. It acts as a 'processor' that then passes on its insight to underlying 'optimizers'. The conscious mind considers the problem and basically sets the mission along with an initial framework which the lower levels develop.

    Something to keep in mind, though, is that a lot of the time the subconscious responses people learn are just plain wrong. That is, because the conscious premise for what needs to be optimized was incorrect, the resulting algorithms do not achieve what actually needs to be done. You can see this when people 'unlearn' bad habits - it is most often a conscious act wherein they reconsider the premise.

    Thus, if we continue with analogies from the computer world, you could consider the conscious mind as a programmer programming in a high level language, where the subconsciousness optimizes what is needed and translates it to machine code; kind of like a compiler/assembler. The programmer examines the result, compares it to what it intended, and if need be rewrites the premise. This process repeats itself.
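    A toy sketch of that programmer/compiler loop (hypothetical names, not a model of any real system): a "conscious" layer states a premise, a "subconscious" layer compiles it into a habit, and the result is checked against the goal, with the premise rewritten and recompiled when it misses.

```python
def compile_habit(premise):
    """The subconscious 'compiler': turns a stated premise into a fast routine."""
    return lambda x: premise(x)

def conscious_loop(goal, premise, trials):
    habit = compile_habit(premise)
    for x in trials:
        result = habit(x)
        if result != goal(x):               # conscious check: did the habit miss?
            premise = goal                  # rewrite the premise...
            habit = compile_habit(premise)  # ...and recompile the habit
    return habit

# Example: the initial premise is wrong (it doubles instead of squares);
# the loop notices the mismatch and rewrites it.
goal = lambda x: x * x
habit = conscious_loop(goal, lambda x: x * 2, trials=[1, 2, 3])
print(habit(4))  # 16, once the premise has been corrected
```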

    Grey Paladin on
  • redxredx East Bumblefuck, PARegistered User regular
    Thus, if we continue with analogies from the computer world, you could consider the conscious mind as a programmer programming in a high level language, where the subconsciousness optimizes what is needed and translates it to machine code; kind of like a compiler/assembler. The programmer examines the result, compares it to what it intended, and if need be rewrites the premise. This process repeats itself.

    I kinda prefer to think about it by contrasting the kernel mode and user mode of an OS. You have lowish-level code being used to directly manipulate hardware, and onto this is strapped the user mode, which is where presentation happens and higher-level code is used. Think about the rather segmented physical formation of the brain, the specialization which happens in certain areas, the inability of high-level code to directly affect low-level functions, and this is kinda consistent. By having the functions separate, it makes it safe for designers to add in new features and functions without having to completely overhaul all the underlying code.

    The operator interacts only with the topmost user layer, which has to trust the lower-level systems to provide it with accurate information about what is happening. It's also going to be the last place to get updated about what is going on at the lower levels, so some delay between when a decision is determined and when it surfaces at the user level is likely. When something breaks at a low level and starts providing inaccurate information, the high-level stuff is not going to notice the problem unless it is specifically checking the lower levels' work or the lower level returns an error state.
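    Here's a minimal sketch of that trust-plus-latency relationship (class and method names are made up for the analogy, nothing more): the low level commits the decision first, and the user layer only ever sees a delayed report it has no way to verify.

```python
from collections import deque

class KernelLayer:
    """Low-level 'subconscious' mode: directly manipulates the hardware state."""
    def __init__(self):
        self.state = "idle"

    def decide(self, stimulus):
        # The decision actually happens down here first.
        self.state = stimulus
        return self.state

class UserLayer:
    """High-level 'conscious' mode: never touches hardware, only sees the
    delayed reports the lower level sends up, and has to trust them."""
    def __init__(self, kernel, delay=1):
        self.kernel = kernel
        self.reports = deque(["idle"] * delay)  # reporting pipeline adds latency

    def perceive(self, stimulus):
        self.kernel.decide(stimulus)            # low level commits the decision...
        self.reports.append(self.kernel.state)  # ...then queues a report...
        return self.reports.popleft()           # ...which surfaces one step late

kernel = KernelLayer()
mind = UserLayer(kernel)
print(mind.perceive("reach"))  # user layer still reports "idle"
print(kernel.state)            # but the low level has already decided: "reach"
```

    Note there's no code path by which `UserLayer` can read `kernel.state` directly at perception time - if the queue started filling with garbage, the user layer would report the garbage with full confidence, which is roughly the confabulation story.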

    All I've got is a snuggle hammer.
  • Grey Paladin Registered User regular
    edited December 2011
    You are right. That's probably a better analogy, since it captures the difference between the two while maintaining the higher level as the 'director'.
    Much of what is being processed at the lower levels has ultimately been requested by the user (not all of it, in this case, since it is not the user who actually turned the machine on, so to speak - essential functions are running all the time), so it is quite close to a fetch cycle.

  • ronya Arrrrrrf. the ivory tower's basement Registered User regular
    Regrettably, in this case the operator cannot turn the machine off and take a screwdriver and multimeter to it. Yet, anyway.

  • surrealitycheck NONSTOP INFINITE CLIMAX POSTING you must go on i cant go on ill go on Registered User regular
    and thats why most people are bad at multiplayer computer games - they dont think about what theyre doing :O

  • MrMister Valuing scholarship above all else Registered User regular
    Winky wrote:
    Like I said previously, however, it's not as though the conscious account doesn't influence decision making though. Rather, I think it's a crucial tool, as it updates our self-narrative in a manner that our behavior forming processes can work with. The notion is more that the conscious account of your mental processes, while incomplete, is the only real record you can store (because of some difficulty in translation of the processes that we actually use for cognition into information that we can store). Consciousness is how we update our mental profile of ourselves, and we work off of it in a sort of feedback loop.

    This is to say a lack of consciousness causes a very different sort of behavior, and it can be demonstrated that there are individuals who go about performing behavior without consciousness (there are literal examples of patients who suffer from temporary 'zombieism'). One example was of a man who left work, got on a train and then off again at a different station, all the while having a deficit of consciousness (like sleepwalking, kind of). He was making somewhat complicated decisions and actions, certainly more than what is merely reflexive, but he was doing so without any sort of thought or direction because he was lacking any sort of self-narrative to tell him what he should be doing.

    I think it's a little dicey the way you use 'you' and related person-designating-terms in these descriptions, given that what you're claiming might undermine those terms in a significant way.

    I'm also, actually, pretty interested in the details of the zombie stuff. There are a lot of questions I would have about such a case, and pretty big potential philosophical payoffs (philosophers have debated at length the metaphysical possibility of similar-ish 'zombies' in a way that seems like it might be informed by such results).

  • redx East Bumblefuck, PA Registered User regular
    edited December 2011
    Winky wrote:
    This is to say a lack of consciousness causes a very different sort of behavior, and it can be demonstrated that there are individuals who go about performing behavior without consciousness (there are literal examples of patients who suffer from temporary 'zombieism'). One example was of a man who left work, got on a train and then off again at a different station, all the while having a deficit of consciousness (like sleepwalking, kind of). He was making somewhat complicated decisions and actions, certainly more than what is merely reflexive, but he was doing so without any sort of thought or direction because he was lacking any sort of self-narrative to tell him what he should be doing.

    I think it might be nice to know a little more about this term 'deficit of consciousness'. I've been prescribed xanax, and when I drink, even a little, while on it I pretty much always black out. I act like a slightly more outgoing version of myself, but I'm not aware of any of it after.

    That, I suppose, is consciousness with an absence of memory - but if we are now defining consciousness as the Narrator processing internal and external stimuli into a story, retaining the story in long/short-term memory, and updating whatever reads the narrative, then maybe it wasn't really consciousness at all.

    'Deficit of consciousness', at least from my googling, does not seem to be a well defined term of art. Could someone in such a state respond to a simple command? Could they respond to a question that requires accessing short term memory? Long term memory? Something that applies even simple recursive logic like counting to 70 by 7s?

    When someone has a deficit of consciousness, what is breaking? It's about as helpful as someone calling up saying they have a "deficit of internet", when it could be any one piece of a pretty long chain that is failing.

  • Winky Registered User regular
    Let me see if I can find my Antonio Damasio book and transcribe the bit I am talking about to you guys. Hopefully I didn't significantly misremember it.

  • surrealitycheck NONSTOP INFINITE CLIMAX POSTING you must go on i cant go on ill go on Registered User regular
    a philosophical zombie is a bit more complicated than that, because the only difference between one and an ordinary person is that the zombie is "unconscious". There is no external test that can be done to determine who is or isn't a zombie, so any other symptom precludes classifying that person as a philosophical zombie. It's also why the idea is so silly.

  • Winky Registered User regular
    Aha, here we go, this is from Self Comes to Mind:
    Perhaps the most convincing evidence for a dissociation between wakefulness and mind, on the one hand, and self, on the other, comes from another neurological condition, epileptic automatism, which can follow episodes of certain epileptic seizures. In such situations, a patient's behavior is suddenly interrupted for a brief period of time, during which the action freezes altogether; it is then followed by a period, generally brief as well, during which the patient returns to active behavior but gives no evidence of a normal conscious state. The silent patient may move about, but his actions, such as waving goodbye or leaving a room, reveal no overall purpose. The actions may exhibit a "minipurpose," like picking up a glass of water and drinking from it, but no sign that the purpose is part of a larger context. The patient makes no attempt to communicate with the observer and no reply to the observer's attempts.

    If you visit a physician's office, your behavior is part of a larger context that has to do with the specific goals of the visit, your overall plan for the day, and the wider plans and intentions of your life, at varied time scales, relative to which your visit may be of some significance or not. Everything you do in the "scene" at that office is informed by these multiple contents, even if you do not need to hold them all in mind in order to behave coherently. The same happens with the physician, relative to his role in the scene. In a state of diminished consciousness, however, all that background influence is reduced to little or nothing. The behavior is controlled by immediate cues, devoid of any insertion in the wider context. For example, picking up a glass and drinking from it makes sense if you are thirsty, and that action does not need to connect with the broader context.

    I remember the very first patient I observed with this condition because the behavior was so new to me, so unexpected, and so disquieting. In the middle of our conversation, the patient stopped talking and in fact suspended moving altogether. His face lost expression, and his open eyes looked past me, at the wall behind. He remained motionless for several seconds. He did not fall from his chair, or fall asleep, or convulse, or twitch. When I spoke his name, there was no reply. When he began to move again, ever so little, he smacked his lips. His eyes shifted about and seemed to focus momentarily on a coffee cup on the table between us. It was empty, but still he picked it up and attempted to drink from it. I spoke to him again and again but he did not reply. I asked him what was going on, and he did not reply. Finally he rose to his feet, turned around and walked slowly to the door. I called him again. He stopped and looked at me, and a perplexed expression came to his face. I called him again, and he said, "What?"

    I can't find a quote for the story about the guy who boarded the train, I realize, because it was an anecdote my professor told in class. I'll see if I can look it up anyway. The phenomenon is referred to as automatism. I imagine it is probably very much like sleepwalking.

  • Yogo Registered User regular
    edited December 2011
    It's a fascinating subject, and my recent studies into the origins and workings of fear have given me a taste of split-brain studies. I don't have much to say about the subject itself, but I want to share a story from the book that introduced me to the subject. The story is a bit long, so I have spoilered it for length. Also, it may reference information that has already been covered in the thread.
    Spoiler:
