The Life & Mind Seminar Network

The Non-representational Bases of Cognition

Posted in General by mjsbeaton on October 5, 2007

The other day, at Tom’s suggestion, we read the recent “Why Heideggerian AI Failed…” by Dreyfus in the CogPhi reading group at Sussex.

Dreyfus argues that various ‘Heideggerian’ views have fallen short. He then attempts to lay out a new view, in which ‘embodied coping-in-the-world’ is seen as a required, non-representational basis for subsequent (possibly representational, in some creatures) cognition.

Some in the reading group said they were deeply confused about how Dreyfus could possibly think this would work. I thought the objections to Dreyfus, and my way of responding to them, might be of interest here, especially since I bring in a notion from the philosophy of perception (disjunctivism) which you guys don’t usually talk about (at least not explicitly), but which seems directly relevant.

The example used to make the objection to Dreyfus was the case of a rat deciding whether or not it is hungry. Experimentally, the rat doesn’t respond to its need being satisfied, but only to its stomach being distended (no doubt, we’d all agree, a simplification of the detailed facts, but it makes the point). Arguing (in a natural way) from this example, it was suggested that whatever the rat responds to, it is not the need being satisfied but a representation which normally represents the need being satisfied.

To me, now (after long exposure to Inman, McDowell, and other people!), this seems to have several problems. Firstly, the rat can’t respond to the representation either. For whatever system ‘responds to the stomach being distended’ will itself be a physical system, which doesn’t actually respond to the stomach being distended if you intervene in its normal operation (e.g. it might respond to ‘that neuron’, or ‘those neurons’, firing). This type of objection to the objection would carry on ad infinitum. But then, by parity of reasoning, if there is a usable sense in which a system responds to its stomach being distended, there is a usable sense in which a rat responds to its needs being satisfied, after all.

Another point. Take a Watt governor. It governs speed. It doesn’t matter that it stops governing speed if you fiddle around inside it and disrupt its normal operation. What matters is that, when it’s working, it governs speed. It seems to me that there can be a sense in which simple mechanisms enable (constitute?) embodied coping in the world which is similarly non-representational: it’s what they do, when they are working. As always in these cases, if they are physical mechanisms, there will necessarily be situations in which they do not work. But that mere fact doesn’t seem to be enough to make such systems representational.
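
To make the governor point a little more concrete, here is a minimal simulation sketch of the governor as a coupled dynamical system. It is purely illustrative: the function name, equations, and parameter values are all mine, invented for this illustration rather than taken from Dreyfus or from any real governor. The arm angle tracks engine speed and mechanically closes the throttle, so speed settles near an equilibrium even when the load changes partway through.

    # Minimal discrete-time sketch of a Watt governor as a coupled dynamical
    # system. All equations and parameters are invented for illustration.

    def simulate_governor(steps=4000, dt=0.01):
        speed = 0.0   # engine speed (arbitrary units)
        arm = 0.0     # flyweight arm angle, normalised to [0, 1]
        load = 1.0    # external load on the engine
        for t in range(steps):
            # The spinning flyweights raise the arm roughly in proportion to
            # speed, and the arm falls back as speed drops.
            arm += dt * (0.5 * speed - arm)
            arm = min(max(arm, 0.0), 1.0)
            # The rising arm mechanically closes the throttle.
            throttle = 1.0 - arm
            # Speed responds to throttle torque minus the load.
            speed += dt * (4.0 * throttle - load * speed)
            if t == steps // 2:
                load = 2.0  # halfway through, the load doubles
        return speed

    # Equilibrium speed is 4 / (2 + load): about 1.33 before the load change,
    # about 1.0 afterwards. The mechanism keeps governing despite the change.
    print(simulate_governor())

On this way of describing it, the whole loop, when it is working, just is the governing of speed; the arm angle tracks speed and closes the throttle, without anything in the mechanism needing to be read as a stored stand-in that gets consulted.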

Now, as I said in our reading group, I feel a strong parallel with (a naturalistic type of) disjunctivism here. In the philosophy of perception, this is the idea that, to understand cases which seem like normal perception from the first-person point of view, we have to think of them as either situations which put you in contact with the world (perception) or situations which merely seem to, but don’t (illusion and hallucination).

Applying very similar reasoning to the examples here: what the system does when it is working is what makes it the system it is. This is not to deny that it is a physical mechanism, and hence that it can fail to work. But it is to deny that the central thing to be understood is whatever is common between the case when the system is working and the case when it is not. If you think that, you will inevitably end up with a representationalist view (of that common thing). The disjunctivist denies that this common factor is what is fundamental, and claims instead that the central thing to be understood is what the system must do, when it is working, for it to be that type of system. What the system does when it is not working can then be understood derivatively, in terms of that.

Taken together, I wonder whether these two arguments help (or at least help unreconstructed representationalists) to see why, and how, we should think of the bases of cognition as non-representational.

Mike

6 Responses

  1. tomfroese said, on October 10, 2007 at 11:03 am

    Mike,

    I have to admit that I’m rather confused by your post, but I’ll try to comment on a few things.

    When you say that “experimentally, the rat doesn’t respond to its need being satisfied, but only to its stomach…”, I think we need to be careful about the kind of perspective that we are using. An ethologist would definitely say that the rat is responding to its need (hunger), whereas a physiologist might point to certain internal causal chains between organs, and a biochemist would talk about changes in the chemical milieu. None of these explanations is superior to any other or can be reduced to another.

    Saying that people agreed that the rat responded to the stomach being distended just makes a point about the kind of perspective which has been consensually adopted during the meeting. However, note that saying that the rat “only” responds to this distending threatens to promote one domain of explanation at the expense of other equally valid ones. Note also that saying that it is the “rat” that responds in this manner is misleading; it would be better to say the rat-as-a-physiological-system reacts.

    In this context the apparent need to introduce the notion of representation seems to me to derive from a desire to relate (or, more likely, reduce) two non-intersecting epistemological domains, namely the personal and the sub-personal (or rat and sub-rat). Accordingly, the physiological details are said to “represent” the need being satisfied. This is OK, as long as we are clear that this supposed representation is a constructed epistemological entity which might perhaps help us observers capture correlations between two distinct domains of enquiry. Nevertheless, it is not an entity which exists for the rat as such, nor for the rat-as-a-physiological-system, but only for the external observer who decides to relate these two perspectives in a particular manner.

  2. Mike said, on October 10, 2007 at 12:29 pm

    I think what I’m trying to say is very similar to what you say in your post.

    Two major points:

    First: you are treating what you say as a starting point – a given. Since my conclusion is that you’re right, I can’t fault you on that. But, as you know, there’s a big world out there that doesn’t even understand what you’re saying – as I wouldn’t have until recently. I’m starting, I think, from the same starting point as that big (misguided!) world, and then trying to argue for what you take as obvious.

    Second: that said, I still wish to argue for some of the intuitions that come with my starting point. You can’t call the ‘rat’ and ‘sub-rat’ non-intersecting epistemological domains. That’s clearly wrong, isn’t it? They do relate, very intimately – information about one clearly does constrain information about the other (doesn’t it??) – and we want to understand how.

    Mike

  3. tomfroese said, on October 10, 2007 at 2:53 pm

    Ok, it is certainly the case that we can observe correlations and constraints between the domain of the ‘rat’ and that of the ‘sub-rat’. However, understanding these relationships is non-trivial. I think that the only way to understand the relation between these two domains is by adopting the perspective of a meta-domain which includes both as complementary aspects.

    In a way what you are asking is how to relate a system viewed as a unity with that system viewed as a system of components. According to one perspective the system as a unity (i.e. the rat as a whole) is distinguished as one among other unities (e.g. entities in its environment), and according to the other each component is a unity distinguished among other unities (e.g. the stomach, nervous system, etc.).

    What I mean by non-intersecting epistemological domains is that we should be careful when asking questions which trivially relate properties of a unity distinguished in one domain (e.g. hunger of the rat) with properties of a unity distinguished in a different domain (e.g. changes in the stomach). The system as a unity does not generally relate to one of its components in this manner (though I’m sure there might be some cruel experiments where the rat could encounter its stomach in such a way).

    The behavior of the system as a whole emerges out of the historical interplay of its components and its structural coupling with its environment. What you are asking is how that “global” behavior relates to the “local” behavior of one particular component of the brain-body-environment system of components. They do evidently modulate each other, but exactly how is something which needs to be determined on a case-by-case basis, and which can only be done properly by being aware of the distinct domains involved.

    The homunculus in the brain is one famous example of what happens when properties of the system as a whole are not kept in a domain that is distinct from the domain which includes properties of the components of the system.

    I don’t think I’m expressing myself very well, but I hope that this made some sense. Otherwise send me an e-mail and I’ll send you a paper by Maturana that does a much better job than I could ever do.

  4. Simon Bowes said, on October 10, 2007 at 9:12 pm

    “by parity of reasoning, if there is a usable sense in which a system responds to its stomach being distended, there is a usable sense in which a rat responds to its needs being satisfied, after all”

    But the latter thing is a normative notion, and doesn’t this break the parity? The system responds to its stomach being distended, and if its stomach isn’t distended, it can’t respond to that. The system responds, under normal circumstances, to its needs being satisfied, but can also respond to an ‘appearance’ of its needs being satisfied (e.g. its stomach being distended). There can’t be an ‘appearance’ of its stomach being distended; this is just a third-person description of what is normally going on when rats feel satiated.

    On the whole I’ve been confused by this debate because I’m confused about what representations are supposed to be exactly: representations for whom? Aren’t the kind of representations that lead to homuncular worries ones where there is a kind of copy of the world in the head, and that’s all the subject has access to? So that would be something like there being a copy of a distended stomach in the rat’s brain. The non-representational response, that the world is the best model of itself, says: no, why copy that? There’s just the distended stomach and its connections to the brain. So, why can’t we say that the distended stomach and its links to the brain represent the condition of being satiated to the rat?

  5. Colin Reveley said, on October 12, 2007 at 1:53 am

    “why can’t we say that the distended stomach and its links to the brain represent the condition of being satiated to the rat?”

    Is the answer to that not simply: because the stomach and brain are part of the rat? Put another way: does being a rat represent being a rat to the rat?

  6. Colin Reveley said, on October 12, 2007 at 2:29 am

    Perhaps that remark seems terribly facile and awful or something, but it really does seem to me to be the gist of the posts above. I’ll refrain from further defense in the interests of brevity.

