The Non-representational Bases of Cognition
The other day, at Tom’s suggestion, we read the recent “Why Heideggerian AI Failed…” by Dreyfus in the CogPhi reading group at Sussex.
Dreyfus argues that various ‘Heideggerian’ views have fallen short. He then attempts to lay out a new view, in which ‘embodied coping-in-the-world’ is seen as a required, non-representational basis for subsequent (possibly representational, in some creatures) cognition.
Some in the reading group expressed themselves deeply confused about how Dreyfus could possibly think this would work. I thought the objections to Dreyfus, and my way of responding to them, might be interesting here, especially since I bring in a notion from the philosophy of perception (disjunctivism) which you guys don’t usually talk about (at least not explicitly) but which seems directly relevant.
The example used to press the objection to Dreyfus was the case of a rat deciding whether or not it is hungry. Experimentally, the rat doesn’t respond to its need being satisfied, but only to its stomach being distended (no doubt, we’d all agree, a simplification of the detailed facts, but it makes the point). Arguing (naturally enough) from this example, it was suggested that whatever the rat responds to, it is not the need being satisfied, but a representation which normally represents the need being satisfied.
To me, now (after long exposure to Inman, McDowell, and other people!), this seems to have several problems. Firstly, the rat can’t respond to the representation either. Whatever system ‘responds to the stomach being distended’ will itself be a physical system, which doesn’t actually respond to the stomach being distended if you intervene in its normal operation (e.g. it might respond to ‘that neuron’, or ‘those neurons’, firing). This type of objection to the objection could be pressed ad infinitum. But then, by parity of reasoning, if there is a usable sense in which a system responds to its stomach being distended, there is a usable sense in which a rat responds to its needs being satisfied, after all.
Another point. Take a Watt governor. It governs speed. It doesn’t matter that, if you fiddle around inside it and interfere with its normal operation, it stops governing speed. What matters is that, when it’s working, it governs speed. It looks to me as though there can be a sense in which simple mechanisms enable (constitute?) embodied coping, in the world, which is similarly non-representational. It’s what they do when they are working. As always in these cases, if they are physical mechanisms, there will necessarily be situations in which they do not work. But that mere fact doesn’t seem to be enough to make such systems representational.
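To make the governor point concrete, here is a minimal toy simulation, entirely my own construction (the dynamics and constants are invented for illustration, not drawn from any real engine). The point it illustrates: there is nowhere in the loop where the speed is stored, described, or stood for; the valve opening simply covaries with speed through the flyball linkage, and the speed settles wherever the feedback balances out.

```python
# Toy Watt-governor loop (hypothetical constants, purely illustrative).
# No component 'represents' the engine speed: the flyballs rise with
# speed, which closes the steam valve, which changes the speed.

def governed_step(speed, load, dt=0.1):
    """One time-step of engine speed under flyball-governor feedback."""
    set_point = 100.0  # speed the linkage happens to be tuned around
    # Flyballs rise with speed, closing the valve (opening clamped to [0, 1]).
    valve = max(0.0, min(1.0, 0.5 - 0.01 * (speed - set_point)))
    torque = 2.0 * valve - load  # steam driving torque minus load braking
    return speed + torque * dt

speed = 60.0
for _ in range(2000):
    speed = governed_step(speed, load=0.8)
# speed has settled near a stable equilibrium (about 110 with these
# made-up constants) without anything in the loop standing for it.
```

Notice also that when the mechanism breaks (cut the linkage, jam the valve), it doesn’t misrepresent the speed; it simply stops governing, which fits the disjunctivist reading below rather than the representationalist one.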
Now, as I said in our reading group, I feel a strong parallel with (a naturalistic type of) disjunctivism here. In the philosophy of perception, this is the idea that, to understand what’s going on in cases which seem like normal perception from the first-person point of view, we have to think of these cases as either situations which put you in contact with the world (perception) or situations which merely seem to, but don’t (illusion and hallucination).
Applying very similar reasoning to the examples here: what the system does when it is working is what makes it the system it is. This is not to deny that it is a physical mechanism, and hence that it can fail to work. But it is to deny that the central thing to be understood is what is common between the case in which the system is working and the case in which it is not. If you think that, you will inevitably end up with a representationalist view (of that common thing). The disjunctivist denies that this common element (between working and not working) is what is fundamental, and claims that the central thing to be understood is what the system must do, when it is working, to be that type of system. What the system does when it is not working can then be understood derivatively from that.
Taken together, I wonder whether these two arguments help (or at least help unreconstructed representationalists) to see why, and how, we should think of the bases of cognition as non-representational.