Some problems with information theory
I’ve been thinking about the pros and cons of information theory as applied to cognitive science for a while now. The discussions during yesterday’s COGS talk made it clear to me yet again that I’m just failing to understand the significance of adopting this kind of approach. This is because I think that there are some severe practical, theoretical, and philosophical problems associated with an information theoretic approach to cognitive science.
First of all, a practical consideration: in order to calculate the amount of “information” transferred between the “environment” and an “agent” it is necessary to have a model of both systems and of their manner of coupling. For this we need to know all the relevant state variables, all the states that these variables can take, the conditional dependencies between them, and the probability associated with each of the states. This poses severe practical problems for investigating any system that goes beyond a simple GOFAI toy world. How many different states can our environment have? Could we ever know them all? As Inman would say, we do not even have a model of the humble nematode worm!
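To make the demand concrete: even the simplest information-theoretic quantity, the mutual information I(E; A) between an environment variable and an agent variable, already presupposes the complete joint distribution over every state pair. Here is a minimal sketch in Python; the variable names and the toy 2x2 distribution are hypothetical, chosen only for illustration.

```python
import math

# Joint probabilities p(e, a) for EVERY combination of states.
# All of them must be known in advance -- this is exactly the
# practical problem: for any real environment we could never
# enumerate the states, let alone estimate these probabilities.
joint = {
    ("dark", "rest"): 0.4,
    ("dark", "move"): 0.1,
    ("light", "rest"): 0.1,
    ("light", "move"): 0.4,
}

# Marginals p(e) and p(a), derived from the joint distribution.
p_e, p_a = {}, {}
for (e, a), p in joint.items():
    p_e[e] = p_e.get(e, 0.0) + p
    p_a[a] = p_a.get(a, 0.0) + p

# I(E; A) = sum over (e, a) of p(e,a) * log2( p(e,a) / (p(e) * p(a)) )
mi = sum(
    p * math.log2(p / (p_e[e] * p_a[a]))
    for (e, a), p in joint.items()
    if p > 0
)
print(round(mi, 4))  # mutual information in bits
```

Note that the calculation itself is trivial; all of the difficulty is hidden in the `joint` table, which for any non-toy agent-environment system we simply do not have.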
Second, a theoretical consideration: even if we assume that we had all the relevant models so that we could start applying the methods of information theory, there is still the question of what more knowledge could be gained from doing so. Doesn’t this imply the additional assumption that we must already have sufficient understanding of the system’s structure and operation to obtain a model in the first place? In other words, we need to know how the system works before we can even begin to apply these methods. Accordingly, it would seem that all information theory could provide is a theoretical redescription of a system which we already understand anyway.
Third, a philosophical consideration: can an information theoretic redescription of such a system provide us with any new knowledge about the way in which an autonomous agent operates? It seems that the answer is no, because information theory requires us to have access to relational knowledge that we only have in virtue of being external observers. Even if we could determine what amount of “information” is transferred into a particular variable of the agent from its environment, for example, this would tell us nothing about the internal operations of that agent.
Finally, even if someone wanted to claim that an agent acts on information theoretic principles, this would require the agent to have an internal model of itself, its environment, and the relationship between the two. Of course, this might sound fine to GOFAI practitioners, but it sounds absurd to those who favor a more embodied-embedded approach. And would such an undertaking even be possible in principle? Does it not end up in an infinite regress of internal models (since the existence of the internal model in the agent needs to be included in the internal model of the agent, and so on)?
These considerations seem to severely limit the viability of an information theoretic approach to cognitive science. Perhaps there is a niche for it in engineering GOFAI robots? However, judging from the excitement that information theory has generated in some areas, I’m probably just missing the point. Could someone please enlighten me?