The Life & Mind Seminar Network

Information Theory, yes. But where is Autonomy Theory?

Posted in General by Tom Froese on September 2, 2010

This is a quick post to follow up on the discussion generated yesterday after Paul Williams’s seminar on Information Dynamics of Embodied Agents. I raised the worry that information-theoretic measures rely on the observer’s access to the totality of environmental states, and that this limits their relevance to understanding the embodied agent. In the subsequent discussion I failed to articulate my worry more precisely, so I will try to do that here.

To begin with, I am worried that the measure too naively adopts an absolute God’s-eye viewpoint. We surely understand something of the brain-body-world system as a whole when taking such a perspective, but have we understood something about how the agent, as an agent, does what it does? The label “brain-body-world system” fools us into thinking that we have, at least partially, but the complete arbitrariness of the divisions shows us that things are not this simple.

I’m not saying that a dynamical analysis can fully sidestep this critique; it cannot. But at least there we have the option of looking at the “autonomous” dynamics of the system that we have denoted as the “agent”, and can try to understand what kind of system it is in itself. It’s another privileged viewpoint, but one which tries to respect the boundaries of the system, while at the same time acknowledging that the system is parametrically coupled to an environment. This is why I was trying to suggest that information-theoretic measures might tell us more about how the agent as an agent works if we applied these tools within the sensory-motor loop. After all, that’s all the system has to work with in order to get the job done.
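To make this concrete, here is a rough sketch of the kind of measure I have in mind, applied to a toy sensory-motor loop. The toy agent, the one-step lag, and the binning scheme are all illustrative assumptions of mine, not Paul’s actual analysis:

```python
# Sketch: estimate the mutual information between an agent's own sensor
# and motor time series, i.e. apply the measure *within* the sensory-motor
# loop rather than across an observer-defined brain/body/world division.
import numpy as np

rng = np.random.default_rng(0)

# Toy loop: the motor signal is a noisy, one-step-delayed function of
# the sensor signal (a stand-in for data recorded from a real agent).
T = 10_000
sensor = rng.normal(size=T)
motor = np.roll(np.tanh(sensor) + 0.1 * rng.normal(size=T), 1)

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in bits from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Relate the agent's sensor and motor signals at the appropriate lag.
print(f"I(sensor; motor, lag 1) = {mutual_information(sensor[:-1], motor[1:]):.3f} bits")
```

The point is not the particular estimator, but that the variables being related are the agent’s own sensor and motor signals, rather than an observer-side description of the environment.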

This may sound like a retreat to some kind of traditional cognitivist or internalist approach to cognition, as some of you have pointed out, but this is not the case (I hope!). What I am interested in is finding a middle ground. Traditional internalism (assumed by GOFAI) and radical externalism (assumed by the way in which Paul measured global information flows) are both too extreme: one differentiates the agent too much, thereby forgetting that the agent is situated, and the other differentiates the agent too little, thereby forgetting that the situation has an agent.

It therefore appears that there is currently a tension between an embodied-situated approach to cognition, and one which tries to emphasize the autonomy of the agent. As far as I understand Maturana and Varela’s work, they introduced the notion of organizational closure precisely in order to provide a scientific principle which could mediate between the two extremes. I attach here an excerpt of a short opinion piece by Varela (1977), which makes this point well.

So I guess my frustration with Information Theory, as it is mostly used in our field, is that it falls into an externalist extreme that leaves no space for the agent. It’s good mathematical training, but it tells us little (if anything at all) about what is specific to living and cognitive beings as such. Of course, you could say that Information Theory does not even try to say anything that specific, but then the question is: why use it as a tool for doing cognitive science? This comment may be too harsh, but the problem remains: here we have another framework to keep us busy, while we are not getting any closer to studying the big questions about what autonomy, agency, cognition, etc. actually are. Why are there no signs of an Autonomy Theory to complement Information Theory? Or am I just too blind to see it?

In the end, as Nathaniel has to keep on reminding me, I may simply be looking for autonomy in the wrong place. As we argued in our ECAL’09 paper (Virgo, Egbert & Froese, in press), the organizational boundaries of the autonomous system do not need to coincide with its apparent physical boundaries. To confuse the two kinds of boundaries is to commit a category mistake. Thus, if we want to find autonomous systems in our simulations, then we may have to look for them in the relational dynamics of the brain-body-world systemic whole, and NOT inside that sub-system which we typically refer to as the ‘agent’.

I tried to do something along those lines in a paper I submitted to Alife’08 (cf. Froese & Di Paolo 2008), but it got rejected so I’m not sure how successful the attempt was. Perhaps Nathaniel’s idea of analyzing a reaction-diffusion system in terms of an individual ‘spot’ may be a more productive starting point.
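For those unfamiliar with the reaction-diffusion idea, here is a minimal sketch of the kind of system Nathaniel has in mind. I am using the Gray-Scott model with commonly used spot-forming parameter values as an illustration; this is my guess at a starting point, not his actual setup:

```python
# Minimal Gray-Scott reaction-diffusion sketch: self-maintaining 'spots'
# emerge whose boundaries are produced by the dynamics themselves rather
# than drawn in by the observer. Parameter values are standard spot-forming
# choices from the literature, not a specific published experiment.
import numpy as np

def laplacian(Z):
    """5-point Laplacian on a periodic grid."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
            + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One Euler step of the Gray-Scott equations."""
    uvv = U * V * V
    U += dt * (Du * laplacian(U) - uvv + F * (1 - U))
    V += dt * (Dv * laplacian(V) + uvv - (F + k) * V)
    return U, V

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
U[60:68, 60:68], V[60:68, 60:68] = 0.5, 0.25  # seed a local perturbation
for _ in range(10_000):
    U, V = gray_scott_step(U, V)
# High-V regions are now candidate 'individuals': self-sustaining spots.
```

The question would then be whether a dynamical or information-theoretic analysis can pick out such a spot as a unity, without us having to draw its boundary by hand.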

In any case, I think that as long as we cannot begin to address the big questions in a more systematic way, we won’t be able to convince those working within the traditional cognitivist mainstream to change their approach. They may be studying cognition the wrong way but, at the moment at least, it seems that we are not even studying cognition at all.

5 Responses

  1. Lucas said, on September 2, 2010 at 3:46 pm

    I think you have been deceived by the term “autonomous” in “autonomous dynamics”. The autonomous dynamics of a system are not special; they are just a particular decomposition of the system, as arbitrary as any other that decomposes it into the same parts. It’s like trying to work out what ants are “in themselves” by launching the nest unprotected into the void of deep space.

    Splitting the world up into arbitrary pieces is what you have to do if you want to:
    * Communicate with other people
    * Perform scientific research
    * Apply a formal measure (such as probability or information); the first two entail this one.

    You can try to cleave nature at its joints, but you still have to butcher it. Unless you think there is an objectively correct way (I guess some people do), or you describe it in full.

    (You can do the latter if you have a finite-sized model of the world, or whatever.)

  2. Nathaniel Virgo said, on September 2, 2010 at 8:07 pm

    You already wrote a bit about what I think, so I don’t need to repeat that part of the argument here. But there’s an additional methodological point I’d like to make: personally, I don’t think we’re anywhere near having a well-defined notion of behavioural autonomy, but I’d say our best chance of reaching one probably lies in further development of information theory. See, for instance, Anil Seth’s notion of Granger-autonomy (G-autonomy). This doesn’t really address the notion of autonomy in the Varelian sense, but it’s clearly a start, and it’s closely related to information theory. (Granger causality is a special case of an information-theoretic notion called transfer entropy (Barnett, Barrett & Seth, 2009); presumably G-autonomy could be generalised to a similar information-theoretic measure.)
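    To give a flavour of the idea, here is a minimal sketch of G-autonomy as I understand it: a variable is G-autonomous to the extent that its own past improves the prediction of its future over and above the past of the external variables. This is a linear, order-1 toy version with made-up data; the details are my own illustrative choices, not Anil’s implementation:

    ```python
    # Minimal sketch of G-autonomy (illustrative assumptions throughout):
    # x is G-autonomous w.r.t. an external variable e to the extent that
    # x's own past improves one-step prediction of x beyond e's past alone.
    import numpy as np

    def g_autonomy(x, e):
        """Log ratio of residual variances: restricted vs. full model."""
        ones = np.ones(len(x) - 1)
        X_full = np.column_stack([x[:-1], e[:-1], ones])   # with x's own past
        X_restr = np.column_stack([e[:-1], ones])          # without it
        target = x[1:]
        res_full = target - X_full @ np.linalg.lstsq(X_full, target, rcond=None)[0]
        res_restr = target - X_restr @ np.linalg.lstsq(X_restr, target, rcond=None)[0]
        return float(np.log(res_restr.var() / res_full.var()))

    rng = np.random.default_rng(1)
    e = rng.normal(size=5000)
    x = np.zeros(5000)
    for t in range(1, 5000):  # x is partly driven by its own past
        x[t] = 0.8 * x[t - 1] + 0.2 * e[t - 1] + 0.1 * rng.normal()

    print(f"G-autonomy of x given e: {g_autonomy(x, e):.3f}")
    ```

    A larger value means more of the variable’s future is accounted for by its own history, which is at least a first, operational stab at behavioural autonomy.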

    On this view, your objection that we should be trying to develop an “autonomy theory” instead of worrying about information theory is misplaced: we need a better understanding of how information theory works in these kinds of systems before a really good theory of autonomy can be developed.

    Regarding the Varela extract you posted, the main thrust of the argument (after a tricky bit of deciphering) seems to go like this:

    1. Sometimes actuator neurons have an effect on sensor neurons, either directly within the nervous system, or via the body, or indirectly through the environment.
    2. Therefore, although it’s possible to arbitrarily consider the sensor neurons to be “inputs” and the actuator neurons to be “outputs”, if you really want to understand what’s going on, you’d better include the above feedbacks in the system you study, as if they were part of the nervous system itself. The system thus obtained (nervous system plus external feedbacks) is somewhat unfortunately referred to as “closed” because, according to the way Varela uses the terms, it doesn’t have inputs and outputs. Even more unfortunately, it’s referred to as “the nervous system”.
    3. Therefore “there is no ‘information’ being ‘processed’ BY THE NERVOUS SYSTEM” (emphasis added), because the concept of information, as Varela understands it, is dependent on inputs and outputs, which are an observer-dependent notion. Therefore, to say that the information (which is observer-dependent) is processed by the nervous-system-plus-feedbacks (which is not) is a category mistake.
    4. This “amounts to a very drastic critique of information and communication theory as currently [i.e. in 1977] understood.”

    This is all more-or-less fair enough as far as I can see, but it doesn’t add up to a critique of the way information theory is being applied now, in 2010, by Paul Williams. That analysis can not only cope with the fact that feedbacks external to the CTRNN play an important role in its dynamics, but it can actually quantify the extent to which this is the case for a given agent.

    Moreover, if one were to perform the analysis you suggest, in which one uses information theory to reason about only the internal dynamics of the agent’s control system, one would be doing the exact thing that Varela is warning against. This would entail analysing the nervous system / CTRNN as if it had inputs and outputs at some particular arbitrarily defined point.

    Thus, despite Varela’s criticism of information theory and communication theory as they were applied in cognitive science at the time he was writing, the article you posted can be seen as an argument in favour of an approach like Paul’s.

  3. matthewegbert said, on September 3, 2010 at 12:57 pm

    I agree with Tom that the division in Paul’s model between brain, body and world is arbitrary. Also, as Tom noted, this is no different when the analysis is dynamical rather than informational. As Lucas emphasised, the autonomous dynamics are only dynamics that we have somewhat arbitrarily labelled “autonomous” or “part of the brain”. So, the problem (if there is one) is there whether you are speaking of dynamical or informational analyses of these minimally cognitive systems.

    But there is not necessarily a problem. Beer is quite explicitly aware that he abstracts away the autopoietic or viability-boundary-producing aspects of a system when performing dynamical analyses of his minimally cognitive systems (Beer 2004, pp. 319-320). There are many things that can be learned about cognition without explicitly including autonomy. However, I would agree that autonomy remains an under-explored area of cognitive science.

    There are two ways in which autonomy can be included. One, which I have been exploring with Xabier and Ezequiel, is the relationship between biological autonomy and behaviour; I see Nathaniel’s work with RD spots as being in the same vein. Another approach is to consider a cognitive autonomy where, instead of the physico-chemical self-construction of physical cells, the cognitive mechanisms themselves are autonomous, self-maintaining dynamics. Consider “habits”: dynamics in the brain-body-world system that are self-maintaining.

    This second form of autonomy is less well understood (part of the reason I chose to study biological autonomy as a stepping stone towards cognitive autonomy). I can imagine cognitive autonomy being studied in CTRNN-esque systems of differential equations, but I think they would have to take on quite a different form, one that supports concurrent autonomous dynamics, unlike the largely monolithic networks (small, fully connected) that we are more familiar with. In the meantime, we are having a hard time understanding even these simpler networks, so tools developed to analyse them, such as the one presented by Paul, are welcome steps towards eventually understanding more complex models of cognition that include autonomy.
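    For concreteness, the kind of small, fully connected network I have in mind is the standard CTRNN; here is a minimal Euler-integration sketch (the parameter values are arbitrary illustrative choices, not any particular published network):

    ```python
    # Minimal CTRNN sketch: tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + b_j) + I_i
    import numpy as np

    def ctrnn_step(y, W, tau, bias, I, dt=0.01):
        """One Euler step of the standard CTRNN equations."""
        sigma = 1.0 / (1.0 + np.exp(-(y + bias)))  # logistic activation
        return y + dt * (-y + W @ sigma + I) / tau

    rng = np.random.default_rng(2)
    n = 3                                   # a small, fully connected net
    W = rng.normal(scale=4.0, size=(n, n))  # all-to-all weights
    tau = np.full(n, 1.0)
    bias = rng.normal(size=n)

    y = np.zeros(n)
    for _ in range(5000):                   # integrate with no external input
        y = ctrnn_step(y, W, tau, bias, I=np.zeros(n))
    print("final state:", y)
    ```

    A model supporting concurrent autonomous dynamics would presumably need a much larger and more modular structure than this.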

    I would love to see more work focusing on autonomy (biological or cognitive), but I don’t think that the absence of autonomy from, e.g., Paul’s model makes its conclusions irrelevant to the study of cognition.

    Beer, R.D., 2004. Autopoiesis and Cognition in the Game of Life. Artificial Life, 10(3), 309-326.

  4. Paul Williams said, on September 3, 2010 at 1:34 pm

    Very interesting discussion. For the most part, I agree with the responses of Lucas, Nathaniel and Matthew, so I’ll just elaborate a bit on a point that Matthew made.

    I think it’s important to distinguish between the construction of a model, with all the assumptions that go into that, and the subsequent analysis of that model. Tom, your criticisms seem to be about the assumptions that go into our minimally cognitive models (which, as Matthew points out, explicitly abstract over issues of self-production, viability, etc.), rather than about the analysis of those models. That is why the criticisms apply equally to dynamical and informational analyses: it’s not really a criticism of the analysis at all. If what you’re after is a better understanding of agency, then you need different models (as Matthew noted, and as his own work illustrates), but that is decidedly not the focus of our work.

  5. Tom Froese said, on September 7, 2010 at 12:41 pm

    Many thanks for all of your responses!

    First of all, let me thank Paul for helping me to clarify the target of my criticism. Was I talking about the assumptions that go into the making of the model, or about the analysis of the model? Both, though I muddled the two issues.

    Regarding the assumptions, I realize that in order to obtain minimally cognitive models we abstract away from the biological foundations, and that this will affect the model. I know that at the moment these abstractions are a practical necessity, and in my own work I have had recourse to them often enough.

    But, as Matt rightly points out, these assumptions do not necessarily preclude self-sustaining dynamical loops or “habits” from emerging in the relational dynamics of the whole system. I agree. And, indeed, I suggest that we raise the bar for our ‘minimally cognitive models’: they should at least include such self-sustaining loops in order to be sufficiently interesting for cognitive science.

    If we have to abstract away metabolic self-construction (etc.) to study the dynamics of cognition, then let us at least make sure that the model dynamics have to cope with something akin to the transient instability found in actual living systems.

    How are we to analyze these models? Regarding Lucas’ point that the distinctions we place on nature are arbitrary, I would just like to emphasize that this misses the full complexity of the situation. Yes, as an observer I can decide to demarcate nature in any odd way. But under some conditions I will find that the system I distinguish appears to be organized in such a way that its own operations demarcate it from its background – this is the idea of autonomy.

    But in order for the target phenomenon to reveal its autonomous organization I need to distinguish it in a way which respects that organization. Now, the crucial question for me is: do our current means of analyzing the operations of our models, be it dynamical systems analysis or information theory, in principle allow for this autonomy to be revealed?

    My intuition was that information theory is less well equipped for this task than dynamical systems theory. But, as Nathaniel has remarked, some measures of information (e.g., Anil’s G-autonomy) may actually be the best tools we currently have for the job. This makes me curious: what other measures are there? When we look at the flow of information, or at the dynamical state space, what kinds of structures are the marks of self-sustaining processes?

    I think this debate has helped to clarify the gut reaction I had during Paul’s seminar.

    1) In terms of modeling assumptions for ‘minimally cognitive systems’, I propose that we include criteria which inherently destabilize the ‘brain’ part of the simulations to explicitly model the perpetual metabolic turnover of actual living/cognitive systems.

    2) In terms of methods for analyzing these models, I am happy to consider both information theory and dynamical systems theory as potential tools to discover the characteristic markers of living/cognitive systems, though we are currently relatively in the dark as to what to look for.

