Two Tokyo L&M Seminars
This week we are having two Life and Mind seminars at the Komaba campus of the University of Tokyo: one by Fumiya Iida on embodied robotics and one by Michal Paradowski on embodied linguistics. The first talk will take place on Wednesday at 3pm.
From Soft Robotics to Self-Organizing Machines
Bio-Inspired Robotics Lab
Institute of Robotics and Intelligent Systems, ETH Zurich
Self-organization is one of the main design principles of biological systems. From musculoskeletal structures to sensory apparatus and nervous systems, everything in biological systems changes continuously over time and self-organizes to maintain the living processes. In contrast, conventional robotic systems are generally composed of rigid materials such as aluminum and steel, and robotics engineers typically attempt to minimize deformation and changes in the physical morphology of their systems. The goal of this talk is to introduce some of the soft robotics projects we are exploring in our laboratory, and to discuss the roles of self-organization in real-world robots and how it can be achieved technologically.
The second talk will take place on Friday at 3pm. This talk will prepare us nicely for the upcoming EvoLang conference in Kyoto.
From dualism to holism. Why language should not be conceived of in abstraction from the brain and body, why there is more to it than sensorimotor and semantic resonance, and the consequences for embodied systems
Michal B. Paradowski
Faculty of Applied Linguistics
University of Warsaw
Until very recently, most language research has, in a Cartesian manner, regarded language phenomena as internal, mental, isolationist and amodal (separate and independent from perception, action and emotion systems, and from the body) — a view endorsed in psychology, philosophy, and linguistics. This could lead one to believe that in order to emulate linguistic behavior, it suffices to develop ‘software’ operating on abstract representations that will work on any computational machine. This picture is inaccurate for several reasons, which will be elucidated in the talk and extend beyond sensorimotor and semantic resonance. Embodied cognition will be taken as the starting point, where the brain, the body, and the environment all form part of reasoning, heuristics, decision-making and action execution. This is particularly patent in robotics, where, for example, when a robot learns to walk, the resultant neural network depends on the robot’s morphology.
Without taking into account both the architecture of the human brain and embodiment, it is unrealistic to accurately replicate the processes which take place during language acquisition, comprehension, or production, or during non-linguistic actions. While robots are far from isomorphic with humans, they could benefit from strengthened associative learning, both in the optimization of their processes and in their reactivity and sensitivity to environmental stimuli, across a range of tasks: i) in grounded language understanding, where structuring the environment acts as scaffolding; ii) while learning about context-dependent phenomena in the surrounding world, or in the process of language acquisition in general; iii) to support action prediction or anticipation; and iv) to reinforce feedback in ‘soft robotics’ and morphological computation, where there is no clear separation between the controller (or orchestrator) and the hardware (morphology), and where tasks are distributed between the brain, body, and environment. Additionally, v) common grounding and alignment are crucial for situated human–machine interaction, another area where sensory experience must be coordinated with linguistic interaction. The concept of multisensory integration should be extended to cover linguistic input alongside the complementary information that the brain combines from temporally coincident sensory impressions.