
CARLA is an abbreviation for Concepts in Action – Representation, Learning and Application. This interdisciplinary workshop series covers the fields of cognitive science and cognitive computing. For more details, see also: CARLA Workshop website

Concepts in Action is a paradigm that explains the emergence of concepts. Concept learning is about how words become meaningful – the origin of semantics. The Concepts in Action paradigm assumes that semantics emerges from the interaction between humans and their environment. Relevant work in the field has been published, among others, by Ludwig Wittgenstein, Stevan Harnad, Heinz von Foerster and Francisco Varela.

Complementing the workshop series, CARLA Monday is an open but focused discussion channel about selected topics in cognitive computing. At CARLA Monday, researchers and practitioners come together to get inspired by outstanding research contributions that have an impact on the evolution of artificial intelligence.

Each CARLA Monday discussion is driven by a cognitive computing topic that raises questions, contradictions or open issues within the community. These characteristics are a good starting point for exploring new approaches in a collaborative manner. Invited members of CARLA Monday help each other with a common topic that very probably has interdisciplinary roots.

What are the topics of current CARLA Monday events?

01
Predictive Encodings

CARLA Monday 01 presents some interesting contributions in the field of Predictive Encodings. Such encodings are central to training deep neural networks as generative Bayesian inference models based on Friston’s free energy principle. Martin Butz (University of Tübingen) and Sebastian Stober (Otto-von-Guericke-University Magdeburg) have been researching the topic for several years. After two talks on current Predictive Encoding developments, the discussion will be moderated by Michael Spranger (Sony AI, personal website).
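To give a rough intuition of the idea – not of the speakers’ actual models – the minimal sketch below shows prediction-error-driven inference and learning in a single-layer generative model. All sizes, step sizes, and variable names are illustrative assumptions.

```python
# Minimal, illustrative predictive-coding sketch (NumPy only).
# A latent state mu generates a prediction of the observation x through
# weights W; the prediction error drives both inference (updating mu) and
# learning (updating W), loosely following the imperative of minimising
# prediction error / free energy.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_latent = 8, 3                               # arbitrary demo sizes
W = rng.normal(scale=0.1, size=(n_obs, n_latent))    # generative weights
x = rng.normal(size=n_obs)                           # one "sensory" sample

mu = np.zeros(n_latent)          # latent belief
lr_mu, lr_W = 0.1, 0.01          # step sizes for inference and learning

for step in range(200):
    x_hat = W @ mu               # top-down prediction of the input
    error = x - x_hat            # bottom-up prediction error
    mu += lr_mu * (W.T @ error - mu)   # inference: reduce error, simple zero-mean prior on mu
    W  += lr_W * np.outer(error, mu)   # learning: Hebbian-like weight update

print("remaining prediction error:", np.linalg.norm(x - W @ mu))
```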

Key facts

  • What? Keynote talks by Martin Butz and Sebastian Stober, joint discussion moderated by Michael Spranger.
  • Other panel members? YES! Dare Baldwin (University of Oregon), Jean Daunizeau (Brain and Spine Institute (ICM), Paris), Marta Garrido (University of Melbourne), and Georg Martius (Max Planck Institute for Intelligent Systems, Tübingen)
  • When? Monday, 05 July 2021 – 1.00 PM to 4.00 PM CET.
  • Where? Online-only event. More details are provided after a successful invitation.
  • Procedure? From the submitted event requests, a subset of applicants is invited by the event contributors.
  • Costs? Participation is free of charge.
  • Deadline? Request application by 03 July 2021 (extended from Sunday, 20 June 2021) – 11.59 PM CET.

Interesting?


Martin Butz

University of Tübingen

Martin Butz has been Professor of Cognitive Modeling at the University of Tübingen since 2011. He studied computer science and psychology at the University of Würzburg from 1995 to 2001. In 2004 he completed his PhD in computer science at the University of Illinois at Urbana-Champaign, IL, with a focus on rule-based evolutionary online learning systems. Since the very beginning of his research career, he has collaborated with psychologists, neuroscientists, machine learning researchers, and roboticists and has attempted to integrate their respective disciplinary perspectives into an overarching theory of the mind, cognition, and behavior.

Our minds learn conceptual structures from our self-generated sensorimotor experiences. We intuitively know how objects behave according to physics. We perceive things as entities with particular distinctive properties, and other animals and humans as agentive. Meanwhile, we become able to recombine these conceptual structures in compositionally meaningful ways. This ability allows us to plan, reason, and (partially) solve problems that we have never encountered before. Cognitive science is still searching for an explanation of how humans learn such compositional conceptual structures from sensorimotor experiences.

In this talk, I provide evidence and insights from my own research which suggest that our brains tend to develop event-predictive, generative models from the encountered sensorimotor experiences, actively exploring them to foster further development. Interestingly, these models may also have set the stage for language development and thus for the emergence of even more abstract cognitive abilities.

Butz, M. V., Achimova, A., Bilkey, D., & Knott, A. (2020). Event-Predictive Cognition: A Root for Conceptual Human Thought. Topics in Cognitive Science.

“With respect to artificial intelligence, when the task is to develop systems that become truly intelligent, we suggest that their learning mechanisms should be endowed with inductive biases that tend to develop latent, event-predictive encodings. Such encodings tend to yield compact factored, and partially even causal, explanations of the observed sensorimotor dynamics, they enable planning and reasoning on conceptual and compositionally meaningful levels, and they appear to be well-connectible with language encodings and processing mechanisms. As a result, these neurocognitive machine learning systems may be able to uncover innovative problem solutions and recommendations, and may thus outperform current deep learning, classification-oriented machine learning systems by far.”

Butz, M. V. (2021). Towards strong AI. KI-Künstliche Intelligenz, 35(1), 91-101.

“In this paper, I have argued that the current AI hype may be termed a Behavioristic Machine Learning (BML) wave. It is the involved blind, reactive development that I consider as unsustainable, even if short-term rewards are generated. I have suggested that research efforts should be increased to develop Strong AI, that is, artificial systems that are able to learn about the processes, forces, and causes underlying the perceived data, becoming able to understand and explain them. As a precursor, the field should target the development of world-knowledge-grounded compositional, generative predictive models (CGPMs).”

Sebastian Stober

Otto-von-Guericke-University Magdeburg

Connecting Predictive Processing in Humans and Machines with Deep Neural Networks

Sebastian Stober is Professor for Artificial Intelligence at the Otto von Guericke University Magdeburg. He studied computer science with a focus on intelligent systems and a minor in mathematics in Magdeburg until 2005 and received his PhD in 2011 on the topic of adaptive methods for user-centered organization of music collections. From 2013 to 2015, he was a postdoctoral fellow at the Brain and Mind Institute in London, Ontario, where he pioneered deep learning techniques for studying brain activity during music perception and imagination. Afterwards, he headed the Machine Learning in Cognitive Science Lab at the University of Potsdam, before returning to Magdeburg in 2018 to establish the new Artificial Intelligence Lab as a bridge between computer science and cognitive neuroscience.

The Bayesian brain hypothesis and predictive processing are a dominant paradigm in cognitive science. Following the same underlying imperative – to minimise prediction error – allows us to design models in cognitive neuroscience, but also to implement increasingly intelligent and autonomously acting artificial systems. There is, however, another interesting aspect of the predictive processing framework: the direct connection between human and model perception when both are based on the same principles. Going one step further, the idea of emergent and mutually predictive behavior in human-in-the-loop systems arises. Based on our own research and the existing literature, we discuss how deep predictive coding models can be applied to unsupervised representation learning of high-dimensional sensory signals. Focusing on auditory signals, we show that network surprisal allows us to derive meaningful event locations in simultaneously recorded electroencephalography (EEG). Finally, following our ongoing research on active inference, we provide a broad perspective on human-machine shared autonomy and adaptive inference on human neural responses.
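As a rough illustration of the surprisal idea – not the pipeline used by Ofner and Stober – the sketch below treats the per-frame prediction error of an already-trained predictive model as a surprisal signal and picks its prominent peaks as candidate event boundaries. The frame rate, thresholds, and the placeholder error signal are assumptions.

```python
# Illustrative sketch only: locate candidate event boundaries from a
# surprisal (prediction-error) signal. In practice this signal would come
# from a trained predictive model run over an audio stream; here a random
# placeholder is used so the example is self-contained.
import numpy as np
from scipy.signal import find_peaks

frame_rate = 100.0                                                       # assumed frames per second
prediction_error = np.abs(np.random.default_rng(1).normal(size=3000))   # placeholder surprisal signal

# Light smoothing so isolated noisy frames do not produce spurious peaks.
kernel = np.ones(10) / 10
smoothed = np.convolve(prediction_error, kernel, mode="same")

# Candidate events: sufficiently prominent peaks, at least 0.5 s apart.
peaks, _ = find_peaks(smoothed,
                      prominence=smoothed.std(),
                      distance=int(0.5 * frame_rate))
event_times = peaks / frame_rate   # boundaries in seconds, usable to epoch simultaneously recorded EEG

print(f"{len(event_times)} candidate event boundaries, e.g. {event_times[:5]}")
```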

Ofner, A., & Stober, S. Modeling Perception with Hierarchical Prediction: Auditory Segmentation with Deep Predictive Coding Locates Candidate Evoked Potentials in EEG.

“The conducted exploratory analysis of EEG at locations connected to peaks in prediction error in the network allowed to visualize auditory evoked potentials connected to local and global musical structures. This indicates the potential of unsupervised predictive learning with deep neural networks as means to retrieve musical structure from audio and as a basis to uncover the corresponding cognitive processes in the human brain.”


Request


Please decide how you want to get access to the event. If you opt for the random selection, only general information is required; once the event participation limit has been set, the earliest requests, minus those already invited, receive a confirmation of participation. If you want to be invited by the leading CARLA Monday event members, please add a personal statement of interest and a link to a personal profile page (e.g. Google Scholar, a personal website, or the company you work for). Invited participants can listen to the event speakers. If you also want to take part in the discussion, please describe your relevance to the community to get invited onto the discussion board.