CARLA is an abbreviation for *C*oncepts in *A*ction – *R*epresentation, *L*earning and *A*pplication. This interdisciplinary workshop series covers the fields of cognitive science and cognitive computing. For more details, see also: CARLA Workshop website
Concepts in Action is a paradigm to explain the emergence of concepts. Concept learning is about how words become meaningful – the origin of semantics. The Concepts in Action paradigm assumes that semantics emerge from the interaction between humans and their environment. Relevant work in the field has been published by Ludwig Wittgenstein, Stevan Harnad, Heinz von Foerster, and Francisco Varela, among others.
As a complement, CARLA Monday is an open but focused discussion channel about selected topics in cognitive computing. At CARLA Monday, researchers and practitioners come together to be inspired by outstanding research contributions that shape the evolution of artificial intelligence.
Each CARLA Monday discussion is driven by a cognitive computing topic that raises questions, contradictions, or open issues within the community. These characteristics make a good starting point for diving into new approaches in a collaborative manner. The invited members of CARLA Monday help each other on a common topic, usually one with interdisciplinary roots.
What are the topics of current CARLA Monday events?
Our minds learn conceptual structures from our self-generated sensorimotor experiences. We intuitively know how objects behave based on physics. We perceive things as entities with particular distinctive properties, and other animals and humans as agentive. Meanwhile, we become able to recombine these conceptual structures in compositionally meaningful ways. This ability allows us to plan, reason, and (partially) solve problems that we have never encountered before. Cognitive science is still searching for an explanation of how humans learn such compositional conceptual structures from sensorimotor experiences.
In this talk, I provide evidence and insights from my own research which suggest that our brains tend to develop event-predictive, generative models from the encountered sensorimotor experiences, actively exploring them to foster further development. Interestingly, these models may also have set the stage for language development and thus for the generation of even more abstract cognitive abilities.
“With respect to artificial intelligence, when the task is to develop systems that become truly intelligent, we suggest that their learning mechanisms should be endowed with inductive biases that tend to develop latent, event-predictive encodings. Such encodings tend to yield compact factored, and partially even causal, explanations of the observed sensorimotor dynamics, they enable planning and reasoning on conceptual and compositionally meaningful levels, and they appear to be well-connectible with language encodings and processing mechanisms. As a result, these neurocognitive machine learning systems may be able to uncover innovative problem solutions and recommendations, and may thus outperform current deep learning, classification-oriented machine learning systems by far.”
Butz, M. V. (2021). Towards strong AI. KI-Künstliche Intelligenz, 35(1), 91-101.
“In this paper, I have argued that the current AI hype may be termed a Behavioristic Machine Learning (BML) wave. It is the involved blind, reactive development that I consider as unsustainable, even if short-term rewards are generated. I have suggested that research efforts should be increased to develop Strong AI, that is, artificial systems that are able to learn about the processes, forces, and causes underlying the perceived data, becoming able to understand and explain them. As a precursor, the field should target the development of world-knowledge-grounded compositional, generative predictive models (CGPMs).”
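As a toy illustration of the event-predictive idea sketched above – this is my own minimal example, not code from the talk or the paper – the snippet below carves a synthetic sensorimotor stream into "events" wherever the error of a simple next-step predictor spikes. The signal, the naive linear predictor, and the threshold are all illustrative assumptions; the models discussed in the talk are learned, latent, and far richer.

```python
# Minimal sketch (illustrative assumptions only): segment a toy sensorimotor
# stream into "events" at the points where next-step prediction error peaks.
import numpy as np

# Toy 1-D stream made of three regimes with different dynamics.
stream = np.concatenate([
    np.sin(np.linspace(0, 6 * np.pi, 200)),  # oscillatory regime
    np.linspace(-1.0, 2.0, 150),             # drifting regime
    np.full(150, 0.8),                       # held/constant regime
])

def prediction_errors(x):
    """Error of a naive next-step predictor (linear extrapolation from the
    two preceding samples): e_t = |x_t - (2 x_{t-1} - x_{t-2})|."""
    predicted = 2 * x[1:-1] - x[:-2]
    return np.abs(x[2:] - predicted)

def event_boundaries(errors, k=4.0, min_gap=5):
    """Treat error peaks that stand out from the overall error distribution
    (mean + k standard deviations) as event boundaries, merging peaks that
    lie closer together than `min_gap` samples."""
    threshold = errors.mean() + k * errors.std()
    boundaries = []
    for t in np.flatnonzero(errors > threshold):
        if not boundaries or (t + 2) - boundaries[-1] >= min_gap:
            boundaries.append(int(t) + 2)  # +2: errors[t] is the error at x[t+2]
    return boundaries

errors = prediction_errors(stream)
print("detected event boundaries (true regime changes at 200 and 350):",
      event_boundaries(errors))
```

The point of the sketch is only the segmentation principle: where a generative predictor fails, a new event model is warranted, which is one way to arrive at the kind of compact, factored encodings the quote above refers to.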
Connecting Predictive Processing In Humans and Machines With Deep Neural Networks
Sebastian Stober is Professor for Artificial Intelligence at the Otto von Guericke University Magdeburg. He studied computer science with a focus on intelligent systems and a minor in mathematics in Magdeburg until 2005 and received his PhD on adaptive methods for the user-centered organization of music collections in 2011. From 2013 to 2015, he was a postdoctoral fellow at the Brain and Mind Institute in London, Ontario, where he pioneered deep learning techniques for studying brain activity during music perception and imagination. Afterwards, he was head of the Machine Learning in Cognitive Science Lab at the University of Potsdam before returning to Magdeburg in 2018 to establish the new Artificial Intelligence Lab as a bridge between computer science and cognitive neuroscience.
The Bayesian brain hypothesis and predictive processing form a dominant paradigm in cognitive science. Following the same underlying imperative – to minimise prediction error – makes it possible to design models in cognitive neuroscience, but also to implement increasingly intelligent and autonomously acting artificial systems. However, there is another interesting aspect of the predictive processing framework: the direct connection between human and model perception when both are based on the same principles. Going one step further, the idea of emergent and mutually predictive behavior in human-in-the-loop systems arises. Based on our own research and the existing literature, we discuss how deep predictive coding models can be applied to unsupervised representation learning of high-dimensional sensory signals. Focusing on auditory signals, we show that network surprisal can be used to derive meaningful event locations in simultaneously recorded electroencephalography (EEG). Finally, following our ongoing research on active inference, we provide a broad perspective on human-machine shared autonomy and adaptive inference on human neural responses.
“The conducted exploratory analysis of EEG at locations connected to peaks in prediction error in the network allowed to visualize auditory evoked potentials connected to local and global musical structures. This indicates the potential of unsupervised predictive learning with deep neural networks as means to retrieve musical structure from audio and as a basis to uncover the corresponding cognitive processes in the human brain.”
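As a rough illustration of the surprisal-based event detection mentioned in the abstract – in the actual work the predictor is a deep predictive coding network, whereas here it is only a toy autoregressive Gaussian model – the sketch below computes per-sample surprisal over a synthetic audio envelope and flags surprisal peaks as candidate event locations; such locations could then be used to epoch simultaneously recorded EEG.

```python
# Minimal sketch (toy model, not the authors' pipeline): per-sample "surprisal"
# of an AR(1) Gaussian predictor over a synthetic audio envelope, with surprisal
# peaks taken as candidate event locations for time-locking EEG epochs.
import numpy as np

rng = np.random.default_rng(1)

# Toy "audio envelope" with an abrupt structural change at sample 300.
envelope = np.concatenate([
    0.5 + 0.1 * rng.standard_normal(300),
    1.5 + 0.1 * rng.standard_normal(300),
])

def ar1_surprisal(x):
    """Surprisal of each sample under an AR(1) Gaussian model fitted to the
    whole signal: s_t = -log N(x_t; a * x_{t-1} + b, sigma^2)."""
    x_prev, x_next = x[:-1], x[1:]
    a, b = np.polyfit(x_prev, x_next, 1)  # least-squares AR(1) fit
    residuals = x_next - (a * x_prev + b)
    sigma2 = residuals.var() + 1e-12
    return 0.5 * (np.log(2 * np.pi * sigma2) + residuals ** 2 / sigma2)

surprisal = ar1_surprisal(envelope)
# Candidate event locations: samples whose surprisal clearly exceeds the bulk.
threshold = np.median(surprisal) + 5 * surprisal.std()
events = np.flatnonzero(surprisal > threshold) + 1  # +1: surprisal[i] is for x[i+1]
print("candidate event samples (true change at 300):", events)
```

In a setting like the one described above, indices of this kind would define where the EEG recorded alongside the audio is cut into epochs (for instance with a toolbox such as MNE-Python), so that evoked potentials around the model's prediction-error peaks can be inspected.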