First participatory group discussion on the paper.
“Enactive-Dynamic Social Cognition and Active Inference”
http://philsci-archive.pitt.edu/20352/
“Active Inference and Abduction”
https://link.springer.com/article/10.1007/s12304-021-09432-0
Active Inference Institute information:
Website: https://activeinference.org/
Twitter: https://twitter.com/InferenceActive
Discord: https://discord.gg/8VNKNp4jtx
YouTube: https://www.youtube.com/c/ActiveInference/
Active Inference Livestreams: https://coda.io/@active-inference-institute/livestreams
Email: ActiveInference@gmail.com
...
https://www.youtube.com/watch?v=1yYFJnf_mHY
All Active Inference Institute livestreams and videos:
https://coda.io/@active-inference-institute/livestreams
Full Textbook Group Cohort 4 playlist: https://www.youtube.com/playlist?list=PLNm0u2n1Iwdpm1wcq9DOGSdKDDvnEt_xG
Active Inference: The Free Energy Principle in Mind, Brain, and Behavior
By Thomas Parr, Giovanni Pezzulo and Karl J. Friston
https://mitpress.mit.edu/9780262045353/active-inference/
https://www.activeinference.org/
...
https://www.youtube.com/watch?v=dlZk_iF9EqM
Abstract
At Normal Computing, we believe that achieving System-2 thinking in artificial systems will necessarily involve going beyond the auto-regressive language models we all know and love. Leveraging such systems in high-stakes applications will require step changes in explainability and reliability. The foundations of probabilistic machine learning and Bayesian decision theory provide rich toolkits to augment today’s powerful but unwieldy LLMs. In this talk, we discuss the Bayesian world models we’re developing, highlighting connections to the Free Energy Principle. In particular, we discuss using LLMs as likelihood machines, building hierarchical world models via message passing, and using world models as recommendation engines.
Normal Computing: https://normalcomputing.ai/
“When it comes to robust reasoning, an Achilles’ heel of current large language models is that the world model and the inference machine are one and the same.”
- Yoshua Bengio
[BH23]
“I submit that devising learning paradigms and architectures that would allow machines to learn world models in an unsupervised (or self-supervised) fashion, and to use those models to predict, to reason, and to plan is one of the main challenges of AI and ML today. One major technical hurdle is how to devise trainable world models that can deal with complex uncertainty in the predictions.”
- Yann LeCun [LeC22]
[BH23] Yoshua Bengio and Edward Hu. Scaling in the service of reasoning & model-based ML. 2023.
[LeC22] Yann LeCun. A path towards autonomous machine intelligence. 2022.
...
https://www.youtube.com/watch?v=eMtgUJl68jg
It’s all well and good being able to capture the intelligence and behaviour of a single agent, but what about collectives of them? Is that even possible and, if so, what do those models tell us not only about the individuals that belong to that group, but also the dynamic that emerges over and above their respective contributions? To answer those questions and many more, Active Inference Insights welcomes Conor Heins, Senior ML Research Engineer at Verses AI Research Lab and PhD student at the Max Planck Institute of Animal Behaviour, to the show.
Conor Heins
https://scholar.google.com/citations?user=3OKMye8AAAAJ&hl=en
https://www.ab.mpg.de/person/101190/2736
https://twitter.com/conorheins
Darius Parvizi-Wayne
https://twitter.com/dparviziwayne
https://www.researchgate.net/profile/Darius-Parvizi-Wayne
Active Inference Institute
https://www.activeinference.org
https://twitter.com/InferenceActive
...
https://www.youtube.com/watch?v=nf57wi3qLjk
"Is the Electromagnetic Field Topology the Key to Solve the Boundary Problem of Consciousness?"
Andrés Gómez-Emilsson and Chris Percy
Based on the paper:
"Don’t forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness"
https://www.frontiersin.org/articles/10.3389/fnhum.2023.1233119/full
The boundary problem is related to the binding problem, part of a family of puzzles and phenomenal experiences that theories of consciousness (ToC) must either explain or eliminate. By comparison with the phenomenal binding problem, the boundary problem has received very little scholarly attention since first framed in detail by Rosenberg in 1998, despite discussion by Chalmers in his widely cited 2016 work on the combination problem. However, any ToC that addresses the binding problem must also address the boundary problem. The binding problem asks how a unified first-person perspective (1PP) can bind experiences across multiple physically distinct activities, whether billions of individual neurons firing or some other underlying phenomenon. To a first approximation, the boundary problem asks why we experience hard boundaries around those unified 1PPs and why the boundaries operate at their apparent spatiotemporal scale. We review recent discussion of the boundary problem, identifying several promising avenues but none that yet address all aspects of the problem. We set out five specific boundary problems to aid precision in future efforts. We also examine electromagnetic (EM) field theories in detail, given their previous success with the binding problem, and introduce a feature with the necessary characteristics to address the boundary problem at a conceptual level. Topological segmentation can, in principle, create exactly the hard boundaries desired, enclosing holistic, frame-invariant units capable of effecting downward causality. The conclusion outlines a programme for testing this concept, describing how it might also differentiate between competing EM ToCs.
...
https://www.youtube.com/watch?v=bhLPZaLmi2k
Anand Subramoney
"Principles of scalability and biological inspirations"
In this talk, I will discuss how current models scale and what we can learn from the efficiency of biological brains. One of the central themes will be sparsity, its significant role in scalable systems, and its synergies with neuromorphic hardware. I will present existing ideas based on spiking neural networks, as well as recent work from my group focused on using various forms of sparsity and distributed learning to improve the scalability and efficiency of our learning models.
Anand Subramoney is a Lecturer (Assistant Professor) in the Department of Computer Science at Royal Holloway, University of London. He is broadly interested in learning and intelligence, both algorithmic and biological. His current research focuses on understanding the principles of scalability in deep learning, which he aims to use to build models that are more efficient and can scale up seamlessly. His research draws inspiration from neuroscience and biology in the quest to build a better and more general artificial intelligence.
Website: https://anandsubramoney.com/
...
https://www.youtube.com/watch?v=T5nG_5UZ_yk
"Irruption Theory: A Novel Conceptualization of the Enactive Account of Motivated Activity"
https://www.mdpi.com/1099-4300/25/5/748
Tom Froese
2023
...
https://www.youtube.com/watch?v=ChO1u1mJRGE