
Free Energy Principle


The Free Energy Principle (FEP) holds that dynamic systems, including brains and other physical systems, are organised to minimise prediction error: the difference between the predictions they make about the environment and the outcomes they actually experience. According to the FEP, dynamic systems encode information about their environment in a way that reduces the surprisal of their sensory inputs, and they minimise prediction errors in order to maintain their stability within the environment. The FEP has been influential in neuroscience and neuropsychology, and more recently has been used to describe systems at all spatiotemporal scales, from cells and biological species to AIs and societies.
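Formally, the surprisal of a sensory input $o$ under a generative model $p(o, s)$ (with hidden states $s$) is $-\ln p(o)$. Since this quantity cannot be minimised directly (it requires marginalising over all hidden states), the FEP has the system minimise a tractable upper bound on it, the variational free energy. In the standard formulation (cf. Parr et al.[1]):

$$F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = -\ln p(o) + D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] \geq -\ln p(o),$$

where $q(s)$ is the system's approximate posterior over hidden states. Minimising $F$ with respect to $q$ both tightens the bound on surprisal and moves $q(s)$ towards the exact posterior $p(s \mid o)$.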

FEP gives rise to Active Inference[1]: a process theory of agency that can be seen both as an explanatory theory and as an agent architecture. In the latter sense, Active Inference rivals Reinforcement Learning. It has been argued[2] that Active Inference as an agent architecture manages model complexity (i.e., the bias-variance tradeoff) and the exploration-exploitation tradeoff in a principled way, favours explicit, disentangled, and hence more interpretable belief representations, and is amenable to working within hierarchical systems of collective intelligence (which are seen as Active Inference agents themselves[3]). Building ecosystems of hierarchical collective intelligence has been proposed both as a solution to and as an alternative conceptualisation of the general problem of alignment.
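As a concrete illustration of Active Inference as an agent architecture, here is a minimal sketch of a single perception-action cycle over a discrete state space. The two-state toy model and the matrices A (likelihood), B (action-conditioned transitions), and C (preferred observations) are illustrative assumptions, not taken from the cited references. Perception is a Bayesian belief update; action selection minimises expected free energy, computed as risk plus ambiguity (see the decomposition below).

```python
import numpy as np

# Hypothetical toy model: 2 hidden states, 2 observations, 2 actions.
A = np.array([[0.9, 0.2],    # A[o, s] = p(o | s)
              [0.1, 0.8]])
B = np.array([[[0.9, 0.1],   # B[a][s_next, s] = p(s_next | s, a)
               [0.1, 0.9]],
              [[0.1, 0.9],
               [0.9, 0.1]]])
C = np.array([0.8, 0.2])     # preferred distribution over observations

def infer_state(q_prior, obs):
    """Perception as Bayesian inference: q(s) ∝ p(obs | s) q_prior(s)."""
    q_post = A[obs] * q_prior
    return q_post / q_post.sum()

def expected_free_energy(q, action):
    """EFE = risk (KL from preferred observations) + ambiguity (expected observation entropy)."""
    q_next = B[action] @ q            # predicted next-state distribution
    o_pred = A @ q_next               # predicted observation distribution
    risk = np.sum(o_pred * np.log(o_pred / C))
    entropy_per_state = -np.sum(A * np.log(A), axis=0)
    ambiguity = entropy_per_state @ q_next
    return risk + ambiguity

# One perception-action cycle.
q = np.array([0.5, 0.5])              # flat prior over hidden states
q = infer_state(q, obs=0)             # update beliefs after observing o = 0
action = min(range(2), key=lambda a: expected_free_energy(q, a))
```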

FEP/Active Inference is an energy-based model of intelligence: an FEP agent minimises an informational quantity called variational free energy (VFE), and Active Inference nuances this picture further, modelling agents as minimising an informational quantity called expected free energy (EFE), which is derived from VFE. This likens FEP/Active Inference to Bengio's GFlowNets[4] and LeCun's Joint Embedding Predictive Architecture (JEPA)[5], which are also energy-based. On the other hand, this distinguishes FEP/Active Inference from Reinforcement Learning, which is a reward-based model of agency, and, more generally, from utility-maximising decision theories.
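Under one common formulation (see e.g. Parr et al.[1]), the EFE of a policy $\pi$ decomposes into a pragmatic, goal-seeking term and an epistemic, uncertainty-resolving term:

$$G(\pi) = \underbrace{D_{\mathrm{KL}}\big[q(o \mid \pi) \,\|\, p(o)\big]}_{\text{risk (pragmatic)}} + \underbrace{\mathbb{E}_{q(s \mid \pi)}\big[\mathcal{H}[p(o \mid s)]\big]}_{\text{ambiguity (epistemic)}},$$

where $p(o)$ is a prior encoding preferred outcomes. This is how "value" enters the architecture without a separate reward signal, in contrast to Reinforcement Learning.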

Active Inference is one of the most general theories of agency. It can be seen as a generalisation of the predictive coding theory of brain function (or the Bayesian Brain hypothesis). Specifically, while predictive coding explains the agent's perception as Bayesian inference, Active Inference models both perception and action as inference under a single unifying objective: minimisation of the agent's VFE or EFE. Active Inference also recovers Bayes-optimal reinforcement learning, optimal control theory, and Bayesian Decision Theory (a.k.a. EDT) under different simplifying assumptions[1][6].
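Schematically, the unification amounts to two minimisations of the same family of free-energy functionals: perception updates beliefs about hidden states, while action (policy selection) changes which observations the agent will receive:

$$\text{perception: } q^{*} = \arg\min_{q} F[q, o], \qquad \text{action: } \pi^{*} = \arg\min_{\pi} G(\pi).$$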

The mathematical content of Active Inference is based on Variational Bayesian methods.
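To make the connection concrete, here is a minimal sketch (using an assumed two-state toy model, not drawn from the references) that computes the variational free energy for a discrete generative model and checks that, at the exact posterior, $F$ equals the surprisal $-\ln p(o)$:

```python
import numpy as np

def variational_free_energy(q, prior, likelihood_o):
    """F = E_q[ln q(s) - ln p(o, s)] for a discrete hidden-state space.

    q            -- approximate posterior over states, shape (n_states,)
    prior        -- p(s), shape (n_states,)
    likelihood_o -- p(o | s) evaluated at the observed o, shape (n_states,)
    """
    joint = likelihood_o * prior                 # p(o, s) at the observed o
    return np.sum(q * (np.log(q) - np.log(joint)))

prior = np.array([0.5, 0.5])
likelihood_o = np.array([0.9, 0.2])              # p(o = 0 | s) in the toy model
exact_posterior = likelihood_o * prior / (likelihood_o * prior).sum()

# F attains its minimum, the surprisal -ln p(o), at the exact posterior:
print(variational_free_energy(exact_posterior, prior, likelihood_o))  # ≈ 0.5978
print(-np.log((likelihood_o * prior).sum()))                          # ≈ 0.5978
```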

References

  1. Parr, Thomas, Giovanni Pezzulo, and Karl J. Friston. Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press, 2022.

  2. Friston, Karl J., Maxwell J. D. Ramstead, Alex B. Kiefer, Alexander Tschantz, Christopher L. Buckley, Mahault Albarracin, Riddhi J. Pitliya, et al. “Designing Ecosystems of Intelligence from First Principles.” arXiv preprint arXiv:2212.01354 (2022).

  3. Kaufmann, Rafael, Pranav Gupta, and Jacob Taylor. “An Active Inference Model of Collective Intelligence.” Entropy 23, no. 7 (2021): 830.

  4. Bengio, Yoshua. “GFlowNet Tutorial.” (2022).

  5. LeCun, Yann. “A Path Towards Autonomous Machine Intelligence.” Preprint posted on OpenReview (2022).

  6. Friston, Karl, Lancelot Da Costa, Danijar Hafner, Casper Hesp, and Thomas Parr. “Sophisticated Inference.” Neural Computation 33, no. 3 (2021): 713-763.

God Help Us, Let’s Try To Understand Friston On Free Energy

Scott Alexander, 5 Mar 2018 6:00 UTC
49 points
43 comments, 14 min read, LW link
(slatestarcodex.com)

Active Inference as a formalisation of instrumental convergence

Roman Leventov, 26 Jul 2022 17:55 UTC
12 points
2 comments, 3 min read, LW link
(direct.mit.edu)

The two conceptions of Active Inference: an intelligence architecture and a theory of agency

Roman Leventov, 16 Nov 2022 9:30 UTC
15 points
0 comments, 4 min read, LW link

Neural Annealing: Toward a Neural Theory of Everything (crosspost)

Michael Edward Johnson, 29 Nov 2019 17:31 UTC
86 points
29 comments, 40 min read, LW link, 3 reviews

Properties of current AIs and some predictions of the evolution of AI from the perspective of scale-free theories of agency and regulative development

Roman Leventov, 20 Dec 2022 17:13 UTC
33 points
3 comments, 36 min read, LW link

Why I’m not into the Free Energy Principle

Steven Byrnes, 2 Mar 2023 19:27 UTC
140 points
48 comments, 9 min read, LW link

Power-Seeking = Minimising free energy

Jonas Hallgren, 22 Feb 2023 4:28 UTC
21 points
10 comments, 7 min read, LW link

How evolutionary lineages of LLMs can plan their own future and act on these plans

Roman Leventov, 25 Dec 2022 18:11 UTC
39 points
16 comments, 8 min read, LW link

Top Left Mood

Jacob Falkovich, 24 Jul 2018 14:35 UTC
17 points
2 comments, 1 min read, LW link
(putanumonit.com)

Gaia Network: a practical, incremental pathway to Open Agency Architecture

20 Dec 2023 17:11 UTC
22 points
8 comments, 16 min read, LW link

Worrisome misunderstanding of the core issues with AI transition

Roman Leventov, 18 Jan 2024 10:05 UTC
5 points
2 comments, 4 min read, LW link

AXRP Episode 32 - Understanding Agency with Jan Kulveit

DanielFilan, 30 May 2024 3:50 UTC
20 points
0 comments, 53 min read, LW link

Agent Boundaries Aren’t Markov Blankets. [Unless they’re non-causal; see comments.]

abramdemski, 20 Nov 2023 18:23 UTC
82 points
11 comments, 2 min read, LW link

Proposal for improving the global online discourse through personalised comment ordering on all websites

Roman Leventov, 6 Dec 2023 18:51 UTC
35 points
21 comments, 6 min read, LW link

Mental health benefits and downsides of psychedelic use in ACX readers: survey results

RationalElf, 25 Oct 2021 22:55 UTC
117 points
18 comments, 10 min read, LW link

«Boundaries», Part 3a: Defining boundaries as directed Markov blankets

Andrew_Critch, 30 Oct 2022 6:31 UTC
90 points
20 comments, 15 min read, LW link

LOVE in a simbox is all you need

jacob_cannell, 28 Sep 2022 18:25 UTC
64 points
72 comments, 44 min read, LW link, 1 review

My computational framework for the brain

Steven Byrnes, 14 Sep 2020 14:19 UTC
157 points
26 comments, 13 min read, LW link, 1 review

Biological Holism: A New Paradigm?

Waddington, 9 May 2021 22:42 UTC
4 points
9 comments, 19 min read, LW link

Predictive Processing, Heterosexuality and Delusions of Grandeur

lsusr, 17 Dec 2022 7:37 UTC
36 points
13 comments, 5 min read, LW link

A Prince, a Pauper, Power, Panama

Alok Singh, 27 Sep 2022 7:10 UTC
10 points
0 comments, 1 min read, LW link
(alok.github.io)

The circular problem of epistemic irresponsibility

Roman Leventov, 31 Oct 2022 17:23 UTC
5 points
2 comments, 8 min read, LW link

Multi-agent predictive minds and AI alignment

Jan_Kulveit, 12 Dec 2018 23:48 UTC
63 points
18 comments, 10 min read, LW link

All the posts I will never write

Alexander Gietelink Oldenziel, 14 Aug 2022 18:29 UTC
54 points
8 comments, 8 min read, LW link

A future for neuroscience

Mike Johnson, 19 Aug 2018 23:58 UTC
22 points
12 comments, 19 min read, LW link

Let There be Sound: A Fristonian Meditation on Creativity

jollybard, 4 Jul 2020 3:33 UTC
3 points
2 comments, 1 min read, LW link
(jollybard.wordpress.com)

Introduction to the Free-Energy Theory of Mind

IAFF-User-177, 24 Dec 2016 1:15 UTC
0 points
0 comments, 1 min read, LW link
(medium.com)

Free-energy, reinforcement, and utility

IAFF-User-177, 26 Dec 2016 23:02 UTC
0 points
0 comments, 1 min read, LW link
(medium.com)

Refinement of Active Inference agency ontology

Roman Leventov, 15 Dec 2023 9:31 UTC
16 points
0 comments, 5 min read, LW link
(arxiv.org)

Critique of some recent philosophy of LLMs’ minds

Roman Leventov, 20 Jan 2023 12:53 UTC
52 points
8 comments, 20 min read, LW link

Goal alignment without alignment on epistemology, ethics, and science is futile

Roman Leventov, 7 Apr 2023 8:22 UTC
20 points
2 comments, 2 min read, LW link

Why Simulator AIs want to be Active Inference AIs

10 Apr 2023 18:23 UTC
91 points
8 comments, 8 min read, LW link

Gaia Network: An Illustrated Primer

18 Jan 2024 18:23 UTC
3 points
2 comments, 15 min read, LW link

FixDT

abramdemski, 30 Nov 2023 21:57 UTC
56 points
14 comments, 14 min read, LW link

Apply to the Conceptual Boundaries Workshop for AI Safety

Chipmonk, 27 Nov 2023 21:04 UTC
50 points
0 comments, 3 min read, LW link

A Brief Introduction to ACI, 3.5: How ACI improves Active Inference

Akira Pyinya, 29 Apr 2023 23:51 UTC
−7 points
0 comments, 2 min read, LW link

Vipassana Meditation and Active Inference: A Framework for Understanding Suffering and its Cessation

Benjamin Sturgeon, 21 Mar 2024 12:32 UTC
50 points
8 comments, 19 min read, LW link

«Boundaries/Membranes» and AI safety compilation

Chipmonk, 3 May 2023 21:41 UTC
57 points
17 comments, 8 min read, LW link

Aligning an H-JEPA agent via training on the outputs of an LLM-based “exemplary actor”

Roman Leventov, 29 May 2023 11:08 UTC
12 points
10 comments, 30 min read, LW link

Formalizing «Boundaries» with Markov blankets

Chipmonk, 19 Sep 2023 21:01 UTC
21 points
20 comments, 3 min read, LW link

A multi-disciplinary view on AI safety research

Roman Leventov, 8 Feb 2023 16:50 UTC
43 points
4 comments, 26 min read, LW link

A short ‘derivation’ of Watanabe’s Free Energy Formula

Wuschel Schulz, 29 Jan 2024 23:41 UTC
13 points
6 comments, 7 min read, LW link

Agent membranes and causal distance

Chipmonk, 2 Jan 2024 22:43 UTC
20 points
3 comments, 3 min read, LW link

A reply to Byrnes on the Free Energy Principle

Roman Leventov, 3 Mar 2023 13:03 UTC
28 points
16 comments, 14 min read, LW link

Clarifying the free energy principle (with quotes)

Ryo, 29 Oct 2023 16:03 UTC
8 points
0 comments, 9 min read, LW link