
Free Energy Principle


The Free Energy Principle (FEP) holds that dynamic systems, including the brain and other physical systems, are organised to minimise prediction error: the difference between the predictions a system makes about its environment and the outcomes it actually experiences. According to the FEP, dynamic systems encode information about their environment in a way that reduces the surprisal of their inputs, and minimising prediction error is what allows them to maintain stability within their environment. The FEP has been influential in neuroscience and neuropsychology, and has more recently been used to describe systems on all spatiotemporal scales, from cells and biological species to AIs and societies.
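
Here, “surprisal” has the standard information-theoretic meaning: the surprisal of a sensory input $o$ under the system’s model $p$ is

$$-\ln p(o),$$

so inputs are surprising to the extent that the model assigns them low probability, and minimising long-run surprisal keeps the system within the states its model expects.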

The FEP gives rise to Active Inference[1]: a process theory of agency that can be seen both as an explanatory theory and as an agent architecture. In the latter sense, Active Inference rivals Reinforcement Learning. It has been argued[2] that Active Inference as an agent architecture manages model complexity (i.e., the bias-variance tradeoff) and the exploration-exploitation tradeoff in a principled way, favours explicit, disentangled, and hence more interpretable belief representations, and is amenable to working within hierarchical systems of collective intelligence (which are seen as Active Inference agents themselves[3]). Building ecosystems of hierarchical collective intelligence can be seen both as a proposed solution to, and an alternative conceptualisation of, the general problem of alignment.

FEP/Active Inference is an energy-based model of intelligence: an FEP agent minimises an informational quantity called variational free energy (VFE), and Active Inference nuances this picture further, modelling agents as minimising expected free energy (EFE), a quantity derived from VFE. This likens FEP/Active Inference to Bengio’s GFlowNets[4] and LeCun’s Joint Embedding Predictive Architecture (JEPA)[5], which are also energy-based. On the other hand, it distinguishes FEP/Active Inference from Reinforcement Learning, which is a reward-based model of agency, and, more generally, from utility-maximising decision theories.
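
In the standard notation (following [1]), with observations $o$, hidden states $s$, a generative model $p(o, s)$, and an approximate posterior (the agent’s belief) $q(s)$, the VFE is

$$F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o) \;\geq\; -\ln p(o),$$

so minimising $F$ simultaneously improves the posterior approximation and tightens an upper bound on surprisal. For a policy $\pi$, one common decomposition of the EFE is

$$G(\pi) = \underbrace{D_{\mathrm{KL}}\big[q(o \mid \pi) \,\|\, p(o)\big]}_{\text{risk}} + \underbrace{\mathbb{E}_{q(s \mid \pi)}\big[\mathrm{H}\big[p(o \mid s)\big]\big]}_{\text{ambiguity}},$$

where the prior $p(o)$ doubles as a preference distribution over observations: minimising $G$ trades off realising preferred outcomes (low risk) against seeking informative ones (low ambiguity).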

Active Inference is one of the most general theories of agency. It can be seen as a generalisation of the predictive coding theory of brain function (also known as the Bayesian Brain hypothesis). Specifically, while predictive coding explains the agent’s perception as Bayesian inference, Active Inference models both perception and action as inference under a single unifying objective: minimisation of the agent’s VFE or EFE. Under different simplifying assumptions, Active Inference also recovers Bayes-optimal reinforcement learning, optimal control theory, and Bayesian Decision Theory (aka EDT)[1][6].
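
As a concrete illustration of “perception and action as inference”, below is a minimal sketch of a single Active Inference step in the discrete (POMDP-style) formulation of [1], for a toy system with two hidden states, two observations, and two actions. All matrices and names (`A`, `B`, `C`, `perceive`, `expected_free_energy`) are illustrative assumptions chosen for this example, not a standard library API:

```python
import numpy as np

# Toy generative model (illustrative numbers, not from any source).
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])            # likelihood p(o | s): rows o, columns s
B = [np.eye(2),                       # action 0 ("stay"):   p(s' | s, u=0)
     np.array([[0., 1.],
               [1., 0.]])]            # action 1 ("switch"): p(s' | s, u=1)
C = np.array([0.8, 0.2])              # preference prior p(o) over observations
prior = np.array([0.5, 0.5])          # prior belief p(s) over hidden states

def perceive(o, prior):
    """Perception as inference: the exact posterior q(s) = p(s | o).
    In this tiny model the exact posterior minimises VFE, so the
    variational bound on surprisal is tight."""
    joint = A[o] * prior
    return joint / joint.sum()

def expected_free_energy(q_s, u):
    """EFE of a one-step policy u, in the risk + ambiguity decomposition."""
    q_s_next = B[u] @ q_s                            # predicted states
    q_o = A @ q_s_next                               # predicted observations
    risk = np.sum(q_o * (np.log(q_o) - np.log(C)))   # KL[q(o|u) || p(o)]
    ambiguity = -q_s_next @ np.sum(A * np.log(A), axis=0)  # E_q[H[p(o|s)]]
    return risk + ambiguity

o = 0                                  # an observation arrives
q_s = perceive(o, prior)               # belief update (perception)
G = np.array([expected_free_energy(q_s, u) for u in (0, 1)])
q_u = np.exp(-G) / np.exp(-G).sum()    # action as inference: softmax(-G)
print(f"q(s) = {q_s}, G = {G}, q(u) = {q_u}")
```

In larger models the exact posterior is intractable, and `perceive` is replaced by iterative minimisation of VFE over a parameterised $q$; likewise, the policy posterior is computed over multi-step policies rather than single actions.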

The mathematical content of Active Inference is based on Variational Bayesian methods.
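
In machine-learning terms, the VFE is the negative evidence lower bound (ELBO) from variational inference:

$$F[q] = -\mathrm{ELBO}(q) = -\mathbb{E}_{q(s)}\big[\ln p(o \mid s)\big] + D_{\mathrm{KL}}\big[q(s) \,\|\, p(s)\big],$$

so minimising VFE with respect to the belief $q$ is the same operation that underlies, e.g., variational autoencoders.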

References

  1. Parr, Thomas, Giovanni Pezzulo, and Karl J. Friston. Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press, 2022.

  2. Friston, Karl J., Maxwell J. D. Ramstead, Alex B. Kiefer, Alexander Tschantz, Christopher L. Buckley, Mahault Albarracin, Riddhi J. Pitliya, et al. “Designing Ecosystems of Intelligence from First Principles.” arXiv preprint arXiv:2212.01354 (2022).

  3. Kaufmann, Rafael, Pranav Gupta, and Jacob Taylor. “An active inference model of collective intelligence.” Entropy 23, no. 7 (2021): 830.

  4. Bengio, Yoshua. “GFlowNet Tutorial.” (2022).

  5. LeCun, Yann. “A Path Towards Autonomous Machine Intelligence.” Preprint posted on OpenReview (2022).

  6. Friston, Karl, Lancelot Da Costa, Danijar Hafner, Casper Hesp, and Thomas Parr. “Sophisticated inference.” Neural Computation 33, no. 3 (2021): 713-763.

God Help Us, Let’s Try To Understand Friston On Free Energy

Scott Alexander, Mar 5, 2018, 6:00 AM
51 points, 43 comments, 14 min read, LW link
(slatestarcodex.com)

Active Inference as a formalisation of instrumental convergence

Roman Leventov, Jul 26, 2022, 5:55 PM
12 points, 2 comments, 3 min read, LW link
(direct.mit.edu)

The two conceptions of Active Inference: an intelligence architecture and a theory of agency

Roman Leventov, Nov 16, 2022, 9:30 AM
15 points, 0 comments, 4 min read, LW link

Neural Annealing: Toward a Neural Theory of Everything (crosspost)

Michael Edward Johnson, Nov 29, 2019, 5:31 PM
86 points, 29 comments, 40 min read, LW link, 3 reviews

Properties of current AIs and some predictions of the evolution of AI from the perspective of scale-free theories of agency and regulative development

Roman Leventov, Dec 20, 2022, 5:13 PM
33 points, 3 comments, 36 min read, LW link

Why I’m not into the Free Energy Principle

Steven Byrnes, Mar 2, 2023, 7:27 PM
147 points, 49 comments, 9 min read, LW link, 1 review

Power-Seeking = Minimising free energy

Jonas Hallgren, Feb 22, 2023, 4:28 AM
21 points, 10 comments, 7 min read, LW link

How evolutionary lineages of LLMs can plan their own future and act on these plans

Roman Leventov, Dec 25, 2022, 6:11 PM
39 points, 16 comments, 8 min read, LW link

Top Left Mood

Jacob Falkovich, Jul 24, 2018, 2:35 PM
17 points, 2 comments, 1 min read, LW link
(putanumonit.com)

Gaia Network: a practical, incremental pathway to Open Agency Architecture

Dec 20, 2023, 5:11 PM
22 points, 8 comments, 16 min read, LW link

Worrisome misunderstanding of the core issues with AI transition

Roman Leventov, Jan 18, 2024, 10:05 AM
5 points, 2 comments, 4 min read, LW link

AXRP Episode 32 - Understanding Agency with Jan Kulveit

DanielFilan, May 30, 2024, 3:50 AM
20 points, 0 comments, 53 min read, LW link

Agent Boundaries Aren’t Markov Blankets. [Unless they’re non-causal; see comments.]

abramdemski, Nov 20, 2023, 6:23 PM
82 points, 11 comments, 2 min read, LW link

Proposal for improving the global online discourse through personalised comment ordering on all websites

Roman Leventov, Dec 6, 2023, 6:51 PM
35 points, 21 comments, 6 min read, LW link

Mental health benefits and downsides of psychedelic use in ACX readers: survey results

RationalElf, Oct 25, 2021, 10:55 PM
118 points, 18 comments, 10 min read, LW link

«Boundaries», Part 3a: Defining boundaries as directed Markov blankets

Andrew_Critch, Oct 30, 2022, 6:31 AM
90 points, 20 comments, 15 min read, LW link

LOVE in a simbox is all you need

jacob_cannell, Sep 28, 2022, 6:25 PM
65 points, 72 comments, 44 min read, LW link, 1 review

My computational framework for the brain

Steven Byrnes, Sep 14, 2020, 2:19 PM
157 points, 26 comments, 13 min read, LW link, 1 review

Biological Holism: A New Paradigm?

Waddington, May 9, 2021, 10:42 PM
4 points, 9 comments, 19 min read, LW link

Predictive Processing, Heterosexuality and Delusions of Grandeur

lsusr, Dec 17, 2022, 7:37 AM
37 points, 13 comments, 5 min read, LW link

A Prince, a Pauper, Power, Panama

Alok Singh, Sep 27, 2022, 7:10 AM
10 points, 0 comments, 1 min read, LW link
(alok.github.io)

The circular problem of epistemic irresponsibility

Roman Leventov, Oct 31, 2022, 5:23 PM
5 points, 2 comments, 8 min read, LW link

Multi-agent predictive minds and AI alignment

Jan_Kulveit, Dec 12, 2018, 11:48 PM
63 points, 18 comments, 10 min read, LW link

All the posts I will never write

Alexander Gietelink Oldenziel, Aug 14, 2022, 6:29 PM
54 points, 8 comments, 8 min read, LW link

A future for neuroscience

Mike Johnson, Aug 19, 2018, 11:58 PM
22 points, 12 comments, 19 min read, LW link

Let There be Sound: A Fristonian Meditation on Creativity

jollybard, Jul 4, 2020, 3:33 AM
3 points, 2 comments, 1 min read, LW link
(jollybard.wordpress.com)

Introduction to the Free-Energy Theory of Mind

IAFF-User-177, Dec 24, 2016, 1:15 AM
0 points, 0 comments, 1 min read, LW link
(medium.com)

Free-energy, reinforcement, and utility

IAFF-User-177, Dec 26, 2016, 11:02 PM
0 points, 0 comments, 1 min read, LW link
(medium.com)

Refinement of Active Inference agency ontology

Roman Leventov, Dec 15, 2023, 9:31 AM
16 points, 0 comments, 5 min read, LW link
(arxiv.org)

Critique of some recent philosophy of LLMs’ minds

Roman Leventov, Jan 20, 2023, 12:53 PM
52 points, 8 comments, 20 min read, LW link

Goal alignment without alignment on epistemology, ethics, and science is futile

Roman Leventov, Apr 7, 2023, 8:22 AM
20 points, 2 comments, 2 min read, LW link

Why Simulator AIs want to be Active Inference AIs

Apr 10, 2023, 6:23 PM
93 points, 9 comments, 8 min read, LW link, 1 review

Gaia Network: An Illustrated Primer

Jan 18, 2024, 6:23 PM
3 points, 2 comments, 15 min read, LW link

FixDT

abramdemski, Nov 30, 2023, 9:57 PM
59 points, 15 comments, 14 min read, LW link, 1 review

Apply to the Conceptual Boundaries Workshop for AI Safety

Chipmonk, Nov 27, 2023, 9:04 PM
50 points, 0 comments, 3 min read, LW link

A Brief Introduction to ACI, 3.5: How ACI improves Active Inference

Akira Pyinya, Apr 29, 2023, 11:51 PM
−7 points, 0 comments, 2 min read, LW link

Vipassana Meditation and Active Inference: A Framework for Understanding Suffering and its Cessation

sturb, Mar 21, 2024, 12:32 PM
50 points, 8 comments, 19 min read, LW link

«Boundaries/Membranes» and AI safety compilation

Chipmonk, May 3, 2023, 9:41 PM
57 points, 17 comments, 8 min read, LW link

Aligning an H-JEPA agent via training on the outputs of an LLM-based “exemplary actor”

Roman Leventov, May 29, 2023, 11:08 AM
12 points, 10 comments, 30 min read, LW link

Formalizing «Boundaries» with Markov blankets

Chipmonk, Sep 19, 2023, 9:01 PM
21 points, 20 comments, 3 min read, LW link

A multi-disciplinary view on AI safety research

Roman Leventov, Feb 8, 2023, 4:50 PM
43 points, 4 comments, 26 min read, LW link

A short ‘derivation’ of Watanabe’s Free Energy Formula

Wuschel Schulz, Jan 29, 2024, 11:41 PM
13 points, 6 comments, 7 min read, LW link

Agent membranes and causal distance

Chipmonk, Jan 2, 2024, 10:43 PM
20 points, 3 comments, 3 min read, LW link

A reply to Byrnes on the Free Energy Principle

Roman Leventov, Mar 3, 2023, 1:03 PM
28 points, 16 comments, 14 min read, LW link

Clarifying the free energy principle (with quotes)

Ryo, Oct 29, 2023, 4:03 PM
8 points, 0 comments, 9 min read, LW link