if the proof is to be formal, it needs to plug into the mathematical formalizations one would use to do the social science form of this. and if you want to deal with certain kinds of social systems, it’s more complicated than that. I’m also an experienced generalist nerd with a lot to say; unfortunately, even if we’re real good at babbling, they’re right to look at people funny even if they have the systems programming experience or what have you. seems like you’re doing stuff that doesn’t plug into reality very well. can you provide a demonstration of it as a multi-agent simulation, for example? here are various levels of shareable multi-agent sim tools to demonstrate whatever effect you’d like; then the only proof you need to provide, circumstantially, is the applicability of the simulation, which means that if we find a way the metaphor breaks, we can go back to the drawing board for the simulated example. in software and in life in general, tests only give us coverage of the example we try, but at least we can assert that the test passes.
https://hash.ai/platform/engine and https://hash.ai/models?sort=popularity (looks really sleek and modern, can do complicated cellular automata or smooth movement stuff)
https://insightmaker.com/explore (designed for sharing and has existing simulations of … varying quality)
https://simlin.com/
https://www.collimator.ai/ ← fancy
https://ncase.me/loopy/ ← cheesy demo one
https://github.com/evoplex/evoplex ← cellular automata focused
https://neuralmmo.github.io/ ← neural network training focused, designed for “love in a simbox” style usage, from openai. there are a few more from openai and deepmind I skipped mentioning, browse their githubs if curious.
honorable mention, looks kinda stuffy: https://helipad.dev/
this one has built-in causal inference tools, which is cool I guess? https://github.com/zykls/whynot
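to make the "tests only cover the example we try" point concrete, here is a minimal sketch (my illustration only; the dynamics, names, and numbers are invented) of a tiny multi-agent demo plus one asserted property:

```python
# a minimal sketch (illustrative only; dynamics and names invented here) of a
# tiny shareable multi-agent demo plus one asserted property ("the test")
import random

def step(caps, resource=1.0, reinvest=0.1):
    """one round: agents claim the resource in proportion to capability^2
    and reinvest their winnings into more capability"""
    q = sum(c * c for c in caps)
    return [c + reinvest * resource * c * c / q for c in caps]

def top_share(caps):
    """share of total capability held by the strongest agent"""
    return max(caps) / sum(caps)

random.seed(0)
caps = [random.uniform(0.9, 1.1) for _ in range(10)]
initial = top_share(caps)
for _ in range(200):
    caps = step(caps)

# the assertion holds for this setup: an initial capability edge compounds.
# it covers exactly this toy example and nothing more.
assert top_share(caps) > initial
```

passing the assert tells us about this particular setup and nothing beyond it, which is exactly the limitation the comment points at.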
> it needs to plug into the mathematical formalizations one would use to do the social science form of this.
Could you clarify what you mean by a “social science form” of a mathematical formalisation? I’m not familiar with this.
> they’re right to look at people funny even if they have the systems programming experience or what have you.
It was expected and understandable that people would look funny at writings from a multi-skilled researcher presenting new ideas they were not yet familiar with. Let’s move on from first impressions.
> simulation
If by “simulation” we can refer to a model that is computed to estimate a factor on which further logical deduction steps are based, that would connect up with Forrest’s work (though it is not really about multi-agent simulation).
Based on what I learned from Forrest, we need to distinguish the ‘estimation’ factors from the ‘logical entailment’ factors. The notion of “proof” applies only to that which can be logically entailed; everything else is assessment. In each case, we need to be sure we are doing the modelling correctly.
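As a rough sketch of that separation (my framing, not Forrest’s formalism; the “drift rate”, threshold, and numbers are invented), the estimate is computed empirically, while the only thing treated as provable is the conditional that follows from the stated premise:

```python
# a rough sketch of the separation (my framing, not Forrest's): the estimate is
# an assessment; only the conditional step is a matter of entailment
import random

TOLERANCE = 0.01  # hypothetical threshold named in the deductive premise

def estimate_drift_rate(trials=10_000, true_rate=0.02, seed=1):
    """ASSESSMENT: a Monte-Carlo estimate of an empirical quantity.
    It can be more or less accurate, but it is never 'proved'."""
    rng = random.Random(seed)
    return sum(rng.random() < true_rate for _ in range(trials)) / trials

def conclusion_entailed(drift_rate):
    """ENTAILMENT: *given* the premise 'drift_rate > TOLERANCE', the conclusion
    follows by the stated rule; what is provable is the conditional only."""
    return drift_rate > TOLERANCE

estimated = estimate_drift_rate()
print(f"estimated drift rate (assessment): {estimated:.4f}")
print(f"premise holds, so conclusion follows (entailment): {conclusion_entailed(estimated)}")
```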
For example, it could be argued that step ‘b’ below is about logical entailment, though according to Forrest most would argue that it is an assessment. Given that it depends on both physics and logic (via comp-sci modelling), it depends on how one regards the notion of ‘observation’, and whether that observation is empirical or analytic.
- b; If AGI/APS is permitted to continue to exist, then it will inevitably, inexorably, implement and manifest certain convergent behaviors.
- c; that among these inherent convergent behaviors will be at least all of:
  - 1; to/towards self-existence continuance promotion.
  - 2; to/towards capability building capability, an increase-seeking capability, a capability of seeking increase, capability/power/influence increase, etc.
  - 3; to/towards shifting ambient environmental conditions/context to/towards favoring the production of (variants of, increases of) its artificial substrate matrix.
Note again: the above is not formal reasoning. It is a super-short description of what two formal reasoning steps would cover.
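As a deliberately crude illustration of what a multi-agent-style demo of point c.1 above could look like (a toy selection effect with invented parameters; it only makes the wording concrete in a trivial setting and demonstrates nothing about AGI):

```python
# a toy caricature with invented numbers: variants that act to continue existing
# are what remains after selection. this only makes the wording of c.1 concrete
# in a trivial setting; it demonstrates nothing about AGI.
import random

def run(population=1000, rounds=50, hazard=0.2, seed=0):
    rng = random.Random(seed)
    # True = the variant spends effort on self-continuation
    pop = [rng.random() < 0.5 for _ in range(population)]
    for _ in range(rounds):
        survivors = [p for p in pop if rng.random() > hazard * (0.25 if p else 1.0)]
        if not survivors:
            return 0.0
        # survivors repopulate back up to the original size
        pop = [rng.choice(survivors) for _ in range(population)]
    return sum(pop) / len(pop)

print("fraction of self-continuing variants after selection:", run())
```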
but if we can take the type signature from a simulation, then we can attempt to do formal reasoning about its possibility space given the concrete example. if we don’t have precise types, we can’t reason through these systems. ‘b’ seems to me to be a falsifiable claim that cannot be determined true or false from pure rational computation; it requires active investigation. we have evidence of it, but that evidence needs to be cited.
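one way to read "take the type signature from a simulation" (my interpretation, with invented type names) is to pin down the state and transition types precisely enough that a claim like ‘b’ becomes a statement about a reachable set of states:

```python
# one reading of "take the type signature from a simulation" (illustrative,
# with invented type names): pin down state and transition types precisely,
# so claims like 'b' become claims about which world-states are reachable
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple

@dataclass(frozen=True)
class AgentState:
    capability: int      # discretised so the state space stays enumerable
    alive: bool

@dataclass(frozen=True)
class WorldState:
    agents: Tuple[AgentState, ...]
    free_resources: int

# the transition relation is the object any formal reasoning would target
Transition = Callable[[WorldState], FrozenSet[WorldState]]

def reachable(start: WorldState, step: Transition, depth: int) -> FrozenSet[WorldState]:
    """enumerate every state reachable within `depth` steps: the 'possibility
    space' one could then try to reason about, given these concrete types"""
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {s2 for s in frontier for s2 in step(s)} - seen
        seen |= frontier
    return frozenset(seen)
```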
How does your approach compare with https://www.metaethical.ai/?