Hey Steven, I’ll answer your question/suggestion below. One upfront request: please let us know if this helps. We’ll write a follow-up post on LW explaining this.
As mentioned in the appendix, most of what we wrote up is generalized from concrete people (not made-up, my IRL company Digital Gaia) trying to build a specific concrete AI thing (software to help farmers and leaders of regeneration projects maximize their positive environmental impact and generate more revenue by being able to transparently validate their impact to donors or carbon credit buyers). We talked extensively to people in the ag, climate and nature industries, and came to the conclusion that the lack of transparent, unbiased impact measurement and validation—i.e., exactly the transaction costs you mention—is the reason why humanity is massively underinvested in conservation and regeneration. There are gazillions of “climate AI” solutions that purport to measure and validate impact, but they are all fundamentally closed and centralized, and thus can’t eliminate those transaction costs. In simple terms, none of the available systems, no matter how much money they spend on data or compute, can give a trustworthy, verifiable, privacy-preserving rationale for either scientific parameters (“why did you assume the soil carbon captured this year in this hectare was X tons?”) or counterfactuals (“why did you recommend planting soybeans with an alfalfa rotation instead of a maize monoculture?”). We built the specific affordances that we did—enabling local decision-support systems to connect to each other, forming a distributed hierarchical causal model that can perform federated partial pooling—as a solution to exactly that problem:
1. The first adopters (farmers) get day-1 benefits (a model-based rationale that is verifiable and privacy-preserving), using models and parameters bootstrapped from the best openly available sources: published scientific literature, anecdotal field reports on the Web, etc.
2. The parameter posteriors contributed by those first adopters drive the flywheel. As more adopters join, network effects kick in and transaction RoI rises: parameter posteriors become both more truthful and easier to verify (posterior estimates from multiple sources mostly corroborate each other, and confidence bands narrow).
3. Any remaining uncertainty, in turn, creates incentives for scientists and domain experts to refine models and run experiments, which benefits all adopters by making their local impact rationales and recommendations more accurate.
4. Because the network is open, its models and parameters can be leveraged in adjacent domains, which then generate their own adjacencies, eventually covering the entire spectrum of science and engineering. For instance, we have indoor farms and greenhouses interested in our solution; they would need to incorporate not only agronomic models but also energy-consumption and efficiency models. That in turn opens the door to industrial and manufacturing use cases, and so on.
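To make the flywheel concrete, here is a minimal sketch of federated pooling of parameter posteriors. All numbers, site data, and the conjugate-Gaussian setup are invented for illustration (this is not Digital Gaia's actual model); it uses simple precision-weighted combination rather than a full hierarchical model, but it shows the two properties claimed above: only posterior summaries (not raw field data) cross the network, and the pooled confidence band narrows as more sites contribute.

```python
# Illustrative sketch: each site updates a shared literature-derived prior
# with its own private observations, then publishes only (mean, variance).
# Numbers and site data are hypothetical.
import numpy as np

def local_posterior(prior_mu, prior_var, obs, obs_var):
    """Conjugate Gaussian update: a site refines the shared prior
    with its own field measurements, which never leave the site."""
    obs = np.asarray(obs, dtype=float)
    n = obs.size
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs.sum() / obs_var)
    return post_mu, post_var

def federated_pool(posteriors):
    """Precision-weighted pooling of published site posteriors.
    (A full hierarchical model would instead partially pool via shared
    hyperparameters; summing prior-inclusive precisions is a
    simplification for illustration.)"""
    precisions = np.array([1.0 / v for _, v in posteriors])
    mus = np.array([m for m, _ in posteriors])
    pooled_var = 1.0 / precisions.sum()
    pooled_mu = pooled_var * (precisions * mus).sum()
    return pooled_mu, pooled_var

# Prior bootstrapped from published literature (made-up values),
# e.g. soil-carbon capture rate in tons C/ha/yr.
prior_mu, prior_var = 2.0, 1.0

sites = [
    [2.3, 2.1, 2.6],        # farm A's private measurements
    [1.8, 2.0],             # farm B's
    [2.4, 2.2, 2.5, 2.3],   # farm C's
]
posteriors = [local_posterior(prior_mu, prior_var, obs, obs_var=0.25)
              for obs in sites]
pooled_mu, pooled_var = federated_pool(posteriors)

# The pooled variance is smaller than any single site's posterior
# variance: confidence bands narrow as adopters join.
assert pooled_var < min(v for _, v in posteriors)
```

The design point is that the network exchanges sufficient statistics of local posteriors rather than raw observations, which is what makes the rationale both verifiable (anyone can recompute the pooling step) and privacy-preserving.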
We validated the first two steps of this theory in a pilot; it worked so well that our pilot users keep ringing us back saying they need us to turn it into production-ready software...
Disclaimer: We did not fully implement or validate two important pieces of the architecture alluded to in the post: free-energy-based economics and trust models. These are not crucial for a small-scale, controlled pilot, but they would be for use at scale in the wild.