Here are some common questions I get, along with answers.
How do individuals make money?
By evaluating arguments, line by line, in a way that makes their evaluations public.
They do this on a social media platform (similar to X), where each element in the feed is a formal proposition and the user has two sliders, “confidence” and “value,” each ranging from 0 to 1.
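To make the mechanics concrete, here is a minimal sketch of the data model this implies. All the names (`Proposition`, `Evaluation`, the field names) are my own placeholders, not a spec:

```python
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    evaluator: str     # public identity of the person rating the line
    confidence: float  # slider 1: how likely the proposition is true, in [0, 1]
    value: float       # slider 2: how much the proposition matters, in [0, 1]

    def __post_init__(self):
        for name in ("confidence", "value"):
            x = getattr(self, name)
            if not 0.0 <= x <= 1.0:
                raise ValueError(f"{name} must be in [0, 1], got {x}")

@dataclass
class Proposition:
    prop_id: str
    text: str  # the formal proposition shown in the feed
    evaluations: list[Evaluation] = field(default_factory=list)  # public, attributed ratings
```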
Why would someone spend their time evaluating arguments?
Because others are willing to pay them.
This can be either (1) an individual trying to persuade another individual of a proposition, (2) a group or organization dedicating capital to a specific set of propositions, or (3) an individual choosing where publicly funded capital ought to be allocated (the primary reason for the “value” metric).
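Continuing the sketch above, here is one simple reading of (3): publicly funded capital split across propositions in proportion to their mean “value” ratings. The proportional rule is my assumption; the text only says the metric exists for this purpose.

```python
def allocate_public_funds(budget: float, props: list[Proposition]) -> dict[str, float]:
    """Split a public budget across propositions in proportion to their
    mean "value" rating (proportionality is an assumption, not a spec)."""
    weights = {}
    for p in props:
        if p.evaluations:
            weights[p.prop_id] = sum(e.value for e in p.evaluations) / len(p.evaluations)
    total = sum(weights.values())
    return {pid: budget * w / total for pid, w in weights.items()} if total else {}
```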
Why would others pay them?
Because pointing out where someone’s publicly documented beliefs contradict one another is a good way to demonstrate to the public that they are obviously wrong.
This is fundamentally the same reason analytic philosophers have other philosophers write out their arguments in the first place: it’s easy to point out where they are wrong.
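As a toy illustration of “pointing out contradictions,” suppose each person’s public ledger maps propositions to confidences, and we have a map from propositions to their negations. Both structures, and the 0.7 threshold, are assumptions made for the sketch:

```python
def find_contradictions(ledger: dict[str, float],
                        negation_of: dict[str, str],
                        threshold: float = 0.7) -> list[tuple[str, str]]:
    """Flag pairs where one person is confident in both a proposition
    and its negation. The threshold is arbitrary."""
    return [(p, q) for p, q in negation_of.items()
            if ledger.get(p, 0.0) >= threshold and ledger.get(q, 0.0) >= threshold]

# Example: both of these can't be held with high confidence at once.
ledger = {"Policy X lowers costs": 0.9, "Policy X raises costs": 0.8}
print(find_contradictions(ledger, {"Policy X lowers costs": "Policy X raises costs"}))
```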
Why would we want to pay someone for publicly making themselves look obviously wrong?
Because we want them to change their beliefs.
Imagine here anyone whose mind you’ve ever wanted to change.
Why won’t people hold many contradictory beliefs?
Because they don’t want to be obviously wrong.
Think politicians.
How much money goes to which arguments?
The market decides.
What makes this system different from other communication tools?
It can identify the various cruxes we never seem to reach and direct discourse toward them.
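A cheap stand-in for “finding the crux”: among propositions two people have both rated, take the one where their confidences diverge most. Real cruxes are more structured than a maximum over gaps; this is only a sketch.

```python
def find_crux(mine: dict[str, float], theirs: dict[str, float]) -> str | None:
    """Return the shared proposition with the largest confidence gap."""
    shared = set(mine) & set(theirs)
    return max(shared, key=lambda p: abs(mine[p] - theirs[p]), default=None)
```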
There are a number of influential intellectual leaders I’ve wanted to speak with for many years, because I want to change their beliefs. Our social communication system is broken. If I had access to a ledger they rigorously used to document their accepted beliefs and sound lines of reasoning, I wouldn’t need their presence to have an effective argument with them. I could simply insert the “missing proposition” into such a system and ‘buy them a wager’ on the truth they’re not looking at.
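In code, ‘buying a wager’ might be as simple as attaching a stake to a proposition absent from the target’s ledger. Everything here (the `Wager` record, the rule that the proposition must be missing, the `dataclass` reuse from the first sketch) is my guess at the mechanics:

```python
@dataclass
class Wager:
    target: str       # whose ledger the proposition is inserted into
    proposition: str  # the "missing proposition"
    stake: float      # capital offered for a public, line-by-line evaluation

def buy_wager(target: str, ledger: dict[str, float],
              proposition: str, stake: float) -> Wager | None:
    """Offer a stake on a proposition the target has never evaluated;
    return None if it is already in their ledger."""
    if proposition in ledger:
        return None
    return Wager(target, proposition, stake)
```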
This project is really important for a different reason, though.
It can also serve as an interpretable, aligned foundation of truth for a symbolic AI, one that could scale an alignment attractor faster than a capabilities one.
I think this could be the one thing capable of pulling the fire alarm Eliezer has been talking about for twenty years. That is, I think society would convey big, important ideas to each other better (the lesson we were supposed to learn from “Don’t Look Up”) if we had a system that gave individuals control over the general attention mechanism of society.
There is a ton more to say.
This is going to need to be decentralized and humanity-verified, and to meet a couple of other security requirements as well. It will take a huge collaboration between the AI sector and crypto. I can’t afford a ticket to any of the fancy conferences where they are building crypto cities or thinking formally about collective intelligence like this, let alone trying to merge it into a project like Lenat’s or Hillis’s. If anyone knows someone in the AI safety grant community who would be willing to look at what I’m not willing to put online, please share my information with them.