The standard Solomonoff prior discounts hypotheses by 2^(-l), where l is the number of bits required to describe them. However, we can easily imagine a whole class of priors, each with a different discount rate. For instance, one could discount by (1/Z)·2^(-2l), where Z is a normalizing factor to get the probabilities to add up to one. Why do we put special emphasis on this rate of discounting rather than any other?
I think we can justify this discount rate with the principle of maximum entropy: distributions with steeper asymptotic discounting rates have lower entropy than distributions with shallower asymptotic discounting rates, and any distribution with a shallower discounting rate than 2^(-l) would (probably) diverge and therefore fail to be a valid probability distribution.
Are there arguments/situations justifying steeper discounting rates?
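A toy numerical sketch of the entropy argument above (the model here, with 2^l hypotheses per description length l and discount parameter c, is a deliberate simplification of the real prefix-free setting):

```python
import math

def length_class_masses(c, max_len=60):
    # Toy model: 2^l distinct hypotheses of description length l,
    # each weighted 2^(-c*l); returns the total mass of each length class.
    return [2**l * 2**(-c * l) for l in range(1, max_len + 1)]

def entropy_over_classes(c, max_len=60):
    masses = length_class_masses(c, max_len)
    Z = sum(masses)                       # normalizing factor
    probs = [m / Z for m in masses]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Steeper discounting concentrates mass on short hypotheses -> lower entropy:
print(entropy_over_classes(2.0) < entropy_over_classes(1.5))  # True

# At c = 1, every length class has mass 1, so the naive total grows without
# bound as max_len grows; the actual Solomonoff prior avoids this by summing
# over prefix-free programs (Kraft's inequality bounds the sum by 1).
print(sum(length_class_masses(1.0, 100)))  # 100.0
```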
For some context, marine cloud brightening (MCB) is a geoengineering proposal to disperse aerosolized salt-water particles into the lower atmosphere using ships. MCB appears to be one of the more promising geoengineering proposals, with the more optimistic estimates placing the cost of reversing climate change at ~$200 million per year. However, one of the main technical issues with the proposal is how to actually lift and aerosolize the water. Current proposals call for wind-powered ships, but if wind can't supply enough power, nuclear power could be used. Estimates of the power needed to aerosolize the water are as low as 30 MW, which compares favourably to the estimated 60 MW from a Russian nuclear icebreaker. Furthermore, the extra 200-300 MW of waste heat from the reactor could be used to heat the water before spraying it: when hot aerosolized water comes into contact with the air, it expands and floats up, allowing the salt to rise in the same way that current marine-ship exhaust rises into the atmosphere.
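The figures above can be collected into a quick back-of-envelope power budget (all inputs are the rough estimates quoted in the paragraph, nothing more):

```python
# Back-of-envelope power budget for a nuclear-powered MCB ship, using only
# the rough figures quoted above.
aerosolization_need_mw = 30          # low-end estimate for the spray system
icebreaker_shaft_mw    = 60          # estimated output of a Russian nuclear icebreaker
waste_heat_mw          = (200, 300)  # reactor heat not converted to shaft power

margin_mw = icebreaker_shaft_mw - aerosolization_need_mw
print(f"shaft-power margin: {margin_mw} MW")  # 30 MW to spare

# Thermal waste heat available for preheating the spray water, relative to
# the power spent on spraying:
low, high = waste_heat_mw
print(f"preheat ratio: {low / aerosolization_need_mw:.1f}-{high / aerosolization_need_mw:.1f}x")
```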
As of last night, yes: click the Gear at the top-right of the Latest Posts page. (we don’t yet have it working for All Posts or Recent Discussion, but will try soon)
I think that, with the single exception of a TIL post a week ago, we don't have much visibility or impact on the broader social-media world.
I wonder if it would be a good idea for this site to have the ability to temporarily lock out users without a login in the event that something on this site goes viral. Did we see a spike in new users signing up around a week ago?
It’s hard for me to keep track of new users because we’re also having a wave of spam, but my impression is that new-user rates are similar to what they typically are, but that activity in general is higher than usual (roughly matching the peaks that it hit in December).
Due to the coronavirus, masks and disinfectants are starting to run out in many locations. I'm still working on the mask situation, but it might be possible to make your own hand sanitizer by mixing isopropyl alcohol or ethanol with glycerol. The individual ingredients might be available even if hand sanitizer isn't. From what I gather, you want to aim for at least 90% alcohol; higher is better.
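A quick dilution calculation may help when mixing. This sketch assumes volumes mix additively, which is only approximately true for alcohol-water mixtures, and the 95:5 ratio is just an example:

```python
# Mixing sketch: final alcohol fraction by volume, assuming volumes are
# additive (only approximately true in practice).
def final_alcohol_fraction(alcohol_volume, alcohol_purity, other_volume):
    return (alcohol_volume * alcohol_purity) / (alcohol_volume + other_volume)

# Example: 95 parts of 99% isopropyl alcohol plus 5 parts glycerol lands
# at about 94% alcohol, above the 90% target mentioned above.
print(final_alcohol_fraction(95, 0.99, 5))
```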
Inspired by the recent post on impact measures, I thought of an example illustrating the subjective nature of impact.
Consider taking the action of simultaneously collapsing all the stars except our sun into black holes. (Suppose you can somehow do this without generating supernovas.)
To me, this seems like a highly impactful event, potentially vastly curtailing the future potential of humanity.
But to an 11th-century peasant, all this would mean is that the stars in the night sky would slowly go out over the course of millennia, which would have very little impact on the peasant's life.
This is so spot-on. The peasant wouldn’t feel impacted, even if they understood and were aware of this happening. Even if the peasant would like humanity to have a cosmic endowment, they might not believe we’d be able to take advantage of it. Could you imagine an 11th century peasant thinking of Dyson Spheres?
(This is the subject of the subsequent post, and will continue to be discussed in a few more posts to follow)
That is certainly a vivid mental image!
I’ve been thinking about arxiv-sanity lately and I think it would be cool to have a sort of “LW papers” where we share papers that are relevant to the topics discussed on this website. I know that’s what link posts are supposed to be for, but I don’t want to clutter up the main feed. Many of the papers I want to share are related to the topics we discuss, but I don’t think they’re worthy of their own posts.
I might start linking papers in my short-form feed.
FWIW I think link-posts are just fine for this, although I'm not sure I understand exactly what your goal is.
I’ve also been thinking about this. I think link-posts are a good first step, and maybe we should make more link-posts for papers we find interesting. But one issue I have with LW is that it’s pretty blog-like (similar to Reddit and Hacker News), so it could be difficult for old papers to accumulate reviews and comments over a long period as people read them.
What should be the optimal initial Karma for a post/comment?
By default on Reddit and LessWrong, posts start with 1 karma, coming from the user upvoting themselves. On LessWrong right now, it appears that this number can be set higher by strongly upvoting yourself. But this need not be the case: posts and comments could start with either positive or negative karma. If posts start with larger positive karma, this might incentivize people to post/comment more often. Likewise, if posts or comments start off with negative karma, this acts as a disincentive to post.
A related idea would be to create a special board where posts/comments start off with large negative karma, but each upvote from users would give the poster more karma than usual. As a result, people would only post there if they expected their post to “break-even” in terms of Karma.
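The break-even intuition can be written down as a toy decision rule; the initial-karma and multiplier numbers here are made up for illustration:

```python
# Toy model of the "negative initial karma, amplified upvotes" board.
# All numbers are made up for illustration.
def expected_karma(initial, expected_upvotes, multiplier):
    return initial + expected_upvotes * multiplier

def should_post(initial, expected_upvotes, multiplier):
    # Post only if you expect to at least break even on karma.
    return expected_karma(initial, expected_upvotes, multiplier) >= 0

print(should_post(initial=-20, expected_upvotes=5, multiplier=3))   # False
print(should_post(initial=-20, expected_upvotes=10, multiplier=3))  # True
```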
Actually, on LessWrong, I’m fairly sure the karma value of a particular user’s regular vote depends on the user’s existing karma score. Users with a decent karma total usually have a default vote value of 2 karma rather than 1, so each comment they post will have 2 karma to start. Users with very high karma totals seem to have a vote that’s worth 3 karma by default. Something similar happens with strong votes, though I’m not sure what kind of math is used there.
Aside: I’ve sometimes thought that users should be allowed to pick a value for their vote that’s anywhere between 1 and the value of their strong upvote, instead of being limited to either a regular vote (2 karma in my case) or a strong vote (6 karma). In my case, I literally can’t give people karma values of 1, 3, 4, or 5, which could be useful for more granular valuations.
This is tangential, but I think I understand why you can’t pick a value for your vote. The idea behind giving high-karma users stronger votes was that a high-karma user having the same level of approval for a post is stronger evidence of the post’s quality. Something that Alicorn likes-but-not-enough-to-strong-upvote has a better shot at being good than something that new_user_420 likes-but-not-enough-to-strong-upvote.
This is indeed one of the major considerations.
This might just be me, but I really hate the floating action button on LW. It’s an eyesore on what is otherwise a very clean website. The floating action button was designed to “Represent the primary action on a screen” and draw the user’s attention to itself. It does a great job at it, but since “ask us anything, or share your feedback” is not the primary thing you’d want to do, it’s distracting.
Not only does it do that, but it also gives the impression that this is another cobbled-together Meteor app, and therefore my brain instantly associates it with all the other crappy Meteor apps.
The other thing is that when you click on it, it doesn’t fit in with the rest of the site theme. LW has this great black-grey-green color scheme, but if you click on the FAB, you are greeted with a yellow waving hand, and when you close it, you get this ugly red (1) in the corner of your screen.
It’s also kinda pointless, since the devs and mods on this website are all very responsive and seem to be aware of everything that gets posted.
I could understand it at the start of LW 2.0 when everything was still on fire, but does anyone use it now?
/rant
People in fact use it quite a bit. That said, you can hide it in your user-settings.
Also, double-checking: by default, does it appear as anything other than a subdued circle, like this?
(Occasionally we get reports of it being much-more-obtrusive, by showing more messages or something immediately. If it’s doing that for you, apologies. That is a bug)
This solves my problem, thank you. Also it does look just like the screenshot, no problems other than what I brought up when you click on it.
I turned it off long ago, and had forgotten it exists.
Here’s an idea. Buy a container ship, and retrofit it with amphibious vehicles, shipping-container houses, and associated utility and safety infrastructure. Then take it to any major coastal city and rent the shipping-container houses at a fraction of the local rent.
You could also convert some of the space into office space and stores.
Assuming people can live in a single 40′ shipping container, the price per person should be minimal. You can buy some pretty big old ships for less than individual houses and we can probably upper bound the cost per unit by looking at cruise ship berth prices.
The best part? You can do all your construction where wages are cheap and ship the apartment anywhere it’s needed.
Note that some rationalists tried something like this. They later concluded it wasn’t worth the effort – boats are hard to work with – and switched to living in trucks (which didn’t solve the scale problem but solved their own personal problems).
Problems with the idea include:
It’s pretty hard to get permits for large-berth ships (and not that easy for small berth ships either)
Ships have all kinds of hidden costs you don’t notice at first
There are huge upfront costs to figuring out how to go about this
It seems likely that if you attempt to do this at scale, you’ll just trigger some kind of government response that renders your operation not-much-cheaper, so you pay the upfront costs but not reap the scale benefits
(The Rationalist Fleet plan was to try working with a tugboat first, to get a basic sense of how operating ships works before trying something larger, which would come with weirder problems. The expectation was that by the time they got to the larger ship they’d hire a captain and crew, but it was still useful to have a rough sense of the set of problems they’d face. The tugboat was not large enough to really get the economy of scale that a larger ship would have. But it is still my impression that the whole thing turned out to be more trouble than it was worth.)
I have memories from that time of other Silicon Valley startups attempting something in this space, and it didn’t appear to work out for them either, although I’m not sure how to find them now.
I think for it to work you’d definitely need to do it on a larger scale. When you go on a cruise, you pay money to abstract away all of the upkeep associated with operating a boat.
I read the post and did some more research. The closest analog to what I’m thinking looks to be Google’s barge project, which encountered some regulatory barriers during the construction phase. However, the next closest thing is this startup, and they seem to be the real deal. With regard to the points you brought up:
> It’s pretty hard to get permits for large-berth ships (and not that easy for small berth ships either)

Correct me if I’m wrong, but AFAICT most regulation is about parking a boat at a berth. I don’t think the permits are nearly as strict if you are out in the bay, and I don’t think coastal housing can scale. That’s why I mentioned the amphibious vehicles, or, more realistically, a ferry to move people to and from the housing ship.
> Ships have all kinds of hidden costs you don’t notice at first

> There are huge upfront costs to figuring out how to go about this

Yeah, there’s no getting around that. It’s the kind of thing you contract out to a maritime engineering firm: $10 million for the ship, maybe $5 million for the housing units, maybe another $5 million for an engineering firm to design the thing to comply with relevant regulations. Throw in another $5 million to assemble it, and then who knows how much to cut through all the red tape. However, rents keep rising significantly faster than inflation, and condos in SF seem to be on the order of ~$1 million per bedroom. I think you could easily recoup your costs after deploying a single 60-120 bedroom ship.
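A quick sanity check of the recoup-the-costs claim, using only the (guessed) figures above:

```python
# Rough capex-per-bedroom check, using only the guessed figures above.
capex = 10e6 + 5e6 + 5e6 + 5e6   # ship + housing units + engineering + assembly
for bedrooms in (60, 120):
    print(f"{bedrooms} bedrooms: ${capex / bedrooms / 1e6:.2f}M per bedroom")
# Both cases come in well under the ~$1M per bedroom quoted for SF condos,
# which is what the recoup-your-costs claim rests on.
```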
> It seems likely that if you attempt to do this at scale, you’ll just trigger some kind of government response that renders your operation not-much-cheaper, so you pay the upfront costs but not reap the scale benefits

I think this is the big one: rent-seekers are going to do everything they can to stop you if you’re a credible threat. I really don’t know how politics works at that level; I’d imagine you’d need to appease the right people and make PR moves that make your opponents look like monsters. Then again, if you go with the cargo-ship model, it’s not like you’ve lost your capital investment. You can just pack up and move to a different city anywhere in the world, or spread a fleet around multiple cities across the world so as to avoid triggering said response while building up credibility/power.
Nod. See also the entire seasteading movement. (I’m not sure how that turned out, but I haven’t heard of anything impressive coming out of it)
With the ongoing drama that is currently taking place, I’m worried that the rationalist community will find itself inadvertently caught up in the culture war. This might cause a large influx of new users who are more interested in debating politics than in anything else on LW.
It might be a good idea to put a temporary moratorium/barriers on new signups to the site in the event that things become particularly heated.
Something in this space seems pretty plausible to me. We are always monitoring contributions from new users, so I think we would notice relatively quickly, and I agree that as soon as we see a suspicious uptick we might want to limit contributions, but I do think I would want to wait until we see the initial signs.
I wonder whether it would be a good idea to set a “controversial” (e.g. culture war) flag to some posts, and simply not allow new users to comment on those posts.
With some explanation, like: “These posts do not represent the intended content of Less Wrong, which is rationality, artificial intelligence, effective altruism, et cetera.”
To reduce the administrative work, we could assume by default that all articles are noncontroversial, and only set the flag on those where problems happen or where we explicitly expect them. Maybe some keywords in the content, such as “Trump” or “social justice”, could trigger a dialog asking whether the author wants to flag their article as “controversial (inaccessible to freshly registered users)”.
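A minimal sketch of that keyword heuristic might look like this; the keyword list and function name are illustrative, not an actual LessWrong feature:

```python
# Sketch of the keyword heuristic; the keyword list and function name are
# illustrative, not an actual LessWrong feature.
CONTROVERSIAL_KEYWORDS = ("trump", "social justice")

def suggest_controversial_flag(post_text):
    """True if the author should see the 'flag as controversial?' dialog."""
    text = post_text.lower()
    return any(keyword in text for keyword in CONTROVERSIAL_KEYWORDS)

print(suggest_controversial_flag("Thoughts on social justice discourse"))  # True
print(suggest_controversial_flag("A post about Bayesian updating"))        # False
```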
Yup. This is already something we keep an eye out for with new users (I’m less likely to approve a new user if they seem primarily interested in arguing politics), and I agree it makes sense to be especially on the lookout for it right now.
What do you mean “approve a new user”? AFAIK, registration is totally free.
When new users post content, moderators check whether they’re spammers and whether they seem to meet the basic quality bar we want for site users. (In some cases we block accounts; in other cases we send them a message noting that their content isn’t generally up to the standards of the site.)
“Synchronized moderating” could be an olympic sport I guess :P (We both wrote a reply with functionally the same content at the same time)
Meta-philosophy hypothesis: Philosophy is the process of reifying fuzzy concepts that humans use. By “fuzzy concepts” I mean things where we can say “I know it when I see it” but might not be able to describe what “it” is.
Examples that I believe support the hypothesis:
This shortform is about the philosophy of “philosophy” and this hypothesis is an attempt at an explanation of what we mean by “philosophy”.
In epistemology, Bayesian epistemology is a hypothesis that explains the process of learning.
In ethics, an ethical theory attempts to make explicit our moral intuitions.
A clear explanation of consciousness and qualia would be considered philosophical progress.
> “I know it when I see it.” but we might not be able to describe what “it” is.
Hard-to-generate, easy-to-verify functions. Related: Gendlin’s ‘sharp’ blank, or a blank that knows what it is looking for, e.g. tip-of-the-tongue phenomena, or forgetting what you’re looking for and then remembering when you see it.
I notice that there’s a fair bit of “thread necromancy” on LessWrong. I don’t think it’s a bad thing, but I think it would be cool to have an option to filter comments based on the time gap between when the post was made and when the comment was made. That way it’s easier to see what the discussion was like around the time the post was made.
On a related note, does LessWrong record when upvotes are made? It would also be cool to have a “time-machine” to see how up-votes and down-votes in a thread evolve over time. Could be good for analysing the behaviour of threads in the short term, and a way to see how community norms change in the long term.
You can order the comments by oldest first, which gives you at least some of that.
We do also record when all the votes are cast, so a time machine is possible, though querying and aggregating all the votes for a large thread might be too much for a browser client.
I’ve been thinking about how one could get continuous physics from a discrete process. Suppose you had a differential equation and wanted to make a discrete approximation to it. Furthermore, suppose you had a discrete algorithm for simulating this differential equation that takes a parameter, say dt, which controls the resolution of the simulation. As dt tends toward zero, the dynamics of the simulated diff eq will tend towards the dynamics of the real diff eq.
Now suppose we have a Turing machine that implements this algorithm as a subroutine. More precisely, the Turing machine runs simulations of the differential equation at resolutions of 1, then 1⁄2, then 1⁄3, and so on.
Finally, suppose there were a conscious observer in this simulation: at what resolution should they expect their physics to be simulated? Depending on one’s notion of anthropics, one could argue that for any given resolution there are only finitely many observers in lower-resolution simulations but infinitely many in higher-resolution ones. Consequently, the observer should expect to live in a universe with continuous physics.
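To make this concrete, here’s a minimal sketch of the subroutine I have in mind, using forward Euler on the hypothetical example equation dx/dt = -x (any well-behaved ODE would do; the function name and the choice of equation are mine, not part of any standard scheme):

```python
import math

def simulate(dt, t_end=1.0, x0=1.0):
    """Forward-Euler approximation of dx/dt = -x, run to t_end.

    dt is the resolution parameter: smaller dt means the discrete
    dynamics track the continuous solution more closely."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-x)  # one Euler step
        t += dt
    return x

# The Turing machine's schedule: resolutions 1, 1/2, 1/3, ...
approximations = [simulate(1.0 / n) for n in range(1, 100)]
# As n grows, these approach the continuous answer exp(-1).
exact = math.exp(-1.0)
```

Each successive run is a strictly better approximation, which is the sense in which “almost all” of the runs (all but finitely many) are arbitrarily close to continuous physics.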
Related: https://en.wikipedia.org/wiki/Super-recursive_algorithm#Schmidhuber’s_generalized_Turing_machines
The standard Solomonoff prior discounts hypotheses by 2^(-l), where l is the number of bits required to describe them. However, we can easily imagine a whole class of priors, each with a different discount rate. For instance, one could discount by (1/Z)·2^(-2l), where Z is a normalizing factor that makes the probabilities sum to one. Why do we put special emphasis on this rate of discounting rather than any other?
I think we can justify this discount rate with the principle of maximum entropy: distributions with steeper asymptotic discounting rates have lower entropy than distributions with shallower ones, and any distribution with a shallower discounting rate than 2^(-l) would (probably) diverge and therefore fail to be a valid probability distribution.
Are there arguments/situations justifying steeper discounting rates?
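The divergence point can be illustrated numerically. A rough sketch, under the simplifying (and non-standard) assumption that every binary string of length l counts as a hypothesis, so there are 2^l of them at each length — the real Solomonoff prior instead restricts to a prefix-free program set, where Kraft’s inequality bounds the total mass by 1:

```python
def total_mass(c, max_len=40):
    """Partial sum of hypothesis mass up to max_len, discounting by 2^(-c*l).

    With 2^l strings of length l, each term is 2^l * 2^(-c*l) = 2^((1-c)*l),
    so the series converges iff c > 1 under this naive counting."""
    return sum(2**l * 2**(-c * l) for l in range(1, max_len + 1))

# c = 2 (the steeper discount): partial sums approach 1.
steep = total_mass(2)
# c = 1 (the standard 2^(-l)): each term is exactly 1, so partial sums
# grow without bound here; prefix-free coding is what rescues this case.
shallow = total_mass(1)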
I’ve been thinking about 2 things lately:
-Nuclear marine propulsion
-Marine cloud brightening
For some context, marine cloud brightening (MCB) is a geoengineering proposal to disperse aerosolized salt-water particles into the lower atmosphere using ships. MCB appears to be one of the more promising geoengineering proposals, with the more optimistic estimates placing the cost of reversing climate change at ~$200 million per year. However, one of the main technical issues is how to actually lift and aerosolize the water. Current designs call for wind-powered ships; if wind cannot supply enough power, nuclear power could be used instead. Estimates of the power needed to aerosolize the water run as low as 30 MW, which compares favourably with the roughly 60 MW from a Russian nuclear icebreaker. Furthermore, the extra 200-300 MW of waste heat from the reactor could be used to heat the water before spraying it: when hot aerosolized water comes into contact with the air, it expands and floats upward, allowing the salt to rise the same way current marine ship exhaust rises into the atmosphere.
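A back-of-envelope check on those figures. The power numbers come from the estimates above; the 60 K temperature rise for pre-heating is a hypothetical assumption of mine, not from any MCB proposal:

```python
required_spray_power_w = 30e6   # low-end estimate to aerosolize the water
icebreaker_power_w = 60e6       # ~output of a Russian nuclear icebreaker
waste_heat_w = 250e6            # midpoint of the 200-300 MW waste heat

# Electrical headroom after powering the sprayers:
surplus_w = icebreaker_power_w - required_spray_power_w  # 30 MW spare

# How much seawater could the waste heat warm before spraying?
c_water = 4186.0   # specific heat of water, J/(kg*K)
delta_t = 60.0     # assumed temperature rise, K (my assumption)
mass_flow_kg_s = waste_heat_w / (c_water * delta_t)  # ~1000 kg/s
```

So on these rough numbers a single icebreaker-class reactor covers the low-end spray-power estimate twice over, and its waste heat could warm on the order of a tonne of water per second.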
Is there a way to filter out the coronavirus posts? They’re really starting to clog up the front page.
As of last night, yes: click the Gear at the top-right of the Latest Posts page. (we don’t yet have it working for All Posts or Recent Discussion, but will try soon)
The following links show LW’s presence on multiple social media websites.
Reddit: https://www.reddit.com/domain/lesswrong.com/top/?sort=top&t=all
Hacker News: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=lesswrong&sort=byPopularity&type=story
Twitter: https://twitter.com/search?q=lesswrong&src=typed_query
I think that, with the single exception of a TIL post a week ago, we don’t have much visibility or impact on the broader social-media world.
I wonder if it would be a good idea for this site to be able to temporarily lock out logged-out users in the event that something here goes viral. Did we see a spike in new users signing up around a week ago?
You’re definitely missing stuff with the Hacker News search; it looks like once every other month or so there’s a big hit like this or this.
You’re right, I’ve fixed the query, but I don’t think it changes the conclusion much.
It’s hard for me to keep track of new users because we’re also having a wave of spam, but my impression is that new-user rates are similar to what they typically are, but that activity in general is higher than usual (roughly matching the peaks that it hit in December).
Due to the coronavirus, masks and disinfectants are starting to run out in many locations. I’m still working on the mask situation, but it might be possible to make your own hand sanitizer by mixing isopropyl alcohol or ethanol with glycerol. The individual ingredients might be available even when hand sanitizer isn’t. From what I gather, you want to aim for at least 90% alcohol; higher is better.
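The mixing arithmetic is simple enough to sketch. The 99% purity and the 19:1 ratio below are hypothetical example numbers for illustration, not a vetted recipe:

```python
def final_alcohol_fraction(alcohol_vol, alcohol_purity, glycerol_vol):
    """Volume fraction of alcohol in the finished mix.

    Assumes volumes are additive, which is approximately true here."""
    return (alcohol_vol * alcohol_purity) / (alcohol_vol + glycerol_vol)

# e.g. 19 parts of 99% isopropyl alcohol to 1 part glycerol:
mix = final_alcohol_fraction(19, 0.99, 1)  # ~0.94, above the 90% target
```

The point is just that the diluent drags the concentration down fast, so if you start from 91% alcohol rather than 99%, even a small amount of glycerol puts you under the 90% target.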