“Neural Networks for Modeling Source Code Edits” https://arxiv.org/abs/1904.02818
Seems like a fascinating line of inquiry, though possibly problematic from the perspective of unaligned AI self-improvement.
Good point. I really could have done a better job of getting my point across.
Ideas that might pan out are generally plausible now given the evidence available, even if they cannot yet be proved, whereas bogus, crank ideas generally ignore what we know in order to claim something contradictory.
I think this is an important point to recognize. If an idea agrees with observation but makes predictions that can’t currently be tested, it should be given more consideration than an idea which contradicts existing observations.
To start with, the idea as it’s expressed is wrong. The objects in the sky that we call planets are proper planets and not stars or moons.
I disagree, but perhaps I was not clear enough in my description of the idea. In particular, I was not using the modern definitions of sun, star, moon, and planet. The ancient definition of “planet” was an object that wanders across the sky. Also, by “moon” I meant a body which shines by reflected light rather than producing light of its own, as the sun does.
I do like your suggestion to look at mathematics for how to deal with statements whose truth is unknown.
That’s an interesting point of view. It makes me wonder if there’s a useful definition of consciousness in the same vein as the “negative entropy” definition of life (meaning something is alive if it reverses entropy in its local environment).
An advantage I have found of knowing my IQ is that I can consider the normal distribution of IQ scores and determine roughly how many people are smarter than I am in a given population (such as the city I live in, or the surrounding metropolitan area). In particular, it helps me to understand why I’m typically the smartest person in any particular group I participate in, but also reminds me that there are a large number of people smarter than I am within convenient travel distance, despite our social circles not obviously overlapping.
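To make that estimate concrete, here’s a minimal sketch (the function name and the example population of 2 million are mine; it assumes the conventional IQ scale with mean 100 and standard deviation 15):

```python
from math import erfc, sqrt

def people_above_iq(iq: float, population: int,
                    mean: float = 100.0, sd: float = 15.0) -> float:
    """Estimate how many people in `population` score above `iq`,
    assuming IQ is normally distributed with the given mean and sd."""
    # Normal survival function via the complementary error function:
    # P(X > iq) = 0.5 * erfc((iq - mean) / (sd * sqrt(2)))
    tail = 0.5 * erfc((iq - mean) / (sd * sqrt(2)))
    return population * tail

# In a metro area of 2 million, roughly how many score above 145 (z = 3)?
print(round(people_above_iq(145, 2_000_000)))  # ~2700, the ~0.135% upper tail
```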
The problem is that the visionary ideas ahead of their time are indistinguishable from the crank ones.
Expressed very succinctly, thank you.
I suppose what I’m really wondering is whether there’s some feature, perceivable in the structure of the idea and its ramifications, which indicates that it is on the right track and would distinguish it from a crank idea. Clearly there’s nothing obvious, or someone would have found it by now and made a bunch of correct predictions a long time ago. Still, it makes me wonder if there’s something remaining to be found there.
Even if someone HAD postulated that there existed a distance so great that the sun would look like a point, and that our stars might be suns to them, they wouldn’t be “right” in any useful sense of the word. There are no predictions to make, nor behavior changes to adopt, based on that hypothesis.
On the one hand, I agree that beliefs should guide our expectations and in general should be required to “pay rent” as in the post you reference. On the other hand, truth is truth, regardless of whether it can be perceived as such. I am reminded of https://www.readthesequences.com/Belief-In-The-Implied-Invisible , though as written it doesn’t directly apply.
I don’t like the outcome of “this is too far beyond our current capabilities so it is irrational to think about”. Is there a place in rational thought for considering ideas that cannot presently be tested, but may point the way for future explorers who are better equipped?
Perhaps you’ve got the best conclusion given the constraints:
the big problem isn’t finding more ideas, but in deciding which ones are worth giving up immediate resources to pursue sooner
Though I find that ever so slightly depressing to consider.
Perfect! And done.
My apologies, it’s the best I could come up with. I’m open to suggestions.
This article seems to have some bearing on decision theory, but I don’t know enough about it or quantum mechanics to say what that bearing might be.
I’d be interested to know others’ take on the article.
The Stoical scheme of supplying our wants by lopping off our desires, is like cutting off our feet when we want shoes.
Lovely quote, thank you.
In that case, how do you handle the problem of humans wanting the “wrong” things? (Meaning people wanting things that ultimately result in bad outcomes for themselves or others.)
Would altering reality so it more closely aligns with the humans’ desires include avoiding negative consequences, side effects, and externalities?
I think that’s my choice as well. Human expectations are much narrower than reality appears to be. If reality conformed to human expectation then no one would ever be surprised, which I think would be sad.
So of the expanded options, which would you choose?
Perhaps there is nothing which it is like to be a bat.
There is a lot to be gained by delegating to a central authority the responsibility of maintaining a credible threat of retaliation.
Thanks for rubbing salt in the wound. (Only a tiny bit serious.)
Back when mining was possible on a standard desktop computer I mined a block in my first week, and received 50 bitcoins. A couple years later, I found that bitcoins were trading at the mind-blowing sum of $1 each, and cashed in. (In my pitifully weak defense, I was really short on money at the time.)
If I had done something sensible, like sold a few each time the price went up 10x, I’d have a pile of cash and probably some bitcoins left.
Weep for me, oh ye internets.