He doesn’t need to stall for time to transfigure. He could have already been doing it over the last two chapters.
Fhyve
I have one of these. Can confirm: pretty good relative to other similarly priced knives I’ve tried, and even better than a high-quality knife of the same age when neither had been properly maintained.
In the spirit of this thread, take a typing class. I find that taking classes is an effective way to get over motivation blocks, if that’s what is preventing you from learning touch typing.
I’m a math undergrad, and I definitely spend more time in the second sort of style. I find that my intuition is rather reliable, so maybe that’s why I’m so successful at math. This might be getting at the “two cultures of mathematics”, where I am definitely on the theory-builder/algebraist side. I study category theory and other abstract nonsense, and I am rather bad (relative to my peers) at Putnam-style problems.
The difference is that saying there is a territory is also a model. The way I would rephrase map/territory into this language is “the model is not the data.”
This is the best place to apply effort for my goals, because I think that there might be some problems underlying MIRI’s epistemology and philosophy of math that are causing confusion in some of their papers.
That it hasn’t been radically triumphant isn’t strong evidence against its world-beating potential, though. Pragmatism is weird and confusing; perhaps it just hasn’t been exposited or argued for clearly and convincingly enough. Perhaps it has historically been rejected for cultural reasons (“we’re doing physicalism so nyah”). I think there is value in clearly presenting it to the LW/MIRI crowd. There are unresolved problems with a naturalistic philosophy that should be pointed out, and it seems that pragmatism solves them.
As for originality, I’m not sure how to think about this. Pretty much everything has already been thought of, but it is hard to read all of the literature and become familiar with it. So how do you write? Acknowledge that there is probably some similar exposition, but we don’t know where it is? What if you’ve come up with most of these ideas yourself? What if every fragment of your idea has been thought of, but it has never been put together in this particular way (which I suspect is going to be the case with us)? The only reason to avoid appearing original is so as not to seem arrogant to people like you who’ve read these arguments before.
Do you have direct, object-level criticisms of our version of pragmatism? Because that would be great. We’ve been having a hard time finding ones that we haven’t already fixed, and it seems really unlikely that there aren’t any. (I’ve been working on this with the OP.)
The computable algorithm isn’t a meta-model, though. It’s just you in a different substrate. It’s not something the agent can run to figure out what to do, because it would necessarily take more computing power. And there is nothing preventing such a pragmatic agent from having a computable universe-model, considering finding a computable algorithm that approximates itself, and copying that algorithm over and over.
Intervals and ratios are going to be essentially the same thing for conventional pomodoros: some time on, some time off, repeat. It might be weird to have variable-length pomodoros, since the break is for mental fatigue, not reward. Perhaps some mechanism that rewards you with an M&M at a random time in the second half of each pomodoro?
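If you wanted to try that, here’s a minimal sketch in Python; the 25-minute pomodoro length and the M&M prompt are just illustrative assumptions:

```python
import random
import time

POMODORO_MINUTES = 25  # assumed conventional pomodoro length


def pomodoro_with_random_reward():
    """Run one pomodoro, prompting for a reward (e.g. an M&M)
    at a uniformly random moment in the second half."""
    total = POMODORO_MINUTES * 60
    reward_at = random.uniform(total / 2, total)  # second half only
    start = time.monotonic()
    rewarded = False
    while (elapsed := time.monotonic() - start) < total:
        if not rewarded and elapsed >= reward_at:
            print("Reward time: eat an M&M!")
            rewarded = True
        time.sleep(1)
    print("Pomodoro done. Take your break.")


if __name__ == "__main__":
    pomodoro_with_random_reward()
```

Because the reward moment is drawn fresh each pomodoro, you can’t anticipate it, which is the point of a variable schedule.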
The most charitable take on it that I can form is similar to Scott’s on MBTI (http://slatestarcodex.com/2014/05/27/on-types-of-typologies/). It might not be validated by science, but it provides a description language with a high amount of granularity over something that most people don’t have a good description language for. So with this interpretation, it is more of a theory in the social-sciences sense: a lens through which to look at human motivation, behaviour, etc. This is probably a much weaker claim than the one people at Leverage would make.
I don’t know how I feel about the allegations at the end. It seems that, connection theory aside, Leverage is doing good work, and having more money is generally better. I would neither endorse nor criticize their use of it, but I think that since I don’t want those tactics used by arbitrary people, I’d fall on the side of criticize. I would also recommend that the aforementioned creator not be so open about his ulterior motives and some other things he has mentioned in the past. All in all, Connection Theory is not what Leverage is selling it as.
Edit: That just covered the theory side of it. As for the therapy side (or however they are framing the actual practice): a therapy doesn’t need its underlying theory to be correct in order to be effective. I am rather confident that actually doing the connection theory exercises will be fairly beneficial, though actually doing a lot of things coming out of psychology will probably be fairly beneficial too. And other than the hole in your wallet, talking to the aforementioned creator probably is as well.
I’d say that Nick Bostrom (a respected professor at Oxford) writing Superintelligence (and otherwise working on the project), this (https://twitter.com/elonmusk/status/495759307346952192), and some high-profile research associates and workshop attendees (Max Tegmark, John Baez, quite a number of Google engineers) give FAI much more legitimacy than connection theory has.
If you want a more precise date for whatever reason, it was right at the end of the July 2013 workshop, which ran July 19-23. There were a number of Leverage folk there who had just started the experiment.
I’m currently interning at MIRI; I had a short technical conversation with Eliezer and a multi-hour conversation with Michael Vassar, and other people seem to be taking me as somewhat of an authority on AI topics.
I agree. I want to comment on some of the downvoted posts, but I don’t want to pay the karma.
Irrationality Game:
Politics (in particular, the governments of large states such as the US, China, and Russia) is a major threat to the development of friendly AI. Conditional on FAI progress having stopped, I give a 60% chance that the cause was government interference, rather than existential risk or some other problem.
Bayes is epistemological background, not a toolbox of algorithms.
I disagree: I think you are lumping together two things that don’t necessarily belong together. There is Bayesian epistemology, which is philosophy, describing in principle how we should reason, and there is Bayesian statistics, something that certain career statisticians use in their day-to-day work. I’d say that frequentism does fairly poorly as an epistemology, but it seems like it can be pretty useful in statistics if used “right”. It’s nice to have nice principles underlying your statistics, but sometimes ad hoc methods, experience, and intuition just work.
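To make the distinction concrete, here’s a toy example of the day-to-day statistics side: a conjugate Beta-binomial update in Python (the prior and the flip counts are made up for illustration):

```python
# Toy Bayesian update: estimate a coin's bias from observed flips.
# Beta(a, b) is conjugate to the binomial, so the posterior is another Beta.
heads, tails = 7, 3          # made-up data
prior_a, prior_b = 1, 1      # uniform Beta(1, 1) prior, an assumption
post_a = prior_a + heads     # conjugacy: just add the counts
post_b = prior_b + tails
posterior_mean = post_a / (post_a + post_b)
print(f"Posterior: Beta({post_a}, {post_b}), mean = {posterior_mean:.3f}")
```

A career statistician’s use of this is a calculation, not a commitment to Bayesian epistemology, which is roughly the separation I’m pointing at.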
Depending on the IQ test, I don’t think your overall score will go down much if you don’t do well on a subsection or two. This is low confidence, and based on one data point, though: I have subscores ranging from 102 to 136, and my total score somehow comes out to be 141.
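For what it’s worth, that kind of composite isn’t necessarily an error: full-scale scores are typically built by standardizing the sum of the subscores, and since subtests are imperfectly correlated, that sum has a smaller relative spread, so an evenly above-average profile gets pushed further out, sometimes above every individual subscore. A rough sketch, with a made-up pairwise correlation r:

```python
import math


def composite_iq(subscores, r=0.5):
    """Rough composite from subtest scores (mean 100, SD 15 each).

    The mean of k equicorrelated subscores has
    SD = 15 * sqrt((1 + (k - 1) * r) / k), which is less than 15,
    so re-standardizing by it inflates the composite. r = 0.5 is a
    made-up value; real test batteries estimate it from norm data.
    """
    k = len(subscores)
    mean_dev = sum(s - 100 for s in subscores) / k
    sd_of_mean = 15 * math.sqrt((1 + (k - 1) * r) / k)
    return 100 + 15 * mean_dev / sd_of_mean


# An above-average profile yields a composite above its own average;
# with enough subtests it can land above every individual subscore.
print(composite_iq([102, 118, 130, 136]))  # ~127, vs. a subscore mean of 121.5
```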
That only means you are good at arithmetic. Can you prove, say, that there are no perfect squares of the form
3^p + 19(p-1)
where p is prime?
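(A quick numeric check, not a proof, suggests the claim holds for small primes; the bound of 200 is arbitrary:)

```python
from math import isqrt


def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))


def is_square(n):
    return isqrt(n) ** 2 == n


# Check 3**p + 19*(p - 1) for all primes p below a small bound.
for p in range(2, 200):
    if is_prime(p):
        assert not is_square(3**p + 19 * (p - 1)), p
print("No perfect squares for primes p < 200")
```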
The spaceship “exists” (I don’t really like using exists in this context because it is confusing) in the sense that in the futures where someone figures out how to break the speed of light, I know I can interact with the spaceship. What is the probability that I can break the speed of light in the future?
Then for Many Worlds, what is the probability that I will be able to interact with one of the Other Worlds?
I would not care more about things if I gain information that I can influence them, unless I also gain information that they can influence me. If I gain credence in Many Worlds, then I only care about Other Worlds to the extent that it might be more likely for them to influence my world.
Burning cats is another good example. Can you feel how much fun it is to burn cats? Some people used to have all sorts of fun burning cats. And it is harder to produce the wrong sort of justification, based on bad models, for this one than for either burning witches or torturing heretics.
Edit: Well, just scrolled down to where you talk about torturing animals. Beat me to it I guess...