You’re right; you need a solid background to do research. But it still often makes sense to learn in the reverse order.
estimator
Can you unpack “approximation of Solomonoff induction”? Approximation in what sense?
In my experience, in math/science prerequisites can (and often should) be skipped and learned as you actually need them. People who thoroughly work through all the prerequisites often end up bogged down in numerous fields that actually have only a weak connection to what they wanted to learn in the first place, then get demotivated and drop the endeavor. This is a common failure mode.
Like, you need probability theory to do machine learning, but you are unlikely to encounter some parts of it, and there are also parts of ML which require very little of it. It totally makes sense to start with those.
One simple UI improvement for the site: add a link from a comment in the inbox to that comment in the context of its post; as it is, I have to click through to the post and then scroll down to find the comment.
But these are the things pretty much everybody does while learning languages.
Also, I’d like to compare your system against a common-sense-reasoning baseline. What do you think are the main differences between your approach and the usual approaches to skill learning? How would the resulting actions differ?
I’m asking because your guide contains quite a long list of recommendations/actions, yet many of them are already used (probably intuitively/implicitly) by almost any sensible person. Also, some of the recommendations clearly have more impact than others. So, what happens if we apply the Pareto principle to your learning system? Which 20% are the most important? What is at the core of your approach?
I meant something like this.
… take part in routine conversations; write & understand simple written text; make notes & understand most of the general meaning of lectures, meetings, TV programmes and extract basic information from a written document.
I don’t think it’s strange. First, it does have distinguishing qualities; the question is whether they are relevant. You choose an analogy that shares the qualities you currently think are relevant; then you analyze the analogy and come to certain conclusions. But it is easy to overlook a step in the analysis that depends on a property of the original model you previously judged insignificant, and you can fail to notice this, because the property is absent from the analogy. So I think that double-checking results obtained by analogical thinking is a necessary safety measure.
As for specific examples: something like Penrose’s quantum consciousness (although I don’t actually believe it). Or any other reason why consciousness (not intelligence!) can’t be reproduced in our computing devices (I don’t actually believe that either).
That’s a difficult question to answer, so I’ll give the first thing I can think of. It’s still me, just a lower percentage of me. I’m not that confident that it can be put on a linear scale, though.
That is one of the reasons why I think binary-consciousness models are likely to be wrong.
There are many differences between brains and computers: different structure, different purpose, different properties. I’m pretty confident (>90%) that my computer isn’t conscious right now, and the phenomenon of consciousness may have specific qualities that are absent from its counterpart in your analogy. My objection to using such analogies is that you can miss important details. Still, they are often useful for illustrating one’s beliefs.
Nice, but beware reasoning after you’ve written the bottom line.
As for the actual content, I basically fail to see its area of applicability. For sufficiently complex skills (say, math, languages, or football), the decision-trees & how-to-guides approach will likely fail as too shallow; for isolated skills like changing a tire, complex learning approaches are overkill: just google it and follow the instructions. Can you elaborate on the languages example further? Because, you know, learning a bunch of phrases from a phrasebook so you can say a few words in a foreign country is a non-issue. Actually learning the language is. How would you apply your system to achieve intermediate-level knowledge of a language? Any other non-trivial skill-learning example would also suffice. What skills have you trained using your learning system, and how?
OK, suppose I come to you while you’re sleeping and add/remove a single neuron. In your model, will you wake up? Yes, because many more neurons change during natural sleep anyway. Now imagine that I alter your entire brain; now the answer seems to be no. Therefore, there must be some minimal change to your brain that ensures a different person wakes up (i.e. one with different consciousness/qualia). This seems strange.
You don’t assume that the person who wakes up always has a different consciousness from the person who fell asleep, do you?
It would be the same computer, but different working session. Anyway, I doubt such analogies are precise and allow for reliable reasoning.
p(“your model”) < p(“my model”) < 50% -- that’s how I see things :)
Here is another objection to your consciousness model. You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and it appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow “find” your brain after sleep? What if I, standing on another planet many light-years from Earth, build atom-by-atom a brain whose state is closer to your before-sleep brain state than your after-sleep brain state is?
The reason why I don’t believe these theories with a significant degree of certainty isn’t that I know some other brilliant consistent theory; rather, I think that all of them are more or less inconsistent.
Actually, I think it’s probably a mistake to consider consciousness a binary trait; but a non-binary consciousness assumption makes it even harder to figure out what is actually going on. I hope that progress in machine learning or neuroscience will provide some insights.
That’s a typo; I meant that my model doesn’t imply continuous time. By the way, does it make sense to call it “my model” if my estimate of the probability of it being true is < 50%?
So, why do I think that consciousness requires continuity?
I guess you meant “doesn’t require”?
I’d say the continuity requirement is the main cause of the divergence in our plausibility rankings, at least.
What is your probability estimate of your model being (mostly) true?
I’ve started commenting here only recently, but I’m a long-time lurker (>1 year). Also, I was speaking about self-help articles in general, not conditioning on whether they are posted on LW; that makes sense, because pretty much anyone can post on LW.
I’ve now found a somewhat less extreme example of what I think is an acceptable self-help post even though it has no scientific references: a) the author told us what actual results he achieved and, more importantly, b) the author explained why he thinks the advice works in the first place.
Personally, I don’t find your post consistent with my observations, but that’s not my main objection. My main objection is that throwing out instructions without any justification is bad practice, especially on such a controversial topic, and especially in a rationalist community.
I find a model plausible if it isn’t contradicted by evidence and matches my intuitions.
My model doesn’t imply discrete time; I don’t think I can precisely explain why, because I basically don’t know how consciousness works at that level; intuitively, just replace t + dt with t + 1. Needless to say, I’m uncertain of this, too.
Honestly, my best guess is that all these models are wrong.
Now, what arguments cause you to find your model plausible?
OK: whether I wake up in a room with “no” in the envelope, or die (deterministically), depends on which envelope you have put in my room.
What exactly happens in the process of cloning certainly depends on the particular cloning technology; the real one is the one that shares a continuous line of conscious experience with me. The (obvious) way for an outsider to detect which one is real is to look at where each came from—if it was built as a clone, then, well, it is a clone.
Note that I’m not saying that it’s the true model, just that I currently find it more plausible; none of the consciousness theories I’ve seen so far is truly satisfactory.
I’ve read the Ebborian posts and wasn’t convinced; a thought experiment is just a thought experiment, there are many ways it can be flawed (that is true for all the thought experiments I proposed in this discussion, btw). But yes, that’s a problem.
So, looking at what you actually propose to do, this reduces to: a) learn some phrases from a tourist phrasebook, b) learn the rest of the language, while c) avoiding high-stakes situations that require language knowledge. Reminds me of this.
Articles on such topics are notorious for their low average quality. Reformulating in Bayesian terms: the prior probability of your statements being true is low, so you should provide some proof or evidence—otherwise, why should I (or anyone) believe you? Have you actually checked whether it works? Have you actually checked whether it works for somebody else?
I don’t think that personal achievements are a bullet-proof argument for such advice. Still, when I read something like this, I’m pretty sure it contains valuable information, although it is probably a mistake to follow such advice verbatim anyway. So, if you have Hamming-level credentials, that will help.
As for your article, probably the only way to fix it is to add support for your statements. What evidence backs them up? Is there any psychological research behind your claims? Why do you think this is an optimal (or near-optimal) way to learn skills?
This is a good self-help article. Can you see the reference list? :)
Do you think you won’t awaken in a room with “no” in the envelope?
I think that I either wake up in a room with “no” in the envelope, or die, in which case my clone continues to live.
Yes, but I also think conscious experience is halted during regular sleep. Also, should multiple copies survive, his conscious experience will continue in multiple copies. His subjective probability of finding himself as any particular copy depends on the relative weightings (i.e. self-locating uncertainty).
I find this model implausible. Is there any evidence I can update on?
Read up on what a matrix is, how to add, multiply, and invert matrices, what a determinant is, and what an eigenvector is, and that’s enough to get you started. In many ML algorithms, vectors/matrices are used mostly as handy notation.
Yes, you will be unable to understand the parts of ML that substantially rely on linear algebra; yes, understanding ML without linear algebra is harder; yes, you need linear algebra for almost any kind of serious ML research—but that doesn’t mean you have to spend a few years studying arcane math before you can open an ML textbook.
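To make the point concrete, here is a minimal sketch (using NumPy, which I’m assuming as the tool of choice) of the handful of operations listed above, plus one place they show up in ML: the closed-form solution to ordinary least squares, whose weights are (XᵀX)⁻¹Xᵀy. The specific matrices here are made up for illustration.

```python
import numpy as np

# The operations mentioned above, in one screenful.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.eye(2)                 # 2x2 identity matrix

S = A + B                     # addition
P = A @ B                     # multiplication
A_inv = np.linalg.inv(A)      # inversion
d = np.linalg.det(A)          # determinant: 2*3 - 1*1 = 5
w, V = np.linalg.eig(A)       # eigenvalues and eigenvectors

# Already enough to follow e.g. ordinary least squares:
# weights = (X^T X)^{-1} X^T y.
X = np.array([[1.0, 0.0],     # first column: intercept term
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])  # exactly y = 1 + 2x

weights = np.linalg.inv(X.T @ X) @ X.T @ y  # -> approx [1.0, 2.0]
```

That is the sense in which matrices are “handy notation” here: one line of algebra replaces a loop over data points, and nothing beyond inversion and multiplication is needed.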