This might be sound reasoning. In this particular case you’ve made up a number and more or less based it on some idea of optimal egg allocation. That is all very well, but it is not what I meant by that phrase or by the comment you replied to, and it wasn’t my original intention when replying to EY.
I can follow much of the reasoning and arguments on this site. But I’m currently unable to judge their overall credibility. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?
I’m concerned that the LW community, however consistently, is updating on fictional evidence. My questions in the original comment were meant to inquire into the basic principles, the foundation on which the sound argumentation rests. That is, are you creating models to treat subsequent models, or are the propositions based on fact?
An example here is the treatment and use of MWI, and the conclusions, arguments, and further estimations based on it. No doubt MWI is the only consistent non-magical interpretation of quantum mechanics. But that’s all it is: an interpretation. A logically consistent deduction. Or should I rather call it an induction, since the inference seems to be of greater generality than the premises, at least within the LW community? But that’s beside the point. The problem here is that such conclusions, which are at best weak evidence, are widely used as a basis for further speculations and estimations.
What I’m trying to argue here is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of exponentially evolving superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt: debt in the form of other kinds of evidence. Not that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, ideas that are themselves not based on firm ground.
Now you might argue that it’s all about probability estimates. Someone else might argue that reports from the past need NOT provide evidence of what will occur in the future. But the gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies: it is fiction. Imagination allows for endless possibilities, while scientific evidence at least provides hints of what might be possible and what impossible. Science only has to provide the ability to assess your data. The experience of its realization is a justification that bears a hint. Any hint that empirical criticism provides gives you new information on which you can build, not because it bears truth value but because it gives you an idea of where you want to go, an opportunity to try something. There is that which seemingly fails or contradicts itself, and that which seems to work and is consistent.
And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic or something sufficiently grounded in empirical criticism to firmly substantiate the strong arguments for action proclaimed on this site.
I cannot fault this reasoning. From everything I have read in your comments, this seems to be the right conclusion for you to draw given what you know. Taking the word of a somewhat non-mainstream community would be intellectually reckless. For my part, there are some claims on LW that I do not feel capable of reaching a strong conclusion on, even after accounting for expert opinion.
Now I’m curious: here you have referred to “LW” thinking in general, but we can obviously also consider LW conclusions on specific topics. Of all the positions that LW has a consensus on (and that are not nearly universally accepted among educated people), are there any particular topics you are confident in either confirming or denying? For example, “cryonics is worth a shot” seems far easier to judge than conclusions about quantum mechanics or decision theory.