An aspiring rationalist who has been involved in the Columbus Rationality community since January 2016.
J Thomas Moros
I think it is interesting that you think it is not very neglected. I assume you think that because languages like Rust, Kotlin, Go, Swift, and Zig have received various levels of funding, and because academic research funds languages like Haskell, Scala, Lean, etc.
I suppose that is better than nothing. However, from my perspective, that is mostly funding the wrong things and even funding some of those languages inadequately. As I mentioned, Rust and Go show signs of being pushed to market too soon in ways that will be permanently harmful to the developers using them. Most of those languages aren’t improving programming languages in any meaningful way. They are making very minor changes at the margin. Of the ones I listed, I would say only Rust and Scala have made any real advances in mainstream languages, and Scala is still mired in many problems because of the JVM ecosystem. On the other hand, the Go language has been heavily funded and pushed by Google and has set programming languages back significantly.
I would say there is almost no path to funding a language that is both meant for widespread general use and pushes languages forward. Many of the languages that have received funding got it by luck, too late in the process, and in inadequate amounts. No funding source actually seeks out promising early-stage languages and funds them. Luck is not a funding plan.
[Question] Programming Language Early Funding?
Thanks for the summary of various models of how to figure out what to work on. While reading it, I couldn’t help but focus on my frustration about the “getting paid for it” part. Personally, I want to create a new programming language. I think we are still in the dark age of computer programming and that programming languages suck. I can’t make a perfect language, but I can take a solid step in the right direction. The world could sure use a better programming language if you ask me. I’m passionate about this project. I’m a skilled software developer with a longer career than most of the young guns I see. I think I’ve proved with my work so far that I am a top-tier language designer capable of writing a compiler and standard library. But... this is almost the definition of something you can’t and won’t be paid for, at least not until you’ve already published a successful language. That fact greatly contributes to why we can’t have better programming languages. No one can afford to let them incubate as long as needed. Because of limited resources, everyone has to push to release as fast as possible. Unlike other software, languages have very strict backward-compatibility requirements, so design problems inevitably accumulate as a language grows over time, yet earlier mistakes can never be fixed, nor can the design be reworked to support new features.
I’m confused by the judges’ failure to use the search capabilities. I think we need more information about how the judges were selected. It isn’t clear to me that they are representative of the kinds of people we would expect to be acting as judges in future scenarios of superintelligent AI debates. For example, a simple and obvious tactic would be to ask both AIs what one ought to search for in order to verify their arguments. An AI that can make very compelling arguments still can’t change the true facts known to humanity to suit its needs.
This is not sound reasoning because of selection bias. If any of those predictions had been correct, you would not be here to see it. Thus, you cannot use their failure as evidence.
As someone who believes in moral error theory, I have problems with the moral language (“responsibility to lead ethical lives of personal fulfillment”, “Ethical values are derived from human need and interest as tested by experience.”).
I don’t think it is true that “Life’s fulfillment emerges from individual participation in the service of humane ideals” or that “Working to benefit society maximizes individual happiness.” Rather, I would say some people find some fulfillment in those things.
I am vehemently opposed to the deathist language of “finding wonder and awe in the joys and beauties of human existence, its challenges and tragedies, and even in the inevitability and finality of death.” Death is bad and should not be accepted.
I assume there are other things I would disagree with, but those are a few that stand out when skimming it.
I agree with your three premises. However, I would recommend using a different term than “humanism”.
Humanism is more than just the broad set of values you described. It is also a specific movement with more specific values. See for example the latest humanist manifesto. I agree with what you described as “humanism” but strongly reject the label humanist because I do not agree with the other baggage that goes with it. If possible, try to come up with a term that directly states the value you are describing. Perhaps something along the lines of “human flourishing as the standard of value”?
I am signed up for cryonics with Alcor and did so in 2017. I checked and the two options you listed are consistent with the options I was given. I didn’t have a problem with them, but I can understand your concern.
I have had a number of interactions with Alcor staff both during the signup process and since. I always found them pleasant and helpful. I’m sorry to hear that you are having a bad experience. My suggestion would be to get the representative on the phone and discuss your concerns. Obviously, final wording should be handled in writing but I think a phone conversation would help you both understand what would be acceptable to both of you.
In my opinion, the responses you have gotten probably arise from one of two sources. It is possible that she simply didn’t read what you wrote carefully enough and fell back to boilerplate language that is closer to what their legal counsel has approved. She likely doesn’t have the authority to accept major changes herself. If that is not what happened, then it is most likely that Alcor is pushing this option to avoid legal issues, the problems they have had with families in the past, and delays in cryopreservation. They want a clear-cut decision procedure that doesn’t depend on too many third parties. If cryopreservation is to go well, it needs to be done in a timely fashion. Ideally, you want whoever is performing it to have a clear and immediate path to begin if it is warranted. Any judgment call or requirement to get consent could cause unnecessary delays. You might think it will be clear, but any chance your wife could claim that she should have been consulted and wasn’t could cause legal problems. Thus, Alcor may be forced to consult her in all but the most clear-cut cases. Again, just schedule a call.
As a proponent of cryonics, I hope you will persist and work through this issue. Please message me if there are other questions I can answer for you. If you choose not to proceed, you can choose to keep the insurance policy and designate another recipient rather than canceling it.
P.S. Having researched all the cryonics organizations, I believe Alcor is by far the best. They are still small, but they are working the hardest to become a fully professional organization, and their handling of the legal issues and financial structure is much better. The Cryonics Institute (CI) is run by well-meaning people who are less professional; they are more of a volunteer organization. Having attended a CI annual meeting, I was disappointed by an investment strategy that was insufficiently conservative and far-sighted. I think CI may actually be underfunded for the goal of still existing 100 years from now.
While I can understand why many would view advances toward WBE as an AI-safety risk, many in the community are also concerned with cryonics. WBE is an important option for the revival of cryonics patients. So I think the desirability of WBE should be clear. It just may be the case that we need to develop safe AI first.
As someone interested in seeing WBE become a reality, I have also been disappointed by the lack of progress, and I would like to understand the reasons for it better. So I was interested to read this post, but you seem to be conflating two different things: the difficulty of simulating a worm and the difficulty of uploading a worm. There are a few sentences that hint both are unsolved, but they should be clearly separated.
Uploading a worm requires being able to read the synaptic weights, thresholds, and possibly other details from an individual worm. Note that it isn’t accurate to say the worm must be alive; it would be sufficient to freeze an individual worm and then spend extensive time and effort reading that information. Nevertheless, I can imagine that might be very difficult to do. According to wormbook.org, C. elegans has on the order of 7,000 synapses. I am not sure we know how to read the weight and threshold of a synapse. This strikes me as a task requiring significant technological development that isn’t in line with existing research programs. That is, most research is not attempting to develop the technology to read specific weights and thresholds, so it would require a significant, well-funded effort focused specifically on it. Given the reports of a lack of funding, I am not surprised this has not been achieved. Nor am I surprised that such funding is lacking.
Simulating a worm should only require an accurate model of the behavior of the worm nervous system and a simulation environment. Given that all C. elegans have the same 302 neurons, this seems like it should be feasible. Furthermore, the learning mechanism of individual neurons, the operation of synapses, etc. should all be things researchers outside of the worm emulation efforts are interested in studying. Were I wanting to advance the state of the art, I would focus on making an accurate simulation of a generic worm that was capable of learning. Then I would simulate it in an environment similar to its native environment and try to demonstrate that it eventually learned behavior matching real C. elegans, including under conditions in which C. elegans would learn. That is why I was very disappointed to learn that the “simulations are far from realistic because they are not capable of learning.” It seems to me this is where the research effort should focus, and I would like to hear more about why this is challenging and hasn’t already been done.
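To make concrete the kind of simulation I mean, here is a toy sketch. Only the neuron count is taken from C. elegans; the dynamics, the Hebbian learning rule, and the choice of “sensory” neurons are stand-ins I invented for illustration, not a biologically accurate model.

```python
import numpy as np

# Hypothetical sketch: a fixed 302-neuron recurrent network with a simple
# Hebbian learning rule, standing in for a "generic worm that can learn."
# Only the neuron count comes from C. elegans; everything else is invented.
N_NEURONS = 302
N_SENSORY = 10  # arbitrary choice of "sensory" neurons for this sketch

rng = np.random.default_rng(seed=0)
weights = rng.normal(0.0, 0.1, size=(N_NEURONS, N_NEURONS))
state = np.zeros(N_NEURONS)

def step(sensory_input, lr=0.01):
    """Advance the network one tick and apply a Hebbian weight update."""
    global state, weights
    drive = weights @ state
    drive[:N_SENSORY] += sensory_input           # inject environmental input
    new_state = np.tanh(drive)                   # bounded activations
    weights += lr * np.outer(new_state, state)   # "fire together, wire together"
    state = new_state
    return state

# Expose the net to a repeated stimulus and confirm the weights adapt.
before = weights.copy()
for _ in range(100):
    step(np.ones(N_SENSORY))
changed = not np.allclose(before, weights)
```

A real effort would replace the tanh dynamics and Hebbian rule with empirically measured neuron and synapse models, but even this toy shows the structure of the project: a fixed connectome, plastic weights, and an environment loop in which learning can be demonstrated.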
I believe that worm uploading is not needed to make significant steps toward showing the feasibility of WBE. The kind of worm simulation I describe would be more than sufficient. At that point, reading the weights and thresholds of an individual worm becomes only an engineering problem that should be solvable given a sufficient investment or level of technological advancement.
A study by Alcor trained C. elegans worms to react to the smell of a chemical. They then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the same exact test would show that you had successfully uploaded a worm with that memory vs. one without that memory.
Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans
I think you are being overly optimistic about homomorphic encryption. The uFAI doesn’t need to have absolute control over how the computation happens, nor does it need to be able to perfectly predict the real-world results of running some computation. It only needs some amount of information leakage. The best current example I can think of is timing attacks on cryptographic protocols: the protocol itself should be secure, but a side channel makes it insecure. Another example would be the Meltdown and Spectre vulnerabilities. How do you know your computational system doesn’t have something like that?
The first idea that comes to mind is pausing. Assume the uFAI has some way to pause computation (pausing is a very common operation, and an equivalent might be achieved by performing a specific computation in a tight loop). Then the uFAI might be able to signal in Morse code by alternating pauses with long and short computations. I imagine that exact scheme is naive, but do you really believe the uFAI won’t find any such vulnerability in your system?
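To make the worry concrete, here is a minimal sketch (with made-up iteration counts, not an attack on any real system) of how an observer who can only measure wall-clock time can still recover bits from a computation whose duration depends on secret data:

```python
import time

def compute(bit):
    """A computation whose duration depends on a secret bit."""
    n = 500_000 if bit else 50_000  # "long" vs. "short" computation
    total = 0
    for i in range(n):
        total += i
    return total

def measure(bit):
    """Median of three timings, to smooth out scheduler noise."""
    times = []
    for _ in range(3):
        start = time.perf_counter()
        compute(bit)
        times.append(time.perf_counter() - start)
    return sorted(times)[1]

# The observer calibrates a threshold once, then reads bits off timing alone.
threshold = (measure(0) + measure(1)) / 2
secret = [1, 0, 1, 1, 0]  # the bits the boxed computation "knows"
recovered = [1 if measure(b) > threshold else 0 for b in secret]
```

The point is not this specific channel; it is that any observable correlated with the computation (time, pauses, memory pressure) is a potential Morse key.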
I doubt the lack of 6-door cars has much to do with aesthetics. Doors and tight door seals are some of the more complex and expensive portions of the car body. Doors also pose challenges for crash safety, since each one is a large opening that weakens the body’s structural integrity in an accident. I suspect that the reason there are so few cars with 6 doors is the extra manufacturing cost, which would lead to higher car prices. Most purchasers don’t value the extra convenience of the additional doors enough relative to the added price, so any company producing such a car would find a very small market, which might make it not worth it to the manufacturer.
Recently many sources have reported a “CA variant” with many of the same properties as the English and South African strains. I haven’t personally investigated, but that might be something to look into. Especially given the number of rationalists in CA.
As others have already answered better than I, first avoid being obligated for such large unexpected charges. The customer in the example may have canceled their credit card, but they are still legally obligated to pay that money.
To answer the actual question of how to put limits in place: you can use privacy.com. They allow you to create new credit card numbers that bill to your bank account but can have limits, both in terms of total charges and monthly charges. You can also close any number at any time without impact on your personal finances. It is meant for safety and privacy in online shopping. You set up a card for each service. For example, create a card that you auto-bill the electric bill to, and set a limit that no more than, say, $200 can be charged to it each month. Any transaction that would push it over that limit will be declined, even automatic payments you have scheduled.
I’d be interested in seeing a write-up on whether people who’ve had COVID need to be vaccinated. I have a friend who was sick with COVID symptoms for 3 weeks and tested positive for SARS-CoV-2 shortly after the onset of symptoms. He is now being told by medical professionals that he needs to be vaccinated just the same as everyone else. I tried to look up the data on this. Sources like the CDC, Cleveland Clinic, and Mayo Clinic all state that people need to be vaccinated even if they have had COVID. However, their messaging seems contradictory. There are many appeals to “we don’t know”. The reasoning doesn’t appear to be any more complex than “vaccine good” and “immunity from infection ‘not known’”. There is no discussion of things I would expect, like the differences between having tested positive with no symptoms, having had symptoms but never tested, or having tested positive and never developed symptoms. While I can imagine reasons why immunity induced by the vaccine and by infection would differ, my prior is that most of the effects are going to be the same. There is repeated reference to not knowing how long immunity from infection lasts, but by definition, we have had less time to see how long immunity from the vaccine lasts, so our evidence about the vaccine should be weaker. I could say a lot more, but I’ll leave it at that.
To avoid any confusion: My actual model is that if you’ve had COVID-19, then the vaccine would act as a booster. So I’d say people who’ve had it should get vaccinated eventually but should be among the lowest priority. That should be modulated by the probability that you actually had COVID and the fact that asymptomatic COVID may be less likely to produce immunity. On the other hand, having had asymptomatic COVID is probably evidence that you will be asymptomatic if you get it again. That is not the message that is being given to the public.
It’s unfortunate that we have this mess. But couldn’t this have been avoided by defaulting to minimal access? Per Mozilla (https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies), if a cookie’s domain isn’t set, it defaults to the domain of the site excluding subdomains. If instead this defaulted to the full domain, wouldn’t that resolve the issue? The harm isn’t in allowing people to create cookies that span sites, but in doing so accidentally, correct? The only remaining concern is then tracking cookies. For those, a list of TLDs that would be invalid to specify as the cookie domain would cover most cases. Situations like github.io are rare enough that there could simply be some additional DNS property they set which makes it invalid to have a cookie at that domain level.
Similarly, the secure and http-only properties ought to default to true.
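For concreteness, here is how the attributes in question look when a server sets them explicitly, sketched with Python’s standard `http.cookies` module (the cookie name, value, and domain are illustrative):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header that opts into the settings discussed above:
# an explicit Domain (which widens the cookie to subdomains), plus
# Secure and HttpOnly -- the two flags that arguably should default to true.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["domain"] = "example.com"  # omit to keep the cookie host-only
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True

header = cookie.output()
print(header)
```

Note the asymmetry: omitting the `Domain` attribute already keeps the cookie narrow, while `Secure` and `HttpOnly` must each be opted into explicitly or the browser assumes the permissive behavior.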
Even after reading your post, I don’t think I’m any closer to comprehending the illusionist view of reality. One of my best and most respected friends is an illusionist, and I’d really like to understand his model of consciousness.
Illusionists often seem to be arguing against strawmen to me. (Notwithstanding the fact that some philosophers actually do argue for such “strawman” positions.) Dennett’s argument against “mental paint” seems to be an example of this. Of course, I don’t think there is something in my mental space with the property of redness. Of course “according to the story your brain is telling, there is a stripe with a certain type of property.” I accept that the most likely explanation is that everything about consciousness is the result of computational processes (in the broadest sense that the brain is some kind of neural net doing computation, not in the sense that it is anything actually like the Von Neumann architecture computer that I am using to write this comment). For me, that in no way removes the hard problem of consciousness; it only sharpens it.
Let me attempt to explain why I am unable to understand what the strong illusionist position is even saying. Right now, I’m looking at the blue sky outside my window. As I fix my eyes on a specific point in the sky and focus my attention on the color, I have an experience of “blueness.” The sky itself doesn’t have the property of phenomenological blueness. It has properties that cause certain wavelengths of light to scatter and other wavelengths to pass through. Certain wavelengths of light are reaching my eyes. That is causing receptors in my eyes to activate which in turn causes a cascade of neurons to fire across my brain. My brain is doing computation which I have no mental access to and computing that I am currently seeing blue. There is nothing in my brain that has the property of “blue”. The closest thing is something analogous to how a certain pattern of bits in a computer has the “property” of being ASCII for “A”. Yet I experience that computation as the qualia of “blueness.” How can that be? How can any computation of any kind create, or lead to qualia of any kind? You can say that it is just a story my brain is telling me that “I am seeing blue.” I must not understand what is being claimed, because I agree with it and yet it doesn’t remove the problem at all. Why does that story have any phenomenology to it? I can make no sense of the claim that it is an illusion. If the claim is just that there is nothing involved but computation, I agree. But the claim seems to be that there are no qualia, there is no phenomenology. That my belief in them is like an optical illusion or misremembering something. I may be very confused about all the processes that lead to my experiencing the blue qualia. I may be mistaken about the content and nature of my phenomenological world. None of that in any way removes the fact that I have qualia.
Let me try to sharpen my point by comparing it to other mental computation. I just recalled my mother’s name. I have no mental access to the computation that “looks up” my mother’s name. Instead, I go from seemingly not having ready access to the name to having it. There are no qualia associated with this. If I “say the name in my head”, I can produce an “echo” of the qualia, but I don’t have to do this. I can simply know what her name is and know that I know it. That seems to be consistent with the model of me as a computation: if I were a computation and retrieved some fact from memory, I wouldn’t have direct access to the process by which it was retrieved, but I would suddenly have the information in “cache.” Why isn’t all thought and experience like that? I can imagine an existence where I knew I was currently receiving input from my eyes that were looking at the sky and perceiving a shade which we call blue, without there being any qualia.
For me, the hard problem of consciousness is exactly the question, “How can a physical/computational process give rise to qualia or even the ‘illusion’ of qualia?” If you tell me that life is not a vital force but is instead very complex tiny machines which you cannot yet explain to me, I can accept that because, upon close examination, those are not different kinds of things. They are both material objects obeying physical laws. When we say qualia are instead complex computations that you cannot yet explain to me, I can’t quite accept that because even on close examination, computation and qualia seem to be fundamentally different kinds of things and there seems to be an uncrossable chasm between them.
I sometimes worry that there are genuine differences in people’s phenomenological experiences which are causing us to be unable to comprehend what others are talking about. Similar to how it was discovered that certain people don’t actually have inner monologues or how some people think in words while others think only in pictures.
Do we need to RSVP in some way?
I think this isn’t a strong enough statement. Indeed, the median narrative is longer. However, even the modal narrative ought to include at least one unspecified obstacle occurring. In a three-year plan, the most frequent scenarios have something go wrong.
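A toy calculation (with made-up numbers) shows why the mode, not just the median, includes an obstacle:

```python
# Hypothetical numbers: suppose each month of a three-year plan carries an
# independent 5% chance of some unplanned obstacle. The chance that the
# plan hits *zero* obstacles is then:
p_obstacle_per_month = 0.05
months = 36
p_no_obstacles = (1 - p_obstacle_per_month) ** months
print(round(p_no_obstacles, 3))  # 0.158
```

So even though each individual month very likely goes to plan, only about one scenario in six contains no surprises at all; the most frequent scenarios have at least one.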