Startup Founder, Web Developer, Writer.
rayalez
Thanks, I'll try zinc again, and one day get around to doing a blood test.
I think the insomnia starts gradually and progressively gets worse over a few days, maybe a week. My hypothesis was that D3 was building up, since apparently it has a very long half-life. I took about 2000-3000 IU; that's not much, right?
I didn’t try a K2 supplement, but I’ve just googled it, and it turns out spinach and kale have a lot of vitamin K, and I eat a lot of those.
I used to take a calcium-magnesium-zinc supplement, but I'm not sure if it was during the time I took D3. Could a lack of zinc be an issue? Should I try it?
I don’t know if I need it; I never took a blood test. I tried it based on all the articles about how good it is and how most people are deficient in it. Also, I live in a cold climate and definitely don’t go out enough, so I don’t get a lot of sunlight.
I took D3 with a light breakfast, in the morning.
Thank you for a great post!
Taking a D3 supplement, even in small amounts, seems to cause horrible insomnia for me. I have problems with sleep even without it, but on D3 it gets a lot worse; I sleep like 3 hours per night. I’m like 80-90% sure it’s the D3: when I take it, it gets worse, and a few days after I stop, it gets better.
Do you have any ideas on what could cause this and how I could fix it? I already take magnesium and choline, and I take D3 in the morning, but that doesn’t seem to help much. Melatonin doesn’t do anything; it gets me to sleep in the evening, but then I wake up anyway.
Hey, everyone! My first post here. Just testing out this awesome platform. Curious to see where it goes.
Value Arbitrage
New discussion platform for Less Wrong community (Mastodon instance)
Thank you for your reply!
For a long time, the way ANNs work kind of made sense to me, and seemed to map nicely onto my (shallow) understanding of how the human brain works. But I could never imagine how the values/drives/desires could be implemented in terms of an ANN.
The idea that you can just quantify something you want as a metric, feed it in, and check whether the output moves closer to what we want is new to me. It was a little epiphany that seems to make sense, so it prompted me to write this post.
Evolutionarily, I guess the human/animal utility function would be something like “How many copies of myself have I made? Let’s maximize that.” But from the subjective perspective, it’s probably more like “Am I receiving pleasure from the reward system my brain happened to develop?”
For sure there are a bunch of different impulses/drives, but they are all just little rewards for transforming the current state of the world into the one our brain prefers, right? Maybe they appeared randomly, but if you were to design one intentionally, is that how you would go about it?
Learning
Get inputs from eyes/ears.
Recognize patterns, make predictions.
Compare predictions to how things turned out, update the beliefs, improve the model of the world.
Repeat.
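Purely as a toy illustration of that loop (the linear “world model”, the data, and the update rule below are all placeholders I made up, not anything from the post), the predict/compare/update cycle might look something like this:

```python
import numpy as np

# Toy "model of the world": a single linear weight adjusted from experience.
weight = 0.0          # current belief about how input maps to outcome
learning_rate = 0.1

def predict(observation):
    """Use the current model to guess what will happen."""
    return weight * observation

for step in range(100):
    observation = np.random.uniform(-1, 1)          # "inputs from eyes/ears"
    prediction = predict(observation)                # recognize patterns, make a prediction
    actual = 2.0 * observation                       # how things actually turned out
    error = actual - prediction                      # compare prediction to reality
    weight += learning_rate * error * observation    # update the model of the world
    # ...and repeat
```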
General intelligence taking actions towards its values
Perceive the difference between the state of the world and the state I want.
Use the model of the world that I’ve learned to predict the outcomes of possible actions.
If I predict that applying an action to the world will lead to reward, take the action.
See how it turned out, update the model, repeat.
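And a similarly toy sketch of the action loop: an agent nudges a one-dimensional “world state” toward a goal state using a model of each action’s effect that it has already roughly learned. All the names and numbers are invented for illustration, not a real implementation:

```python
import random

goal_state = 10.0
state = 0.0
actions = [-1.0, 0.0, 1.0]              # possible nudges to the world
world_model = {a: a for a in actions}   # previously learned guess of each action's effect

def reward(s):
    """Higher reward the closer the world is to the state I want."""
    return -abs(goal_state - s)

for step in range(50):
    gap = goal_state - state                            # perceive the difference
    if abs(gap) < 0.1:                                   # close enough to the state I want
        break
    # use the learned model to predict the outcome of each possible action
    predicted = {a: state + world_model[a] for a in actions}
    # take the action predicted to lead to the most reward
    best = max(actions, key=lambda a: reward(predicted[a]))
    actual_effect = best + random.gauss(0, 0.1)          # the world is a bit noisy
    state += actual_effect
    # see how it turned out, update the model, repeat
    world_model[best] += 0.5 * (actual_effect - world_model[best])
```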
I agree that specific goals can also have unintended consequences. It just occurred to me that this kind of problem would be much easier to solve than trying to align the abstract values, and the outcome is the same—we get what we want.
Oh, and I totally agree that there’s probably a ton of complexity when it comes to the implementation. But it would be pretty cool to figure out at least the general idea of what intelligence and consciousness are, what things we need to implement, and how they fit together.
How AI/AGI/Consciousness works—my layman theory
I am working on a project with this purpose, and I think you will find it interesting:
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
It is based on the open source platform that I’m building:
https://github.com/raymestalez/nexus
This platform will address most of the issues discussed in this thread. It can be used both as a publishing/discussion platform and as a link aggregator, because it supports twitter-like discussions, reddit-like communities, and medium-like long-form articles.
This platform is in active development, and I’m very interested in your feedback. If the LessWrong community needs any specific functionality that is not implemented yet, I will be happy to add it. Let me know what you think!
I am working on a project with a similar purpose, and I think you will find it interesting:
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
If you find it interesting and can offer some feedback, I would really appreciate it!
Hey, everyone! Author of rationalfiction.io here.
I am actively building and improving our website, and I would be happy to offer it as a new platform for the LW community, if there’s interest.
I can take care of the hosting, and build all the necessary features.
I’ve been thinking about creating a LW-like website for a while now, but I wasn’t sure that it would work. After reading this post I have decided that I’m going to launch it and see where it goes.
If there are any ideas or suggestions about how such a platform can be improved or what features we’ll need, let’s discuss them.
By the way, the platform is open source (though I will probably fork it as a separate project and develop it in a new repo).
Thanks!
It works well on my iPad; I haven’t tested it on phones yet, but I will.
There are links to the author’s RSS feed in the post footer and on the profile pages.
Is there a reason you don’t want to use the site? I’d appreciate any feedback or ideas on how I can make it better.
rationalfiction.io—publish, discover, and discuss rational fiction
In startups, this is what’s called an “MVP” (minimum viable product): the simplest version that you can show users to get some feedback and see if it works. It is the first step in building a startup.
To me it’s a pretty huge accomplishment; I’m really proud of myself =) Most of the work went not into coding the website, but into figuring out what it is. I needed a thing that would be valuable, and that I would be excited to work on for the next few years.
A competent programmer could probably create something like that in a week, but because I’m just learning web development (along with writing, producing videos, and other stuff), it took me longer. At the moment it’s the best thing I’ve created, so I’m really happy about it.
Also, it’s actually the third iteration of my startup idea (the first one was a platform for publishing fiction, the second a platform for publishing webcomics).
I’ve launched the first version of my startup, lumiverse:
I want lumiverse to become the perfect place for people to publish, discover and discuss great educational videos. I want to build a friendly and intelligent community, make it easy for video creators to find an audience, and make it easy for viewers to discover awesome videos.
I have also finally made the first few episodes of Orange Mind, my video series about rationality.
What makes buying insurance rational?
Rationality 101 - how would you introduce a person to rationalist concepts? What are the best topics to learn/explain first?
I’m new to the subject, so I’m sorry if the following is obvious or completely wrong, but the comment left by Eliezer doesn’t seem like something that would be written by a smart person who is trying to suppress information. I seriously doubt that EY didn’t know about the Streisand effect.
However, the comment does seem like something that would be written by a smart person who is trying to create a meme or promote his blog.
In HPMOR, characters give each other the advice “to understand a plot, assume that what happened was the intended result, and look at who benefits.” The idea of Roko’s basilisk went viral, and lesswrong.com got a lot of traffic from popular news sites (I’m assuming).
I also don’t think that there’s anything wrong with it, I’m just sayin’.
Do you guys ever have meetups in English? Do you know if anyone in Moscow does?