Open thread, June 27 - July 3, 2016
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Rationality lessons from Overwatch, a multiplayer first-person shooter:
1) Learning when you’re wrong: The killcam, which shows how I died from the viewpoint of the person who killed me, often corrects my misconception of how I died. Real life needs a killcam that shows you the actual causes of your mistakes. Too bad that telling someone why they are wrong is usually considered impolite.
2) You get what you measure: Overwatch’s post-game scoring gives medals for teamwork activities such as healing and shots blocked, and this contributes to players’ willingness to help their teammates.
3) Living in someone else’s shoes: The game has several different classes of characters that have different strengths and weaknesses. Even if you rarely play a certain class, you get a lot from occasionally playing it to gain insight into how to cooperate with and defeat members of this class.
Addressing 1) “Learning when you’re wrong” (in a more general sense):
Absolutely a good thing to do, but the problem is that you’re still losing time making the mistakes. We’re rationalists; we can do better.
I can’t remember what book I read it in, but I read about a practice used in projects called a “pre-mortem.” In contrast to a post-mortem, in which the cause of death is found after the death, a pre-mortem assumes that the project/effort/whatever has already failed, and forces the people involved to think about why.
Taking it as a given that the project has failed forces people to be realistic about the possible causes of failures. I think.
In any case, this struck me as a really good idea.
Overwatch example: If you know the enemy team is running a McCree, stay away from him to begin with. That flashbang is dangerous.
Real life example: Assume that you haven’t met your goal of writing x pages or amassing y wealth or reaching z people with your message. Why didn’t you?
I read about pre-mortem-like questions in a book called Decisive: How to Make Better Choices in Life and Work by Chip Heath and Dan Heath.
That’s probably it; I read it recently. Thanks!
Goes into the “shit LW people say” bin :-D
On a tiny bit more serious note, I’m not sure the killcam is as useful as you say. It shows you how you died, but not necessarily why. The “why” reasons look like “lost tactical awareness”, “lingered a bit too long in a sniper’s field of view”, “dived in without team support”, etc. and on that level you should know why you died even without a killcam.
Other lessons from Overwatch: if a cute small British girl blinks past you, shoot her in the face first :-D
“Other lessons from Overwatch: if a cute small British girl blinks past you, shoot her in the face first :-D”
Pfft
Rationalists play Reaper. Shoot EVERYONE IN ALL THE FACES.
Pfft
Rationalists play whatever class at the moment is convenient for shooting everyone in the face in the most speedy and efficient manner :-P
So...Reaper.
Reaper gets relatively little value from cooperating with teammates so I hope that rationalists don’t find Reaper to be the best for them.
Cooperation is not a terminal goal. Winning the game is.
If I don’t see my team’s Reaper (or Tracer) ever, but the rear ranks of the enemy team mysteriously drop dead on a regular basis, that’s perfectly fine.
Agreed, but if a virtue and comparative advantage of rationalists is cooperating, then our path to victory won’t often involve us using Reaper or Tracer.
Do you play on the Xbox?
I’m a bit mystified by how cooperation became a “virtue and comparative advantage of rationalists”. I understand why culturally, but if you start from the first principles, it doesn’t follow. In a consequentialist framework there is no such thing as virtue, the concept just doesn’t exist. And cooperation should theoretically be just one of the many tools of a rationalist who is trying to win. In situations where it’s advantageous she’ll cooperate and where it isn’t she won’t.
Nope, I play on a PC.
Rationality is systematized winning. If failure to cooperate keeps people like us from winning then we should make cooperation a virtue and practice it when we can. (I’m literally playing Overwatch while I answer this.)
The situation is symmetrical: if eagerness to cooperate keeps people like us from winning then we should make non-cooperation a virtue and practice it when we can.
My multitasking isn’t as good :-)
I guess it comes down to what has a higher marginal benefit, learning to cooperate or learning to succeed without cooperation.
Why are you phrasing this as either-or? We don’t need to decide whether a hammer or a screwdriver has a “higher marginal benefit”, we use both as appropriate. Cooperating is conditional on it being useful, sometimes it’s a good idea and sometimes it’s not.
Getting back to Overwatch, there are cases where you need to grab an assassin and go hunting for the enemy sniper, and there are cases where you need to be a healbot and just stand behind your tank...
I was wrong. Reaper and Mei can greatly benefit from cooperation.
I really enjoyed Blacklight: Retribution for the instant rationality training. There is literally a button that lets you wallhack for a second or so. This makes you vulnerable as well, so there is a cost to information. You must keep making choices between gathering information and taking actions based on your current model.
I am trying to outline the main trends in AI safety this year. May I ask for advice on what I should add to or remove from the following list?
1. Elon Musk became the main player in the AI field with his OpenAI program. But the idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article questioning the safety of openness in the field of AI. http://www.nickbostrom.com/papers/openness.pdf Personally, I think that here we see an example of billionaire arrogance. He intuitively came to an idea which looks nice and appealing and may work in some contexts. But to show that it will actually work, we need rigorous proof.
2. Google seems to be one of the main AI companies, and its AlphaGo beat the human champion at Go. After the score reached 3:0, Yudkowsky predicted that AlphaGo had achieved superhuman ability at Go and left humans forever behind, but AlphaGo lost the next game. That led Yudkowsky to say that this poses one more risk of AI: the risk of uneven AI development, where it is sometimes superhuman and sometimes fails.
3. The number of technical articles in the field of AI control is growing exponentially, and it is not easy to read them all.
4. There have been many impressive achievements in the field of neural nets and deep learning. Deep learning was the Cinderella of AI for many years, but now (starting from 2012) it is the princess. This was unexpected from the point of view of the AI safety community; MIRI only recently updated its research agenda and added the study of neural-net-based AI safety.
5. The doubling time on some benchmarks in deep learning seems to be 1 year.
6. Media overhype AI achievements.
7. Many new projects in AI safety have started, but some concentrate on the safety of self-driving cars (even the Russian lorry manufacturer KAMAZ is investigating AI ethics).
8. A lot of new investment is going into AI research, and salaries in the field are rising.
9. Militaries are increasingly interested in implementing AI in warfare.
10. Google has an AI ethics board, but what it does is unclear.
11. AI safety and its implementation seem to be lagging behind actual AI development.
OpenAI is significantly more nuanced than you might expect. E.g. look at interviews with Ilya Sutskever where he discusses AI safety, or consider that Paul Christiano is (briefly) working for them. Also, where did you get the description of Bostrom as “Elon Musk’s mentor”?
Musk seems to be using many ideas from Bostrom: he tweets about Bostrom’s book on AI, and he mentions his simulation argument.
I think there is a difference between the idea of open AI as Musk suggested it in the beginning and the actual work of the organisation named “OpenAI”. The latter seems to be more balanced.
A public understanding based on reading a few blog posts might not give a good overview of the reasons for which OpenAI was started. I think looking at its actual actions might be a better way to try to understand what the project is supposed to do.
I read that you joined OpenAI and I think it is a good project now, but the idea of “openness of AI” was fairly criticised by Bostrom in his new article. It seems that the organisation named “OpenAI” will do much more than promote openness. There is a little confusion between the name of the organisation and the idea of letting everybody run their own AI code.
I joked that in the same way we could create an “Open Nuke” project which would deliver reactors to every household, which would probably result in a very balanced world where every household could annihilate any other household, so everybody is very polite and crime is almost extinct.
I have no affiliation with OpenAI. In this case I’m driven by “don’t judge a book by its cover” motivations, especially in high-stakes situations.
I think taking the name of an organisation as the ultimate authority on what the organisation is about is a bit near-sighted.
Making good strategic decisions is complicated. It requires looking at where a move is likely to lead in the future.
The Einstein Toolkit Consortium is developing and supporting open software for relativistic astrophysics.
This is a core product that you can attach modules to for the specific models you want to run. It is able to handle GR on a cosmological scale!
http://einsteintoolkit.org/
I tried to follow the link, but the whole framework (ETK + Cactus + Loni and so on...) is so scattered and so poorly documented that it discouraged me.
I have the idea that only those who already use Cactus intensively will know how to use the toolkit.
Say you are a strong believer and advocate for the Silicon Valley startup tech culture, but you want to be able to pass an Ideological Turing Test to show that you are not irrational or biased. In other words, you need to write some essays along the lines of “Startups are Dumb” or “Why You Should Stay at Your Big Company Job”. What kind of arguments would you use?
This comment got 6+ responses, but none that actually attempted to answer the question. My goal of Socratically prompting contrarian thinking, without being explicitly contrarian myself, apparently failed. So here is my version:
Most startups are gimmicky and derivative, even or especially the ones that get funded.
Working for a startup is like buying a lottery ticket: a small chance of a big payoff. But since humans are by nature risk-averse, this is a bad strategy from a utility standpoint. (See the sketch after this list.)
Startups typically do not create new technology; instead they create new technology-dependent business models.
Even if startups are a good idea in theory, currently they are massively overhyped, so on the margin people should be encouraged to avoid them.
Early startup employees (not founders) don’t make more than large company employees.
The vast majority of value from startups comes from the top 1% of firms, like Facebook, Amazon, Google, Microsoft, and Apple. All of those firms were founded by young white males in their early 20s. VCs are driven by the goal of funding the next Facebook, and they know about the demographic skew, even if they don’t talk about it. So if you don’t fit the profile of a megahit founder, you probably won’t get much attention from the VC world.
There is a group of people (called VCs) whose livelihood depends on having a supply of bright young people who want to jump into the startup world. These people act as professional activists in favor of startup culture. This would be fine, except there is no countervailing force of professional critics. This creates a bias in our collective evaluation of the culture.
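To make the lottery-ticket argument concrete, here is a minimal sketch in Python. All the numbers are hypothetical (a certain $150k salary versus a 1% chance of a $10M exit and a $100k salary otherwise), and log utility is used as a standard stand-in for risk aversion; the point is only that an option can win on expected dollars while losing on expected utility.

```python
import math

def expected_value(outcomes):
    """outcomes: list of (probability, dollars) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_log_utility(outcomes):
    """Log utility: a simple concave utility function modeling risk aversion."""
    return sum(p * math.log(x) for p, x in outcomes)

big_co  = [(1.00, 150_000)]                      # certain big-company salary
startup = [(0.01, 10_000_000), (0.99, 100_000)]  # rare big exit, modest pay otherwise

print(expected_value(big_co), expected_value(startup))              # 150000 vs 199000
print(expected_log_utility(big_co), expected_log_utility(startup))  # ~11.92 vs ~11.56
```

The startup ticket wins on expected value but loses on expected utility, which is the sense in which it is a bad strategy for a risk-averse agent.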
Argument thread!
You should probably stay at your big company job because the people who are currently startup founders are self-selected for, on average, different things than you’re selecting yourself for by trying to jump on a popular trend, and so their success is only a weak predictor of your success.
Startups often cash out by generating hype and getting bought for ridiculous amounts of money by a big company. But they are very, very often, in more sober analysis, not worth this money. From a societal perspective this is bad because it’s not properly aligning incentives with wealth creation, and from a new-entrant perspective this is bad because you likely fail if the bubble pops before you can sell.
Likely because the post called for an ITT but provided no questions for the ITT.
Both of those seem to me like failing the Ideological Turing Test. I would have a hard time thinking that the average person who works at a big company would make those arguments.
You never explained what you mean by “startup culture,” nor “good.”
One can infer something from your arguments. But different arguments definitely appeal to different definitions of “good.” In particular: good for the founder, good for the startup employee, good for the VC, and good for society.
There is no reason to believe that it should be good for all of them. In particular, a belief that equity is valuable to startup employees is good for founders and VCs, but if it is false, it is bad for startup employees. If startups are good for society, it may be good for society for the employees to be deceived. But if startups are good for society, it may be a largely win-win for startups to be considered virtuous and everyone involved in startups to receive status. Isn’t that the kind of thing “culture” does, rather than promulgate specific beliefs?
By “startup culture” you seem to mean anything that promotes startups. Do these form a natural category? If they are all VC propaganda, then I guess that’s a natural category, but it probably isn’t a coherent culture. Perhaps there is a pro-startup culture that confabulates specific claims when asked. But are the details actually motivating people, or is it really the amorphous sense of virtue or status?
Sometimes I see people using “startup culture” in a completely different way. They endorse the claim that startups are good for society, but condemn the current culture as unproductive.
What exactly is the thesis in question? “Startup culture is a valuable piece of a large economy”, for example, is not the same thing as “I should go and create a startup, it’s gonna be great!”.
Not to disagree with this exercise, but I think that the name ITT is overused and should not be applied here. Why not just ask “What are some good arguments against startups?” If you want a LW buzzword for this exercise, how about hypothetical apostasy or premortem?
I think that ITT should be reserved for the narrow situation where there is a specific set of opponents and you want to prove that you are paying attention to their arguments. Even when the conventional wisdom is correct, it is quite common that the majority has no idea what the minority is saying and falsely claims to have rebutted their arguments. ITT is a way of testing this.
That’s a different question.
A good argument against startups might be that VC as an asset class doesn’t outperform the stock market. On the other hand, it’s unlikely that the average person working at a company would make that argument, so arguing it would fail the Ideological Turing Test.
The question seems like it has more levels of indirection in it than necessary. I mean, to pass an ITT is to behave/speak/write just like someone with the views you’re pretending to have. So how is “Say you believe X and want to pass an ITT by arguing not-X. What would you say?” different from “Say you believe not-X and want to defend it. What would you say?” or, even, just “What are the best arguments for not-X?”?
Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X. So this should be impossible, except by deliberately including arguments that are, to the best of your knowledge, flawed. I might be able to imitate a homeopath, but I can’t imitate a rational, educated, homeopath, because if I thought there was such a thing I would be a homeopath.
Yes, a lot of people extoll the virtues of doing this. But a lot of people aren’t rational, and don’t believe X on the basis of arguments in the first place. If so, then producing good arguments against X is logically possible, and may even be helpful.
(There’s another possibility: where you are weighing things and the other side weighs them differently from you. But that’s technically just a subcase—you still think the other side’s weights are incorrect—and I still couldn’t use it to imitate a creationist or flat-earther.)
Huh? You are proposing a very stark, black-and-white, all-or-nothing position. Recall that for a rationalist a belief has a probability associated with it. It doesn’t have to be anywhere near 1. Moreover, a rationalist can “believe” (say, with probability > 90%) something against which good arguments exist. It just so happens that the arguments pro are better and more numerous than the arguments con. That does not mean that the arguments con are not good or do not exist.
And, of course, you should not think yourself omniscient. One of the benefits of steelmanning is that it acquaints you with the counterarguments. Would you know what they are if you didn’t look?
Great point!
I guess the point of ITT is that even when you disagree with your opponents, you have the ability to see their (wrong) model of the world exactly as they have it, as opposed to a strawman.
For example, if your opponent believes that 2+2=5, you pass the ITT by saying “2+2=5”, but you fail it by saying “2+2=7”. From your perspective, both results are “equally wrong”, but from their perspective, the former is correct, while the latter is plainly wrong.
In other words, the goal of ITT isn’t to develop a “different, but equally correct” map of the territory (because if you would believe in correctness of the opponent’s map, it would also become your map), but to develop a correct map of your opponent’s map (as opposed to an incorrect map of your opponent’s map).
So, on some level, while you pass an ITT, you know you are saying something false or misleading; even if just by taking correct arguments and assigning incorrect weights to them. But the goal isn’t to derive a correct “alternative truth”; it is to have a good model of your opponent’s mind.
No good arguments, or the weight of the arguments for X are greater than the weight of the arguments against X?
You know, I did mention weighing arguments in my post.
No, http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
In high level debating at the debating world championship the participants are generally able to give good arguments for both sides of every issue.
(Not that I know a thing about the subject, but are you sure this angle is exactly how an “unbiased re: startups” person would think about it? Why not something more like, “Startups are simply irrelevant, if we get down to it”?)
I didn’t realize that the biggest supporter of UBI in the US is the ex-leader of the Service Employees Union. Guess I will have to read that book next. I have Agar’s “Humanity’s End” to tackle next.
http://www.alternet.org/economy/universal-basic-income-solves-robots-taking-jobs
and a write-up on why the elites don’t get the Brexit drama right..
http://www.bloomberg.com/view/articles/2016-06-24/-citizens-of-the-world-nice-thought-but
That seems to be way off. Prediction markets reflect the opinions of those who enter the market. AFAIK there’s no barrier to the lower income strata of the population. Polls also failed to predict the result, so I would say that it was not a structural failure of the markets.
The thing is, the markets reflect committed-capital-weighted opinions of market participants. This is not an egalitarian democracy.
Given that market participants insure against risks with the prediction market, and that the event of Brexit does carry risk to some businesses, I’m not sure that’s empirically the case.
Possibly we (meaning I vs Epicurean Dealmaker) have a very different notion of ‘elite’.
I imagine the elite as the 10% (or 5%, or 1%, depending on your Pareto distribution) which has enough capital to hedge against market fluctuations (or enough to create them entirely); as far as I understand, ED instead means by “elite” anyone who has enough money to invest in a market.
I don’t think this is the issue. If you invest $10m into some market position, your “opinion” literally has one million times the impact of someone who invested $10. It’s not just “people who invest” vs “people who do not invest”. Even among those who invest, the more capital you apply, the more your opinion matters.
Markets are inherently capital-weighted and their opinion necessarily reflects the positions of the rich to a much greater degree.
Is the EU regulations on algorithmic decision-making and a “right to explanation” positive for our future? Does it make a world with UFAI less likely?
Room for improvement in Australia’s overseas development aid
Quoted in the Australian Government Independent Review of Aid Effectiveness, chapters 1-3.
Perhaps we need a common OECD project committee or other multilateral aid review committees, so that only one report is needed rather than multiple reports. Focus on fewer big, ambitious projects rather than many small-impact projects?
The EA community for historical reasons doesn’t do much analysis of government aid (actually, no one does), even though this is a fundamentally public activity in democratic countries. And that’s reasonable: it’s extremely complex to analyse incumbent donors. It’s easier to think on the margins, and from the perspectives of individuals. To get started, I read through the Australian Government’s Independent Review of Aid Effectiveness to identify the counter-intuitive takeaways.
What’s the current scope of Australia’s aid operations?
Why is this a timely issue?
Not to mention the emergence of history’s pre-eminent aid effectiveness focussed civic community—effective altruists
Effective Development Group:
Quoted in the Australian Government Independent Review of Aid Effectiveness, chapters 1-3.
----Policy proposals----
Multilateral aid consolidation
The Australian Government’s Independent Review of Aid Effectiveness identified that the core operating principle for Australian foreign aid should be value for money. Multilateral organisations that the review has recently found, or in the future finds, to have a poor or worse overall assessment of value for money should be stripped of their funding, which probably amounts to hundreds of millions and possibly billions of dollars.
References: see part 3 of Independent Review of Aid Effectiveness
Independence from aid
To ensure Australia’s aid partners don’t become dependent on Australian foreign aid, which destabilises foreign economies’ stability and self-reliance. E.g. undercutting farmers’ produce at the markets deprives them of incentives to produce, making them more dependent and creating less surplus, and thus greater deprivation and poverty over the long term and greater costs to our aid budget.
Scale down aid, or halt its expansion, in geographic areas identified by the review where there is both a low case for expansion and high reliance on bilateral delivery channels.
References: see part 3 of Independent Review of Aid Effectiveness
Defragmentation
(see the screenshot of page 39 in chapters 1-3 of the report)
To put it simply, there are too many small, ineffective programs, and these are costing wellbeing and Australian dollars.
(chapters 1-3)
Public communication
Aid budget given to communicating effectiveness or otherwise:
Seconded recommendations that are obvious
National interest scepticism
In my quest to optimize my sleep, I have found over the last few days that I relax a lot more than usual. I sleep on my side, but I put a cushion between my back and the wall so that part of my weight rests on my back and part rests on the mattress of the bed.
Are there any real reasons why standard beds are flat? Or is it just a cultural custom like our standard toilet design that exists for stupid reasons?
Not that I know of. Various suggestions of sleeping with a body pillow exist. Hammocks exist. Plenty of people take naps on couches or in reclining chairs.
I wonder if it has anything to do with ease of manufacture.
I am sure you have read this: www.lesswrong.com/r/discussion/lw/mvf/
(relevant side note) Traditional Japanese beds are harder and thinner than western beds.
As far as I can see it doesn’t discuss sleeping surfaces that aren’t flat.
No, unfortunately it does not, but it has other details that might be informative.
Is post-rationalism dead? I’m following some trails and the most updated material is at least three years old.
If so, good riddance?
If I put the phrase into Google, one of the results is http://thefutureprimaeval.net/postrationalism/, which was written in 2015, so the phrase has been used more recently than three years ago.
In general the term isn’t important to many of the people that Scott put under that label when he wrote his map. http://www.ribbonfarm.com/ is still alive and well. David Chapman also still writes.
That was my starting point too, but I noticed that most of the new content linked there that is specifically about post-rationalism seems to have been written pre-2015. If those authors still write, I get the impression that they are not writing about post-rationalism anymore.
That makes me suspect that postrationalism was never a ‘thing’.
Scott used the term when he drew his map, and a few people thought it described a cluster, but most of the people involved don’t care for the term.
It’s similar to a term like Darwinism that wasn’t primarily about self-labeling.
Estimation of timing of AI risk
I want to once again try to assess the expected time until strong AI. I will estimate a prior probability of AI, and then try to update it based on recent evidence.
First, I will try to argue for the following prior probability of AI: “If AI is possible, it will most likely be built in the 21st century, or it will be proven that the task has some very tough hidden obstacles.” Arguments for this prior probability:
1. Science power argument. We know that humanity has been able to solve many very complex tasks in the past, and each typically took around 100 years: heavier-than-air flight, nuclear technology, space exploration. 100 years is enough for several generations of scientists to concentrate on a complex task and extract everything about it that we can get without some extraordinary insight from outside our current knowledge. We have already been working on AI for 65 years, by the way.
2. Moore’s law argument. Moore’s law will run out of power in the 21st century, but this will not stop the growth of stronger and stronger computers for a couple of decades.
This growth will come from cheaper components, from large numbers of interconnected computers, from cumulative production of components, and from large investments of money. It means that even if Moore’s law stops (that is, there is no more progress in microelectronic chip technology), for 10-20 years from that day the power of the most powerful computers in the world will continue to grow, at a lower and lower speed, and may end up 100-1000 times higher than at the moment Moore’s law ends.
But such computers will be very large, power-consuming, and expensive. They will cost hundreds of billions of dollars and consume gigawatts of energy. The biggest computer planned now is the 200-petaflop “Summit”, and even if Moore’s law ends with it, this means that 20-exaflop computers will eventually be built.
There are also several almost-unused options: quantum computers, superconductors, FPGAs, new ways of parallelization, graphene, memristors, optics, and the use of genetically modified biological neurons for computation.
All this means that: A) computers of 10^20 flops will eventually be built (this is comparable with some estimates of human brain capacity); B) they will be built in the 21st century; C) the 21st century will see the biggest advance in computing power compared with any other century, and almost everything that can be built will be built in the 21st century and not after.
So the computers on which AI may run will be built in the 21st century.
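As a rough sanity check of the slowing-but-continuing-growth claim above, here is a toy model in Python. The starting growth rate and its decay are my own assumptions, not figures from the comment; the point is that a growth rate which shrinks every year still multiplies total power by a large but bounded factor, in the spirit of the 100-1000x estimate.

```python
# Toy model: post-Moore computing growth with a decaying annual growth rate.
# Assumed numbers: 60%/year initially, the rate shrinking to 90% of itself each year.
rate = 0.60    # annual growth rate of top-supercomputer power
decay = 0.90   # yearly decay factor applied to the growth rate
power = 1.0    # in units of today's top machine

for year in range(1, 41):
    power *= 1 + rate
    rate *= decay
    if year in (5, 10, 20, 40):
        print(f"year {year:2d}: ~{power:.0f}x today's power")
# year  5: ~7x;  year 10: ~26x;  year 20: ~93x;  year 40: ~174x
# Growth continues after Moore's law "ends" but saturates in the low hundreds.
```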
3. Uploading argument.
Even the uploading of a worm is lagging, but uploading provides an upper limit on AI timing. There is no reason to believe that scanning a human brain will take more than 100 years.
Conclusion from the prior: a flat probability distribution.
If we know for sure that AI will be built in the 21st century, we can give it a flat probability distribution, which assigns it an equal probability of appearing in any year: around 1 per cent per year. (The cumulative probability, and the probability conditional on AI not having appeared yet, rise year by year, but we will not concentrate on that now.) We can use this probability as the prior for our future updates.
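Here is a minimal sketch of that flat prior, with an assumed window of 2001-2100 (1 per cent per year): the cumulative probability rises linearly, while the probability assigned to the current year, conditional on AI not having arrived yet, climbs steeply toward the end of the century.

```python
# Flat prior: AI arrives in exactly one of the 100 years 2001-2100, 1% each.
for year in (2020, 2050, 2090):
    elapsed = year - 2000
    cumulative = 0.01 * elapsed                      # P(AI by end of this year)
    conditional = 0.01 / (1 - 0.01 * (elapsed - 1))  # P(AI this year | not yet arrived)
    print(year, f"cumulative={cumulative:.2f}", f"conditional={conditional:.4f}")
# 2020: cumulative=0.20, conditional=0.0123
# 2050: cumulative=0.50, conditional=0.0196
# 2090: cumulative=0.90, conditional=0.0909
```

Now we will consider arguments for updating this prior probability.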
Updates of the prior probability.
Now we can use this prior probability of AI to estimate the timing of AI risks. Before, we discussed AI in general, but now we add the word “risk”.
Arguments for raising the probability of AI risk in the near future:
We don’t need an AI that is a) self-improving, b) superhuman, c) universal, or d) capable of world domination for an extinction catastrophe; none of these conditions is necessary. Extinction is a simpler task than friendliness. Even a program which helps to build biological viruses, and which is local, non-self-improving, non-agentive, and specialized, could create enormous harm by helping to build hundreds of designed pathogen-viruses in the hands of existential terrorists. Extinction-grade AI may be simple, and it could also come earlier in time than full friendly AI. While UFAI may be the ultimate risk, we may not be able to survive until then because of simpler forms of AI, almost on the level of computer viruses. In general, earlier risks overshadow later risks.
We should take lower estimates of the timing of AI arrival, based on the precautionary principle. Basically, this means that we should treat a 10 per cent probability of its arrival as if it were 100 per cent.
We may use the events of the last several years to update our estimate of AI timing. In recent years we have seen enormous progress in AI based on neural nets. The doubling time of AI efficiency on various benchmarks is around 1 year now, and AI wins at many games (Go, poker, and so on). Belief in the possibility of AI has risen in recent years, resulting in overhype and large growth in investment, as well as many new startups. Specialized hardware for neural nets has been built. If such growth continues for 10-20 years, it would mean a 1,000-fold to 1,000,000-fold growth in AI capabilities (since 2^10 ≈ 1,000 and 2^20 ≈ 1,000,000), which must include reaching human-level AI.
AI is increasingly used to build new AIs: AI writes programs and helps to calculate the connectome of the human brain.
All this means that we should expect human-level AI in 10-20 years and superintelligence soon afterwards.
It also means that the probability of AI is distributed exponentially between now and its creation.
The biggest argument against this is also historical: we have seen a lot of AI hype before, and it failed to produce meaningful results. AI is always 10 years away, and researchers in AI tend to overestimate it. Humans tend to be overconfident about AI research.
We are also still far from understanding how the human brain works, and even the simplest questions about it may be puzzling. Another way to assess AI timing is the idea that AI is an unpredictable black swan event, depending on only one idea appearing (it seems that Yudkowsky thinks so). If someone gets this idea, AI is here.
In this case we should multiply the number of independent AI researchers by the number of trials, that is, the number of new ideas they get. I suggest assuming that the latter rate is constant. In that case we should estimate the number of active and independent AI researchers, which seems to be growing, fuelled by new funding and hype.
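A toy formalization of this black-swan model, with made-up numbers: if each researcher generates some number of independent idea-trials per year, each with a tiny success probability, then the chance of AI within T years is one minus the chance that every trial fails.

```python
# Toy "one key idea" model: P(AI within T years) = 1 - (1 - p)^(N * r * T),
# where N researchers each generate r ideas per year, each succeeding with probability p.
def p_ai_within(years, researchers, ideas_per_year, p_success):
    trials = researchers * ideas_per_year * years
    return 1 - (1 - p_success) ** trials

# Hypothetical inputs: 10,000 researchers, 1 idea each per year,
# and one idea in a million actually yields AI.
print(p_ai_within(10, 10_000, 1, 1e-6))  # ~0.10 within a decade
print(p_ai_within(20, 30_000, 1, 1e-6))  # ~0.45 over two decades if the field triples
```

On this model, growth in the number of researchers translates directly into a higher arrival probability, which is why the size and funding of the field matter.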
So my conclusion is that if we are going to be afraid of AI, we should estimate its arrival in 2025-2035 and have our preventive ideas ready and deployed by that time. If we hope to use AI to prevent other x-risks or for life extension, we should not expect it until the second half of the 21st century. We should use the earlier estimate for bad AI and the later one for good AI.
That seems to be false. Leonardo da Vinci had drafts of flying machines and it took a lot longer than 100 years to get actual flight.
That is why I used the wording “typically around”, to show that I meant the median time of large dedicated efforts. Leonardo’s work was not continued by other scientists of the 16th century, so it was not part of a large dedicated effort. It seems that other creators of flying machines tried to invent them from scratch rather than building on Leonardo’s results.
Even in the 19th century, collaboration between aviation enthusiasts was very limited. They probably learned from the failed attempts of others (“oops, flying wings do not work, let’s try rotating wings”). If they had collaborated more effectively, they could have arrived at a working design quicker.
Then take the project of eternal life: a lot of cooperating alchemists worked on it for thousands of years.
I think that it is possible to make your argument even stronger: it took hundred thousand years to go from stone age to bronze age.
But it is clear that the “total intelligence” of humanity is different at each stage of its development, and when I spoke about a hundred years, I meant the “total intelligence” of humanity at the level of the 20th century.
Anyway, in the case of AI this is an analogy and may not hold. The AI problem could be extremely complex, or even unsolvable, but we should not bet on that if we want to be well prepared.
Could this median time even be shrinking exponentially over the whole course of human evolution?
The computer vision and machine learning conference CVPR is on in Vegas. Some recommended reading at the bottom:
https://sites.google.com/site/multiml2016cvpr/
And this is one guy blogging it; there must be a lot of tweeting too...
https://gab41.lab41.org/all-your-questions-answered-cvpr-day-1-40f488103076#.braqj1fdj
Quantified hedonism—Personal Key Performance Indicators
The phrase “burn the boats” comes from the Viking practice of burning boats on the shore before invading, so they have to win and settle. No retreat. It’s an inspiring analogy, but I heard it in the context of another Real Social Dynamics video, so the implication is to approach sets as if there is no retreat? Bizarre, those guys... Anyway, that RSDPapa video suggested that personal KPIs were useful. What’s measured gets improved, or so the saying goes. So which KPIs should you choose? After some thought, I reckon psychological distress (a construct referring to anxiety and depression, which conceptualises enduring hedonic losses) and PERMA (a construct referring to the key determinants of subjective well-being) seem like appropriate KPIs.
So how do you measure them? There are validated psychological scales for each.
Psychological distress
PERMA:
Positive emotion
Engagement
Relationships
Meaning—no known scale?
Achievement?
Unfortunately, things get a bit tricky here with achievement. Many psychological scales are paywalled, such that you need to buy them specifically (academic institution access is insufficient). If anyone can post a workaround... :)
Achievement
If you administer these scales on yourself monthly, you can start to build a quantitative, albeit abstract, picture of your hedonic progress in life. Too difficult for you? Try this unvalidated scale for PERMA.
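A minimal sketch of the monthly logging this implies, in Python. The scale names and the example scores are placeholders (use whichever validated instruments you actually administer); the point is just to accumulate dated rows that you can chart later.

```python
import csv
import datetime

def log_scores(path, distress, perma):
    """Append this month's self-administered scale scores to a CSV file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), distress, perma]
        )

# Example with placeholder values: a distress total and an average PERMA item score.
log_scores("kpi_log.csv", distress=17, perma=7.2)
```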
Tourism isn’t the esoteric, life-changing rite of passage people will tell you it is
Or: Why I would want to move to the Cayman Islands (but I don’t have retirement savings of substance or hospital or finance career capital)
I think the urge to travel just to see different countries is a kind of OCD. Unhealthy! The way tourism tends to work commodifies it. It doesn’t accrue the benefit that experience-hunting usually does, hedonically. Plus, it’s super expensive, and moving tends to accrue hedonic costs anyway. Even though climate does accrue hedonic benefits, it would be unsustainable and lead to negative comparison with your past self, since you are returning to your home country. Not to mention that when you travel you tend to compromise on your lifestyle: fitness, exercise, relationships, nutrition, sleep... unacceptable!
Virtual tourism. It’s my new hobby. Sure, it might be interesting to check out the Northern Lights or Mecca (both literally deserts, and you are paying for them!), but really any place can be turned into a tourist spot by a business or government with a bit of work. In real time, moment to moment, I find travelling super boring except when there is ongoing constant novelty, like sitting on the roof of a van in a rural area, or when I’m on my computer!
I keep hearing about how great travel is. My conclusion is that no, it’s not worth the cost. Or at least, the component I thought they were referring to, sightseeing, isn’t. Other parts of travel are okay, but certainly not life-changing after the first or second time of eye-openingness.
Case study: Machu Picchu. If that rock in Guatape was difficult enough, consider the downsides of Machu Picchu to get your mind off it. Then put the nail in the coffin with the danger statistics. Consolation prize? Machu Picchu on Google Street View.
So what is worthwhile when travelling? One thing, of course, is doing so with the intent of moving, when a place has better opportunities than your past residence. Let’s consider a case that will be relevant to Westerners who already have a very high standard of living: moving to the Caribbean. Because really, I can find no better place one might like to move to than the Cayman Islands. English is spoken; it is close to the US and UK; it has strategic advantages in the financial industry, without the unsophisticated, undiversified economy of the rest of its Caribbean competitors, which honestly tend to depend on fish or petrol. And you’re in the Caribbean, with an enviable climate (a known determinant of subjective wellbeing!). It’s a country that knows the importance of having a strategic advantage that doesn’t amount to being a mine, unlike, say, Australia, where pushes to develop a more sophisticated economy have failed, which I think is a sign of a country that won’t thrive in the 21st century... Anywho, Google Images the place: it looks way better than the rest of Central and South America and the Caribbean as a whole! I’m very surprised I don’t see it topping expat wellbeing or quality-of-life indexes, but I guess it might get missed because of its size. With the greater income inequality, you can probably hire a personal chef, even as a minimum-wage worker from the Western world, to cook you Chinese food or whatever it is you want, healthy and convenient (not to mention they can help with maintenance and such).
Alas, maybe I am just in a bad mood. I am travelling right now, have a return flight that is way too far away, and have nothing left I want to do on this continent. It sucks when the street smells like shit; it’s dusty and smoggy enough to irritate your eyes; cars are loud and dangerous; people are suspicious and don’t move out of the way; and the hotel locks up early for the night, but you don’t know exactly when, and after a certain time you can’t buy water outside, so if you don’t have enough you go thirsty and with unbrushed teeth because the tap water is unsafe. At least I came across this, which will aid my quest to become a better blogger: it is effective copywriting and feedback-giving.
Open questions
Thoughts on the King, Warrior, Magician, Lover archetypes? Useful?
Cause prioritisation—community vs institutions*
I’m interested in crowdsourcing identifying disparities between community and institutional cause prioritisation attitudes.
If you could spare a minute, could you please rate from 1-10 (with a % confidence on your estimates) the:
(1) potential impact
(2) prospective neglectedness
(3) political tractability
...of individual media campaigns that would advocate for public debate, discussion and law reform without a specific agenda around each of the following areas:
(a) labour mobility
(b) tobacco control (incl. smoking in developing countries)
(c) risks from artificial intelligence
(d) research re-prioritisation and infrastructure
(e) factory farming
(f) biosecurity
(g) land use reform
(h) developing world health
(i) nuclear security
(j) trade reform
(k) migration
(l) humanitarian aid
(m) lizardmen
Thank you.
In place of a media thread
Extraordinary series—check out ‘how women judge men’
Experience often doesn’t matter as much as GMA (g factor) for job performance. (Parenthesis mine; GMA is an unconventional term, “g” is more common.)
precommitment smart contracts for happiness and health
I feel horrible saying this, but I think I would be really upset if I had a kid (adopted or genetic) and they were born or became mentally handicapped or miserable, which is my biggest fear. It can happen whether you adopt (e.g. a car crash) or give birth, so I will not get myself a dependent. You can’t give them away without suffering lots of hedonic and altruistic losses anyhow! But once you get clucky and partnered, things change!
Maybe I should do one of those things where I give a trusted, reliable person (perhaps even an independent (commercial? automated?) service that does this, so they won’t pity me) information I don’t want revealed (like the link between all my personal and contact details and this account!), to be released if I have children, to pre-empt my doing so! I could put in a waiver for if the weight of objective evidence for having children increasing my happiness shifts, according to a tribunal of them and a selected few other intelligent, educated, good-willed people.
Having been at the self-dev, PUA, systems, psychology, LessWrong, Kegan, philosophy (and other things) game for a very long time, my discerning eye suggests that some of the model is good and some is bad. My advice to anyone looking at that model is that there are equal parts shit and diamonds. If you haven’t been reading in this area for 9 years you can’t see what’s what. Don’t hold anything too closely, but be a sponge and absorb it all. Throw out the shit when you come across it and keep the diamonds.
At the end, the four KWML pages suggest various intelligent and reasonable ways to develop one’s self:
Take up a martial art.
Do something that scares you.
Work on becoming more decisive.
Meditate. Especially on death.
Quit should-ing on yourself.
Find your core values.
Have a plan and purpose for your life.
Boost your adaptability by strengthening your resilience.
Study and practice the skills necessary for completing your goals, become a master of your trade.
Find the principles that you’re loyal to.
Establish some non-negotiable, unalterable terms (or N.U.Ts) and live by them.
Compete in a race like the Warrior Dash.
Strengthen your discipline by establishing habits and daily routines.
Adopt a minimalist philosophy. Declutter your life. Simplify your diet. Get out of debt.
Commit to lifelong learning
Meditate
Create more, consume less.
Work with your hands.
Take part in a rite of passage
Find a mentor
Become a mentor
Join a Fraternal organization like the Freemasons
Carve out a sacred space in your life
Create more, consume less
Leave a legacy
Develop practical wisdom
Become a mentor
Find a mentor
Establish your core values
Develop the virtue of order
Break away from your mother
Develop a life plan
Develop the traits of true leadership
Protect the sanctity of your ideas
Become decisive
Avoid the corruption of money, power, and sex
Live with integrity
These suggestions are not bad, save possibly the suggestion to take up a martial art (which I disagree with) and the one to do something that scares you. Anything that gets people to establish their purpose, have a plan, and be more the people they want to be is a good thing.
Things like “work on becoming more decisive” are likely only to help the people who already think they are not decisive enough. Those who are decisive enough will probably skip it. HOWEVER, if you already were decisive and you thought you weren’t, you might end up down a rabbit hole trying to work out how to do the thing that you don’t need to do.
Quit should-ing on yourself.
Nate Soares has a post on “shoulds” as well: http://mindingourway.com/not-because-you-should/. It’s different, but it also covers the suggestion of not doing what you “should” but doing what you want to do instead.
Study and practice the skills necessary for completing your goals, become a master of your trade.
So yeah: do what you are doing with massive focus. Be So Good They Can’t Ignore You™, etc. etc. This is not the first place to suggest such things. And I strongly believe that for some people this method of delivering advice is exactly what they need. For others it’s exactly not what they need. Good luck figuring out if that’s you or not.
Is there more useful signal than noise here? It depends on who you are, where you are, and how good you are at working that out for yourself.
All I can say is—“maybe”.
End note: I hope to soon write up a post on making advice applicable, thinking about “it depends on who you are, where you are, and how good you are at working that out for yourself” in more detail.
What other resources do you support in this field, ELO?
This is really hard to answer in the context of:
I’d be willing to give it a shot. What problems are you working on at the moment?
I’ve done a fair amount of reading and am comfortable in the social/PUA realm but am always on the lookout for more recommended resources (especially higher-level stuff).
Why do you focus on the suggestions that are also made elsewhere instead of what’s unique in the King, Warrior, Magician, Lover framework?
The model is meaningless beyond what it suggests you do. If I were to spend a long time understanding the whole damn model I could possibly end up generating my own predictive set of ideas from that model. Because I have not spent that time—it’s easier for me to just look at the (already generated) outputs of the model and comment on the results. I am not 100% sure that all those suggestions fit within the model itself but generally if the site ends in those kinds of suggestions, as above:
No. If you ignore the model, you ignore the reason why people recommend King, Warrior, Magician, Lover. I don’t think anybody who recommended that book to me did so because of a shallow list of recommendations that fits into a few bullet points.
This is similar to how taking a list of bullet points of CFAR knowledge doesn’t compare to evaluating the value that a CFAR workshop provides to its attendees.
There’s no value in forming a judgement like this, by a shallow look at a model one doesn’t understand.
There is plenty of shallow personal-development literature out there for people who like to consume listicles, but I haven’t heard any recommendations for this book from that audience; they come mostly from people who think deeper and engage deeply with it.
I will be delighted to hear your review when you get around to writing it up.
My current state is that I haven’t read the full book or used the ideas in my life, but I know multiple people who have, who value the ideas highly, and who are generally good sources of personal-development ideas.
I have travelled there twice, partially to scope it out for a possible move. Here are the downsides:
It is very small, both in terms of geographical size and population. There’s just not a lot of places to go or things to do.
At the same time it is not dense, so you probably need a car.
It is very touristy. Of the things to do, most are tourism-related.
The tech sector is not well-developed, so a tech person like me would probably end up working as a random IT consultant for a bank or law firm or something.
As far as the upsides, you got them mostly right: strong economy, low taxes, good climate, a generally tranquil feeling of life. Overall I think there would be something enormously psychologically beneficial to live in a place where the main political debate is what to do with the budget surplus.
My takeaway is: CI is a great place if you 1) are in the finance sector 2) like “sun and fun” activities like swimming, sailing, and diving 3) don’t have big life ambitions (e.g. start a tech company).
That website looks like pretty big clickbait. No footnotes either, which could be me overestimating people who put in footnotes, but it might also be that whoever wrote it was attempting to avoid being accused of wordplay.
What’s wrong with simple hyperlinks to sources? The post explains ideas laid out in a book and links to the book.
You have a point. I’m mostly at fault here, to be honest, as I’m getting slowly more and more skeptical of “stuff on the internet” (the site being called Art of Manliness already gives me certain ideological connotations), and seeing how many things which look appealing intuitively don’t really yield much tasty fruit in real life, I’ll often label things clickbait rather than actually put some time into them.