Open Thread, Jul. 20 - Jul. 26, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Would a series of several posts on astrobiology and the Fermi paradox, each consisting of a link to an external post on a personal blog I have just established to contain my musings on the subject and related matters, be appreciated?
Your comments here are consistently interesting, and I’d like to subscribe to your RSS feed.
Yes.
Why not dual post both here and on your blog?
A hidden question of mine is actually how to present them here—copypaste into this space to do a dual post, or merely post links and brief summaries.
It also appears that my bloggery will drift a bit away from what is likely the most interesting to this audience (the Fermi paradox, 'where are they', and the likely shape of the future of humanity) on occasion, into things like basic origin-of-life theories, geochemistry, what I think SETI should actually be doing compared to what they are doing now, and one or two case studies of one-off radio signals that have never been confirmed. There is definitely a cohesive multi-part initial burst incoming which I will probably link to in its entirety, but this leaves me wondering how much to link to or reproduce.
These topics haven’t been discussed here much and may actually be more interesting for that reason, whereas general Fermi paradox and future civ models have come up recently.
I don’t know how it is for others, but personally, I am much more likely to read a full text if it’s posted here directly, than if there’s just a link.
You could do what I do: copypaste, and then six months later after all the discussion is done, delete the LW copy and replace it with a link & summary. Best of both worlds, IMO.
I’d be interested in your take on the topic.
I surely would appreciate it.
Yes. Absolutely would be of interest.
Has anyone in the history of LW ever said that they don’t want new interesting posts?
Keenly waiting for your posts!
Is anyone interested in another iterated prisoner’s dilemma tournament? It has been nearly a year since the last one. Suggestions are also welcome.
In addition to interest from current posters, these tournaments generate external interest. I, and more importantly So8res, signed up for a LessWrong account for one of these contests.
Wow, I was not aware of that. I saw that the last one got some minor attention on Hacker News and Reddit, but I didn’t think about the outreach angle. This actually gives me a lot of motivation to work on this year’s tournament.
Oops! I misremembered. So8res’ second post was for that tournament, but his first was two weeks earlier. Shouldn’t have put words in his mouth, sorry!
So, to follow up on this, I’m going to announce the 2015 tournament in early August. Everything will be the same except for the following:
Random-length rounds rather than fixed length (see the sketch further below for why this matters)
Single elimination instead of round-robin elimination
More tooling (QuickCheck-based test suite to make it easier to test bots, and some other things)
Edit: I am also debating whether to make the number of available simulations per round fixed rather than relying on a timer.
I also played around with a version in which bots could view each other’s abstract syntax tree (represented as a GADT), but I figured that writing bots in Haskell was already enough of a trivial inconvenience for people without involving a special DSL, so I dropped that line of experimentation.
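For readers who haven't followed the earlier tournaments: the point of random-length rounds is that with a fixed, known match length, defecting on the final round (and, by backward induction, on earlier ones) becomes attractive, while a random stopping rule removes the known endpoint. Below is a minimal Python sketch of that stopping rule only; the actual tournament bots are written in Haskell, and the bot names and payoff values here are illustrative, not the tournament's.

```python
import random

# Toy iterated prisoner's dilemma where each round continues with probability
# continue_prob, so bots cannot exploit a known final round by defecting at the end.

PAYOFFS = {  # (my_move, their_move) -> (my_score, their_score)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play_match(bot_a, bot_b, continue_prob=0.99, rng=random):
    """Play rounds until a geometric stopping rule ends the match."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    while True:
        move_a = bot_a(hist_a, hist_b)
        move_b = bot_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
        if rng.random() > continue_prob:  # match length is random, not fixed
            return score_a, score_b

if __name__ == "__main__":
    print(play_match(tit_for_tat, always_defect))
```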
Just an amusing anecdote:
I do work in exoplanet and solar system habitability (mostly Mars) at a university, in a lab group with four other professional researchers and a bunch of students. The five of us met for lunch today, and it came out that three of the five had independently read HPMoR to its conclusion. After commenting that Ibyqrzbeg’f Iblntre cyndhr gevpx was a pretty good idea, our PI mentioned that some of the students at Caltech used a variant of this on the Curiosity rover: they etched graffiti into hidden corners of the machine (‘under cover of calibrations’), so that now their names have an expected lifespan of at least a few million years against Martian erosion. It’s a funny story, and also pretty neat to see just how far Eliezer’s pernicious influence goes in some circles.
I just listened to a podcast by Sam Harris called “Leaving the Church: A Conversation with Megan Phelps-Roper”. It’s a phenomenal depiction of the perspective of someone who was born in, but then left, the fundamentalist Westboro Baptist Church.
Most interesting is Megan’s clear perspective on what it was like before she left, and many LWers will recognize concepts like there being no evidence that could have possibly convinced her that her worldview had been wrong, etc. Basically, many things EY warns of in the sequences, like motivated cognition, are things she went through, and she’s great at articulating them.
So the head of BGI, famous for extremely ambitious & expensive genetics projects which are a Chinese national flagship, is stepping down to work on AI because genetics is just too boring these days: http://www.nature.com/news/visionary-leader-of-china-s-genomics-powerhouse-steps-down-1.18059
I haven’t been following estimates lately, but how much do people think it would cost in GPUs to approximate a human brain at this point given all the GPU performance leaps lately? I note that deep learning researchers seem to be training networks with up to 10b parameters using a 4 GPU setup costing, IIRC, <$10k, and given the memory improvements NVIDIA & AMD are working on, we can expect continued hardware improvements for at least another year or two.
(Schmidhuber’s group is also now training networks with 100 layers using their new ‘highway network’ design; I have to wonder if that has anything to do with Schmidhuber’s new NNAISENSE startup, beyond just Deepmind envy… EDIT: probably not if it was founded in September 2014 and the first highway network paper was pushed to arxiv in May 2015, unless Schmidhuber et al set it up to clear the way for commercializing their next innovation and highway networks is it.)
I had some recent discussions with Jacob Cannell about this, where he estimated that (with the right software which we don’t yet have) you could build a human-level AGI with about 1000 modern GPUs. The amortized cost plus electricity (or if you rent from Amazon AWS) is roughly $0.1 per hour per GPU so the total would be around $100 per hour.
FLI just gave Bas Steunebrink $196,650 to work on something called “experience-based AI” or EXPAI, and Bas is one of the co-founders of NNAISENSE. This EXPAI sounds like a traditional hand-coded AI, not ANN based. Possibly they set up the startup without any specific plans, but just in case they wanted to commercialize something?
From a very uninformed perspective, this looks like an area of science where China is leading the way. Can anyone more informed comment on whether that is accurate, and whether there are other areas in which China leads?
There have been a lot of data breaches recently. Is this because of incompetence, or is it really difficult to maintain a secure database? If I'm going to let at least 100 people have access to a database, and intelligent hackers really want to get access for themselves, do I have much of a chance of stopping them? Restated: have the Chinese and Russians probably hacked into almost every database they really want?
I am not close to an expert in security, but my reading of one is that yes, the NSA et al. can get into any system they want to, even if it is air-gapped.
Dilettanting:
It is really really hard to produce code without bugs. (I don’t know a good analogy for writing code without bugs—writing laws without any loopholes, where all conceivable case law had to be thought of in advance?)
The market doesn't support secure software. The expensive part isn't writing the software—it's inspecting for defects meticulously until you become confident enough that the defects which remain are sufficiently rare. If a firm were to go through the expense of producing highly secure software, how could it credibly demonstrate to customers the absence of bugs? It's a market for lemons.
Computers systems comprise hundreds of software components and are only as secure as the weakest one. The marginal returns from securing any individual software component falls sharply—there isn’t much reason to make any component of the system too much more secure than the average component. The security of most consumer components is very weak. So unless there’s an entire secret ecosystem of secured software out there, “secure” systems are using a stack with insecure, consumer, components.
Security in the real world is helped enormously by the fact that criminals must move physically near their target with their unique human bodies. Criminals thus put themselves at great risk when committing crimes, both of leaking personally identifying information (their face, their fingerprints) and of being physically apprehended. On the internet, nobody knows you’re a dog, and if your victim recognizes your thievery in progress, you just disconnect. It is thus easier for a hacker to make multiple incursion attempts and hone his craft.
Edward Snowden was, like, just some guy. He wasn't trained by the KGB. He didn't have spying advisors to guide him. Yet he stole who-knows-how-many thousands of top-secret documents in what is claimed to be (but I doubt was) the biggest security breach in US history. But Snowden was trying to get it in the news. He stole thousands of secret documents, and then yelled through a megaphone, "Hey everyone, I just stole thousands of secret documents." Most thieves do not work that way.
Intelligence organizations have budgets larger than, for example, the gross box office receipts of the entire movie industry. You can buy a lot for that kind of money.
Additional note to #3: humans are often the weakest part of your security. If I want to get into a system, all I need to do is convince someone to give me a password, share their access, etc. That also means your system is not only as insecure as your most insecure piece of hardware/software but also as your most insecure user (with relevant privileges). One person who can be convinced that I am from their IT department, and I am in.
Additional note to #4: but if I am willing to forego those benefits in favor of the ones I just mentioned, the human element of security becomes even weaker. If I am holding food in my hands and walking towards the door around start time, someone will hold the door for me. Great, I am in. Drop it off, look like I belong for a minute, find a cubicle with passwords on a sticky note. 5 minutes and I now have logins.
The stronger your technological security, the weaker the human element tends to become. Tell people to use a 12-character pseudorandom password with an upper case, a lower case, a number, and a special character, never re-use, change every 90 days, and use a different password for every system? No one remembers that, and your chance of the password stickynote rises towards 100%.
Assume all the technological problems were solved, and you would still have insecure systems so long as anyone can use them.
Great info… but even air-gapped stuff? Really?
My understanding is that a Snowden-leaked 2008 NSA internal catalog contains airgap-hopping exploits by the dozen, and that the existence of successful attacks on air gapped networks (like Stuxnet) are documented and not controversial.
This understanding comes in large measure from a casual reading of Bruce Schneier's blog. I am not a security expert, and my "you don't understand what you're talking about" reflexes are firing.
But moving to areas where I know more, I think e.g. if I tried writing a program to take as input the sounds of someone typing and output the letters they typed, I’d have a decent chance of success.
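To give a sense of what such a program might look like (this is my sketch of one plausible pipeline, not the commenter's program or a working keylogger): segment the recording into keystrokes by short-term energy, describe each keystroke by its spectrum, and cluster the keystrokes so that presses of the same physical key land together. Published acoustic-emanations attacks then map clusters to letters with a language model, which is the genuinely hard step and is omitted here. The function names, thresholds, and sample rate below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_keystrokes(audio, sr=44100, frame_ms=10, threshold=0.02):
    """Return sample indices where short-frame RMS energy jumps above a threshold."""
    frame = int(sr * frame_ms / 1000)
    n = len(audio) // frame
    rms = np.sqrt(np.mean(audio[: n * frame].reshape(n, frame) ** 2, axis=1))
    loud = rms > threshold
    onsets = np.flatnonzero(loud & ~np.roll(loud, 1))  # rising edges only
    return onsets * frame

def keystroke_features(audio, onsets, sr=44100, window_ms=40):
    """FFT magnitude spectrum of a short window after each detected onset."""
    win = int(sr * window_ms / 1000)
    feats = [np.abs(np.fft.rfft(audio[i:i + win], n=win))
             for i in onsets if i + win <= len(audio)]
    return np.array(feats)

def cluster_keystrokes(features, n_keys=30):
    """Group keystrokes that sound alike; same cluster ~ same physical key.
    Mapping clusters to letters (via letter/digram frequencies) is the hard,
    omitted step."""
    return KMeans(n_clusters=n_keys, n_init=10).fit_predict(features)
```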
Thanks! As an economist I love your third reason.
This is not a fundamental fact about computation. Rather it arises from operating system architectures (isolation per “user”) that made some sense back when people mostly ran programs they wrote or could reasonably trust, on data they supplied, but don’t fit today’s world of networked computers.
If interactions between components are limited to the interfaces those components deliberately expose to each other, then the attacker’s problem is no longer to find one broken component and win, but to find a path of exploitability through the graph of components that reaches the valuable one.
This limiting can, with proper design, be done in a way which does not require the tedious design and maintenance of allow/deny policies as some approaches (firewalls, SELinux, etc.) do.
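A toy illustration of that design style (an object-capability flavored sketch I'm adding for concreteness; all class and method names are made up): a component is handed only the narrow interface it needs, so compromising it does not automatically yield the valuable component—the attacker has to find a path through what was deliberately exposed. Plain Python only approximates this; real capability systems enforce the separation at the language or OS level.

```python
class PaymentsDB:
    def __init__(self):
        self._rows = []

    def append_audit_line(self, line: str) -> None:
        self._rows.append(("audit", line))

    def charge(self, account: str, amount: int) -> None:
        self._rows.append(("charge", account, amount))

class AuditLogger:
    # Receives only a narrow write function, not the database object itself,
    # so a compromised logger has no handle with which to call charge().
    def __init__(self, write_line):
        self._write_line = write_line

    def log(self, event: str) -> None:
        self._write_line(event)

db = PaymentsDB()
logger = AuditLogger(db.append_audit_line)  # pass the capability, not the whole DB
logger.log("user logged in")
```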
Both.
I wonder why you exclude the Americans from the list of the attackers :-/
The answer is no, I don't think so, because while maintaining a secure database is hard, it's not impossible, especially if said database is not connected to the 'net in any way.
I see, as many others may, that we are currently living in an NN (neural network) renaissance. They are not as good as one might wish them to be; in fact sometimes they seem quite funny.
Still, after some unexpected advances from the last year onward, they look quite unstoppable to me. Further advances are plausible, and their applications in playing the game of Go, for example, can bring us some very interesting advances and achievements. Even some big surprise is possible here.
Does anybody else share my view?
You are not alone. I think NNs are definitely the best approach to AI, and recent progress is quite promising. They have had a lot of success on a number of different AI tasks, from machine vision to translation to video game playing. They are extremely general purpose.
Here's a recent quote from Schmidhuber (who I personally believe is most likely to create AGI).
Meanwhile I also saw what Schmidhuber has to say and it is very interesting. He is talking about the second NN renaissance which is now.
I wouldn't be too surprised if a dirty general AI were achieved this way. Not that it's very likely yet, but possible. And it could be quite nasty as well. Perhaps it's not only the most promising avenue, but also the most dangerous one.
Why do you believe this? Do you think that brain inspired ANN based AI is intrinsically more ‘nasty’ or dangerous than human brains? Why?
Other agents are dangerous to me to the extent that (1) they don’t share my values/goals, and (2) they are powerful enough that in pursuing their own goals, they have little need to take game theoretic consideration of my values. ANN based AI will be similar to other humans in (1), and regarding (2) they are likely to be more powerful than humans since they’ll be running on faster, more capable hardware than human brains, and probably have better algorithms as well.
Schmidhuber’s best case scenario for superintelligence is that they take no interest in humanity, colonize space and leave us to survive on Earth. What’s your best case scenario? Does it seem not much worse to you than the best case scenario for FAI (i.e., if humanity could coordinate to solve the cosmic tragedy of the commons problem and wait until we know how to safely build an AGI that shares some compromise, e.g., weighted average, of all human values)?
Your points 1 and 2 are true, but only in degrees. Humans vary significantly in terms of altruism (1) and power (2). Hitler—from what I've read—is a good example of a powerful, non-altruistic human. Martin Luther King and Gandhi are examples of highly altruistic humans (the first patterned directly after Jesus, the second patterned after Jesus and Buddha). Now, it could be the case that these two were more selfish than they appear at first, because they were motivated by reward in the afterlife. Well, perhaps to a degree, but that line of argument mostly fails as a complete explanation (and even if true, could also potentially become a strategy).
Finally, brain inspired ANNs != human brains. We can take inspiration from the best examples of human capabilities and qualities while avoiding the worst, and then extrapolate to superhuman dimensions.
Altruism can be formalized by group decision/utility functions, where the agent's utility function implements some approximation of the ideal aggregate of some vector of N individual utility functions (à la mechanism design, and Clarke tax-style policies in particular).
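For concreteness, here is a toy Clarke-tax (pivot) mechanism of the sort this comment gestures at: the group picks the outcome with the highest total reported value, and each agent pays the loss its participation imposes on everyone else. The agents, outcomes, and numbers are invented for illustration.

```python
def clarke_tax(reported_values):
    """reported_values[agent][outcome] -> value. Returns (chosen outcome, taxes)."""
    outcomes = next(iter(reported_values.values())).keys()
    total = {o: sum(v[o] for v in reported_values.values()) for o in outcomes}
    chosen = max(total, key=total.get)

    taxes = {}
    for agent, values in reported_values.items():
        # What the others would have gotten without this agent, minus what they
        # actually get under the chosen outcome: the externality this agent imposes.
        others = {o: total[o] - values[o] for o in outcomes}
        taxes[agent] = max(others.values()) - others[chosen]
    return chosen, taxes

votes = {
    "alice": {"park": 30, "parking_lot": 0},
    "bob":   {"park": 0,  "parking_lot": 25},
    "carol": {"park": 10, "parking_lot": 0},
}
print(clarke_tax(votes))  # park wins; alice is pivotal and pays 15, the others pay 0
```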
We explore AGI mind space and eventually create millions and then billions of super-wise/smart/benevolent AI’s. This leads to a new political system—perhaps based on fast cryptoprotocols and new approximations of ideal group decision policies from mechanism design. Operating systems as we know them are replaced with AIs which eventually become something like mental twins, friends, trusted advisers, and political representatives. The main long term objective of the new AI governance is universal resurrection—implemented perhaps in a 100 years or so by turning the moon into a large computing facility. Well before that, existing humans begin uploading into the metaverse.
The average person alive today becomes a basically immortal sim but possesses only upper human intelligence. Those who invested wisely and get in at the right time become entire civilizations unto themselves (gods) - billions or trillions of times more powerful. The power/wealth gap grows without bound. It’s like Jesus said: “To him who has is given more, and from him who has nothing is taken all.”
However, allocating all of future wealth based on however much wealth someone had on the eve of the singularity is probably sub-optimal. The best case would probably also involve some sort of social welfare allocation policy, where the AIs spend a bunch of time evaluating and judging humans to determine a share of some huge wealth allocation. All the dead people who are recreated as sims will need wealth/resources, so decisions need to be made concerning how much wealth each person gets in the afterlife. There are very strong arguments for the need for wealth/money as an intrinsic component of any practical distributed group decision mechanism.
Perhaps the strongest argument against UFAI likelihood is sim-anthropic: the benevolent posthuman civs/gods (re)create far more historical observers than the UFAIs, as part of universal resurrection. Of course, this still depends on us doing everything in our power to create FAI.
Thanks for the clear explanation of your views. What do you see as the main obstacles to achieving this?
I’m really worried that mere altruism isn’t enough. If the other agent is more powerful, any subtle differences in values or philosophical views between myself and the other agent could be disastrous, as they optimize the universe according to their values/views which may turn out to be highly suboptimal for me. Consider the difference between average and total utilitarianism, or different views on whether we should assume the universe must be computable, what prior/measure to put on the multiverse, or how to deal with anthropics, e.g. simulation argument.
But I don’t want them to blindly accept my current values/views either, since they may be wrong. Humans seem to have some sort of general problem solving / error correcting algorithm which we call “doing philosophy”, and maybe we can teach that to ANN-based AI more easily than we could program it by hand, so in that sense maybe ANN-based AI actually could be less “nasty” than other approaches.
To me, achieving a near optimal outcome is difficult but not impossible, given enough time, but I don't see how to get the time. The current leaders in ANN-based AI don't seem to appreciate the magnitude of the threat, or the difficulty of solving the problem. (Besides Schmidhuber, who apparently does see the threat but is ok with it? Now that Bostrom's book has been out for a year and presumably most people who are ever going to read it have already read it, I'm not sure what's going to change their minds.) Perhaps ANN-based AI could be considered more "nasty" in this sense because it seems easier to be complacent about it, thinking that when the time comes, we'll just teach them our values, whereas trying to design a de novo AGI brings up a bunch of issues like exactly what utility function to give it, or what decision theory or prior, that perhaps makes it easier to see the larger problem.
(The other main obstacle I see is the strong economic and psychological incentives to achieve AGI ASAP, but that’s the same whether we’re talking about ANN-based AI or other kinds of AI.)
My optimistic scenario above assumes not only that we solve the technical problems but also that the current political infrastructure doesn’t get in the way—and in fact just allows itself to be dissolved.
In reality, of course, I don't think it will be that simple.
There are technical problems like value learning, and then there are socio-political problems. AGI is likely to cause systemic unemployment and thus a large recession which will force politics to get involved. The ideal scenario may be a shift to increased progressive/corporate tax combined with UBI or something equivalent. In the worst cases we have full scale depression and political instability.
Related to that will be the legal decisions concerning rights for AGI (or lack thereof). AGI rights seem natural, but they will also be difficult to enforce. AGI will be hard to define, and a poor definition can easily lead to strange perverse incentives.
Then there are the folks who don’t believe in machine consciousness, or uploading, and basically will view all this as a terrible disaster. It’s probably good that we’ve raised AI risk awareness amongst academics and elites, but AI may now have mainstream branding issues.
One question/concern I have been monitoring for a while now is the response from conservative Christianity. It’s not looking good. Google “Singularity image of the beast” to get an idea.
In terms of risk, it could be that most of the risk lies in an individual or group using AGI to take over the world, not from failure of value learning itself. Many corporations are essentially dictatorships or nearly so—there is no reason for a selfish CEO to encode anyone else's values into the AGI they create. Human risk rather than technical.
You already live in a world filled with a huge sea of agents which have values different from your own. We create new generations of agents all the time, and eventually infuse them with power and responsibility. We don't need to achieve 'perfect' value alignment (which is probably not even coherent); we only need to align value distributions.
That being said, I do believe that the AGI we create will be far more aligned with our values than our children are.
The real fear is perhaps that of being left behind. The only solution to that really is to use AGI to accelerate the development of uploading.
From what I see, they have a wide spectrum of opinions. Schmidhuber is also unusual in that—for whatever reasons—he’s pretty open about his views on the long term future, whereas many other researchers are more reserved.
Also, most of the top ANN researchers do not see a clear near term path to AGI—or else they would be implementing it already. They are focused on extending out from current solutions. Value learning comes later, in terms of natural engineering dependencies.
Well yes—in the ANN approach that is the most likely solution. And actually it's the most likely solution regardless, because designing a human-complexity utility/value function by hand is just not workable.
What kind of problems do you think this will lead to, down the line?
This is true, but:
I’m not comparing ANN-based AGI to the status quo, but to a future with some sort of near-optimal FAI.
The new agents we currently create aren’t much more powerful than ourselves, and cannot take over the universe and foreclose the possibility of a better outcome.
Humans or humanity as a whole seem capable of making moral and philosophical progress, and this capability is likely to persist in future generations. I’m not sure the same will be true of ANN-based AGIs.
I look forward to your post explaining this, but again my fear is that since to a large extent I don’t know what my own values are (especially when it comes to post-Singularity problems like how to reorganize the universe on a large scale, i.e., whether we should run it according to Eliezer’s Fun Theory, or convert it to hedonium, or what sort of hedonium exactly, or to spend most of the resources available to me on some sort of attempt to break out of any potential simulations we might be in, or run simulations of my own), straightforward approaches at value learning won’t work when it comes to people like me, and there won’t be time to work out how to teach the AGI to solve these and other philosophical problems.
Because we care about preserving our personal identities whereas many AGIs probably won’t, AGIs will be faced with fewer constraints when it comes to improving themselves or designing new generations of AGIs, and along with a time advantage that is likely quite large in subjective time, this probably means that AGIs will always have a large advantage in intelligence until they reach the maximum feasible level in this universe and human uploads slowly catch up. Are you not worried that during this time, the AGIs will take over the universe and reorganize it according to their imperfect understanding of our values, which will look disastrous when we become superintelligences ourselves and figure out what we really want?
Hopefully none—but the conservative protestant faction seems to have considerable political power in the US, which could lead to policy blunders. Due to that one stupid book (Revelation), the xian biblical worldview is almost programmed to lash out at any future system which offers actual immortality. The controversy over stem cells and cloning is perhaps just the beginning.
On the other hand, out of all religions, liberal xtianity is perhaps closest to transhumanism, and could be its greatest ally.
As an example, consider this quote:
This sounds like something a transhumanist might say, but it’s actually from C.S. Lewis:
Divinization or apotheosis is one of the main belief currents underlying xtianity, emphasized to varying degrees across sub-variations and across time.
The practical real world FAI that we can create is going to be a civilization that evolves from what we have now—a complex system of agents and hierarchies of agents. ANN-based AGI is a new component, but there is more to a civilization than just the brain hardware.
Humanity today is enormously more powerful than our ancestors from say a few thousand years ago. AGI just continues the exponential time-acceleration trend, it doesn’t necessarily change the trend.
From the perspective of humanity of a thousand years ago, friendliness mainly boils down to a single factor: will the future posthuman civ resurrect them into a heaven sim?
Why not?
One of the main implications of the brain being a ULM is that friendliness is not just a hardware issue. There is a hardware component in terms of the value learning subsystem, but once you solve that, it is mostly a software issue. It’s a culture/worldview/education issue. The memetic software of humanity is the same software that we will instill into AGI.
I don’t see how that is a problem. You may not know yourself completely, but have some estimation or distribution over your values. As long as you continue to exist into the future, and as long as you have a significant share in the future decision structure (ie wealth or voting rights), then that should suffice—you will have time to figure out your long term values.
This is a potential worry, but it can probably be prevented.
The brain is reasonably efficient in terms of intelligence per unit energy. Brains evolved from the bottom up, and biological cells are near optimal nanocomputers (near optimal in terms of both storage density in DNA, and near optimal in terms of energy cost per irreversible bit op in DNA copying and protein computations). The energetic cost of computation in brains and modern computers alike is dominated by wire energy dissipation in terms of bits/J/mm. Moore's law is approaching its end, which will result in hardware that is on par with or a little better than the brain. With huge investments into software cleverness, we can close the gap and achieve AGI. In 5 years or so, let's say that 1 AGI runs amortized on 1 GPU (neuromorphics doesn't change this picture dramatically). That means an AGI will only require 100 watts of energy and say $1,000/year. That is about a 100x productivity increase, but in a pinch humans can survive on only $10,000 a year.
Today the foundry industry produces about 10 million mid-high end GPUs per year. There are about 100 million human births per year, and around 4 million per year in the US. Of course if we consider only humans with IQ > 135, then there are only 1 million high IQ humans born per year. This puts some constraints on the likely transition time, and it is likely measured in years.
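Putting the parent comment's numbers in one place (these are the commenter's assumptions, not established facts), a quick back-of-envelope check:

```python
# Energy cost of one GPU-hosted AGI, under the assumptions above.
gpu_power_watts = 100
hours_per_year = 24 * 365
electricity_usd_per_kwh = 0.10   # assumed electricity rate

energy_cost = gpu_power_watts / 1000 * hours_per_year * electricity_usd_per_kwh
print(f"~${energy_cost:.0f}/year in electricity per GPU-hosted AGI")  # ~$88

# Supply-side constraints on the transition.
gpus_per_year = 10_000_000            # rough mid/high-end GPU output, per the comment
human_births_per_year = 100_000_000
high_iq_births_per_year = 1_000_000   # births with IQ > 135, per the comment

print(gpus_per_year / high_iq_births_per_year)   # 10x the high-IQ birth rate
print(human_births_per_year / gpus_per_year)     # ~10 years of output per birth cohort
```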
We don’t need to instill values so perfectly that we can rely on our AGI to solve all of our problems until the end of time—we just need AGI to be similar enough to us that it can function as at least a replacement for future human generations and fulfill the game theoretic pact across time of FAI/god/resurrection.
There’s some truth in the first half of that, but I’m not so sure about the second. Expecting that God will at some point transform us into something beyond present-day humanity is a very different thing from planning to make that transformation ourselves. That whole “playing God” accusation probably gets worse, rather than better, if you’re actually expecting God to do the thing in question on his own terms and his own schedule.
For a far-from-perfect analogy, consider the interaction between creationism and climate change. You might say: Those who fear that human activity might lead to disastrous changes in the climate, including serious harm to humanity, should find their greatest allies in those who believe that in the past God brought about a disastrous change in the earth’s climate and wrought serious harm to humanity. But, no, of course it doesn’t actually work that way; what actually happens is that creationists say “human activity can’t harm the climate much; God promised no more worldwide floods” or “the alleged human influence on climate is on a long timescale, and God will be wrapping everything up soon anyway”.
Not necessarily. There is this whole idea that we are god, or some aspect of god—as Jesus famously said, "Is it not written in your law, I said, Ye are gods?" There is also the interesting concept in early xtianity that Christ became a sort of distributed mind—that the church is literally the risen Christ. Teilhard de Chardin gave a modern spin on that old idea. See also the assimilation saying. Paul thought something similar when he said things like "It is no longer I who live, but Christ who lives in me." So there is this strong tradition that Christ is something that can inhabit people. In that tradition (which really is the most authentic) god builds the kingdom through humans. Equating the 'kingdom' with a positive singularity is a no-brainer.
Yes the literalist faction will always wait for some external event, and to them Christ is a singular physical being, but that isn’t the high IQ faction of xtianity.
Creationists are biblical literalists—any hope for an ally is in the more sophisticated liberal variants.
Different configurations of artificial neurons (e.g., RNNs vs CNNs) are better at learning different things. If you build an AGI and don’t test whether it can learn to do philosophy, it may not be able to learn to do philosophy very well. In the rush to build AGIs in order to reap the economic benefits, people probably won’t have time to test for this.
I’m guessing that AGIs will have a very different distribution of capabilities from humans (e.g., they’ll have much more working memory, and be able to do complex calculations instantaneously and with very low error, but bad at certain things that we neglect to optimize for when building them) so they’ll probably develop a different set of memetic software that’s more optimal for them.
I guess that could potentially work while AGIs are maxed out at human level or slightly beyond and costing $1000/year, but I'm not very optimistic that any social structure we come up with could preserve our share of the universe as the AGIs improve themselves and become more powerful. For example, if an AGI or a group of AGIs figures out a way to colonize the universe using resources under their sole control, why would they give the rest of us a share?
Surely there are lots of foundries (Intel’s for example) that could be retooled to build GPUs if it became profitable to do so?
The hope is that we use this time to develop the necessary social structures to prevent AGIs from taking over the universe (without giving us a significant share of it)?
AGI to me is synonymous with a universal learning machine, and in particular with a ULM that learns at human capability. Philosophy is highly unlikely to require any specialized structures—because humans do philosophy with the same general cortical circuitry that’s used for everything else.
This is a potential problem, but the solution comes naturally if you—do the unthinkable for LWers—and think of AGI as persons/citizens. States invest heavily into educating new citizens beyond just economic productivity, as new people have rights and control privileges, so it’s important to ensure a certain level of value alignment with the state/society at large.
In particular—and this is key—we do not allow religions or corporations to raise people with arbitrary values.
Yeah—but we only need to manage the transition until human uploading. Uploading has enormous economic value—it is the killer derived app for AGI tech, and brain-inspired AGI in particular. It seems far off now mainly because AGI still seems far off, but given AGI, change will happen quickly: first there will be a large wealth transfer to those who developed AGI and/or predicted it, and consequently uploading will become up-prioritized.
Yeah—it could be pumped up to 10x current output fairly easily, and perhaps even 100x given a few years.
I expect that individual companies will develop their own training/educational protocols. Government will need some significant prodding to get involved quickly, otherwise they will move very slowly. So the first corps or groups to develop AGI could have a great deal of influence.
One variable of interest—which I am uncertain of—is the timetable involved in forcing a key decision through the court system. For example—say company X creates AGI. Somebody then sues them on behalf of their AGIs for child neglect or rights violation or whatever—how long does it take the court to decide if and what types of software could be considered citizens? The difference between 1 year and say 10 could be quite significant.
At the moment it looks like the most straightforward route to having high leverage over the future is to be involved in the creation of AGI.
I also have some hope that philosophy ability essentially comes “for free” with general intelligence, but I’m not sure I want to bet the future of the universe on it. Also, an AGI may be capable of learning to do philosophy, but isn’t motivated to do it, or isn’t motivated to follow the implications of its own philosophical reasoning. A lot of humans for example don’t seem to have much interest in philosophy, but instead things like maximizing wealth and status.
Do you have detailed ideas of how that would work? For example if in 2030, we can make a copy of an AGI for $1000 (cost of a GPU) and that cost keeps decreasing, do we give each of them an equal vote? How do we enforce AGI rights and responsibilities if eventually anyone could buy a GPU card, download some open source software and make a new AGI?
I argued in a previous comment that it's unlikely that uploads will be able to match AGIs in intelligence until AGIs reach the maximum feasible level allowed by physics and uploads catch up, but I don't think you responded to that argument. If I'm correct in this, it doesn't seem like the development of uploading tech will make any difference. Why do you think it's a crucial threshold?
Even 10 years seem too optimistic to me. I think a better bet, if we want to take this approach, would be to convince governments to pass laws ahead of time, or prepare them to pass the necessary laws quickly once we get AGIs. But again, what laws would you want these to be, in detail?
Yep, nothing at all!
Oh yes.
A month ago I touched on this topic in “The Brain as a Universal Learning Machine”. I intend to soon write a post or two specifically focusing on near term predictions for the future of DL AI leading to AGI. My main counterintuitive point is that the brain is actually not that powerful at all at the circuit level.
Quite possible, even quite likely. I think that nature is trying to tell us this, by just how bad we humans are at arithmetic, for example.
It’s not the algorithms, it’s the circuitry itself that is inefficient. Signals propagate slowly through the brain. They require chemical reactions. Neurons are actually fairly big. You could fill the same space with many smaller transistors.
Here comes the future, unevenly distributed. For crime-fighting purposes, Kuwait intends to record the genome of all of its citizens.
Random analysis! From the fact that they anticipate using $400 million to record and track about 4 million people, you can tell they are talking about using microarrays to log SNP profiles (like 23andme), or microsatellite repeat lengths, or some otherwise cheap and easy marker-based approach, rather than de novo sequencing. De novo sequencing that many people would generate more human DNA sequence data than has ever been produced in the history of the world, would clog up the current world complement of high-throughput sequencers for a long time, would be no more useful for legal purposes, and would probably cost $40 billion+ (probably more to develop infrastructure).
Iceland has managed to guess the complete sequence for all of its residents from SNPs by getting complete sequences of 3%. (Not that crime-fighting would use anything more than SNPs.)
Does not compute.
You can “guess” some statistical averages for the whole population, but you cannot “guess” the complete sequence for any particular individual.
Of course you can. If you have a giant complete pedigree for most or all of the population and you have SNPs or whole-genomes for a small fraction of the members, and especially if it's a highly homogeneous population, then you can impute full genomes with varying but still-far-better-than-whole-population-base-rate accuracy for any particular entry (person) in the family tree. They're all highly correlated. This is no odder than noting that you can infer a lot about a parent's genome from one or two children's genomes despite never seeing the parent's genome. Your first cousin's genome says a lot about your genome, and even more if one can put it into a family tree and also has one of your grandparent's genomes. And if you have all the family trees and samples from most of them...
(This will not work too well for Kuwait since while the citizens may be highly inbred, they do not have the same genealogical records, and citizens are, IIRC, outnumbered by resident foreigners who are drawn from all over the world and especially poor countries. But it does work for Iceland.)
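A toy single-SNP version of the idea, to make the "imputation from relatives" point concrete (my illustration, not deCODE's actual method, which works jointly over whole pedigrees and millions of linked markers): given one parent's genotype and a few children's genotypes, Bayes' rule often pins the other, unsequenced parent's genotype down almost completely.

```python
TRANSMIT_A = {"AA": 1.0, "Aa": 0.5, "aa": 0.0}   # P(parent passes allele A)

def child_prob(child, parent1, parent2):
    """Mendelian probability of a child's genotype given both parents."""
    t1, t2 = TRANSMIT_A[parent1], TRANSMIT_A[parent2]
    return {
        "AA": t1 * t2,
        "Aa": t1 * (1 - t2) + (1 - t1) * t2,
        "aa": (1 - t1) * (1 - t2),
    }[child]

def impute_parent(known_parent, children, allele_freq=0.3):
    """Posterior over the unknown parent's genotype (Hardy-Weinberg prior)."""
    p = allele_freq
    prior = {"AA": p * p, "Aa": 2 * p * (1 - p), "aa": (1 - p) ** 2}
    post = {}
    for g in prior:
        like = 1.0
        for c in children:
            like *= child_prob(c, known_parent, g)
        post[g] = prior[g] * like
    z = sum(post.values())
    return {g: v / z for g, v in post.items()} if z else post

# Two 'aa' children with an 'Aa' known parent force the unknown parent to carry 'a'.
print(impute_parent("Aa", ["aa", "aa", "Aa"]))
```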
All the coverage says that they used pedigrees, but I’d think that they could be reconstructed from SNPs, rather more accurately.
Throwing away data is rarely helpful.
True. But when the OP says “guess the complete sequence” I assume a much higher accuracy than just somewhat better than the base rate.
You can produce an estimate for the full sequence just on the basis of knowing that the subject is human (with some low accuracy), you can produce a better estimate if you know the subject’s race, you can produce an even better one if you know the specific ethnic background, etc. It’s still a statistical estimate and as such is quite different from actually sequencing the DNA of a specific individual.
How much higher would that be and how do you know the Icelandic imputations do not meet your standards?
An 'actual' sequence is itself a 'statistical estimate', since even with 30x coverage there will still be a lot of errors... (It's statistics all the way down, is what I'm saying.) For many purposes, the imputation can be good enough. DNA databases have already shown their utility in tracking down criminals who are not sampled in them but whose relatives are. From a Kuwaiti perspective, your quibbles are uninteresting.
You don’t look like a Kuwaiti :-P And, of course, interestingness is in the eye of the beholder...
DSCOVR is finally at L1 and transmitting back photos. I’m using that one as my new desktop background.
I remember being excited about this more than a decade ago; it’s somewhat horrifying to realize that it took longer than New Horizons to reach its destination, though it was traveling through politics, rather than space.
(The non-spectacle value of this mission is at least twofold: the other side of it does solar measurements and replaces earlier CME early warning systems, and this side of it gives us a single temperature and albedo measurement for the Earth, helping with a handful of problems in climate measurement, and thus helping with climate modeling.)
You can see the smoke from the record-breaking recent Canadian and Alaskan wildfires in the photos. Those clouds drifted all the way over here to North Carolina shortly after those pictures were taken.
I'd really like to see the photos taken in the 7 other wavelength bands, esp. near infrared, and compare them to the RGB picture. One should be able to see clouds and oceans in the IR too.
This question is inspired by the surprisingly complicated Wikipedia page on correlation and dependence. Can you explain distance correlation and Brownian covariance, as well as the 'randomized dependence coefficient', in layman's terms, and their applications, particularly for rationalists? How about the 'correlation ratio', 'polychoric correlation', and 'coefficient of determination'?
All your links are belong to wrongness. Please delete the ‘www’ before en. in en.wikipedia.
Clarity, you have a large number of comments with incorrect Wikipedia links. Your “introspective illusion” comment directly above this one does it correctly. You clearly are capable of generating functional links to Wikipedia pages.
Please take a few minutes to make your recent comments less frustrating to read. It is frankly astounding that so many people have given you this feedback and you are still posting these broken links.
This post would need to be in response to his post (not a lower level replier) or he would not get a notification about it.
A first broad attempt.
The stage is set up in this way: you observe two sets of data that your model indicates come from two distinct sources. The question is: are the two sets related in any way? If so, how much? The 'measure' of this is usually called correlation.
From an objective Bayesian point of view, it doesn't make much sense to talk about correlation between two random variables (it makes no sense to talk about random variables either, but that's another story), because correlation is always model dependent, and probabilities are epistemic. Two agents observing the same phenomenon, having different information about it, may very well come to totally opposite conclusions.
From a frequentist point of view, though, the correlation between variables expresses an objective quantity, and all the measures that you mention are attempts at finding out how much correlation there is, making more or less explicit assumptions about your model.
If you think that the two sources are linearly related, then the Pearson coefficient will tell you how much the data supports the model.
If you think the two variables come from a continuous normal distribution, but you can only observe their integer values, you use polychoric correlation. And so on...
Depending on the assumptions you make, there are different measures of how correlated the data are.
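A small illustration of why the extra measures exist (my sketch, not from the parent comment; the distance-correlation code follows the standard double-centering definition, and for real use a maintained library may be preferable): Pearson's r only scores the best linear fit, so it misses a perfect but nonlinear dependence that distance correlation detects.

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import pdist, squareform

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D arrays (double-centering definition)."""
    a = squareform(pdist(x.reshape(-1, 1)))   # pairwise |x_i - x_j|
    b = squareform(pdist(y.reshape(-1, 1)))
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x ** 2                          # perfectly dependent, but not linearly

print(pearsonr(x, y)[0])            # near 0: no linear relationship detected
print(distance_correlation(x, y))   # clearly positive: dependence detected
```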
Is EY's Cognitive Trope Therapy for real or a parody?
It sounds parodistic yet comes across as weirdly workable. There is a voice in my head telling me I should not respect myself until I become more of a classical tough-guy type, full of courage and strength. However, it does not sound like my father did. It sounds a lot like a teenage bully, actually. My father sounded a lot more like "show yourself respect by expecting a bit more courage or endurance from yourself". Hm. Carl Jung would have a field day with it.
Two quotes come to mind (emphasis added) —
— Aleister Crowley, Magick in Theory and Practice
— Homestuck
I am not sure about Crowley’s point—the mind being the great enemy as in the mind making all sorts of excuses and rationalizations? That is almost trivially true, however, I think using other parts of the mind to defeat these parts may work better than shutting the whole thing down because then what else can we work with?
It is similar to taking acid. Why do some, but only some people have really deep satori experiences from acid? Acid is just a hallucinogen. It is not supposed to do much. But sometimes the hallucinations overload and shut down big parts of the mind and then we pay attention to the rest and this can lead into the kinds of ego-loss, one-with-everything insights. However, isn’t it really a brute-force way? It’s like wearing a blindfold for months to improve our hearing.
One might ask the same question of HPMOR.
He’s being serious, but not solemn.
It’s for real.
If you want to dig deeper into the idea of seeing your life as a story read the Hero’s Journey by Joseph Campbell and associated literature.
But that one is about how myths and legends around the world seem to follow the same pattern. And then we saw Tolkien and George Lucas following it consciously with LOTR and Star Wars, and then Harry Potter, The Matrix, etc. were modelled on those works. Campbell did figure out an ancient pattern for truly immersive entertainment, that much is for sure.
But did Campbell really come up with the idea that Average Guy could also use myths about legendary heroes to reflect upon and improve his own rather petty life? I don't think in the past people were taking self-help advice from Heracles and Achilles, or in the modern world from Neo and Luke Skywalker… it must have been obvious that you, as Mr. Average Guy, are not cut from the same mold as them—besides, they are fiction anyway, right?
I don’t know how the ancient Greeks related to their legends (although I’m sure that historians of the period do, and it would be worth knowing what they say), but The Matrix and Star Wars are certainly used in that way. Just google “red pill”, or “Do or do not. There is no try.” And these things aren’t just made up by the storytellers. The ideas have long histories.
Literature is full of such practical morality. That is one of its primary functions, from children’s fairy tales (“The Ugly Duckling”, “The Little Red Hen”, “Stone Soup”) to high literature (e.g. Dostoevsky, Dickens, “1984″). Peter Watts (“Blindsight”) isn’t just writing an entertaining story, he’s presenting ideas about the nature of mind and consciousness. Golden age sensuwunda SF is saying “we can and will make the world and ourselves vastly better”, and has indeed been an inspiration to some of those who went out and did that.
Whenever you think you’re just being entertained, look again.
I'm not sure to what extent Campbell personally advocated the Hero's Journey to be used by "Mr. Average Guy", but various NLP folks I know refer to the Hero's Journey in that regard. Stephen Gilligan and Robert Dilts wrote http://www.amazon.com/The-Heros-Journey-Voyage-Discovery/dp/1845902866 Of course, then the average guy stops being the average guy. In Eliezer's words, he starts taking heroic responsibility.
What I always feel like a character should do in that situation (technology permitting) is to turn on a tape recorder, fight the villain, and listen to what they have to say afterwards. And then try to figure out how to fix the problems the villain is pointing out instead of just feeling bad about themselves.
I guess that sort of works for this. You could write down what the voice in your head is saying, and then read it when you’re not feeling terrible about yourself. And discuss it with other people and see what they think.
The problem with just trusting someone else is that unless you are already on your deathbed, and sometimes not even then, there is nothing you can say where their response will be “killing yourself would probably be a good idea”. There is no correlation between their response and the truth, so asking them is worthless.
I think it's completely serious, and a good idea. And "se non è vero, è ben trovato"—if it's not true, it's well invented. I'm never without my Cudgel of Modus Tollens.
One of our cats (really, my cat) escaped a few days ago after a cat carrier accident. In between working to find her and having emotional breakdowns, I find myself wanting to know what the actual odds of recovering her are. I can find statistics for “the percentage of pets at a shelter for whom original owners were found”, but not “the percentage of lost pets that eventually make it back to their owners by any means.” Can anyone do better? I don’t like fighting unknown odds.
Additionally, if anyone has experience-based advice for locating lost pets—specifically an overly anxious indoor cat escaped outdoors—it would be helpful. We have fliers up around the neighborhood, cat traps in the woods where we believe she's hiding, and trail cameras set up to try to confirm her location. Foot searches are difficult because of the heat and terrain (I came back with heat exhaustion the first day). I guess what I'm specifically looking for from LW is "here is something you should do that you're overlooking because bias X/trying to try/similar."
In my one experience with such a situation, we found our cat (also female, but an outdoor cat) a few days later in a nearby tree. I’ve seen evidence that other cats also may stay in a single tree for days when scared, notably when a neighbor’s indoor cat escaped and was found days later stuck up a tree. Climbing down is more difficult than climbing up, so inexperienced cats getting stuck in trees is somewhat common. My best advice is to check all the nearby trees very thoroughly.
Also, food-related sounds may encourage her to approach, if there are any she is accustomed to, such as food rattling in a dish or tapping on a can of cat food with a fork.
Here are some links I compiled on this topic recently when my cousin lost her cat. Best of luck!
TIPS
http://www.missingpetpartnership.org/recovery-tips/lost-cat-shelter-tip-sheet/ http://www.missingpetpartnership.org/recovery-tips/lost-cat-behavior/ http://www.catsinthebag.org/
(CONSULTING) DETECTIVES
http://www.missingpetpartnership.org/lost-pet-help/find-a-pet-detective/pet-detective-directory/ http://www.getmycat.com/pet-detective-database/ (not all consult via phone & email, but it seems many do, e.g. http://www.catprofiler.com/services.html)
eBOOKS
The following book apparently has an epilogue regarding finding missing pets: http://smile.amazon.com/Pet-Tracker-Amazing-Rachel-Detective-ebook/dp/B00UNPGD9Y/ (there’s also an older, dead-tree edition called The Lost Pet Chronicles—Adventures of a K-9 Cop Turned Pet Detective)
http://smile.amazon.com/Three-Retrievers-Guide-Finding-Your/dp/1489577874/ http://www.sherlockbones.com/ http://www.lostcatfinder.com/lost_cat_finder/search_tips.html
FORUM: https://groups.yahoo.com/neo/groups/MissingCatAssistance/info
Looks like we chased the same set of links....I have most of those open in tabs right now. Thank you, though. We’re still searching. Supposedly, frightened indoor cats can spend 10-12 days in hiding before hunger drives them out. We’re at day eight now. It feels about five times as long as that.
Did your cousin’s cat make it home?
She did, yes. It took 9 days and predictably she lost some weight, but she’s otherwise ok. Anyway, I hope you can report similarly good news yourself soon.
I hope so too. We’re up to day 11 now. -_-
How did they get the cat back?
On the last night while searching at the end of the road she lives on, my cousin noticed some movement by a mostly empty lot and when she approached she saw Lily (the cat) run into some weeds there. I wish I could say there was “one weird trick” that definitely helped, but it was actually more like a flurry of facebooking—as much for getting emotional support as for finding leads—and being vigilant enough to be in a position to get lucky.
I recommend that you contact local shelters and search their lost & found sections. Craigslist also has a good lost & found section.
Useful info here, even if you don’t live in Boston: http://www.mspca.org/adoption/boston/lost-and-found/lost.html
In addition to talking to animal shelters, checking in with local veterinarians could be useful as well.
If you think you have come up with a solid, evidence-based reason that you personally should be furious, self-hating, or miserable, bear in mind that these conditions may make you unusually prone to confirmation bias.
Doesn’t every strong emotion take up cognitive capacity that is then unavailable for critical thought? Why do you single out fury, self-hate and being miserable?
It’s not just a matter of cognitive capacity being occupied; it’s a matter of some emotional tendencies being self-limiting while others are self-reinforcing. Miserable people seem to often look for reasons to be miserable; angry people often do obnoxious things to others, which puts the angry person in situations that provoke further anger.
Tim Ferriss interviews Josh Waitzkin
The whole thing is interesting, but there’s a section which might be especially interesting to rationalists about observing sunk cost fallacies about one’s own strategies—having an idea that looks good and getting so attached to it that one fails to notice the idea is no longer as good as it looked at the beginning.
Unfortunately, I can’t find the section quickly—I hope someone else does and posts the time stamp.
What I was wondering lately is whether the sunk cost fallacy and commitment devices are two sides of the same coin. Sometimes people need to abandon dysfunctional projects no matter how much they have invested; on the other hand, motivating yourself not to abandon a good habit is hard, and one way to do that is to sunk-cost-trip yourself—commitment devices like chains.cc, habit diaries, and so on work more or less that way.
This sounds a lot like that kind of second-order rationality that according to EY does not exist: these commitment devices work by focusing on an irrational argument (“don’t break the chain now, look at what a good record you have so far”) instead of a rational one (“it makes no sense to abandon this good habit now”) because our brain is wired so that it takes the irrational one far easier...
Does anyone know if there is any data on the political views of the Effective Altruist community? Can’t find it in the EA survey.
There is an interesting startup that is about trying to turn cities into villages by getting neighbors to help each other. You need to verify your address via a scanned document, a neighbor, or a code on a postcard they send you. I think the primary reason they find that verification important is that people are allowed to see the full name, picture, and address of people in their own neighborhood, and they probably don't want to share that with people who are not actually neighbors. This seems to be the key selling point of the startup—this is how it differs from any basic neighborhood-based Facebook group: you really get to see each other's face, name, and address, and people outside your hood really don't get to see it, so you can be fairly comfortable about sharing it. Besides, you can choose a few categories of how you can help others, e.g. babysitting, petsitting, etc., and what kind of common activities you would be interested in.
Here is the bad news: the startup is currently only available in German and only in the city of Vienna, probably due to the postcard thing. They managed to find investors, so it is likely they will have an English version and extend it all over the world; in that case they will probably change the name as well. Currently the name is fragnebenan.com, but I have no idea when this will happen.
Anyway, I was thinking primarily that Rationalists in Berlin may take an interest in this and help them extend fragnebenan.com to Berlin?
This seems quite absurd. Why would I give my data to an obscure startup (who’ll probably sell it sooner or later) and hope people in my neighborhood make the same choice, when I can probably have way better results simply inviting my neighbors for a BBQ?
How many barbeques have you actually thrown?
Of the barbeques you have thrown, how many of those have led to mutually beneficial arrangements?
Of those that have led to mutually beneficial arrangements, how many per BBQ?
Now, how much time have you put into arranging those BBQs vs. the value gotten from them?
I don’t know about your answer, but for me (substituting dinner parties for BBQs) the answers respectively are probably about 10, 3, less than one, and WAYYY TOO MUCH (if these types of arrangements were my only justification for throwing dinner parties).
Now contrast this to how much time I’ve spent going through the free stuff offered on craigslist, vs the value I’ve gotten from it. The effort/value ratio is probably inverse. I think a startup that takes the “free services/free stuff” part of craigslist, but solves the unique problems of that segment (similar to what AirBNB has done for housing) could offer significant value.
I didn’t do mere BBQs but threw full-on parties with the neighbors (who I didn’t know at all) and other friends. Later two shared apartments in the same house combined held a huge party that spanned the house and included many of the neighbors. Many good friendships came out of that, and a couple of us moved in together later.
The BBQ idea is just a low-threshold variant of that which doesn’t require copious amounts of alcohol.
For free stuff, we just have a place in the staircase where people drop things that are still good but not needed by their previous owner (mostly books). This works with zero explicit coordination.
I’m kind of amazed/impressed that this works, based on my experience of communal spaces. Don’t people ever leave junk that they can’t be bothered to get rid of? Does anyone adopt responsibility for getting rid of items that have been there a long time and clearly no one wants?
The bigger the party, the more investment—this does not scale the way a website does. Same thing with putting out free stuff on the steps.
A BBQ would not be allowed on my third-floor apartment’s balcony, as it would stink up the place and would be dangerous as well, and I have no idea where I could store the equipment when it’s not in use, as we don’t have much free space. And my neighbors would be very creeped out if I just rang their doorbell and invited them. We have lived in the same apartment since 2012 and have never even talked to the neighbors or had a chat. People tend to be very indifferent to each other in this apartment complex, and I have no better experience with former ones either. These guys are trying to make a site that acts as an icebreaker—if you really need dog-sitting one day you can ask there, and if someone helps you out, you have a form of connection: maybe you will have a chat afterwards, or greet each other and stop for a chat the next time you meet. The very idea is that the world is urbanizing; due to jobs and all that, people who like the more communal village lifestyle are forced into cities, where they suffer from the general indifference and impersonality, so they try to change that and make cities more village-like or suburbia-like. They try to counteract the negative psychological effects of urbanization with a “let’s open our doors to each other” theme.
As for selling data, they have the same data as my utility company: they can link a name with an address. Anyone who walks up to our house will see the name on the door anyway. And a photo, OK, that is more. But overall this is not secret data, nor very sensitive.
So don’t have the BBQ on your balcony, but down in the yard. And don’t invite people by knocking, but via old-fashioned nice and friendly handwritten paper letters or a nice and friendly written note on the inside of the building’s door. Bring a grill, a little food and drink, and invite people to contribute their own. I don’t see how this could be easier. In the worst case only two or three people will come, but that’ll be more than this site is likely to do.
I trust my utility company way more than I trust a random startup. Even Facebook, who this obviously competes with, doesn’t ask for scanned identification documents just to access basic functionality.
And you didn’t address the issue of this site only connecting you with other people who happen to also use it. This alone makes the project unable to compete with simple Facebook neighborhood groups.
But let’s assume they’re super trustworthy and there are people in my neighborhood who use this site. It still looks a lot like a “if you have a hammer, everything looks like a nail” situation. Whatever it is, throw a website and an app at it. Even if a little post-it on the inside of the apartment building’s door would do way more for way less.
We have hundreds of people in this complex. I suspect at least 50% are more extroverted than I am; at uni etc. the ratio was more like 90%. If they did not do the BBQ thing, I think I would not have much chance with it…
On trust: Facebook also does not give you your neighbors’ locations, nor a way to check whether someone claiming to be in your neighborhood is genuine.
I sort of agree to the extent that showing the address to everybody in the neighborhood is perhaps too much—people can tell each other when they need to—but verifying is IMHO a good idea because it efficiently keeps spammers out. Perhaps sharing the address with everybody in the neighborhood is a way to enforce politeness.
As for the last issue, I actually have a way to test it: I was looking for a babysitter and put up an ad, with a maximally cute baby photo, in all 12 of our stairwells. I got two applicants—out of 12 stairwells times 6 levels times, I don’t know, maybe 6 flats each. One of these days I will put up ads advertising this site, and then, if we get around 50 people there, try again. But those 2 applicants were disappointingly few to me. Of course it could be that it will be even lower on the site as well.
Bystander effect? The more people there are who could throw a party, the less likely it is that any particular one does. Be the exception.
I thought that related to stuff like accidents or other emergencies. I did a quick Google search and could not find anything that did not relate to people being in trouble and needing help. But I do see it can play a role—a certain kind of waiting for each other to start...
If people don’t care when it’s a poster on a stairwell, why are they going to start caring when it’s a message on a website?
I think “website for local area stuff” has a problem where people think they’d use it far more than they actually would. People don’t care about that sort of thing as much as they think they should, and this sort of thing is the digital equivalent of a home exercise machine that people buy, use once and then leave to moulder.
But eBay does ask you to verify addresses with postcards. Banks ask for verification of addresses.
I don’t think that post-its in the apartment building’s door are an efficient way to communicate. If I could reach all the people in my apartment digitally, I do think that would be great. The problem is rather that it’s unlikely that other people in my apartment building would sign up for such a service.
When I accept packages for neighbors, I would sometimes appreciate a digital way to contact them.
To effectively implement it in Berlin I think there are three choices:
1) Go to big landlords like degewo. Sell them on the idea that it’s an added benefit to have communities in their apartment buildings. Then let them use the website to communicate information that’s currently communicated via notices posted in the building.
2) Cooperate with government programs for neighborhood building in the Soziale Stadt category.
3) Focus on vibrant areas in Friedrichshain and Kreuzberg with a lot of young people who are eager to adopt new technology. Encourage new people who sign up to do hangouts in their houses.
Of those, 1) is likely the best strategy. It shouldn’t cost degewo much money; having a digital channel to their tenants might even save them money. Degewo runs ads, so they care about having an image of being different from other landlords.
How do you know?
It’s like not trying to pick up a girl who did not give you any indicator of interest, like a long look or a smile. Perhaps over-cautious, but avoids a lot of embarrassment.
Why do you consider that to be a high leverage action?
I don’t fully understand what high leverage means here; I just think it is cool and helps people to help each other, and extending it to another 3.5M people would be rather neat. I think they want to do it anyway, and it could be easier if they have local contacts who have learned some methods of efficiency here and tend to like startups.
“Help them expand” suggests that you propose to spend time and energy on promoting it.
It seems to me like the website only accepts people from Austria anyway.
No, I meant helping them with programming or other stuff, such as the postcard thing, so they are able to offer it elsewhere. Sorry if I did not spell it out—I thought it was obvious: if you like the idea, consider joining them as late-ish co-founders (the whole thing only dates from Nov 2014), as part owners, investors—investing sweat capital, mostly—that sort of stuff, the usual startup story.
Or maybe that is not so usual, I have no idea, but I was just thinking: if someone calls them and says “I will help you expand your customer base by 150% if you give me 10%,” or some other arrangement—is this fairly common for startups?
Actually helping them in programming and stuff like that is investing time and energy. I do focus programming time on things I consider high leverage.
You actually live in Vienna, and their programming team is in Vienna, not Berlin. You frequently say that you don’t feel that your job has any meaning. You can program.
If they just managed to find investors, they are likely not looking to raise more money at the moment. Even if they were looking for capital, there is nothing specific about the LW Berlin group when it comes to providing angel funding for an Austrian company. In that case it would also make sense to argue why that investment is better than various other possible investments.
All good points. Also you think there is not much location advantage in extending the service e.g. negotiating a low postcard price with the German Post and so on?
I will not leave a safe job for a startup (I would have considered that before we had a child, now it would be irresponsible) but I do consider contributing in the evenings, this is seriously something I could believe in.
If there are meetings you buy a plane ticket. Vienna isn’t that far from Germany.
When it comes to negotiating, the idea is to hire a good salesperson. Most of the people at our meetup are coders who aren’t highly skilled salespeople. If I were hiring for that role, I wouldn’t pick a person from our LW group.
Today there’s nothing like a real safe job. All companies lay off people from time to time. Working at a job that you like is very useful. It’s beneficial for the child to be around a dad who likes his job instead of a dad who hates his job.
The difference between a job that already pays a fixed salary vs. a startup that may pay dividends or something in the future, if it does not fold, is fairly big.
More in the direction of expertise than of a particular job. Do you know any SAP consultants? They can always find a job. I am not exactly that, but I am in a similar industry. They cannot be outsourced to India because they need local knowledge like accounting rules, and such software is so huge—the space of potential kinds of problems, industry practices, whatnot, plus domain experience, is so big—that in these types of industry experience never hits a point of diminishing marginal returns. People who do it for 30 years are more valuable than people who do it for 15.
Abandoning that kind of investment to become yet another dreamy startup Ruby on Rails type of guy? They are a dime a dozen, and young hotshots with 5 years of experience—because there is just not that much to learn—outdo the older ones. It is up or out: either you hit it big and become a Paul Graham, retiring from programming into investorship or similar stuff, or you are sooner or later forced out. In that type of world there is no real equivalent of the 50-year-old SAP logistics consultant who is seen as something of a doyen because he has dealt with every kind of crap that can happen in a project at a logistics company.
So it sounds really dangerous to abandon that kind of investment for a new start in something different.
But diversifying, using free time to contribute to a project, that could be smart—hedging bets. If the main industry (desktop business software based on domain knowledge and business process experience) somehow collapses, it makes it easier to join a different one (cool, hot, modern web-based stuff). That makes sense: getting a foot in the door of a different industry in one’s free time.
Yes, if not for the risks.
This is an excellent example of the Fallacy of Gray, don’t you think? :-)
That depends on how you think DeVliegendeHollander models the situation in his mind. Modeling people in situations like this isn’t trivial. Given the priors I have about him, there’s learned helplessness that provides a bias towards simply staying in the status quo.
In general, most decently skilled developers don’t stay unemployed for long periods of time if they are in a startup that fails.
If you read his post closely, he says that he doesn’t even consider it—that the act of considering it would be irresponsible. I don’t know enough to say that it would be the right choice for him to take that job, but I think he would profit from actually deeply considering it.
My experience is primarily not in hands-on coding, which in my business-software world tends to be really primitive (read data, verify it, sum it up, write it somewhere—it is essentially primitive scripting). I don’t think I have even seen an algorithm since school that was as complex as a quicksort, which is first-year exam material, as it is simply not done. In fact we constantly try to make frameworks where no coding is needed, just configuration, and employ non-programmer domain-expert consultants as implementation specialists, but it always fails, because people don’t properly understand that once your configuration gets so advanced that it loops over a set of data and makes if-then decisions, it is coding again—just usually in a poor coding framework. Example
Anyway, it is more about being a general troubleshooter. It is sort of difficult to explain (though actually this is the likable aspect of it, which kind of balances the less likable aspects) that I lack a job description. A manager wants a certain kind of information in a regular report. To provide it, there needs to be software—bought or developed or both (customized)—users trained, processes defined, and a bunch of other potential things, and nobody really tells you how to do it or what to do; it is just the need to achieve a result, an informational output, somehow, with any combination of technology, people and process. This is the good part of it, how open-ended it is—clearly far more than coding, and the coding part is usually primitive.
The bad part is coding the same bloody report the 100th time only slightly different… or answering the same stupid support call the 100th time because people keep making the same mistakes or forget the previous call. Of course both could be improved by reusable frameworks (often not supported by primitive technologies used), knowledge bases, or writing user manuals but that unfortunately does not depend on one guy, the obstacles to that tend to be organizational, usually short-sightedness.
Okay, then I likely underrated the skill difference between what you are currently doing and the work that exists in a startup like that.
BTW, do you have any clue where to go with this kind of skillset if I ever want to change things, or what could be a good Plan B to get a foot in the door of? There are some professions that are really isolated and have little overlap with anything else, such as doctors and lawyers, and I have the impression all this business information management is like that too. Outsiders know next to nothing about it, and insiders tend not to know much about anything else, professionally at least. Have you ever known a successful SAP, Oracle, NAV, CRM, Baan or whatever consultant who is now good at doing something else? I know one guy who held out for only three years and, I sh.t you not, threw it all away and became an undocumented (illegal) snowboarding trainer in the US in the Rockies :) But that is probably not the typical trajectory, especially not after a dozen years.
You might want to think about moving into management.
Wouldn’t that mean focusing less on the reliable parts of the thing (software, process) and far more on the people? I would have to motivate people and suchlike, and basically simulate someone who is an extrovert, likes to talk, and has that type of normal personality?
That very much depends on the particulars of a managing job and on the company’s culture. Your skills as you described them aren’t really about programming—they are about making shit happen. Management is basically about that, except that the higher you go in ranks, the less you do yourself and the more you have other people do for you. It is perfectly possible to be an effective manager without being a pep-rally style extrovert.
No, I do not think that your fallacy depends on what DVH thinks.
You’re confusing risk aversion and learned helplessness.
Another English irregular verb.
“I can see that this won’t work. You are risk-averse. He exhibits learned helplessness.”
If I’m saying something to have an effect in another person then the quality of my reasoning process depends on whether my model of the other person is correct.
It’s like debugging a phobia at a LW meetup. People complain that the language isn’t logical, but in the end the phobia is gone. The fact that the language superficially pattern-matches to fallacies is beside the point as long as it has the desired consequences.
No, I’m talking to a person who at least self-labels as schizoid and about whom I have more information beyond that.
If I thought the issue was risk aversion and I wanted to convince him, I would appeal to the value of courage. Risk aversion doesn’t prevent people from considering an option, nor does it make them see the act of considering one as irresponsible.
What result did I achieve here? I got someone who hates his job to think about whether to learn a different skillset to switch to a more enjoyable job, and to ask for advice about what he could do. He shows more agency about his situation.
LOL. Let me reformulate that: “If I’m trying to manipulate another person, I can lie and that’s “besides the point as long as it has the desired consequences”. Right? X-)
Saying “There’s no real safe job” is no lie. It is true on its surface. If my mental model of DVH is correct, it leads to an update in a direction that is more in line with reality, and saying things to move other people towards a more accurate way of seeing the world isn’t lying.
Ahem. So you are saying that if you believe that your lie is justified, it’s no lie.
Let’s try that on a example. Say, Alice is dating Bob, but you think that Bob is a dirtbag and not good for Alice. You want to move Alice “to a more accurate way of seeing the world” and so you invent a story about how Bob has a hobby of kicking kittens and is an active poster on revenge porn forums. You’re saying that this would not be lying because it will move Alice to a more accurate way of seeing Bob. Well...
No. There are two factors:
1) It’s true. There are really no 100% safe jobs.
2) The likely update by the audience is in the direction of a more accurate belief.
Getting Alice to believe that Bob is an active poster on revenge porn forums by saying it likely doesn’t fulfill either criterion 1) or criterion 2).
There is really no 100% safe anything, but I don’t think that when DVH said “I will not leave a safe job for a startup” by “safe” he meant “100% safe”.
That doesn’t prevent the statement from being true. The fact that there’s no 100% safe anything doesn’t turn the statement into a lie while the example that Lumifer provides happens to be clear lying.
I didn’t focus on what he “meant” but on my idea of what I believed his mental model to be.
I don’t think DVH’s mental models have become inaccurate in any way as a result of my communication. He didn’t pick up the belief “Startups are as safe as my current job.” I didn’t intend to get him to pick up that belief either. I don’t believe that statement myself.
My statement thus does fulfill the two criteria:
1) It’s true on its surface.
2) It didn’t lead to inaccurate beliefs in the person I’m talking with.
Statements that fulfill both of those criteria aren’t lies.
That would mean that if you say something that is literally true but intended to mislead, and someone figures that out, it’s not a lie.
I have no problem with including intentions as a third criterion, but in general “see that your intentions aren’t to mislead” is very similar to “see that you reach an outcome where the audience isn’t misled,” so I don’t list it separately.
It doesn’t (though it does mostly prevent it from being useful), but the statement you made upthread was not that one. It was “Today there’s nothing like a real safe job”, in which context “safe” would normally be taken to mean something like “reasonably safe”, not “exactly 100% safe”.
What do you mean by “on its surface”? What matters is if it’s true in its most likely reasonable interpretation in its context.
Meh. Enough with the wordplays and let’s get quantitative. What do you think the P(DVH will lose his current job before he wants to|he doesn’t leave it for a startup) is? What do you think he thinks it is?
I didn’t just say “safe”; I added the qualifier “real” to it. I also started the sentence with “today,” which makes it more like a general platitude. I specifically didn’t say your job isn’t safe, but made the general statement that no job is really safe.
It happens to be a general platitude commonly repeated in popular culture.
I think he didn’t have a probability estimate for that in his mind at the time I was writing those lines. When you assume he had such a thing you miss the point of the exercise.
Does anyone know a program that calculates Bayesian probabilities?
This is far too general a question; there are many programs for calculating many things with ‘Bayesian’ in their name.
can you give me an example?
One example is BUGS, which uses Gibbs sampling to do Bayesian inference in complicated statistical models.
Tell us what you have, and what you’d like to turn it into.
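If your need is simpler than full statistical inference, here is a minimal hand-rolled sketch (Python, with made-up numbers, and function names of my own invention) of applying Bayes’ rule directly to a small set of hypotheses; tools like BUGS are for the harder case where the posterior can’t simply be enumerated like this.

```python
# A minimal sketch of "calculating a Bayesian probability" by hand.
# All numbers below are invented purely for illustration.
from math import comb

def posterior(prior, likelihoods):
    """Apply Bayes' rule to a discrete set of hypotheses.

    prior: dict mapping hypothesis -> prior probability
    likelihoods: dict mapping hypothesis -> P(observed data | hypothesis)
    """
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Example: is a coin fair or biased towards heads, after seeing 8 heads in 10 flips?
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair": comb(10, 8) * 0.5**8 * 0.5**2,    # P(8 heads | p = 0.5)
    "biased": comb(10, 8) * 0.8**8 * 0.2**2,  # P(8 heads | p = 0.8)
}
print(posterior(prior, likelihoods))  # fair ≈ 0.13, biased ≈ 0.87
```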
thanks for the info!
Are there any lists (Wikipedia-style lists) of all programs that calculate Bayesian probabilities?
Turns out there is. Probably not all of the programs though.
Are you trying to do something specific or are you just curious about learning about Bayesian statistics? The software on that list probably won’t be that useful unless you already know a bit about statistics theory and have a specific problem you want to solve.
thanks!
R
Have any snake oil salesmen been right?
I usually immediately disregard anyone who has the following cluster of beliefs:
1: The relevant experts are wrong. 2: I have no relevant expertise in this area. 3: My product/idea/ invention is amazing in a world changing way. 4: I could prove it if only the man didn’t keep me down.
Characteristic 2 is somewhat optional, but I’m not sure about it. Examples of snake oil ideas include energy healing, salt water as car fuel and people who believe in a flat earth. Ignoring 2, Ludwig Boltzmann is not an example (he did not believe that proof of atoms was being suppressed).
I think this does a good job of screening out probably dumb ideas, but are there any false positives?
No, by definition. Snake oil is defined as “does not work.”
But there are examples of denigrated alternative treatments that actually worked to some extent: acupuncture, meditation, aromatherapy etc. Low-carb diets were denigrated for a long time but they’ve been shown to work at least as well as other diets. Fecal transplants have a long, weird history as an alternative therapy, including things like Bedouins eating camel feces to combat certain infections. The FDA was for a long time very restrictive and skeptical about fecal transplants in spite of lots of positive evidence of their efficacy in certain infections.
A pretty good heuristic, but it’s worthwhile to have some open-minded people who investigate these things.
Thanks for the examples:
None of these seem to fulfill 3. They seem to fall into the category of somewhat decent with lots of exaggerated claims and enthusiastic followers.
Fecal transplants are a great example, although Wikipedia says that most historical fecal therapies were consumed orally, and I don’t know if those work (I doubt it). Also, it doesn’t really fulfill 2—it was doctors who first pioneered it when it was a weird fringe treatment. And thinking something is weird/extreme and fringe is different from thinking it’s a crackpot idea. But still a good example.
The healthcare startup scene surprises me.
Why doesn’t the free home doctor service put free (bulk-billed) medical clinics out of business?
Why did MetaMed go out of business?
Regarding MetaMed:
https://thezvi.wordpress.com/2015/06/30/the-thing-and-the-symbolic-representation-of-the-thing/
https://thezvi.wordpress.com/2015/05/15/in-a-world-of-venture-capital/
MetaMed’s service was expensive. I would guess they didn’t find enough takers.
Coincidence or Correlation?
A couple of months ago, I postponed an overnight camping trip due to a gut feeing. I still haven’t taken that particular trip, having focused on other activities.
Today, my local newspaper is reporting that a body was found in that park this morning. My natural human instinct is to think “That could have been me!”… but, of course, instincts are less trustworthy than other forms of thinking.
What are the odds that I’m a low-probability-branch Everett immortality survivor? Do you think I should pay measurably more attention to such gut feelings in the future? What lessons, if any, can be drawn from these circumstances?
This sounds like a case of confirmation bias. In that if your “gut feeling” was never confirmed as something, you probably wouldn’t remember having the gut feeling. You could have been waiting every day for the rest of your life, and still not have gotten the gut-success feeling.
That doesn’t help you recalibrate about it, but I wouldn’t be listening to gut any more or less in the future.
Since you are posting, you know you are an Everett branch survivor. Whether that branch is low-probability is, of course, impossible to tell.
That depends on gut feelings, but I see no reason to update based on this particular incident.
That you should not read the crime / police blotter sections of newspapers.
Hm… how sure should anyone be of that impossibility? For example, if the number of Everett branches isn’t infinite, but merely, say, 10^120, then wouldn’t it be hypothetically possible for a worldline to have relatively few other worldlines similar enough to interact with it on the quantum level, with macroscopically observable effects?
Fair enough.
I don’t; the local region has a small enough population that the main newspaper has only a single section to cover all local stories. Unsubscribing from the RSS feed with local crime stories would also unsubscribe me from local politics, events, fluff, and so forth.
The greatness of LW.
merely 10^120 :-D
Clearly, you do. I wasn’t suggesting wearing blinders not to notice them, I suggested not reading them.
But the penultimate question is kinda answerable, isn’t it? Have a Gut Feeling Journal and see for yourself whether GF works. It should be useful for calibration, at least, and also fun.
Also, DO pay more attention to crime reports and integrate them into your planning. I would have said seek out such reports, were your newspapers more diversified.
The clusterfuck in medical science with some well-intentioned attempts to do it better, not actually well, but somewhat better.
Edited to add: a follow-up on the deworming wars (which might be of interest to EAs, as I think deworming was considered to be a very effective intervention) on this blog—and read the discussion in the comments.
From that article:
well...
As far as I can tell, utility functions are not standard in financial planning. I think this is dumb (that is, the neglect is dumb; utility functions are smart). Am I right? Sure, you don’t know the correct utility function, but see the case for made-up numbers. My guess is to use log of wealth with extra loss-aversion penalties. Wealth is something between ‘net worth’ and ‘disposable savings’.
I had reason to think about this recently from observing a debate over a certain mean/volatility tradeoff. The participants didn’t seem to realize that the right decision depends on the size of the stakes. Now you certainly could realize this intuitively, but an expected-utility calculation would guarantee that you’d pick up on it. Moreover, I tried running the problem with made-up numbers and it became clear that any financially healthy person in that situation should take the riskier higher-mean approach, the opposite conclusion to the consensus.
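To make that concrete, here is a minimal sketch (Python, all numbers invented, not the actual figures from that debate) of the expected-log-utility calculation I have in mind; the point is just that the same gamble can flip from attractive to unattractive as the stake grows relative to wealth.

```python
# A toy expected-log-utility comparison between a safe and a risky option.
# The specific returns, probabilities and wealth figures are invented.
from math import log

def expected_log_utility(wealth, stake, outcomes):
    """outcomes: list of (probability, multiplier applied to the stake)."""
    return sum(p * log(wealth - stake + stake * mult) for p, mult in outcomes)

safe  = [(1.0, 1.05)]                 # certain 5% return on the stake
risky = [(0.5, 1.70), (0.5, 0.50)]    # higher mean (10%) but volatile

wealth = 100_000
for stake in (1_000, 50_000, 95_000):
    u_safe  = expected_log_utility(wealth, stake, safe)
    u_risky = expected_log_utility(wealth, stake, risky)
    print(stake, "risky" if u_risky > u_safe else "safe")
# 1,000 -> risky; 50,000 -> safe; 95,000 -> safe
```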
Not in the first approximation, because utility is (hopefully) a monotonic function and you would end up in the same spot regardless of whether you’re maximizing utility or maximizing wealth.
Well, the first thing that the decision depends on is the risk aversion and there is no single right one-size-fits-all risk aversion parameter (or a function).
But yes, you are correct in that the size of the bet (say, as % of your total wealth) influences the risk-reward trade-off, though I suspect it’s usually rolled into the risk aversion.
Note that the market prices risks on the bet-is-a-tiny-percentage-of-total-wealth basis.
But under conditions of uncertainty, expected utility is not a monotonic function of expected wealth.
I’ll defer to the SSC link on why I think it would be better to make one up—or rather, make up a utility function that incorporates it.
Indeed. The case in question wasn’t a market-priced risk, though, as the reward was a potential tax advantage.
Under uncertainty, you must have a risk aversion parameter—even if you try to avoid specifying one, your choice will point to an implicit one.
You can also use the concept of the certainty equivalent to sorta side-step the uncertainty.
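For illustration, a minimal sketch (made-up numbers, my own variable names) of the certainty-equivalent idea under log utility: the sure wealth level that gives the same expected utility as the gamble, so the gap between it and the expected value is the implicit price of the risk.

```python
# Certainty equivalent of a gamble under log utility: the sure amount of
# final wealth with the same expected utility as the gamble. Numbers invented.
from math import exp, log

outcomes = [(0.5, 130_000), (0.5, 85_000)]           # (probability, final wealth)
expected_wealth = sum(p * w for p, w in outcomes)     # 107,500
certainty_equivalent = exp(sum(p * log(w) for p, w in outcomes))

print(round(expected_wealth), round(certainty_equivalent))
# The gap between the two (~2,400 here) is the implicit price of the risk.
```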
A made-up risk aversion parameter might also be a reasonable way to go about things, though making up a utility function and using the implicit risk aversion from that seems easier. The personal financial planning advice I’ve seen doesn’t use any quantitative approach whatsoever to price risk, which leads to people just going with their gut, which is what I’m calling dumb.
Um, I feel there is some confusion here. First, let’s make distinct what I’ll call a broad utility function and a narrow utility function. The argument to the broad utility function is the whole state of the universe and it outputs how much do you like this particular state of the entire world. The argument to the narrow utility function is a specific, certain amount of something, usually money, and it outputs how much you like this something regardless of the state of the rest of the world.
The broad utility function does include risk aversion, but it is.. not very practical.
The narrow utility function is quite separate from risk aversion and neither of them implies the other one. And they are different conceptually—the narrow utility function determines how much you like/need something, while the risk aversion function determines your trade-offs between value and uncertainty.
Well, I don’t expect personal financial planning advice to be of high quality (unless you’re what’s called “a high net worth individual” :-D), but its recommendations usually imply a certain price of risk. For example, if a financial planner recommends a 60% stocks / 40% bonds mix over a 100% stocks portfolio, that implies a specific risk aversion parameter.
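As a rough sketch of what “implies a specific risk aversion parameter” means, here is the standard mean-variance rule w* = (mu − r) / (gamma · sigma²) run backwards; the equity premium and volatility figures below are placeholders I made up, not a claim about actual markets or any particular planner’s assumptions.

```python
# Backing out the risk-aversion coefficient implied by a recommended stock
# allocation, using the mean-variance rule  w* = (mu - r) / (gamma * sigma^2).
# The return and volatility numbers below are invented for illustration.

def implied_gamma(stock_fraction, equity_premium, volatility):
    return equity_premium / (stock_fraction * volatility**2)

equity_premium = 0.05   # assumed excess return of stocks over bonds
volatility = 0.18       # assumed annual standard deviation of stock returns

print(implied_gamma(0.60, equity_premium, volatility))  # ~2.6 for the 60/40 advice
print(implied_gamma(1.00, equity_premium, volatility))  # ~1.5 (log utility is gamma = 1)
```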
I have realized I don’t understand the first thing about evolutionary psychology. I used to think the selfish gene of a male will want to get planted into as many wombs as possible, and that this is our most basic drive. But actually, any gene that resulted in having many children but not so many great-great-grandchildren, because the “quality” of those children was low, would get crowded out by genes that do produce many great-great-grandchildren. Having 17 sons of the Mr. Bean type may not be such a big reproductive success down the road.
Since most women managed to reproduce, we can assume a winning strategy is having a large number of daughters, but perhaps for sons the selfish gene may want quality and status more than quantity. Anecdotally, in more traditional societies what men typically want is not a huge army of children but a high-status male heir, a “crown prince.” Arab men traditionally rename themselves after their first son: Musa’s father literally renames himself “father of Musa,” Abu Musa. This sort of suggests they are less interested in quantity...
At this point I must admit I no longer have an idea what the basic biological male drive is. It is not simply unrestricted polygamy and racking up as many notches as possible. Is it some sort of sweet spot between quantity and quality, where in quality not only the genetic quality of the mother matters but also the education of the sons, i.e. investing in fathering, the amount of status that can be inherited, and so on? Which suggests more of a monogamous drive.
Besides, to make it really complicated: while the ancestral father’s genes may “assume” his daughters will be able to reproduce to full capacity, there is still value in parenting and quality generally, because if the daughter manages to catch a high-quality, attractive man, her sons may be higher-quality, more attractive guys, and thus can have a higher quantity of offspring—and basically the man’s “be a good father to my daughter” genes win at the great-grandchildren level!
This kind of modelling actually sounds like something doable with mathematics, something like game theory, right? We could figure out what the utility function of the selfish gene looks like, game-theoretically. Has this been done already?
If you’re really curious, I recommend picking up an evolutionary psychology textbook rather than speculating/seeking feedback on speculations from non-experts. Lots of people have strong opinions about Evo Psych without actually having much real knowledge about the discipline.
I don’t really believe in this anecdote; large numbers of children are definitely a point of pride in traditional cultures.
Surely you don’t think daughters are more reproductively successful than sons on average?
Surely I do—it is common knowledge today that about 40% of men and 80% of women managed to reproduce?
Every child has both a mother and a father, and there are about as many men as women, so the mean number of children is about the same for males as for females. But there are more childless men than childless women, because polygyny is more common than polyandry, ultimately because of Bateman’s principle.
But if everyone adopts this strategy, in a few generations women will by far outnumber men, and suddenly having sons is a brilliant strategy instead. You have to think about what strategies are stable in the population of strategies—as you begin to point towards with the comments about game theory. Yes, game theory has of course been used to look at this type of stuff. (I’m certainly not an expert so I won’t get into details on how.)
If you haven’t read The Selfish Gene by Richard Dawkins, it’s a fun read and great for getting into this subject matter. How The Mind Works by Steven Pinker is also a nice readable/popular intro to evolutionary psychology and covers some of the topics you’re thinking about here.
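As a toy illustration of the quantity-vs-quality point (invented numbers, no claim about real biology, and deliberately ignoring the frequency-dependence that a proper game-theoretic treatment would add), one can just compare the expected number of grandchildren under two stylized paternal strategies:

```python
# A toy quantity-vs-quality calculation, not real evolutionary biology:
# expected number of grandchildren for two made-up paternal strategies,
# where a son's chance of reproducing (and his expected brood size)
# depends on how much the father invested in him.

strategies = {
    # (number of sons, probability each son reproduces, children per reproducing son)
    "many cheap sons": (10, 0.15, 2.0),
    "few invested sons": (3, 0.60, 3.0),
}

for name, (sons, p_reproduce, brood) in strategies.items():
    expected_grandchildren = sons * p_reproduce * brood
    print(name, expected_grandchildren)
# many cheap sons    -> 3.0
# few invested sons  -> 5.4
```

In a real model the payoffs would depend on what everyone else is doing, which is exactly the ESS point made above.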
As I understand it, humans are on the spectrum between having the maximum number of offspring with low parental investment and having a smaller number with high parental investment. There are indicators (size difference between the sexes, size of testes, probably more) which put us about a third of the way towards the high-investment end. So there’s infidelity and monogamy, and parents putting a lot into their kids and parents abandoning their kids.
Humans are also strongly influenced by culture, so you also get customs like giving some of your children to a religion which requires celibacy, or putting your daughters at risk of dowry murder.
Biology is complicated. Applying simple principles like males having a higher risk of not having descendants won’t get you very far.
I’m reminded of the idea that anti-oxidants are good for you. It just didn’t have enough detail (which anti-oxidants? how much? how can you tell whether you’re making things better?).
Or cultural variation is mostly determined by genetic variation. It’s hard to empirically distinguish the two.
You can do a historical comparison. 500 years ago people in Europe acted very differently than they do today. On the other hand, their genes didn’t change that much.
Is it even theoretically possible? If there are causal influences in both directions between X and Y, is there a meaningful way to assign relative sizes to the two directions? Especially if, as here, X and Y are each complex things consisting of many parts, and the real causal diagram consists of two large clouds and many arrows going both ways between them.
There is no “the selfish gene of the man.” There is especially no “the selfish gene of the woman,” given that all the genes in women are also in men. Humans have between 20,000 and 25,000 genes, and all of them are “selfish.”
Yet compared to women, men don’t have as many copies of some genes. Perhaps there are ‘selfish chromosome parts’? :)
A gene on the X chromosome “wants” to be copied regardless of whether it’s in a male or female body. Thinking in terms of the interests of genes means not thinking only on the level of an individual specimen.
Most of our evolution happened in hunter-gatherer arrangements, not in traditional farming cultures.
No background in evolutionary psychology, but I’m wondering to what degree ‘good fatherhood’ can be encoded in genes at all. Perhaps maximal reproduction is the most strongly genetically preprogrammed goal in males, and it is cultural mechanisms that limit this drive (via taboos, marriage etc.) due to advantages for the culture as a whole.
Why not? A male’s genes do not succeed when he impregnates a woman—they only succeed when the child grows to puberty and reproduces. If the presence of a father reduces e.g. infant mortality, that’s a strong evolutionary factor.
But how significant was the male father role among hunter-gatherers for the good upbringing of a child? If that task was, for example, shared between the group members (which I think I’ve read it was), then it’s questionable whether there would be significant differences between knowing one’s genetic father or not. One hint that this might have been the default mode among hunter-gatherers is that monogamy is a minority marriage type among human cultures today 1 (meaning that if polygamy was prevalent, it would have been difficult to ensure that all partners of an alpha male remained faithful). I also think I’ve read that among many indigenous peoples, women are readily shared among the alpha males. Besides that, it seems that most things that have to do with reproduction considerations are either on the physical-attraction level or on a very high cognitive level (Are there enough resources for the upbringing? Is the mother’s environment healthy?). Predetermined high-level stuff is memetically encoded rather than genetically (or it is just common sense that our cognitive abilities enable us to have).
Edited for clarity. Please consider removing the downvote if it makes sense now to you.
Our (nearly) caveman-optimized brains fear our children will starve or be eaten if we don’t help them. Sexual jealousy is probably genetically encoded, meaning lots of men want their mates to be exclusive to them. The following is pure speculation with absolutely no evidence behind it: but I wonder if a problem with open relationships involving couples planning on having kids is that the man might (for genetic reasons) care less for a child even if he knows with certainty that the child is his. A gene that caused a man to care more for children whose mothers were thought to be sexually exclusive with him might increase reproductive fitness.
Yes, until recently it was impossible to know with certainty that the child was his.
And even today feminist organizations are doing their best to keep it that way. For example, they managed to criminalize paternity testing in France.
By that standard, sex is also criminalized in many countries—after all, it’s only legal if the participants consent.
Personally, I’m not a big fan of the French law, but your interpretation of facts seems a little… creative.
They criminalized it for the main purpose that one would need to use it for.
I’m still unsure why I’m vehemently being downvoted for taking up this position. Perhaps it’s because people confuse it for men’s rights extremist thoughts? Why is the possibility being completely disregarded here that it’s only memes and a small set of genetic predispositions (such as reward from helping others via empathy and strong empathy for small humans) that jumpstart decent behavior? I think I’ve read somewhere that kittens learn how to groom by watching other cats. If other mammals can’t fully encode basic needs such as hygiene genetically, how can complex human behaviors? An important implication from this would be that culture carries much more value than we would otherwise attribute to it.
There is a strong predetermined empathy for cute things with big eyes, yes, but is there predetermined high-level thinking about sex and offspring? I rather doubt that, while the OP appears to assume it as a given fact.
If the traditional male role involves making sure the pregnant or nursing woman does not starve, very.
Heh. How about among successful human cultures? :-D
See the link above; it’s not clear that the food provider role of males was actually widely present in prehistoric people, and the upbringing of the children might have been predominantly a task carried out by the entire group, not by a father/mother family structure.
Not sure what causes your amusement. Isn’t there still the possibility that this is memetics rather than genetics?
I don’t see support of this statement in your linked text (which, by the way, dips into politically correct idiocy a bit too often for my liking).
I’m easily amused :-P
What exactly is “this”? Are you saying that there is no genetic basis for males to be attached to their offspring and any attachment one might observe is entirely cultural?
Here is the part I’m referring to: “Nor does the ethnographic record support the idea of sedentary women staying home with the kids and waiting for food to show up with the hubby. We know that women hunt in many cultures, and even if the division of labor means that they are the plant gatherers, they work hard and move around; note this picture (Zihlman 1981:92) of a !Kung woman on a gathering trip from camp, carrying the child and the bag of plants obtained and seven months pregnant! She is averaging many km per day in obtaining the needed resources.”
Attachment to cute babies is clearly genetically predetermined, but I’m trying to argue that it’s not at all clear that considerations of whether or not to have sex are genetically determined by anything other than physical attraction.
Yes, and how does it show that “it’s not clear that the food provider role of males was actually widely present in prehistoric people”? The observation that women “work hard and move around” does not support the notion that they can feed themselves and their kids without any help from males.
I am not sure I understand. Are you saying that the only genetic imperative for males is to fuck anything that moves and that any constraints on that are solely cultural? That’s not where you started. Your initial question was:
At least it provides evidence that upbringing of the offspring could have worked without a father role. Here are a couple of other hints that may support my argument: among apes the father is mostly unknown; the unique size and shape of the human penis among great apes is thought to have evolved to scoop out the sperm of competing males; the high variability of marriage types suggests that not much is predetermined in that regard; the social brain hypothesis might suggest that our predecessors had to deal with a lot of affairs and intrigues.
Well, whatever the individual sexual attraction is, but yes. At least, I’m arguing that we can’t reject that possibility.
That’s part of the same complex: if it hasn’t been significant, then there wouldn’t even have been evolutionary pressure for caring fathers (assuming high-level stuff like that can be selected for at all).
But not among individual humans, i.e., most men in polygynous cultures couldn’t afford more than one wife.
I think it can be. If the basic program of the selfish gene is “try to implant me in 100 wombs,” then once he realizes that is not really likely, there can be a plan B: “have a son who will be so high in quality and status that he will implant me in 100 wombs.”
But couldn’t high quality and status be highly correlated with attractiveness, so that this trait prevents other traits from being selected for?
PUA x EA?
Associating EA publicly with pickup is likely not good for the EA brand.
However, I could imagine a weekend EA event that is about doing shared street fundraising and, at the same time, comfort zone expansion, packaged as a personal growth workshop.
Off the top of my head, I can think of one person who lives in London who I imagine has the skills to lead such a workshop, who would benefit from doing such a project, and whom I know well. She’s also female, which reduces possible risk for the EA brand.
The idea of impostor syndrome is that you actually have success and don’t feel like you deserve it. Given that you recently wrote that you have never actually asked out a girl on whom you had a crush, is that true?
I think the real question is—Can YOU? Spend a week on the street doing this, film it and edit, then upload it to a bunch of PUA forums. If you get any traction, come back here with your results, and you’ll likely get a better reaction than the flurry of downvotes I assume you’ll get mentioning pickup on its own.
I know of PUA-knowledgeable people who work as charity fundraisers, as they see it as an opportunity to practice their skills while being paid for something.
LW protip: in order to get a lot of upvotes, find political threads and post comments that imply that US government and liberal academia are fabricating facts and spreading propaganda. Don’t worry, you don’t have to do any dirty work and figure out if in that particular case it is correct or not, it doesn’t matter. You might need to wait a few days, but eventually you’ll receive a bunch of upvotes, all coming in a very short period of time.
Wasn’t there a less passive-aggressive way of expressing this complaint, or a more appropriate context for it?
Please link to a few examples of where this has been successful.
I for one would appreciate it if the discussions of geopolitics, immigration policy, monetary policy, factional and sectional politics, adherence to various national leaders — and anything else about “Us vs. Them” where “Us” and “Them” are defined by which section of Spaceship Earth the parties happen to have spawned in, — would kindly fuck off back to the comments sections of newspaper websites or some other appropriate forums.
The idea of LW as an explicitly Enlightenment project, one that actually contemplates ideas such as “the coherent extrapolated volition of humankind,” “applying the discovery of biases to improve our thinking,” and “refining the art of human rationality,” is something rare and valuable.
Yet another politics comment section, another outrage amplifier, is not.
Rational debate is so hard to find these days, we have to protect it. I wouldn’t be surprised to learn that US government and liberal academia are fabricating facts and spreading propaganda, just to create conflicts among people who merely happened to be born in different cultures or subgroups. I will not post specific examples, to avoid mindkilling, but you probably know what I mean. We should not take part in this insanity.
(Am I doing this right?)