Where, exactly? All I’ve noticed is that there’s less interesting material to read, and I don’t know where to go for more.
Okay, SSC. That’s about it.
Nothing terrible was going to happen. As has been pointed out, cosmic-ray collisions at that energy and higher happen in the upper atmosphere all the time.
But you’re always stuck in one reality.
Let’s take a step back, and ask ourselves what’s really going on here. It’s an interesting idea, for which I thank you; I might use it in a story. But...
By living your life in this way, you’d be divorcing yourself from reality. There is a real world, and if you’re interacting solely with these artificial worlds, you’re not interacting with it. That’s what sets off my “no way, no how” alert, in part because it seems remarkably dangerous: anything might happen, your computing infrastructure might be stolen out from under you, and you wouldn’t necessarily know.
Do you by any chance have those as MP3 or FLAC?
“not impossible” == “possible”. And this article doesn’t show either one.
So, some Inside View reasons to think this time might be different:
The results look better, and in particular, some of Google’s projects are reproducing high-level quirks of the human visual cortex.
The methods can absorb far larger amounts of computing power. Previous approaches could not, which makes sense, as we didn’t have the computing power for them to absorb at the time, but the human brain does appear to be almost absurdly computation-heavy. Moore’s Law is turning that difference in degree into a difference in kind.
That said, I (and most AI researchers, I believe) would agree that deep recurrent networks are only part of the puzzle. The neat thing is, they do appear to be part of the puzzle, which is more than you could say about e.g. symbolic logic; human minds don’t run on logic at all. We’re making progress, and I wouldn’t be surprised if deep learning is part of the first AGI.
If you’re being generous, you might take the apparent wide applicability of simple techniques and moderate-to-massive computing power as a sign (given that it’s the exact opposite of old-style approaches) that AGI might not be as hard as we think. It does match better with how brains work.
But this particular result is in no way a step towards AI, no. It’s one guy playing around with well-known techniques, that are being used vastly more effectively with e.g. Google’s image labelling. This article should only push your posteriors around if you were unaware of previous work.
The craziness it produced was not code; it merely looked like code. It’s a neat example, but in that particular case not much better than an n-gram Markov chain.
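For comparison, a minimal character-level n-gram Markov chain looks something like this (the corpus path and the order n=4 are placeholders, not anything from the article); fed source code, it produces much the same code-shaped gibberish:

```python
import random
from collections import defaultdict

def build_model(text, n=4):
    """Map each n-character context to the list of characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i + n]].append(text[i + n])
    return model

def generate(model, seed, length=500):
    """Sample one character at a time, conditioned only on the last n characters."""
    out = seed
    n = len(seed)
    for _ in range(length):
        followers = model.get(out[-n:])
        if not followers:
            break
        out += random.choice(followers)
    return out

# "some_source_file.c" is a placeholder; train on whatever code you have lying around.
corpus = open("some_source_file.c").read()
model = build_model(corpus)
print(generate(model, corpus[:4]))
```

The output tends to balance braces and indent plausibly, because those regularities survive in four-character windows, while meaning doesn’t.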
If you liked the anime, you will likely find that this is better. If you felt that the anime was flawed, you may well find that the book is not, or not in the same way.
The story is slow, with a great deal of explanation and musing, especially in the beginning; it’s trying to paint an entire world, and that shows. That sort of thing is very difficult to adapt to an animated format. The book, however, was well worth the read.
Only the first volume is out so far; the second is due in July.
nostalgebraist has started work on a new novel, The Northern Caves. It’s off to a slow start, but looks interesting so far.
In addition to what James said, I’m reminded of the mechanism for changing screen resolution in Windows XP: it automatically reverts to the previous resolution after X seconds unless you confirm the change, in case you can’t see the screen. This is so people can’t break their computers in one moment of weakness.
But you are absolutely allowed to break your computer in “one moment of weakness”; it isn’t even hard. The reason for that dialog is because the computer honestly, genuinely can’t predict if the new screen mode will work.
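The confirm-or-revert pattern itself is easy to sketch. This is an illustrative version, not the actual Windows API; all the names here are made up:

```python
import threading

def apply_with_revert(apply_new, restore_old, wait_for_confirmation, timeout=15):
    """Apply a risky display setting, then roll back unless the user confirms.

    The machine genuinely can't predict whether the new mode is visible,
    so the only reliable test is whether a human clicks "Keep settings"
    before the timer runs out.
    """
    apply_new()
    confirmed = threading.Event()

    def ask():
        if wait_for_confirmation():  # e.g. a "Keep these settings?" dialog
            confirmed.set()

    threading.Thread(target=ask, daemon=True).start()
    if not confirmed.wait(timeout):
        restore_old()  # no answer in time: assume the user can't see the screen
```

The point is that silence is treated as failure; any setting that can blind the user has to default back to the last state known to work.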
I don’t believe that it’s mainstream transhumanist thought, in part because most people who’d call themselves transhumanists have not been exposed to the relevant arguments.
Does that help? No?
The problem with this vision of the future is that it’s nearly basilisk-like in its horror. As you said, you had a panic attack; others will reject it out of pure denial that things can be this bad, or perform motivated cognition to find reasons why it won’t actually happen. What I’ve never seen is a good rebuttal.
If it’s any consolation, I don’t think the possibility really makes things that much worse. It constrains FAI design a little more, perhaps, but the no-FAI futures already looked pretty bleak. A good FAI will avoid this scenario right along with all the ones we haven’t thought of yet.
I distinctly remember, at some point in my teens, realizing that other people sometimes thought like me and I could model their reactions as something more than inscrutable environmental hazards. So there’s that.
On the flip side there’s Luv and Hate, an (incomplete, but still good) rewrite of the Muv-Luv Alternative story with a guest protagonist from… Supreme Commander. Including the ACU.
It’s well-written, mainly character-focused with a few amusing combat interludes, and oh so gratifying after attempting to read the grimdark original.
It’s also a quest. If this doesn’t mean anything to you folks… don’t worry about it; you can treat it as an ordinary story if you wish.
We won’t run out of coal anytime soon. Coal has other issues, but I think its abundance invalidates his conclusion: coal power plants are pretty cheap, and they are already being built.
I’m also more optimistic about politicians. Ten years may be beyond their reelection horizon, but it’s not beyond their “this place is going to hell” horizon.
Okay. I’m sure you’ve seen this question before, but I’m going to ask it anyway.
Given a choice between

- A world with seven billion mildly happy people, or
- A world with seven billion minus one really happy people, and one person who just got a papercut,

are you really going to choose the former? What’s your reasoning?
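To make the trade explicit, in naive total-utility arithmetic (the symbols here are purely illustrative, with $u_{\text{mild}} < u_{\text{happy}}$ and a small papercut penalty $\epsilon$):

\[
U_A = 7 \times 10^9 \cdot u_{\text{mild}}, \qquad
U_B = (7 \times 10^9 - 1)\, u_{\text{happy}} + (u_{\text{happy}} - \epsilon)
\]

\[
U_B - U_A = 7 \times 10^9 \,(u_{\text{happy}} - u_{\text{mild}}) - \epsilon
\]

so choosing world A requires the papercut’s disutility to outweigh seven billion upgrades from “mildly” to “really” happy.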
If you have Alzheimer’s, and you want to use cryonics, you should do your very best to get frozen well before you die of the disease.
This is problematic in all jurisdictions I can think of. Even where euthanasia is legal, I don’t know of any cryonics organisations taking advantage, and there might be problems for them if they do. I’d very much like to be proven wrong in this.
It’s a suspiciously pleasant way to go, but I see no reason to look more closely at this. Let’s just be happy he got the end he wanted.
There’s a good chance you’ll have a second elephant failure while the first one is giving birth to a replacement, so at least use RAID6.
Or ZFS RAIDZ2. That’s also great.
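The arithmetic behind that advice, under a generic reliability model (independent exponential failures at rate $\lambda$ per drive; assumptions mine, not measured elephant statistics): with $N - 1$ surviving drives and a rebuild window of length $T$, the probability of a second failure before the rebuild finishes is roughly

\[
P \approx 1 - e^{-(N-1)\lambda T},
\]

which climbs toward certainty as $T$ grows, and an elephant’s rebuild window is measured in months. Double parity means you survive that second failure instead of losing the herd.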