Crazy Ideas Thread—October 2015
This thread is intended to provide a space for ‘crazy’ ideas: ideas that spontaneously come to mind (and feel great), ideas you have long wanted to share but never found the place and time for, and ideas you think should be obvious and simple—but nobody ever mentions them.
Rules for this thread:
Each crazy idea goes into its own top level comment and may be commented there.
Voting should be based primarily on how original the idea is.
Meta discussion of the thread should go to the top level comment intended for that purpose.
If you create such a thread, do the following:
Use “Crazy Ideas Thread” in the title.
Copy the rules.
Add the tag “crazy_idea”.
Create a top-level comment saying ‘Discussion of this thread goes here; all other top-level comments should be ideas or similar’
Add a second top-level comment with an initial crazy idea to start participation.
Slightly crazy idea I’ve been bouncing around for a while: put giant IceCube style neutrino detectors on Mars and Europa. Europa would work really well because of all the water ice. This would allow one to get time delay data from neutrino bursts during a supernova to get very fast directional information as well as some related data.
Spreading around gamma ray detectors would have similar advantages.
Yes, but there’s less reason for that. A big part of the problem with neutrinos is that since only a small fraction are absorbed, it becomes much harder to get good data on what is going on. For example, the typical neutrino pulse from a supernova is estimated to last 5 to 30 seconds, while the Earth is under a tenth of a light-second in diameter. Gamma rays don’t have quite as much of this problem, and we can estimate their directional data somewhat better.
On the other hand, the more recent work with neutrinos has been getting better and better at getting angle data which lets us get the same directional data to some extent.
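The time-delay triangulation such a detector network would enable can be sketched as a simple least-squares fit. The detector positions, source direction, and reference time below are all invented for illustration; real ephemerides and timing uncertainties would differ:

```python
import numpy as np

# Hypothetical detector positions in light-seconds (illustrative magnitudes
# only): Earth, Mars, Europa, and an extra solar-orbit detector.
detectors = np.array([
    [0.0, 0.0, 0.0],          # Earth-based detector (e.g. IceCube)
    [400.0, 300.0, 50.0],     # Mars
    [2000.0, 500.0, 300.0],   # Europa
    [-500.0, 800.0, -200.0],  # extra baseline detector
])

true_dir = np.array([0.6, 0.64, 0.48])  # unit propagation direction of burst

# A plane-wave burst arrives at detector i at t_i = t0 + r_i . d
# (positions already in light-seconds, so c = 1).
t0 = 12345.0
times = t0 + detectors @ true_dir

# Recover (t0, d) by linear least squares: times = A @ [t0, dx, dy, dz].
A = np.hstack([np.ones((len(detectors), 1)), detectors])
sol, *_ = np.linalg.lstsq(A, times, rcond=None)
est_dir = sol[1:] / np.linalg.norm(sol[1:])
print(est_dir)  # recovers true_dir
```

The point of the interplanetary baselines is conditioning: with only Earth-based detectors the baselines are under a tenth of a light-second, so a 5–30 second pulse smears out any time-delay signal, while baselines of hundreds to thousands of light-seconds make the fit meaningful.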
We are probably in a historical simulation. Most historical simulations are not of everyone but just of historically important people. Update on this hypothesis to increase your estimate that your life is historically significant. Look for clues as to why you might be important. For all of us it might be that Eliezer succeeds and we are one of the 10^(big number) simulations of his life and everything surrounding him.
Simulation argument case 3 obviously.
One consequence for ethics in this case is that you can create conscious beings by performing interactions equivalent to Turing tests on the people you come into contact with. Bonus points for spreading this meme to bring lots of conscious beings into existence (and put heavy load on the simulator).
But wouldn’t increasing load on the simulator increase the chances of the simulation being turned off, thus negating ALL the conscious and potentially conscious beings it was simulating?
That’s exactly what an agent of the simulator would say.
Cue the rooftop chase.
But just like HPMOR’s hat, the conscious being might switch back to nonsentience once the interaction ends.
Yeah, I wondered to what degree that could be optimized. But if you interact repeatedly and in complex ways, then shouldn’t you notice that? Kind of a long-duration Turing test.
Hm, I wonder what the best place to find really happy people is?
Could you elaborate whether you mean in general, in simulations, or elsewhere? And how does this relate to my comment?
The thought was to induce the simulation of good experiences by being in close proximity to happy people.
Ah yes. Interesting idea. But I think it only ‘counts’ if the happiness is conscious. One has to work a bit harder for that.
I understand why most historical simulations would be of historically important people, but why would most or even a lot of simulations be historical simulations?
The set of all simulations is irrelevant in this case. What matters for us is the set of simulations that match our observations. For this set, historical simulations of various forms are naturally expected to predominate.
The past can’t simulate the future, so we must be in a sim from a future timeline. Loosely speaking, this leaves open historical sims and ‘fictional’ sims. From the inside they may be hard to differentiate (consider that Harry Potter’s world looks historical from his perspective, etc.)
If multiple levels of sim are likely, I have a simple argument that fictional sims are more likely than you’d think: for us to be in a historical sim with respect to the root physical universe, every sim level in the stack must be historical. If even one sim in the tree/stack/chain is fictional, then everything below that level is also fictional.
So ‘fiction’ is something that only increases with sim levels.
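The monotonicity claim above can be put numerically. As a toy assumption (not part of the original argument), suppose each layer of simulation is historical rather than fictional independently with probability p; then the chance that a depth-d sim is historical all the way back to the root falls off geometrically:

```python
# Toy model: p = probability that any single sim layer is historical rather
# than fictional; being historical w.r.t. the root universe requires every
# layer in the stack to be historical.
def p_historical(p: float, depth: int) -> float:
    return p ** depth

print(p_historical(0.9, 1))  # 0.9
print(p_historical(0.9, 5))  # ~0.59: fiction accumulates with depth
```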
See this. Basically, if the future goes well it will have lots of computing power and if a tiny fraction of this power is used to make historical simulations most people in our situation will be living in historical simulations.
I just published a simulation map, in which I conclude that most likely I live in a one-person me-simulation of a period near AI creation. In fact, there are two possible variants:
This is a simulation of Eliezer’s life, and I am just one of thousands of people who are simulated in it with enough detail to be conscious observers.
It is only a me-simulation, where I am the only really simulated observer, and the others are p-zombies and simplified models.
Hypothesis 2 is favoured by some kind of power law in the simulating world, which says that simpler and cheaper simulations are more abundant (e.g. there are more novels than movies in our world). But if it is true, I should be doing something really important in FAI or other x-risk topics. I have done many things, like a map of x-risks prevention, but it is not enough to be worth simulating.
The simulation map:
http://lesswrong.com/r/discussion/lw/mv0/simulations_map_what_is_the_most_probable_type_of/
I’m surprised you think he actually has a high chance of creating AGI.
EY was only an example here. Now there are so many players in the field that AGI will probably be created by someone else. And it also seems that he is not working on coding AI.
You seem to be implying that you are more likely to be in a simulation if historically important. Interesting.
Evidence?
In most stories, the majority of the population are NPCs.
There’s a paper on this called the “simulation argument”. It’s not evidence based but logic based.
Bostrom’s paper doesn’t purport to show that we are probably in a simulation, but only the weaker claim that one of these things is true:
Humanity is likely to fail to develop to superpowered “post-human” levels.
Conditional on attaining superpowered post-human civilization, humanity is unlikely to run a lot of historical simulations.
We are probably in a historical simulation.
(Bostrom puts it slightly differently; I think what I’ve written above is clearer and has fewer little holes.)
You will observe that this argument is more or less a triviality; Bostrom’s contribution is thinking of making such an argument rather than filling in difficult steps in the reasoning once the argument is thought of.
I confess that my own response to this is indifference; I think there’s a very good chance that the sort of computational superpowers needed to run a lot of faithful historical simulations will never be ours, and I don’t see why a post-human civilization would bother to run a lot of simulations of their ancestors, so the most the argument can tell me is that it’s not completely impossible that I might be in a simulation. Fair enough, but so what?
(Elaborating on that not-seeing-why: it’s not very clear why our posthuman successors would bother running any ancestor-simulations, but to get “I’m probably in a simulation” out of Bostrom’s argument what’s necessary is either that the bit of my life I’m experiencing right now has been simulated not just once but many many times, or else that the posthumans are going to simulate not only their actual ancestors but many many people very like their ancestors in situations similar to their ancestors’. I see no reason to expect either of those.)
Have you heard of the Resurrection? In many belief systems (of the western mid-east flavour specifically) it is the greatest goal that humanity could ever achieve. Historical simulation could implement it—in fact it is the only way to implement it.
Go find an average Christian or Muslim or other believer-in-the-Resurrection, and say to them “Great news! You and your friends and family are indeed going to be raised from the dead. What’ll happen is that you’ll get to live exactly the same life you’re living now all over again. You will have no recollection of having lived it before, suffering and disease and so on will be the same as ever, and you’ll die at the end. Isn’t that great?”
If they take you seriously enough to bother answering at all, do you really think they’ll say “Yeah, that’s exactly what I’m currently hoping for”?
I think that jacob_cannell’s implication was this but without “you’ll die at the end.” You die at the end physically but the point of the simulation is to obtain your mental state at the end of your life, so you can transfer that to heaven.
(I don’t believe there will ever be any possibility of rerunning a particular human being’s life in any manner that would be even close to his actual life.)
Yeah, I wondered about that. But I don’t think it makes sense. If you can get enough information about particular ancestors to simulate them (as opposed to simulating other people who happen to resemble them) then surely you have enough to put them directly in heaven / paradise / the New Jerusalem / whatever.
I’m inclined to agree. But, since I am the person I am largely because of the life I’ve lived, how can running a simulation that doesn’t replicate my life help to determine the proper mental state to send me to heaven with?
Let me try to imagine the process working as well as possible. I’ve kept a journal for the past ten years, and screenshots of my computer every 30 seconds for nearly the last four (as well as webcam shots that can indicate exactly when I was and was not present at the computer). If someone were to simulate me they would have to simulate someone who went through the experiences and thoughts described in the journal, and who used his computer in the way implied by the screen and webcam shots.
Does all that info actually imply that someone could simply describe my current state, or would you get something more accurate by such a simulation? Perhaps an AI could simply use the info to directly produce a current state, but how would it do that, without simulating something like a process that passes through all that info? In other words, it’s not clear to me that a simulation couldn’t help.
Regarding the last point, basically I was saying that ultimately I don’t expect jacob_cannell’s idea to work, but I don’t think it is unintelligible.
OK, so if I’m understanding correctly your suggestion is that in order to reconstruct your mind it would be necessary to do lots of simulations of you-like minds in order to adjust the (unfathomably many) parameters to find a mind that behaves in the right ways. I concede that that might be so.
It’s an interesting (and disturbing) idea because it suggests that (little bits of?) our lives might be simulated billions of times, with small variations, in the process of trying to reconstruct us. (If, that is, anyone is so interested in reconstructing us at all.) This seems to me to make a big difference to the moral calculus of attempted simulated resurrection—“we can reconstruct your mind-state and put a new instantiation of it somewhere wonderful” sounds like quite a different deal from “we can reconstruct your mind-state and put a new instantiation of it somewhere wonderful—but the reconstruction process will involve billions of simulated minds that more or less closely resemble yours passing through good approximations to all the events of your life that we could find out about”, and I’d be much less happy about the latter.
I have to say that it seems unlikely that enough information exists to do the reconstruction for anyone—even people who save as much information about themselves as you do, which most of us don’t. I mean, in some sense maybe it’s still there since everything we do has effects on everything else in our future light cone, but I’d expect the information to be unusable in something like the way that energy becomes unusable when it turns into waste heat in rough thermal equilibrium with its surroundings.
Yes, there could be moral objections to such a process apart from its likeliness of success. And I agree that there is unlikely to be enough information for it to work in any case.
Why “at the end of … life”? If you’re simulating someone, what’s special about a particular point when the physical body died?
The point at which someone dies is the point at which their mind no longer causally effects the simulation. Naturally they can be copied out before then, but historical accuracy requires at least one version to remain in the sim until death.
And why should the AI care about historical accuracy?
I guess the real question is the difference between minds simulated on the basis of historical data (=”previously existing”) and minds simulated de novo, just plausible human minds invented out of thin air. Why should the AI favour previously existing minds?
BTW, affects the simulation, not effects.
We are assuming an FAI. The FAI cares about historical accuracy to the degree that people care about resurrecting accurate versions of dead family/friends/ancestors, where accuracy is subjective and relative to memories and beliefs.
More generally, the resources available will determine some finite number of minds that can be created. Some individuals will choose to create lots of ‘children’ (generalized to include de novo minds), some will choose to resurrect lots of ancestors, others will choose to use resources only to expand/clone their existing mind, many will probably choose some mix.
Oh, boy, that’s such a can of worms. Let’s resurrect grandpa, except we’ll delete some features of him that we don’t like and try to forget about. Or let’s resurrect my girlfriend from college but let’s make her a nympho.
I would venture a guess that people rarely care about accurate versions of dead people, they would prefer improved ones.
All in all, this just looks like a silicon version of ancestor worship. If you venerate your ancestors or, say, if you are a Mormon and you convert them to Mormonism, isn’t that acausal trade in practice? They begat you, you do things for their souls...
Other friends/family/descendants—as well as society in general—are unlikely to want these changes.
People alive today will want accurate versions of themselves to exist in the future. Society/future FAI will also consider this.
Avoid naive pattern matching.
Really? Is there anyone who would prefer an incontinent grandpa raving about today’s degeneracy which the Good Lord will burn out? Or a grandpa who lived to a really advanced stage of Alzheimer’s?
Oh, I do, I do :-) I pick insightful pattern matching instead.
Because that’s the time when you would want to be resurrected.
If I’m being simulated, I have already been “resurrected”. But what is the point of resurrection? You yourself say “so you can transfer … to heaven” and given that, what is the reason for running the simulation at all instead of not collecting $200 and going directly to heaven?
If the simulation isn’t run all the way through, the simulators couldn’t be sure they were resurrecting you instead of someone else (since the mind they were simulating might suddenly have started to do other things that you wouldn’t have done, if they had continued the simulation). For example, suppose they base the simulation in part on your Less Wrong comments. If they manage to produce a mind that produces the first half of your comments, and then say, “good enough, let’s move that to heaven,” it might be that the mind they put in heaven would have gone on to produce a second set of comments totally different from the real ones that you made. So it would end up being someone else in heaven, not you.
That goes to the issue of who is “you”.
I, a year ago, was a slightly different person than I am now. Both past-me and current-me are me. You are essentially saying that me-who-died is the version that should go to heaven and all the previous versions should not. Why?
We can also reverse the issue—in a simulation, I don’t have to die. If I am hit by a bus, insert a one-minute delay somewhere and the simulated-I will continue to live. Should that longer-lived version go to heaven, then?
Historical consistency—an intervention like that quickly leads to a fictional world that is ranked low in the res utility function (because people from that fictional world don’t go on to create the actual future resurrection).
So is this whole “res utility function” based on obligations arising out of acausal trades?
In part, but there can also be regular causal trades between simulators within each world. For example, a future simulator physically located in, say, China will necessarily be separate from one located in Canada. These simulators can trade in the more regular sense.
Right. The simulation is the forward time sweep of an inference engine recreating historical people for the purpose of future resurrection.
If humanity survives to singularity level superintelligence, it’s a rather obvious possibility. Doesn’t even require any advanced violations of physics. It’s actually a nearer term tech than most people think—the simplest forms of it will be possible not long after AGI.
It depends of course on one’s definition of ‘close’ and the currently available information. Identity is subjective though—and that is what makes the approach viable. There is no such thing as the singular canonical correct version of a person. We are distributions over mindspace across the multiverse.
I am a distribution over mindspace..? across the multiverse..? Funny, I don’t feel like a distribution. Do you have any evidence to support that, or is it just word salad?
Identity in general can refer to current self, past self, and future selves all as the same ‘person’. That is a set. Mindspace is just the space of all possible minds, so the person-defining set is a distribution over mindspace.
I’m using ‘multiverse’ in the most general sense (nothing QM specific) to refer to all possible universes/futures etc.
In the same way, is a rock a distribution over rockspace across the multiverse?
Sure, although it doesn’t have much temporal evolution.
But still—for some specific rock, we can’t describe/model/understand it exactly, so we specify it abstractly in a compressed form, said compressed form specifies a distribution over the space of rock-like objects—rockspace.
A few follow-on questions, then.
You say “we can’t describe/model/understand it exactly, so we specify it abstractly”—does that mean we’re talking solely about maps and not about the territory?
What exactly do you mean by a “distribution”? Is it a probability distribution? You made the argument that as things move through time, they are a set of past, present, and (hopefully) future states. Since time is unidirectional, we might even call that an ordered set, a sequence. But a sequence is not a distribution.
The approach “X is a distribution over X-space across the multiverse” seems to be applicable to absolutely everything. If that is so, what is the use of this approach?
Probability distributions can be defined as subsets over possible logical worlds.
Although general, it’s not a typical everyday mode of thought. I invoked it specifically in response to the parent comment:
So in some future resurrection, there would potentially be many versions of each mind from different possible worlds. Across the multiverse, many different simulators will recreate many different past historical sims. Each simulation doesn’t need to exactly recreate its own specific history as long as it recreates a specific history. Instead, success simply requires adequate coverage across the space of all sims over the multiverse. If you aren’t thinking in terms of distributions over mindspace across the multiverse, you can’t really understand or reason about these concepts.
I still don’t understand.
Let’s drastically simplify things. Consider an ordered set of two mes—me one minute ago and me now. In which sense this set is a probability distribution? What does it mean?
So are you arguing that future resurrections will be, basically, a brute-force approach? In the sense of “We can’t be sure whether A or B happened, so we’ll simulate both A and B branches”? That doesn’t require much in the way of sophisticated concepts, it’s sufficient to see it as exhaustive search, I think.
Also, what counts as “success” and what are the incentives and consequences for succeeding or failing?
In the sense that everything is—we have uncertainty over the physical configurations.
No.
That’s just how complex multi-modal inference works in general. The multiverse complexity comes in from realizing that it is the whole set of similar future worlds creating past simulations.
But this has nothing to do with physical configurations. We have a set of two things—to make things even simpler, let’s make it a rock—that differ in time. Unless you’re going to posit some Time Lord who soars above the time line, assigning probabilities to time snapshots does not make any sense to me.
Lots of people today play video games that contain characters from the past.
True, but I think there are reasons beyond mere lack of capability why those games don’t involve neuron-level simulation of billions of specific past people.
Not if you weight each character by the number of words her or she speaks.
Would it be possible to analyse a database of comments and votes on them to mathematically determine the coalitions among the members of the forum (LW or other)? Could this be used to inform politics, counter politics or otherwise improve political dialog?
Yes, but I don’t know how helpful it would be. It could, however, unmask sockpuppets, which would be useful.
In general there should be a way to outsource forum moderation tasks like these, rather than everyone in charge of a community having to do it themselves.
Absolutely there should be! But do you know of anyone providing these tools? Reddit has certain mod bots, but I’ve never heard of an anti-sockpuppet one. By comparison, there are blockbots, so people who are politically blue can block the greens if they hate their enemies and refuse to see the other side’s POV, but since we don’t have subreddits, anything of that ilk won’t work here.
Omnilibrium tries this, so certainly possible.
hmm… perhaps.
Use something like the reddit system for ranking posts with a twist.
Draw a directed graph of all users who have upvoted each other, and weight each upvote based on the prior distance between the two nodes in that graph.
So one upvote from someone very dissimilar to yourself, who is a long way from you in the graph and rarely votes, becomes far more valuable than one upvote from someone who votes on everything, always upvotes every post you make, and whose posts you always upvote.
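A minimal sketch of that weighting rule, with made-up usernames and an arbitrary distance cap and discount formula (the proposal doesn't specify either):

```python
from collections import deque, defaultdict

def distance(graph, src, dst):
    """BFS distance in the directed upvote graph; None if unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def vote_weight(graph, voter, author, votes_cast, max_dist=6):
    """Weight = graph distance (capped), discounted by how prolifically
    the voter votes: distant, selective voters count for more."""
    d = distance(graph, voter, author)
    d = max_dist if d is None else min(d, max_dist)
    return d / (1 + votes_cast[voter])

# alice and bob form a mutual-upvote clique; carol is an outsider.
graph = {"alice": {"bob"}, "bob": {"alice"}, "carol": set()}
votes_cast = defaultdict(int, {"alice": 50, "carol": 2})
print(vote_weight(graph, "alice", "bob", votes_cast))  # close + prolific: small
print(vote_weight(graph, "carol", "bob", votes_cast))  # distant + selective: large
```

This is essentially a crude collusion discount; a production version would need to handle vote timing and graph updates as votes arrive, since the proposal specifies the *previous* distance between the nodes.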
That is some awesome proposal because it is relevant, specific and likely efficiently implementable. Thank you!
Anyone have an AI (narrow or AGI) that can learn and respond to quizzes?
There is a new Kaggle competition out which you may be interested in
https://www.kaggle.com/c/the-allen-ai-science-challenge/data
Discussion of this thread goes here; all other top-level comments should be ideas or similar.
Planning involves: predicting what will happen without your intervention, predicting what will happen with your intervention, contrasting the two, and breaking that space down into tangible, consequential steps, as in a decision tree.
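As a toy illustration of that loop, with an invented one-step predictor (the predictor, actions, and numbers are all placeholders):

```python
def plan(state, predict, actions):
    """Contrast the predicted outcome of each intervention against the
    no-intervention baseline, then take a greedy decision-tree step:
    pick the action with the largest predicted gain."""
    baseline = predict(state, None)
    gains = {a: predict(state, a) - baseline for a in actions}
    return max(gains, key=gains.get), gains

# Toy predictor: wealth compounds if invested, shrinks if spent, else flat.
def predict(wealth, action):
    if action == "invest":
        return wealth * 1.05
    if action == "spend":
        return wealth * 0.9
    return wealth

best, gains = plan(100.0, predict, ["invest", "spend"])
print(best, gains)
```

A full planner would apply this recursively, expanding the chosen branch into further states, which is exactly the decision-tree structure the comment gestures at.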
If genetic engineering proves effective and expensive, parents should have the right to sign a contract, binding on their future genetically engineered child, which transfers a percentage of this child’s future earnings to the company that financed the child’s genetic engineering. Besides benefiting society by promoting genetic engineering, such contracts could be freely traded and would have a market price that would provide a powerful signal about the value of different types of genetic engineering.
I am not terribly comfortable with the idea of people being bound by contracts they didn’t sign.
I am bound by many contracts signed by Congress and they didn’t even have well-aligned incentives.
The general hypothetically good idea of a government is to have one group that can do such things for the common good and to avoid coordination problems. It often doesn’t exactly fulfill that role, but that is an indictment of its behavior rather than license for other groups to behave the same.
Agreed. But I’m even less comfortable with poor parents not being able to buy genetic enhancements for their future children that would more than pay for themselves.
Not unusual in asian countries. Housing often involves cross-generation contracts. But yes, I’m also uncomfortable with that.
Huh? If you mean something like inheriting a house with a mortgage on it, that’s quite different. You can refuse inheritance.
To make it less personal… imagine that a government wanted to extend the advantages of GE to as many of their citizens as possible. They don’t have the money to subsidize it right now, so they offer something like your plan. Still, assume that <10% of the population is going to be engineered.
Group by genetic intervention since you don’t know which will be best.
For each couple, a projection of their child’s likely earning potential, given the parents’ economic status vs the rest of the population, is calculated.
If a significantly larger portion of the subjects of that same intervention beat the trend the company gets to claim X% of the tax revenue from those people for their working lives. (note, the subjects do not pay a higher tax rate than other people with the same income)
It’s a long term investment but if the intervention is good it could be worth a very large amount.
The company’s interest is in making sure those kids surpass expectations.
Does this bother people the same way the OP’s suggestion does?
It strikes me that you might even be able to do this with non-GE cases. Companies could be offered a similar deal for intervening in the lives of poor, inner city youth through educational programs etc. If they win, everyone wins especially the kids and the value of the contracts could vary as the company applies new interventions or new evidence comes to light about old interventions that have been applied to subjects.
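The payout rule sketched above might look like the following; the tax rate, revenue share, and the threshold for "significantly beating the trend" are all invented placeholders, not part of the proposal:

```python
import statistics

def company_payout(projected, actual, tax_rate=0.3, share=0.10, threshold=1.1):
    """Toy version of the rule: if the cohort's mean actual earnings beat
    the mean projection by more than `threshold`, the company claims
    `share` of the tax revenue on the cohort's earnings; otherwise nothing.
    A real scheme would compare against a matched non-intervention cohort
    and run over the subjects' whole working lives."""
    if statistics.mean(actual) <= threshold * statistics.mean(projected):
        return 0.0
    return share * tax_rate * sum(actual)

# A cohort projected to earn ~50k each actually averages ~65k:
print(company_payout([50_000] * 4, [60_000, 70_000, 65_000, 65_000]))
```

Note the incentive structure this creates: the company only gets paid from the *upside* of tax revenue, so (as the comment says) its interest is in the kids surpassing expectations, not merely meeting them.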
That sounds like slavery to me.
More like taxation. You are not obligated to work, but if you do a part of your earnings goes to someone else.
Free Range Slavery. Slavery 4.0. More pleasant for the cattle, more productive for the human ranchers.
AFAIK, an anarchist fellow named Stefan Molyneux came up with the idea of taxation as the latest iteration of slavery as the forced extraction of value by human ranchers from their human cattle.
I’m not an anarchist myself, but I like the analysis.
The Story of Your Enslavement
https://www.youtube.com/watch?v=Xbp6umQT58A
The Handbook of Human Ownership—A Manual for New Tax Farmers
https://www.youtube.com/watch?v=k67_imEHTPE
I don’t like the proposed idea, but let’s not expand the meaning of the words.
Slavery typically means you have a boss who directly tells you what to do, and not only during your working time but 24/7.
Taxation typically means you have to pay money to the boss, but the details of how you spend your time are your choice.
To bring it into near mode: if someone replaced the law “you pay 50% of your earnings to the state” with “you pay 30% of your earnings to the state, and 20% to the Evil Corp”, you probably wouldn’t even notice the difference. Slightly different tax forms, maybe having to fill in more papers. On the other hand, you would certainly notice if someone made you a slave.
Yes, there is a common superset for both slavery and taxation, which means “someone else is using your life to make themselves even more rich”, but there is either a different word for that, or someone has to invent a new one.
Do you think a person stops being a slave if the slave owner gives them 2 hours of free time per day?
Not at all. Slavery is human ownership. Slavery is the right to compel by force.
The interesting part of Molyneux’s analysis is looking at slaves as a herd the human rancher wants to extract value from, and showing how the value extraction strategies have improved over time. Or at least arguing that they have.
What you describe is just one of many strategies of extracting value by force from your property.
It would be very interesting to see a real accounting analysis of slavery over time. What were the margins for a US plantation? For serfs? Has anyone actually seen these numbers?
Depends on what the state spends the money on and what Evil Corp spends the money on.
For it to be taxation it has to be done by a government.
But the cost to the individual is the same.
Yes, but we still don’t let private entities go around and force people to pay taxes to them.
Divorce settlements often involve men being forced to pay a percentage of their income, and student loan obligations are not discharged in bankruptcy.
Putting aside the ethics for a moment, this is fairly close to Clinton’s Hope Scholarships. He set up a system where students got their college fees paid for in exchange for a percentage of future earnings—a doctor would repay a lot to the fund, a teacher would repay much less. Overall it would all average out. This plan failed because future teachers were much more motivated to accept the deal than future doctors. Only those who expect to earn comparatively little will take a deal that takes a percentage of future earnings—if you have reason to expect you are going to be wealthy, you go for the fixed-rate student loans.
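The adverse selection here is easy to see with toy numbers (the salaries, income share, term, and loan size below are all invented for illustration):

```python
# Compare total repayment under a 5% income share over 20 years vs a
# fixed loan repayment, for a low earner and a high earner.
def income_share_repay(salary, share=0.05, years=20):
    return share * salary * years

FIXED_LOAN_REPAYMENT = 60_000  # hypothetical fixed-rate alternative

for job, salary in [("teacher", 40_000), ("doctor", 200_000)]:
    share_cost = income_share_repay(salary)
    better = "income share" if share_cost < FIXED_LOAN_REPAYMENT else "fixed loan"
    print(job, round(share_cost), better)
```

Anyone expecting the high salary rationally refuses the income-share deal, so the pool fills with low earners and the fund's averaging assumption breaks, which is the failure mode described above.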
As a minor problem, it seems like you might face something like the same problem here. If there are two companies offering these services then parents who plan on high-income kids will avoid the profit-sharing plan. Of course, parents may not be able to plan on this reliably and in secret—if all you are ordering is a kid with 200 IQ and a competitive nature, predicting their outcomes is difficult for anyone.
But as a major problem, if we can get just a little more specific in the genes we tweak… GenEng Corp will be highly disincentivized from activating the altruism gene; the last thing they want is a contract entitling them to 2% of a world-renowned brain surgeon who works pro bono for Doctors Without Borders. Genes for greed and competitiveness, if identified, suddenly become an investment strategy. Genes for empathy, altruism, and moral reflection are poor business. I would also expect any genes relating to musical ability and artistic expression to be considered ‘risky’; if you have a potential billionaire in the works, the last thing you want is for her to be distracted by a muse.
Oh, also, intelligence is not really so very good a predictor of lifetime earnings, but longevity and a strong work ethic should do pretty well. Even a teacher’s salary starts to look good after getting a small yearly raise for 100 years. If longevity turns out to be profitable (and surely it would be?), then lower intelligence and motivation might even be a good idea. You don’t want people who will get bored because they are not challenged. You want people who will work two jobs reliably for many decades, and who will order up a new batch of GenEng kids with their own contracts, without questioning the system.
Don’t let Robin Hanson hear that.
What does “should have the right” mean? Should current parents who invest heavily in prenatal (or other pre-competence) improvements to a child have this right?
I’m generally uncomfortable with the idea of long-term contracts that don’t have fairly clear termination options. I think people and cultures change too much over a few decades to have any clue what’s going to be right for them (let alone others) in the far future.
And if you do include the standard options (primarily bankruptcy), it’s pretty easy to game: declare bankruptcy immediately, before you have any earnings.
“Should have the right to enter a contract” means, generally speaking, that the legal system will enforce this contract.
People can enter into any contracts they want; the question is whether these contracts are enforceable.
In this case there is, of course, the issue of the contract binding a third party (the child).
I don’t understand the specific consequences of buying such a contract from the ones who signed it.
You buy the contract from the person who provided the financing and in return get the payments from the genetically enhanced person.
I’m reminded of this novel.
If the Mediterranean basin was formed by an eruption, relatively recently, then its flora and fauna were founded, and later built on, by very many introduction events. Could a superimposition of putative sea trade routes on a map of genetic closeness of specific groups of organisms (including the improbable ones) let us know how much diversity was brought there in ancient times?
(Like those molluscs that hitched rides on ships. Some ships must have sunk in the open sea, which would, I think, give rise to new populations?)
Eh? The last truly cataclysmic event in the Mediterranean basin was about 5 million years ago—a bit too early for sea trade routes. And if you mean the Santorini eruption, it did not kill off the local flora and fauna necessitating introduction events.
Well, no, I didn’t mean that the sea trade was the only source of introductions, just that it might have left still-distinct ‘tracks’ by adding species less likely to be carried in by the initial flood.
Print thousands of unique puzzles (math, image, text puzzles) of varying difficulty on the windows of buses, trains, metros etc. with subtle, thin white strokes of paint like this. Each puzzle would have a code that would point to a website on which people could post their solutions. Everybody could contribute new puzzles that would be printed somewhere in the public after being reviewed and designed in a consistent way by an editorial team.
To build a giant lookup table. Google is a small giant lookup table, but we need a much, much bigger one.
Google is too much about interfacing with their table, but that should be put aside for the moment. What I want is to input any blob of data and output should be all possible relations this blob of data has with any other blob of data.
For example, if I input an integral (calculus), its solution (a function) would be one of the natural outputs. If I input a picture, all pictures of the same object(s) are the natural answer this GLT should return. Then you can filter them further. It goes on and on. The table itself is constantly updated, of course.
The craziness of this idea is only in that I think it would soon replace Google. Otherwise it’s quite basic.
The rub is what you consider the “natural” output.
If you give it a picture of a blackboard, is the natural output pictures of other blackboards? Similar pictures of rooms with similar color schemes? The life story of the poet who wrote the quote written on the blackboard? Internet posts which include the quote? The details of the famous historical event where a politician quoted a line from the same poem? Or the details of the car registration in a photo on the wall?
If you upload a photo of a screwdriver should it give you info on how/where to buy that kind, lists of different types of screwdrivers or pictures of the type of screw it’s designed to fit into?
A major problem you run into with this kind of thing is that you get so very, very many potential links. Take a normal photo and there are thousands, possibly millions, of things that link to it reasonably only one node away, and you need to not just filter but also prioritize.
Except where the intent is pretty safe to assume, like with math problems, you have to give some kind of hint about what kind of thing you’re looking for.
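To make the filter-and-prioritize problem concrete, here is a purely illustrative sketch; the candidate labels, the scores, and the `rank_associations` scoring rule are all invented for this example, not part of any real system:

```python
# Illustrative sketch: ranking candidate associations for a query blob.
# A "hint" keyword boosts matching candidates, then we keep the top few.

def rank_associations(candidates, hint=None, top_k=5):
    """candidates: list of (label, relevance) pairs; hint: optional keyword."""
    def score(item):
        label, relevance = item
        bonus = 1.0 if hint and hint in label else 0.0
        return relevance + bonus
    return sorted(candidates, key=score, reverse=True)[:top_k]

candidates = [
    ("other blackboards", 0.6),
    ("rooms with similar color schemes", 0.4),
    ("poet who wrote the quoted line", 0.3),
    ("car registration in a wall photo", 0.1),
]
print(rank_associations(candidates, hint="blackboard", top_k=2))
```

Even a crude hint like this collapses the candidate space dramatically; without one, the ranking has nothing to distinguish the millions of one-hop links from each other.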
Yes. At first, you have only several known relations for such a blackboard. But the GLT updates automatically, via neural networks for example, just as if it were an indexing machine. Which it is.
Many paths lead from such a blackboard picture. Perhaps as many as one million or more. Perhaps a window is near this blackboard and Saint Peter’s Basilica in Rome is clearly visible through it. Thus, a whole new line of relations opens here. You can filter them in and out.
Did I mention that this table is giant? It would dwarf Google. In fact, every Google query can simply be added to it, as another possible relation in the GLT, along with the IP, date, time, and OS: where and when the query was made.
Input bit blob (string), output bit blob (string). Those kinds of tuples, along with some metadata, are the GLT’s (retrievable, of course) members.
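A minimal sketch of what such a record might look like, going only by the description above (blob-to-blob tuples plus metadata); every name here, including `GLTRecord` and `add_relation`, is hypothetical:

```python
# Toy in-memory version of the proposed table: each entry maps an input
# blob to a related output blob, plus arbitrary metadata.
from dataclasses import dataclass, field

@dataclass
class GLTRecord:
    key: bytes                                # input bit blob
    value: bytes                              # related output bit blob
    meta: dict = field(default_factory=dict)  # e.g. source, timestamp

table = {}  # key blob -> list of records

def add_relation(key: bytes, value: bytes, **meta):
    table.setdefault(key, []).append(GLTRecord(key, value, meta))

add_relation(b"integral: x^2 dx", b"x^3/3 + C", source="calculus")
print(len(table[b"integral: x^2 dx"]))  # 1
```

A real system would of course need content-addressed storage and similarity lookup rather than exact byte keys; this only shows the shape of the tuples being described.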
Spurious connections would likely be a massive headache: patterns on the wall matching patterns in the shadows of some random photos taken 2000 miles away, while the handwriting style gets matched to a pair of Ukrainian schoolchildren who have never been within 3000 miles, and the sentence’s writing style gets matched to an internet post about potato dishes by someone completely unrelated who has never been within 5000 miles.
I get that the table is giant, but it sounds almost like an expert system which you don’t ask questions but rather throw info at and hope it comes back with what you want.
Even bounded, these things can be a headache. I’ve written code that tried to identify duplicate image regions between two images, and you’d be surprised how many little sections of one image a brute-force search can match in another: little areas of sand, particularly generic trees, shapes in clouds, and actual duplicated areas which do match but are rotated through 27 degrees (so you can’t do a straight pixel-by-pixel match) or are slightly more or less compressed.
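As a toy illustration of that kind of brute-force duplicate-region search (a sketch only: it tolerates no rotation or recompression, which is exactly why the real problem is harder than this):

```python
# Slide a small patch from one image over a second image and report
# positions where the window matches exactly. Images are 2D lists of ints.

def find_patch(img_a, img_b, y, x, size=4):
    """Return positions in img_b whose size x size window exactly matches
    the patch of img_a anchored at (y, x)."""
    patch = [row[x:x+size] for row in img_a[y:y+size]]
    h, w = len(img_b), len(img_b[0])
    hits = []
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            window = [row[j:j+size] for row in img_b[i:i+size]]
            if window == patch:
                hits.append((i, j))
    return hits

import random
random.seed(0)
a = [[random.randrange(256) for _ in range(16)] for _ in range(16)]
b = [[random.randrange(256) for _ in range(16)] for _ in range(16)]
for di in range(4):                  # plant a duplicated 4x4 region in b
    b[5 + di][5:9] = a[2 + di][2:6]
print(find_patch(a, b, 2, 2))        # → [(5, 5)]
```

Loosen the equality test to a tolerance (to handle compression noise) and the spurious-match problem described above appears almost immediately.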
If, for example, your system stores info about links between every image that has the same pattern of stars in it, then you’d likely need more storage space than you could get by turning the Earth into computronium. Exponentials are a bugger.
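A quick back-of-the-envelope on that blowup: even just the pairwise links between items grow quadratically, before you get to the genuinely exponential feature-subset groupings:

```python
# Number of unordered pairs among n items: n choose 2.

def pair_count(n):
    return n * (n - 1) // 2

for n in (10**3, 10**6, 10**9):
    print(n, pair_count(n))
# a billion items already implies roughly 5 * 10^17 pairwise links,
# and that's before linking on every shared feature of each item
```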
You hit the same problem, only 1,000,000x worse, if you’re trying to match on everything, everywhere, everywhen.
Crazy ideas thread, isn’t it?
Still, it isn’t more crazy that Google would look like in say 1990.
Likely so, but manageable, one way or another.
That would justify 6 out of 20 zeros, wouldn’t it?
Oh I like the idea, some kind of massive expert system would be awesome.
I’m just running through some of the problems since I’ve played with some things in related domains.
Wolfram Alpha (http://www.wolframalpha.com/) does more or less what you have described, so I suppose that Stephen has devised/engineered some kind of “table”. But still, can you give some more technical insight into your idea? It sounds interesting (to me at least).
BR
WA is quite impressive in some sub-fields. But not nearly enough. What I want are all possible known relations your nick “Ruzeil” has with anything else. A picture (all known pictures) of you and anybody else who may use it as a nick or a (sur)name etc. Then all your posts here and all those who discussed with you...
If there is a known relation anywhere in this world, that relation should be in this GLT. Then you filter out (and aggregate) as you want. Well, the interface lets you do it easily, and an API exists as well.
Perhaps 10^20 records are in the table for you to play with. The number grows and grows. And you can access and view all of them.
Every relation in this table has its own probability. Some quite high, some not. They are constantly updated as well. Even the number of possible attributes of a relation in the table evolves over time.
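A sketch of what per-relation probabilities with continual updates might look like; the running-average update rule here is invented purely for illustration, not taken from the proposal:

```python
# Each relation carries a probability that is nudged toward observed
# evidence every time the relation is confirmed or contradicted.

class Relation:
    def __init__(self, a, b, prob=0.5):
        self.a, self.b = a, b
        self.prob = prob
        self.observations = 0

    def observe(self, confirmed: bool):
        """Move the probability toward 1.0 (confirmed) or 0.0 (refuted)."""
        self.observations += 1
        target = 1.0 if confirmed else 0.0
        self.prob += (target - self.prob) / (self.observations + 1)

r = Relation("Ruzeil", "forum nickname")
r.observe(True)
r.observe(True)
print(round(r.prob, 3))  # → 0.833
```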
Needless to say, you can use the table to see networks of relations between elements of any list you choose to provide to this GLT.
Setting aside whether or not this is useful, I’m not convinced that the implementation you described is practical. Google based search on hyperlinks specifically because that was easy to implement. Is there a smaller search space than the entirety of human knowledge on which this would still be useful?
This was basically the idea behind Wolfram Alpha. He also thought it would soon replace Google. But:
It’s very very hard to do. Play around with Wolfram Alpha and you’ll soon see that while sometimes it spits out exactly what you want, other times it just can’t understand what you’re looking up.
Most people don’t think this way. They can’t formulate a query in the proper way to get the correct things out of it.
Looks like the semantic web.
I dunno. I don’t think I would use what you’re describing over Google. Filtering the associations with little to no work from the end user is huge. If I type “register s” into google, it instantly understands that I want to know about registering scripts in asp.net due to my previous search history, the types of sites I visit, etc.
I think you are underestimating what a tremendous pain in the ass it will be to manually filter through the massive number of associations with a particular string.
In incognito mode “register script” gives links to various resources (WGA/Library of Congress/etc) directed at screenwriters along with sites directed towards programming in languages I don’t know and don’t care about. And this is after Google has removed/hidden links it believes to be spammy or generally unhelpful toward people who make this search.
The thing is, every search you make is going to be appended to the GLT. As I said, each Google query can just be added to the table. And not only your Google queries, if you choose so, but every GLT query as well.
But even without this option, your “register s” example would work better on the GLT than on Google.
With this option on, so much easier.
Millions of filters would already be inside the GLT. Yours may be added. That is a main advantage over Google. Quite obvious to me.
That conveys a much different impression than
And how is this functionality
any different from Google in the first place? Are you implying they aren’t already mining information regarding each user’s search-revision and link-clicking habits to improve their filters as a whole?
Google is enough and will be enough? They’re already doing this and that and everything?
Had Brin and Page thought like that, we would be on AltaVista. But then there would be no AltaVista either. Not even an iron ax.
Some people have no imagination, whatsoever. Most of them. Including very many on this site.
This is a crazy idea thread, remember? Someone may pick one of those ideas and put it into life. That’s all that it is. I will not go into technical details, for sure.
shrug
I am interested in your idea but based on your description, I am legitimately uncertain as to how it is measurably different from what Google already does.
I am certainly not saying that Google is and always will be the best.
Currently, Google does not give you all the available pictures of an object from a photo you have.
This “horizontal” knowledge isn’t present in Google’s databases.
Additionally, page ranking, whatever it currently is, does not permit you to sort the answers yourself. You may want that. Or implement a function like “the shortest”. And many more complex functions.
Sites are just one type of object. You can’t Google for most other objects.
There are some cameras in Africa showing you water ponds. I want to know if there is a waterhole where a lion came into the picture less than 100 seconds ago. Or a warthog. Or both.
And so on.
The above-mentioned GLT would give you such answers; Google doesn’t.