Just this guy, you know?
Dagon
Reason for downvote: this doesn't make clear (and is probably wrong about) the tie to the game-theory terms "zero-sum" and "Nash equilibrium". I suspect they don't mean what you think they mean, but perhaps you're just focusing on other aspects of the decisions, where the game theory is less directly important.
In fact, neither bike protections nor crime is fixed-sum. If everyone buys locks, thieves go to a bit more effort to defeat the locks, and there’s probably LESS theft, but not zero. The Nash equilibrium for effort-to-secure vs effort-to-steal will depend entirely on payoffs, and there’s no reason to believe it’s legible enough to find (or that it even contains) a zero-crime option.
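A toy illustration of that last point (the payoff numbers are entirely made up): brute-forcing best responses in a 2x2 lock-vs-steal game can find an equilibrium that still contains theft.

```python
from itertools import product

# Made-up payoffs for owner (rows: no_lock, lock) vs thief (cols: skip, steal).
# Entries are (owner_payoff, thief_payoff).
payoffs = {
    ("no_lock", "skip"):  (0, 0),
    ("no_lock", "steal"): (-10, 5),
    ("lock", "skip"):     (-1, 0),
    ("lock", "steal"):    (-4, 1),   # lock raises the thief's cost, cuts the owner's loss
}

owner_moves = ["no_lock", "lock"]
thief_moves = ["skip", "steal"]

def pure_nash_equilibria():
    """Return (owner, thief) move pairs where neither side gains by deviating."""
    eqs = []
    for o, t in product(owner_moves, thief_moves):
        o_pay, t_pay = payoffs[(o, t)]
        o_best = all(payoffs[(o2, t)][0] <= o_pay for o2 in owner_moves)
        t_best = all(payoffs[(o, t2)][1] <= t_pay for t2 in thief_moves)
        if o_best and t_best:
            eqs.append((o, t))
    return eqs
```

With these particular numbers the unique pure equilibrium is ("lock", "steal"): everyone locks and some theft still happens. Change the payoffs and you get a different answer, which is the "depends entirely on payoffs" point.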
I think this depends a whole lot on the domain/product, the scalability vs locality question (cookies get worse if more are made in the same place and then distributed, most software doesn’t), and the network effect (software that depends on many people using the same thing).
I love my Kindle (and have since the ugly angular V1). It would be very hard to argue that Amazon is particularly good at “one thing well”, though. Almost none of its other products are that focused, and it’s only because serious readers spend so much on books that they’ve kept the Kindle fairly pure.
More generally, focusing on a single thing is HARD. You need to find a thing that people are willing to pay for (either a few paying a lot or a lot paying a bit), and that you can make more cheaply and better than your competitors. For most entrepreneurs, the path to that is to do a lot of things, then cut the ones which aren’t that successful. This is a SEARCH strategy, not a long-term octopus vision. They’re trying to find the one (or few very related) things that they can do well (and do well for themselves). Many of them don’t actually find it, so they either just get used to “do lots of stuff pretty badly”, or give up.
In theory, competition should counteract a lot of those incentives. Since software generally has low marginal costs, the ones with better functionality for passing users should get more market share, and investing in becoming/staying best will be rewarded.
For a lot of it, noise and short-term metrics overwhelm the quality drive, unfortunately. That’s likely because most software is too cheap (because many customers prefer inexpensive crap, so good things don’t get made).
[ I don’t consider myself EA, nor a member of the EA community, though I’m largely compatible in my preferences ]
I’m not sure it matters what the majority thinks, only what marginal employees (those who can choose whether or not to work at OpenAI) think. And what you think, if you are considering whether to apply, or whether to use their products and give them money/status.
Personally, I just took a job in a related company (working on applications, rather than core modeling), and I have zero concerns that I’m doing the wrong thing.
[ in response to request to elaborate: I’m not going to at this time. It’s not secret, nor is my identity generally, but I do prefer not to make it too easy for ’bots or searchers to tie my online and real-world lives together. ]
Most of these kinds of posts should start with Woody Allen’s 1979 quote:
More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.
Agreed, but it’s not just software. It’s every complex system, anything which requires detailed coordination of more than a few dozen humans and has efficiency pressure put upon it. Software is the clearest example, because there’s so much of it and it feels like it should be easy.
I think this leans a lot on “get evidence uniformly over the next 10 years” and “Brownian motion in 1% steps”. By conservation of expected evidence, I can’t predict the mean direction of future evidence, but I can have a distribution over future updates whose expectation is 0.
For long-term aggregate predictions of event-or-not (those which will be resolved at least a few years away, with many causal paths possible), the most likely updates are a steady reduction as the resolution date gets closer, AND random fairly large positive updates as we learn of things which make the event more likely.
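That shape is easy to check with a small simulation (constant per-step hazard, made-up numbers): the forecast drifts down steadily while nothing happens, jumps to 1 when the event fires, and yet, averaged across worlds, it stays equal to the initial probability.

```python
import random

def forecast_path(hazard=0.02, horizon=100, rng=None):
    """Probability-of-event-by-deadline forecast over time, under a constant
    per-step hazard. While the event hasn't fired, the forecast declines
    steadily toward 0; if it fires, the forecast jumps to 1."""
    rng = rng or random.Random(0)
    path = []
    for t in range(horizon):
        path.append(1 - (1 - hazard) ** (horizon - t))  # chance it still happens
        if rng.random() < hazard:                        # event fires this step
            path.extend([1.0] * (horizon - t - 1))
            return path
    return path

# Conservation of expected evidence: the average forecast at any time,
# taken over many simulated worlds, equals the initial probability.
paths = [forecast_path(rng=random.Random(seed)) for seed in range(20000)]
p0 = paths[0][0]
mean_mid = sum(p[50] for p in paths) / len(paths)
```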
I kind of see what you’re saying, but I also rather think you’re talking about specifying very different things in a way that I don’t think is required. The closer CS definition of math’s “define a sorted list” is “determine if a list is sorted”. I’d argue it’s very close to equivalent to the math formality of whether a list is sorted. You can argue about the complexity behind the abstraction (Math’s foundations on set theory and symbols vs CS library and silicon foundations on memory storage and “list” indexing), but I don’t think that’s the point you’re making.
When used for different things, they’re very different in complexity. When used for the same things, they can be pretty similar.
It’s fascinating (and a little disturbing and kind of unhelpful in understanding) how much steering and context adjustment that’s very difficult in older/smaller/weaker LLMs becomes irrelevant in bigger/newer ones. Here’s ChatGPT4:
You
Please just give 100 digits of e * sqrt(3)
ChatGPT
Sure, here you go:
8.2761913499119 7879730592420 6406252514600 7593422317117 2432426801966 6316550192623 9564252000874 9569403709858
“Mathematical descriptions” is a little ambiguous. Equations and models are terse. The mapping of such equations to human-level system expectations (anticipated conditional experiences) can require quite a bit of verbosity.
I think that’s what you’re saying with the “algorithms and data structures” part, but I’m unsure if you’re claiming that the property specification of the math is sufficient as a description, and comparable in fidelity to the algorithmic implementation.
Wild guesses here. I’ve done work in optical product identification, but I don’t know how well those challenges translate. Also, it’s an obvious enough idea that I expect there are teams working on it.
Lens and CCD technology is not trivial at those speeds and insane angular resolution. It’s not just about counting pixels, it’s about how to get light to the exact right place on the sensor, for long enough to register. I honestly don’t know if that’s solvable.
More boringly, clouds and nighttime would make this much less useful, especially as enemies can plan missions around the expected detection capabilities. I haven’t done the math, but even on clear days in daytime, dust and haze likely interfere too much at even a few km of distance.
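For what it's worth, the angular-resolution side of the math is easy to back-of-envelope with the Rayleigh diffraction limit (the numbers here are illustrative assumptions, and this ignores the dust/haze/seeing problem entirely):

```python
def min_resolvable_m(wavelength_m, aperture_m, range_m):
    """Smallest feature a diffraction-limited lens can resolve at a given
    range (Rayleigh criterion: theta ~ 1.22 * lambda / D)."""
    return 1.22 * wavelength_m / aperture_m * range_m

# Illustrative assumptions: green light (550 nm), 10 cm aperture, 5 km range.
feature = min_resolvable_m(550e-9, 0.10, 5_000)   # ~3.4 cm in the best case
```

So even a perfect lens of that size is limited to a few centimeters at that range, and atmosphere only makes it worse.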
[note: I suspect we mostly agree on the impropriety of open selling and dissemination of this data. This is a narrow objection to the IMO hyperbolic focus on government assault risks. ]
I’m unhappy with the phrasing of “targeted by the Chinese government”, which IMO implies violence or other real-world interventions when the major threats are “adversary use of AI-enabled capabilities in disinformation and influence operations.” Thanks for mentioning blackmail—that IS a risk I put in the first category, and presumably becomes more possible with phone location data. I don’t know how much it matters, but there is probably a margin where it does.
I don’t disagree that this purchasable data makes advertising much more effective (in fact, I worked at a company based on this for some time). I only mean to say that “targeting” in the sense of disinformation campaigns is a very different level of threat from “targeting” of individuals for government ops.
I don’t have confidence in my models of how coherent and competent governments are at getting and using data like this. The primary buyers of location data are advertisers and business planners looking for statistical correlations for targeting and decisions. This is creepy, but not directly comparable to “targeted by the Chinese government”.
My competing theories of “targeted by the Chinese government” threats are:
they’re hyper-competent and have employee/agents at most carriers who will exfiltrate needed data, so stopping the explicit sale just means it’s less visible.
they’re as bureaucratic and confused as everything else, so even if they know where you are, they’re unable to really do much with it.
I think the tension is what does it even mean to be targeted by a government.
Moral weights depend on intensity of conscient experience.
Wow, that seems unlikely. It seems to me that moral weights depend on emotional distance from the evaluator. For some, they’re able to map intensity of conscious experience to emotional sympathy (up to a point; there are no examples, and few people will claim that something that thinks faster/deeper than them is vastly more important than them).
Just to focus on the underlying tension, does this differ from noting “all models are wrong, some models are useful”?
an AI designer from a more competent civilization would use a principled understanding of vision to come up with something much better than what we get by shoveling compute into SGD
How sure are you that there can be a “principled understanding of vision” that leads to perfect modeling, as opposed to just different tradeoffs (of domain, precision, recall, cost, and error cases)? The human brain is pretty susceptible to adversarial (both generated illusion and evolved camouflage) inputs as well, though they’re different enough that the specific failures aren’t comparable.
I tend to read most of the high-profile contrarians with a charitable (or perhaps condescending) presumption that they’re exaggerating for effect. They may say something in a forceful tone and imply that it’s completely obvious and irrefutable, but that’s rhetoric rather than truth.
In fact, if they’re saying “the mainstream and common belief should move some amount toward this idea”, I tend to agree with a lot of it (not all—there’s a large streak of “contrarian success on some topics causes very strong pressure toward more contrarianism” involved).
Hmm. I don’t doubt that targeted voice-mimicking scams exist (or will soon). I don’t think memorable, reused passwords are likely to work well enough to foil them. Between forgetting (on the sender or receiver end), claimed ignorance (“Mom, I’m in jail and really need money, and I’m freaking out! No, I don’t remember what we said the password would be”), and general social hurdles (“that’s a weird thing to want”), I don’t think it’ll catch on.
Instead, I’d look to context-dependent auth (looking for more confidence when the ask is scammer-adjacent), challenge-response (remember our summer in Fiji?), 2FA (let me call the court to provide the bail), or just much more context (5 minutes of casual conversation with a friend or relative is likely hard to really fake, even if the voice is close).
But really, I recommend security mindset and understanding of authorization levels, even if authentication isn’t the main worry. Most friends, even close ones, shouldn’t be allowed to ask you to mail $500 in gift cards to a random address, even if they prove they are really themselves.
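The challenge-response option above can also be made cryptographic rather than memory-based. A toy sketch (hypothetical pre-shared secret, not a deployable design), where the secret itself is never spoken aloud:

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"summer-in-fiji"   # agreed in person, never sent over the call

def make_challenge():
    """Verifier picks a fresh random nonce so old responses can't be replayed."""
    return secrets.token_hex(8)

def respond(challenge, secret=SHARED_SECRET):
    """Prover answers with HMAC(secret, challenge) instead of the secret itself."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge, response, secret=SHARED_SECRET):
    return hmac.compare_digest(respond(challenge, secret), response)
```

Of course, this requires both parties to have set up tooling in advance, which runs straight into the “that’s a weird thing to want” hurdle, and it does nothing about the authorization point: a correct answer still shouldn’t get your friend $500 in gift cards.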
In deep meditation people become disconnected from reality
Only metaphorically, not really disconnected. In truth, in deep meditation the conscious attention is not focused on physical perceptions, but the mind is still contained in, and part of, the same reality.
This may be the primary crux of my disagreement with the post. People are part of reality, not just connected to it. Dualism is false, there is no non-physical part of being. The thing that has experiences, thoughts, and qualia is a bounded segment of the universe, not a thing separate or separable from it.
Is your mind causally disconnected from the actual universe? That’s the only way I can understand the merging of minds that share some similarities (but are absolutely not identical across universes that aren’t themselves identical). Your forgetting may make two possible minds superficially the same, but they’re simply not identical.
I don’t know why you think path-based configuration of brain state would be false. That may not be “identity” for all purposes—there may be purposes for which it doesn’t suffice or is too restrictive, but it’s probably good for this case.
So, https://en.wikipedia.org/wiki/PageRank ?
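For reference, the algorithm behind that link is just power iteration on a damped link graph. A minimal sketch:

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank on a dict of node -> list of outbound links."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node, outs in links.items():
            if not outs:                      # dangling node: spread rank evenly
                for m in nodes:
                    new[m] += damping * rank[node] / n
            else:
                for m in outs:
                    new[m] += damping * rank[node] / len(outs)
        rank = new
    return rank
```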