Oops, yeah, the written programs are supposed to be deterministic. The point of mentioning the RNG was to handle the possibility that an AI might derive some of its performance from a strong random number generator, which a deterministic C program can't emulate.
To clarify: we are not running any programs, just providing code. In a sense, we are competing at the task of providing descriptions for very large numbers with an upper bound on the size of the description (and the requirement that the description is computable).
I personally used Beeminder for this (which I think originated from this community).
Little thought experiment with flavors of Newcomb and Berry’s Paradox:
I have the code of an ASI in front of me, translated into C along with oracle access to a high-quality RNG. This code is N characters. I want to compete with this ASI at the task of writing a 2N-character C code that halts and prints a very large integer. Will I always win?
Sketch of why: I can write my C code to simulate the action of the ASI on a prompt like “write a 2N-character C code that halts and prints the largest integer” using every combination of possible RNG calls and print the max + 1 or something.
Sketch of why not: The ASI can make us both lose by “intending” to print a non-halting program if it is asked to. There might be probabilistic approaches for the ASI as well, where it produces a non-halting program with some chance. If I can detect this in the simulations, I might be able to work around this and still beat the ASI.
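Here's a minimal Python sketch of the "why" strategy, purely to show its shape: `run_asi` and `run_c_program` are hypothetical stubs (a real version would also need a bound on how many RNG bits the ASI consumes), and the halting problem lands exactly where the "why not" sketch says it does:

```python
from itertools import product

def run_asi(prompt: str, rng_tape: tuple) -> str:
    """Hypothetical stand-in: deterministically simulate the ASI's C code on
    `prompt`, answering each of its RNG calls from `rng_tape`, and return the
    C program it writes. Stubbed here so the sketch runs."""
    return 'int main() { printf("1"); return 0; }'  # placeholder

def run_c_program(program: str) -> int:
    """Hypothetical stand-in for compiling/running the generated C program and
    reading off the integer it prints. This is the weak point: there is no
    general way to detect that `program` halts (the "why not" sketch)."""
    return 1  # placeholder

def beat_the_asi(prompt: str, max_rng_bits: int) -> int:
    """Enumerate every possible sequence of RNG outputs, simulate the ASI on
    each, and return one more than the largest integer any run produces."""
    best = 0
    for tape in product((0, 1), repeat=max_rng_bits):
        best = max(best, run_c_program(run_asi(prompt, tape)))
    return best + 1

print(beat_the_asi("write a 2N-character C code that halts and prints a very large integer", 4))
```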
Quick note: it might be easier to replace your utility function with $u' = 1 - e^{-\lambda u}$ for some parameter $\lambda > 0$ (which is equivalent to the one you have, after rescaling and shifting). Utility functions should be concave, but this one is very concave, being bounded above.
Utility functions are discussed a lot here; I think it’s worth poking around a bit.
I just read through the sequence. Eliezer is a fantastic writer and surprisingly well-versed in many areas, but he generally writes to convince a broad audience of his perspective. I personally prefer writing that gets into the technical weeds and focuses on convincing the reader of the plausibility of their perspective, instead of the absolute truth of it (which is why I listed Scott Aaronson’s paper first; I’ve read many of his other papers and blogs, including on the topic of free will, and really enjoy them).
I’m going to read https://www.scottaaronson.com/papers/philos.pdf, https://philpapers.org/rec/PERAAA-7, and the appendix here: https://www.lesswrong.com/posts/dkCdMWLZb5GhkR7MG/ (as well as the actual original statements of Searle’s Wall, Johnston’s popcorn, and Putnam’s rock), and when that’s eventually done I might report back here or make a new post if this thread is long dead by then
Okay, let me know if this is a fair assessment:
-
Let's consider someone meditating in a dark, mostly sealed room with minimal sensory input, in a way that we can agree involves having a conscious experience. Let's pick a 1-second window and consider the CNS and local environment of the meditator during that window.
-
(I don't know much physics, so this might need adjustment.) Let's say we had a reasonable guess of an "initial wavefunction" of the meditator in that window. Maybe this hypothetical is unreasonable in some deep way, and that deserves to be fought over. But supposing it can be done, and we had a sufficiently powerful supercomputer, we could encode and simulate possible trajectories of this CNS over the one-second window. Computational functionalism (CF) suggests that there is a genuine conscious experience there.
-
Now let's look at how one such simulation is encoded, which we could view as a long string of 0s and 1s. The tricky part (I think) is as follows: we have a way of understanding these 0s and 1s as states of particles, and that process of interpretation is "simple". But I can't rigorously convert that understanding into the length of a program, because all a program can do is convert one encoding into another (and presumably we've designed this encoding to be as straightforward to interpret as possible, instead of as short as possible).
-
Let's say I have sand swirling around in a sandstorm. I likewise pick a section of this, and do something like the above to encode it as a sequence of integers in a manner that is as easy for a human to interpret as possible and makes no attempt at compression.
-
Now I can ask for the K-complexity of the CNS string given the sand-swirling sequence as input (i.e., the size of the smallest Turing machine that prints the CNS string with the sand-swirling sequence on its input tape). Divide this by the K-complexity of the CNS string alone. If the resulting fraction is close to zero, maybe there's a sense in which the sand-swirling sequence is really emulating the meditator's conscious experience. But this ratio is probably closer to 1. (By the way, the choice of using K-complexity is itself suspect, but it can be swapped out for other notions of complexity.)
What I can't seem to shake is that it seems fundamentally important that we have some notion of 0s and 1s encoding things in a manner that is optimally "human-friendly". I don't see how this requirement can be replaced in a way that avoids needing a sentient being.
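K-complexity is uncomputable, so purely as a hedged illustration, here's the same test with an off-the-shelf compressor standing in for K (the trick behind normalized compression distance); `cns_bits` and `sand_bits` are toy placeholders for the two encodings:

```python
import zlib

def clen(data: bytes) -> int:
    """Compressed length: a crude, computable stand-in for K-complexity."""
    return len(zlib.compress(data, 9))

def conditional_ratio(cns_bits: bytes, sand_bits: bytes) -> float:
    """Approximate K(cns | sand) / K(cns).
    K(cns | sand) is proxied by clen(sand + cns) - clen(sand): the extra cost
    of describing the CNS string once the sand string is already in hand.
    Near 0: the sandstorm 'already contains' the simulation; near 1: it doesn't."""
    return (clen(sand_bits + cns_bits) - clen(sand_bits)) / clen(cns_bits)

# Toy placeholders for the two human-friendly encodings described above:
cns_bits = b"\x01\x02" * 50_000   # hypothetical CNS-simulation encoding
sand_bits = b"\x07\x03" * 50_000  # hypothetical sandstorm encoding
print(conditional_ratio(cns_bits, sand_bits))
```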
-
Based on your previous posts (and other posts like this), I suspect this might not get any comments explaining the downvotes. So I’ll explain the reason for my downvote, which you may find helpful:
I don't see any ideas. You start with a really weird, hard-to-read, and (I think) wrong definition of a Cartesian product, but then never mention Cartesian products again. You then don't define a relation, but I'm guessing you meant a relation to be a subset of V × V. But then your definition of dependency doesn't make sense. We usually discuss dependency between things called "random variables" (which are not variables in the sense that you're using them), and it's hard to find a charitable interpretation of what you could possibly mean that makes sense.
The next section is a bunch of vague ramblings that make no effort to be coherent. How does a relation express a law of physics? What are the variables? How the heck are you getting boundary conditions into a relation? These are not rhetorical questions: I was trying to find a charitable interpretation to make any of these concepts make sense but I couldn’t.
For future posts I think you should:
-
Take the time to properly understand the concepts you want to talk about. I don't think you know the formal definition of what it means for a random variable X to depend on a random variable Y (spelled out right after this list), and I suspect you might not even know what a random variable is.
-
Properly flesh out your ideas instead of bringing up a bunch of vague concepts and hoping the reader can flesh them out for you. Definitely don’t publish anything that has things like “what are consequences???” in there: if you’re creating a theory you should obviously be the one to define everything.
-
Make sure each line actually follows from the line before it. You can use an AI to help you here: give it your draft and ask if every line convincingly follows from the prior assumptions.
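(For reference, the standard definition I have in mind: random variables $X$ and $Y$ on a common probability space are independent iff $\Pr[X \in A,\, Y \in B] = \Pr[X \in A] \cdot \Pr[Y \in B]$ for all measurable sets $A$ and $B$; "$X$ depends on $Y$" means exactly that this equality fails for some choice of $A$ and $B$.)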
-
I've had Reddit redirect here for almost a year now (with some slip-ups here and there). It's been fantastic for my mental health.
Epistemic status: very new to philosophy and theory of mind, but have taken a couple of graduate courses in subjects related to the theory of computation.
I think there are two separate matters:
I have a physical object that has a means to receive inputs and will do something based on those inputs. Suppose I now create two machines: one that takes 0s and 1s and converts it into something the object receives, and one that observes the actions of the physical object then spits out an output. Both of these machines operate in time that is simultaneously at most quadratic in the length of the input AND at most linear in the “run time” of the physical object. And both of these machines are “bijective”.
If I create a program that has the same inputs/outputs as the above configuration (which is highly non-unique, and can vary significantly based on the choice of machines), there is some sense in which the physical object "computes" this program. This is kind of weak, since the input/output-converting machines can do a lot to emulate different programs, but at least you're getting things in a similar complexity class (a toy sketch follows these two points).
You have a central nervous system (CNS) which is currently having a "subjective experience", whatever that means. It is true that your CNS can be viewed as the aforementioned physical object. And while it is also true that, in the previous framework, one would need a very long and complicated program to capture its full input/output behavior, it also seems to be true that your subjective experience arises from just a specific sequence of inputs.
If we were to only consider how the physical object behaves with a few specific inputs, I think it’s difficult to eliminate any possibilities for what the object is computing. When I see thought experiments like Putnam’s rock, they make sense to me because we’re only looking at a specific computation, not a full input-output set.
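Here's the toy Python sketch of the framing from the first point (every function is a made-up placeholder; the point is just the shape of the composition, with the bijectivity and time constraints left as comments, and the non-uniqueness is exactly what drives the Putnam's-rock intuition):

```python
def encode(bits: str) -> str:
    """Machine 1 (hypothetical): bijectively converts a 0/1 string into a
    stimulus the physical object can receive. Constraint: runs in time at most
    quadratic in len(bits) and at most linear in the object's run time."""
    return bits  # placeholder: identity encoding

def physical_object(stimulus: str) -> str:
    """Placeholder for whatever the physical object does with a stimulus."""
    return stimulus[::-1]  # toy behavior: reverse the stimulus

def decode(observation: str) -> str:
    """Machine 2 (hypothetical): bijectively converts the observed behavior
    back into a 0/1 string, under the same time constraints."""
    return observation  # placeholder: identity decoding

def computed_program(bits: str) -> str:
    """The program the object "computes", relative to this (highly
    non-unique!) choice of encoder and decoder."""
    return decode(physical_object(encode(bits)))

print(computed_program("0011"))  # -> "1100" with these toy choices
```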
Edit: @Davidmanheim I’ve read your reply and agree that I’ve slightly misinterpreted your post. I’ll think about if the above ideas can be salvaged from the angle of measuring information in a long but finite sequence (e.g. Kolmogorov complexity) and reply when I have time.
In general, it feels like the alphabet can be partitioned into “sections” where you can use other letters in the same section for additional variables that will play similar roles. Something like:
[a,b,c,d]; [f,g,h]; [i,j,k]; [m,n]; [p,q]; [r,s,t]; [u,v,w]; [x,y,z]
Sometimes these can be combined: [m,n,p,q]; [p,q,r,s,t]; [r,s,t,u,v,w]; [u,v,w,x,y,z]
Is there a way for me to prove that I'm a human on this website before technology makes this task even more difficult?
Just commenting to say that this is convincing enough (and the application sufficiently low-effort) for me to apply later this month, conditional on being in a position where I could theoretically accept such an offer.
I don’t think this explanation makes sense. I asked ChatGPT “Can you tell me things about Akhmed Chatayev”, and it had no problem using his actual name over and over. I asked about his aliases and it said
Akhmed Chatayev, a Chechen Islamist and leader within the Islamic State (IS), was known to use several aliases throughout his militant activities. One of his primary aliases was “Akhmed Shishani,” with “Shishani” translating to “Chechen,” indicating his ethnic origin. Wikipedia
Additionally, Chatayev adopted the alias “David
Then it threw an error message. Edit: upon refresh, it said more:
Akhmed Chatayev, a Chechen Islamist and leader within the Islamic State (IS), was known to use several aliases throughout his militant activities. One of his primary aliases was “Akhmed Shishani,” with “Shishani” translating to “Chechen,” indicating his ethnic origin. Wikipedia
Additionally, Chatayev adopted the alias “David Mayer.” This particular alias led to a notable case of mistaken identity involving a 90-year-old U.S. Army veteran and theater historian named David Mayer. The veteran experienced significant disruptions, such as difficulties in traveling and receiving mail, due to his name being on a U.S. security list associated with Chatayev’s alias. CBC
These aliases facilitated Chatayev
(I didn’t stop copying there; that was the end of the answer. Full chat)
I think their metric might be clicks rather than upvotes (or at least, clicks carry a heavy weight). Are you more likely to click on a video that pushes an argument you oppose?
As a quick test, you can launch a VPN and open a private browsing window to see how your recommendations change after a few videos.
I notice this is downvoted and was posted by a new user. On the surface, it looks like something I would strongly consider applying to, depending on what happens in my personal life over the next month. Can anyone let me know (either here or privately) if this is reputable?
Jumping in here: the whole point of the paragraph right after defining “A” and “B” was to ensure we were all on the same page. I also don’t understand what you mean by:
Most ordinary people will assume it means that all the rolls were even
and much else of what you've written. I tell you I will roll a die until I get two 6s and will let you know how many odd rolls I made in the process. I then do so secretly and tell you there were 0 odd rolls; all rolls were even. You can now form a probability distribution on the number of rolls I made and compute its expectation.
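A quick Monte Carlo sketch of that computation (hedged: this reads "two 6s" as two 6s in total, not necessarily consecutive; swap in the other reading if that's the intended puzzle, and the trial count is an arbitrary placeholder):

```python
import random

def rolls_until_two_sixes(rng: random.Random) -> list[int]:
    """Roll a fair die until two 6s (in total) have appeared."""
    rolls, sixes = [], 0
    while sixes < 2:
        roll = rng.randrange(1, 7)
        rolls.append(roll)
        sixes += (roll == 6)
    return rolls

def conditional_expected_rolls(trials: int = 1_000_000, seed: int = 0) -> float:
    """Estimate E[number of rolls | zero odd rolls] by rejection sampling:
    simulate the process many times and keep only the all-even runs."""
    rng = random.Random(seed)
    kept_runs = total_rolls = 0
    for _ in range(trials):
        rolls = rolls_until_two_sixes(rng)
        if all(r % 2 == 0 for r in rolls):
            kept_runs += 1
            total_rolls += len(rolls)
    return total_rolls / kept_runs

print(conditional_expected_rolls())
```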
I recently came across unsupervised machine translation here. It’s not directly applicable, but it opens the possibility that, given enough information about “something”, you can pin down what it’s encoding in your own language.
So let’s say now that we have a computer that simulates a human brain in a manner that we understand. Perhaps there really could be a sense in which it simulates a human brain that is independent of our interpretation of it. I’m having some trouble formulating this precisely.
Some ideas discussed here + in comments
https://www.astralcodexten.com/p/secrets-of-the-great-families