I’m not as smart as Eliezer, and I’m not pretty good at verbalizing my verbal argument as concise.
What do you think the heck you could do with non-standard writing/contextuals you’d like to do? (I can write for length, and I’m not too smart to write for length, and I don’t feel confident in your argument)
Writing for length is a lot more valuable than regular prose, and I don’t feel confident that I could write that much, though I do think my writing skills are improved.
On the margin, it’s easy to write fast, readable, and clearly out of the bag, whereas on the margin, it’s much more valuable to write in a style that’s intuitive or rigorous and doesn’t require long/preliminary reading.
Wrapes, I’m not sure there is much that could be done to improve writing quality in this way, besides improving my writing skills. I have some ideas, though, enough to move on to this possibility. (But, I’ll leave that to my personal point of view.)
I thought there was—I thought I’d seen one with numbers in the style 1), 2), 3), … going up to 25 -- but I now can’t find it and the obvious hypothesis is that I’m just misremembering what I saw. My apologies.
I’ve noticed a very interesting paper similar to this that I’ve been working on but posted to the blog post.
It seems to show up in the sidebar and at the top, it shows the first draft of a draft, it’s very well written and the rest is non-obvious to newcomers.
It doesn’t even have to be a complete code, so your explanation must be correct.
This is not a code. It’s not an AI. That’s a big thing as a general theorem. It’s not a universal AI, and it’s not a universal AI, or even a universal AI. Not sure what it means or how to define or what it is. There’s a lot of stuff that doesn’t quite fit into my model of the world that the computer does, but it makes me worry. It’s probably a more likely explanation than many other things I can think of in the universe, so if I can’t do that I can’t do that.
In general, we have a better reason to expect human minds to be in the same universe as AI. If you say a universal AI is not able to design a better universal AI, then you are also saying many other things that could be better to get to. You’re saying most things can be faster than human minds in general, which is an impressive fact.
There are lots of examples of this type of reasoning. Some people have talked as recently on Less Wrong. The people in the comments seemed like they should know what they’re talking about. They said that AI is a kind of magical stuff, and therefore it can be used to make stuff happen by taking away its designer’s power, as an application of Occam’s razor. That’s a very different sort of thing than AI or a machine, that just isn’t what you want to do it with, and there are very little of those things.
This is an interesting point from the model of AI. It would be easy to come up with an answer to the question that is not very useful, or even that would be hard to find.
If the answer is “not using it”, then there is a very high probability that the answer will be “use it” (the answer is not very useful). Any question is either inherently confusing, or is something we don’t have a satisfactory answer to, or it’s something that we don’t have a satisfactory answer to. It’s not a trivial problem; but it’s an easy one.
Note that the point of your answer is not to try to understand what the world is like, or what we know.
Why aren’t you looking for a specific example? You might find you can use it or it’s not a specific one, but you should be trying harder to
True, but I don’t think those were Markdown auto-numbers.
I’m not as smart as Eliezer, and I’m not pretty good at verbalizing my verbal argument as concise.
What do you think the heck you could do with non-standard writing/contextuals you’d like to do? (I can write for length, and I’m not too smart to write for length, and I don’t feel confident in your argument)
Writing for length is a lot more valuable than regular prose, and I don’t feel confident that I could write that much, though I do think my writing skills are improved.
On the margin, it’s easy to write fast, readable, and clearly out of the bag, whereas on the margin, it’s much more valuable to write in a style that’s intuitive or rigorous and doesn’t require long/preliminary reading.
Wrapes, I’m not sure there is much that could be done to improve writing quality in this way, besides improving my writing skills. I have some ideas, though, enough to move on to this possibility. (But, I’ll leave that to my personal point of view.)
The numbering in this comment is clearly Markdown auto-numbering. Is there a different comment with numbering that you meant?
For reference, this is how Markdown numbers a list in 3, 2, 1 order:
item
item
item
You were wrong about this aspect of GPT-2. Here is a screenshot of the plain markdown version that we got directly from GPT-2:
I thought there was—I thought I’d seen one with numbers in the style 1), 2), 3), … going up to 25 -- but I now can’t find it and the obvious hypothesis is that I’m just misremembering what I saw. My apologies.
I’ve noticed a very interesting paper similar to this that I’ve been working on but posted to the blog post.
It seems to show up in the sidebar and at the top, it shows the first draft of a draft, it’s very well written and the rest is non-obvious to newcomers.
It doesn’t even have to be a complete code, so your explanation must be correct.
This is not a code. It’s not an AI. That’s a big thing as a general theorem. It’s not a universal AI, and it’s not a universal AI, or even a universal AI. Not sure what it means or how to define or what it is. There’s a lot of stuff that doesn’t quite fit into my model of the world that the computer does, but it makes me worry. It’s probably a more likely explanation than many other things I can think of in the universe, so if I can’t do that I can’t do that.
In general, we have a better reason to expect human minds to be in the same universe as AI. If you say a universal AI is not able to design a better universal AI, then you are also saying many other things that could be better to get to. You’re saying most things can be faster than human minds in general, which is an impressive fact.
There are lots of examples of this type of reasoning. Some people have talked as recently on Less Wrong. The people in the comments seemed like they should know what they’re talking about. They said that AI is a kind of magical stuff, and therefore it can be used to make stuff happen by taking away its designer’s power, as an application of Occam’s razor. That’s a very different sort of thing than AI or a machine, that just isn’t what you want to do it with, and there are very little of those things.
This is an interesting point from the model of AI. It would be easy to come up with an answer to the question that is not very useful, or even that would be hard to find.
If the answer is “not using it”, then there is a very high probability that the answer will be “use it” (the answer is not very useful). Any question is either inherently confusing, or is something we don’t have a satisfactory answer to, or it’s something that we don’t have a satisfactory answer to. It’s not a trivial problem; but it’s an easy one.
Note that the point of your answer is not to try to understand what the world is like, or what we know.
Why aren’t you looking for a specific example? You might find you can use it or it’s not a specific one, but you should be trying harder to