Hey, I’m a Bioinformatics/Computer Science student at the University of Copenhagen, and I’d like to help in any way I can. Let me know if there’s anything I can do.
madhatter
Is there no way to actually delete a comment? :)
never mind this was stupid
Where did the term on the top of page three of this paper after “a team’s chance of winning increases by” come from?
Will it be feasible in the next decade or so to do real research into making sure AI systems don’t instantiate anything with a non-negligible level of sentience?
Two random questions.
1) What is the chance of AGI first happening in Russia? Are they laggards in AI compared to the US and China?
2) Is there a connection between fuzzy logic and the logical uncertainty of interest to MIRI, or not really?
Any value in working on a website with resources on the necessary prerequisites for AI safety research? The best books and papers to read, etc. And maybe an overview of the key problems and results? Perhaps later that could lead to an ebook or online course.
I agree—great idea!
Thoughts on Timothy Snyder’s “On Tyranny”?
Anything not too technical about nanotechnology? (Current state, forecasts, etc.)
Well, “The set of all primes less than 100” definitely works, so we need to shorten this.
More specifically, what should the role of government be in AI safety? I understand tukabel’s intuition that they should have nothing to do with it, but if unfortunately an arms race occurs, maybe having a government regulator framework in place is not a terrible idea? Elon Musk seems to think a government regulator for AI is appropriate.
I really recommend the book Superforecasting by Philip Tetlock and Dan Gardner. It’s an interesting look at the art and science of forecasting, and those who repeatedly do it better than others.
Wow, I hadn’t thought of it like this. Maybe if AGI is sufficiently ridiculous in the eyes of world leaders, they won’t start an arms race until we’ve figured out how to align it. Maybe we want the issue to remain largely a laughingstock.
Sure. The ideas aren’t fleshed out yet, just thrown out there:
http://lesswrong.com/r/discussion/lw/oyi/open_thread_may_1_may_7_2017/
Stuart, since you’re an author of the paper, I’d be grateful to know what you think about the ideas for variants that MrMind suggested in the open thread, as well as my idea of a government regulator parameter.
One idea I had was to introduce a parameter representing the actions of a governmental regulatory agency. Does this seem like a good variant?
Hi all,
A friend and I (undergraduate math majors) want to work on either exploring a variant or digging deeper into the model introduced in this paper:
Any ideas?
Someone on Hacker News had the idea of putting COVID patients on an airplane to increase air pressure (which is part of how ventilators work, due to Fick’s law of diffusion).
Could this genuinely work?
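A quick back-of-envelope check on the physics here (a hedged sketch, not medical advice; the pressure figures are standard reference values, not from the comment): by Fick’s law, diffusive O2 uptake scales with the O2 partial-pressure gradient across the alveolar membrane, and by Dalton’s law that partial pressure is about 21% of total air pressure.

```python
# Sketch: compare O2 partial pressure at sea level vs. a typical
# pressurized airliner cabin at cruise (~8000 ft cabin altitude).
# Assumption: dry air, O2 mole fraction ~0.21 (Dalton's law).

O2_FRACTION = 0.21  # mole fraction of O2 in dry air

def o2_partial_pressure(total_kpa: float) -> float:
    """Partial pressure of O2 (kPa) at a given total air pressure."""
    return O2_FRACTION * total_kpa

sea_level = o2_partial_pressure(101.3)  # standard sea-level pressure
cabin = o2_partial_pressure(75.0)       # typical cruise cabin pressure

print(f"sea level p_O2: {sea_level:.1f} kPa")
print(f"cabin     p_O2: {cabin:.1f} kPa")
# Note: a cabin at cruise is pressurized relative to the outside air,
# but still *below* sea-level pressure, so the O2 gradient would be
# smaller than on the ground, not larger.
```

So the intuition about pressure and diffusion is right, but an airliner at cruise doesn’t actually deliver above-sea-level pressure; something like a hyperbaric chamber (or an aircraft on the ground) would be the relevant comparison.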