Recent published SI work concerns AI safety. They have not recently published results on AGI, to whatever extent that is separable from safety research, for which I am very grateful. Common optimization algorithms do apply to mathematical models, but that doesn’t limit their real-world use; an implemented optimization algorithm designed to work with a given model can do nifty things if that model roughly captures the structure of a problem domain. Or to put it simply, models model things. SI is openly concerned with exactly that type of optimization, and how it becomes unsafe if enough zealous undergrads with good intentions throw this, that, and their grandmother’s hippocampus into a pot until it supposedly does fantastic venture-capital-attracting things. The fact that SI is not writing papers on efficient adaptive particle swarms is good and normal for an organization with their mission statement. Foom was a metaphorical onomatopoeia for an intelligence explosion, which is indeed a commonly used sense of the term “technological singularity”.
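To make the “models model things” point concrete, here is a throwaway sketch (entirely my own toy code, not anything SI publishes; the objective function and parameters are made up): a bog-standard particle swarm only ever evaluates a mathematical model, yet if that model stands in for a real cost surface, the answer it finds is perfectly usable in the real world.

```python
# Minimal particle swarm sketch over a stand-in "model" of some problem domain.
# The objective below is hypothetical; any cost surface could take its place.
import random

def objective(x, y):
    # Toy model of a problem; true minimum at (3, -1).
    return (x - 3) ** 2 + (y + 1) ** 2

def particle_swarm(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(n_particles)]
    vel = [(0.0, 0.0)] * n_particles
    pbest = list(pos)                                   # each particle's best position so far
    gbest = min(pos, key=lambda p: objective(*p))       # swarm-wide best position so far
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia + pull toward personal best + pull toward global best.
            vx = w * vel[i][0] + c1 * r1 * (pbest[i][0] - pos[i][0]) + c2 * r2 * (gbest[0] - pos[i][0])
            vy = w * vel[i][1] + c1 * r1 * (pbest[i][1] - pos[i][1]) + c2 * r2 * (gbest[1] - pos[i][1])
            vel[i] = (vx, vy)
            pos[i] = (pos[i][0] + vx, pos[i][1] + vy)
            if objective(*pos[i]) < objective(*pbest[i]):
                pbest[i] = pos[i]
            if objective(*pos[i]) < objective(*gbest):
                gbest = pos[i]
    return gbest

print(particle_swarm())  # converges near (3, -1)
```

Nothing here is adaptive or clever; the point is only that “operates on a model” and “irrelevant to the real world” are not the same thing.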
SI is openly concerned with exactly that type of optimization, and how it becomes unsafe
Any references? I haven’t seen anything that is in any way relevant to the type of optimization that we currently know how to implement. SI is concerned with the notion of some ‘utility function’, which appears very fuzzy and incoherent. What is it, a mathematical function? What does it take as input and what does it produce as output? The number of paperclips in the universe is given as an example of a ‘utility function’, but you can’t have ‘the universe’ as the input domain of a mathematical function. In an AI, the ‘utility function’ is defined on the model rather than on the world, and absent a ‘utility function’ defined on the world, the work of keeping the model in correspondence with the world is not an instrumental sub-goal arising from maximization of the ‘utility function’ defined on the model. This is a rather complicated, technical issue, and to be honest the SI stance looks indistinguishable from the confusion that would result from an inability to distinguish a function of the model from a property of the world, and from the subsequent assumption that correspondence between the model and the world is an instrumental goal of any utility maximizer. (Furthermore, that sort of confusion would normally be expected as the null hypothesis when evaluating an organization so far outside the ordinary criteria of competence.)
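To illustrate what I mean, here is a toy sketch of my own (the names and structure are hypothetical, not anyone’s actual formalism): the ‘utility function’ an AI can actually compute takes the agent’s internal model as input, and maximizing it says nothing, by itself, about keeping that model accurate to the world.

```python
# Toy illustration: a "utility function" as implemented is a function of the
# agent's internal model state, not of the universe itself.
from dataclasses import dataclass

@dataclass
class WorldModel:
    believed_paperclip_count: int  # an entry in the agent's internal state (hypothetical)

def utility(model: WorldModel) -> float:
    # Defined on the model: it can only "see" the believed count,
    # not the actual number of paperclips in the world.
    return float(model.believed_paperclip_count)

model = WorldModel(believed_paperclip_count=0)
# Maximizing utility(model) rewards raising the *believed* count; nothing in the
# function itself makes keeping that belief accurate an instrumental sub-goal.
model.believed_paperclip_count += 1_000_000  # utility goes up without touching the world
print(utility(model))
```

Some extra machinery has to link the model to the world; the point is that maximizing a function defined on the model does not supply that machinery for free.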
edit: also, by the way, it would improve my opinion of this community if, when you think that I am incorrect, you would explain your reasoning rather than just click the downvote button. While you may want to signal to me that “I am wrong” by pressing the vote button, that, without other information, is unlikely to change my view on the technical side of the issue. Keep in mind that one cannot be totally certain of anything, and while this may be a normal discussion forum that happens to be owned by an AI researcher who is being misunderstood due to a poor ability to communicate the key concepts he uses, it might also be a support group for pseudoscientific research, and the norm of substance-less disagreement would seem to be more probable in the latter than in the former.