I’m one of the new readers; I found this forum through a Twitter thread that was critiquing it. I have a psychology background and later switched to ML, and I’ve been following AI ethics for over 15 years, hoping for a long time that the discussion would leak across industries and academic fields.
Since AI (however you define it) is a permanent fixture in the world, I’m happy to find a forum focused on critical thinking about it either way, and I enjoy seeing these discussions on the front page. I hope it’s SEO’d well too.
I think newcomers and non-technical contributors are awesome. Eight years ago I was desperate to see that people in the AI space were thinking about and critically evaluating their own decisions from a moral perspective, since I had started seeing the undeniable effects of this stuff in my own field, with my own clients.
But if the forum starts attracting a ton of that kind of content, you might want to consider splitting off a secondary forum, since this stuff is needed but may dilute the original purpose of this one.
My concerns about AI lie firmly in the chasm between “best practices” and what actually occurs in practice.
Optimizing for the bottom line with no checks and balances, a learned blindness to common sense (see: Robert McNamara and the McNamara fallacy), and also blindness towards our own actions. “What we do to get by.”
It’s not overblown. But instead of philosophizing about AI doomsday, I think there are QUITE enough bad practices going on in industry right now, affecting tons of people, that deserve attention.
Focusing on preventing a theoretical AI takeover is not entirely a conspiracy thing; I’m sure it could happen. But it is not as helpful as:
getting involved with policy
education initiatives for the general public
diversity initiatives in tech and leadership
business/startup initiatives in underprivileged communities
formal research on the common-sense things that are leading to shitty outcomes for underprivileged people
encouraging collaboration, communication, and transfer of knowledge between different fields and across economic lines
teaching people who care about this stuff good marketing and business practices.
committing to seeing past bullshit in general, dropping the pretense, and pushing towards understanding power dynamics
cybersecurity as it relates to human psychology, propaganda, and national security. (I hope some people in that space are worried.)
Also consider how delving into the depths of humanity affects your own mental health and perspective. I’ve found myself to be much more effective when focusing on grassroots, hands-on stuff.
Stuff from academia trickles down to reality far too slowly to keep up with the progression of tech, which is why I removed myself from it. But I still love the concept here, and I’m glad that people outside of AI are thinking critically about it.
Strongly agreed here. My view is that AI takeover is effectively just the scaled-up version of present-day concerns about AI best practices, and the teams doing good work on either one end up helping both. Both “sides of the debate” have critical things to say about each other, but in my view, that’s simply good scientific arguing.
I’d love to hear more of your thoughts on the most effective actions for short-termist AI safety and AI bias, if you were up for writing a post! I’d especially like to hear your thoughts on how cutting-edge psychology research on emergency de-escalation tactics, e.g. how to re-knit connections between humans who’ve lost trust over political fighting, can relate to AI safety; that example might not be your favorite focus, though it’s something I worry about a lot myself and have thoughts about. Or, if you’ve encountered the Socio-Environmental Synthesis Center’s work on interdisciplinary team science (see also their YouTube channel), I’m curious whether you have thoughts on that. More accurately, I give those examples as prompts so you can see what kind of thing I’m thinking about and generalize from it, offering similar references or shallow dives into research that you’re familiar with and I’m not.
So I didn’t know this was a niche philosophy forum with its own subculture. I’m way out of my element.
Taking that into account, my suggestions were not very relevant; I thought this was a general forum. I’m still glad there are people thinking about it.
The links you sent are awesome! I’ll follow those researchers.
I think a lot of my thoughts here are outdated, since things keep changing, and I’m still putting ideas together. So I probably won’t be writing much for a few months, until my brain settles down a little.
Am I a “short-termist”? On the long term, as in the fate of humanity, I don’t think I’m the right person to debate.
Thanks for commenting on my weird intro!
IMO, short-termism = 1 year and long-termism = 10 years; AI is already changing very rapidly. As far as I’m concerned, your posts are welcome. Don’t waste time worrying about being out of your element; just tell it as you see it and let’s debate. This forum is far too skeptical of people with your background, and you should be more self-assured that you have something to contribute.