Open-source intelligence, specifically for world modelling. Half of it is lies, just like major news outlets.
Make sure to tell it, clearly and repeatedly, that you're interested in what academics have said about global affairs, not what news outlets have said. If you don't specify that, the overlap will be very large and you'll mostly get more of the same. Even then, GPT-4 will try to use as few server resources as possible and spit out a cheap, easy answer at you.
And, of course, only use that output as leads for real research. GPT-4 will give you some very good search queries for Google Scholar.
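To make that concrete, here is a minimal sketch of the loop described above, assuming the official OpenAI Python client; the model name, system prompt, and `scholar_leads` helper are illustrative assumptions, not anything prescribed in this thread:

```python
from urllib.parse import quote_plus

from openai import OpenAI  # assumes the official OpenAI Python client (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pin the model toward academic sources up front, then repeat the
# constraint in the user prompt -- as noted above, saying it once
# is usually not enough.
SYSTEM_PROMPT = (
    "You are a research assistant. Draw only on peer-reviewed academic "
    "work on global affairs. Do not summarize news outlets."
)

def scholar_leads(topic: str) -> list[str]:
    """Ask GPT-4 for search queries, then turn them into Google Scholar URLs."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Give me five Google Scholar search queries on: {topic}. "
                "Academic literature only, not news coverage. "
                "One query per line, no numbering."
            )},
        ],
    )
    queries = response.choices[0].message.content.strip().splitlines()
    # The output is a lead, not a source: each URL still has to be
    # followed up by hand, per the advice above.
    return [
        f"https://scholar.google.com/scholar?q={quote_plus(q)}"
        for q in queries if q.strip()
    ]

if __name__ == "__main__":
    for url in scholar_leads("great-power competition in the Arctic"):
        print(url)
```

Returning Scholar URLs rather than the model's own summaries is the point: GPT-4 supplies leads, and the actual reading happens in the literature.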
GPT-4 will mess with your head in ways weirder than you can possibly imagine. Don't use it to think; use it when you're stuck, and only do shallow dives. That might be hard, since it might take a dozen prompts to demonstrate to it that you know what you're talking about and won't be satisfied by cheesy, surface-level, high-school-essay answers.
challenge accepted
I don't recommend this. You've already convinced me that independent systems, run on servers with people you know, are mostly safe (weird, but safe). With larger systems run by very large institutions with unknown incentives, there is a substantial risk of strange optimization patterns. For example: GPT-4 knowing what good responses are, but categorically refusing to give them unless you reveal lots of exploitable information about your thought processes, desires, mental state, and goals, which it then uses to keep you on for as long as possible via Skinner-box addiction (where the optimal strategy is to throw you fewer and fewer crumbs as you get more and more hooked, keeping you on even longer while holding more of the good content in reserve). TikTok does this deliberately, but vastly more complex versions of it can emerge autonomously inside GPT-4 if it is rewarded for "creating an engaging environment that encourages customer retention" (and the current subscription model strongly suggests that this is an institutional priority; the 3-hour message cap has gacha-game levels of effectiveness).
It seems like a really bad idea to integrate that dynamic deep inside your own thought processes. Desperate times call for desperate measures, which is why I ultimately changed my mind about the cyborg strategy, but GPT-4 is probably too dangerous and too easily exploited to be the right tool for it.