Edit: removed a bad point.
If you object strongly to the use of the term UBI in the post, you can replace it with something else.
Then I make a number of substantive arguments.
Your response so far is ‘if it’s a UBI it won’t suffer from these issues by its very definition.’
My response is ‘yes it will, because I believe any UBI policy proposal will degrade into something less than the ideal definition almost immediately when implemented at scale, or just emerge from existing welfare systems piecemeal rather than all at once. Then all the current concerning “bad things that happen to people who depend on government money” will be issues to consider.’
Solenoid_Entity
[Question] Is there a ‘time series forecasting’ equivalent of AIXI?
I’m speaking about the policy that’s going to be called UBI when it’s implemented. You’re allowed to discuss e.g. socialism without having to defer to a theoretical socialism that is by definition free of problems.
Anyway, it’s a quibble; feel free to find and replace UBI with ‘the policy we’ll eventually call UBI’. It doesn’t change the argument I make.
Where do I call existing welfare systems UBI? That’s a misunderstanding of my argument.
My point is that I don’t think it’s likely that future real-world policies will BE universal. They’ll be touted as such, they might even be called UBI, but they won’t be universal. I argue they’re likely to emerge from existing social welfare systems, or absorb their infrastructure and institutions, or at least their cultural baggage.
I can see the confusion, and maybe I should have put ‘UBI’ in quotes to indicate that I meant ‘the policy I think we’ll actually get that people will describe as UBI or something equivalent.’
My point is not to argue that existing welfare systems are UBI. I don’t use any non-standard definitions. I don’t call existing welfare systems UBI.
My point is that the real-world policy we’re likely to eventually call UBI probably won’t actually be universal, and if it emerges as a consequence of more and more people relying on social welfare, or else is associated with social welfare culturally, bad things will likely happen. Then I give some examples of the sort of bad things I mean.
I frequently hear people saying something like “and this is why we need a UBI.”
This is a good point. I would like it very much if we could implement a UBI policy that did not come with the cultural baggage of existing social welfare systems. I would like it if existing social welfare systems became more unconditional. I see why people think UBI would achieve this. I think they’re more optimistic than I am about our ability to shed our social attitudes to work and welfare. Maybe it’ll change with demographics, who knows...
I’m writing the original paragraph, and answering a bunch of questions designed to prompt me to reflect.
There are a few Obsidian plugins that do similar stuff using LLMs (they purport to read your notes and help you something something).
I’m thinking of mocking something up over the next week or so that does this ‘diary questions’ thing in a more interactive way, via the API, from inside Obsidian.
I also realise how much I sound like ChatGPT in that comment… dammit.
Yeah, I agree with a lot of this, and this privacy concern was actually my main reason to want to switch to Obsidian in the first place, ironically.
I remember in the book The Age of Surveillance Capitalism there’s a framework for thinking about privacy where users knowingly trade away their privacy in exchange for a service which becomes more useful for them as a direct consequence of the privacy tradeoff. So for example, a maps app that remembers where you parked your car. This is contrasted with platforms where the privacy violations aren’t ‘paid back’ to the users as useful features that benefit them; the platform just extracts value from users in exchange for providing a service at all.
So in this case, I guess the more private information I submit to ChatGPT, the more directly useful and relevant and insightful its responses to me get. Considering how much a life coach or career coach or therapist can cost, that’s a lot of value in return.
I understand the theoretical concern about our righteous future overlords whom I fully support and embrace, but while I think you could learn a lot about me from reading my diary, including convincingly simulating my personality, I would be surprised if reading my diary was enough to model my brain with sufficient fidelity that it’s an s-risk concern...
Currently just copy-pasting into GPT-4 via the web interface. I’ve got it working via the GPT-3 API as well today, but for now I prefer to suffer the inconvenience and get the better model. The questions it asks are MUCH more insightful.
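For anyone who wants to skip the copy-pasting, a minimal sketch of the ‘diary questions’ loop via the API (assumes the `openai` Python package and an `OPENAI_API_KEY`; the prompt wording and function names here are illustrative, not the exact ones I use):

```python
def build_reflection_prompt(entry: str) -> list[dict]:
    """Wrap a raw journal entry in a chat prompt asking for reflective questions."""
    system = (
        "You are a reflective journaling assistant. Read the diary entry and "
        "ask three short, insightful questions that prompt deeper reflection."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": entry},
    ]


def ask_questions(entry: str, model: str = "gpt-4") -> str:
    """Send the entry to the chat completions endpoint and return the questions."""
    from openai import OpenAI  # requires the `openai` package and OPENAI_API_KEY

    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=build_reflection_prompt(entry),
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(ask_questions("Today I procrastinated on my violin practice again..."))
```

Swapping the model string is the only change needed to move between GPT-3.5 and GPT-4; in my experience the question quality difference is large enough to justify the cost.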
Reflective journal entries using GPT-4 and Obsidian that demand less willpower.
The argument is:
1. You probably can’t make it universal.
2. If people can be excluded from the program and depend on it, it creates a power differential that can be abused.
3. There are lots of present-day examples of such abuse, so absent a change, that abuse or similar will continue to exist even if we have a UBI.
I’d just explicitly ask the teacher if they’re happy with the instrument’s setup. It’s probably fine, but maybe they’ll tell you it needs work. Generally 1/4-size instruments aren’t going to sound great anyway, but the setup is still very important.
Thanks, great recommendation! I’ll check it out for sure.
Repugnant levels of violins
The UBI dystopia: a glimpse into the future via present-day abuses
Test post for formatting
On the subject of jargon, there’s one piece of jargon that I’ve long found troubling on LW, and that’s the reference to ‘tech’ (for mental techniques/tools/psycho-technologies), which I’ve seen Duncan use a few times IIRC.
A few issues:
1. It’s exactly the same usage as the word ‘tech’ in the fake scifi ‘religion’ that must not be named (lest you summon its demons to the forum through the Google portal). They do exercises to give them new mental tools, based on reading the lengthy writings of their founder on how to think, and those lessons/materials/techniques are always referred to as ‘tech.’ This doesn’t automatically make our usage of it bad, but it’s probably smart to avoid so closely mirroring their usage imo.
2. Using the word ‘tech’ doesn’t shine much light. I’m aware of the concept of ‘exaptation’ and that things external to the mind can be integrated into the mind much the way that a craftsman stops seeing the hammer as separate from his hand. Still, it doesn’t seem very useful to blur the distinction between mental techniques and reasoning strategies we can learn and internalise by reading blog posts, and literal technology we might use to augment or supplement our thinking abilities.
Amazing, thanks!
I think there may be a typo in the table directly under the heading “Token probability shifts.”
If it’s not a typo, why are both coefficients positive? Aren’t we meant to subtract the vector for ‘ ’?