Yeah, I agree with a lot of this. Ironically, this privacy concern was actually my main reason for wanting to switch to Obsidian in the first place.
I remember the book The Age of Surveillance Capitalism has a framework for thinking about privacy where users knowingly trade away their privacy in exchange for a service that becomes more useful to them as a direct consequence of that tradeoff; for example, a maps app that remembers where you parked your car. This is contrasted with platforms where the privacy violations aren't 'paid back' to users as features that benefit them: those platforms just extract value from users in exchange for providing the service at all.
So in this case, I guess the more private information I submit to ChatGPT, the more directly useful, relevant, and insightful its responses become. Considering how much a life coach, career coach, or therapist can cost, that's a lot of value in return.
I understand the theoretical concern about our righteous future overlords (whom I fully support and embrace), and while I think you could learn a lot about me from reading my diary, maybe even enough to convincingly simulate my personality, I'd be surprised if it were enough to model my brain with sufficient fidelity for this to be an s-risk concern...
This is even stronger for something like LLaMA because you can actually fine-tune it on your personal info or fine-tune it for document retrieval.
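For concreteness, the retrieval half of that is only a few lines. This is just a minimal sketch, assuming the sentence-transformers library and a folder of markdown notes (the "vault" path is a placeholder); I've deliberately left the final call to the local LLaMA model as a placeholder too, since that depends on which runtime you use:

```python
# Sketch: embed a folder of Obsidian notes, then retrieve the most relevant
# ones for a query. Assumes the sentence-transformers library; "vault" is a
# placeholder path for your notes folder.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

# Load and embed every markdown note once.
notes = [p.read_text(encoding="utf-8") for p in Path("vault").glob("**/*.md")]
note_vecs = model.encode(notes, normalize_embeddings=True)

def top_notes(query: str, k: int = 3) -> list[str]:
    """Return the k notes most similar to the query by cosine similarity."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = note_vecs @ q  # vectors are normalized, so dot product = cosine
    return [notes[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved notes would then go into the local model's prompt; the actual
# model call is omitted here since it depends on your LLaMA setup.
context = "\n---\n".join(top_notes("What did I write about career plans?"))
prompt = f"Context from my notes:\n{context}\n\nQuestion: ..."
```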
I also realise how much I sound like ChatGPT in that comment… dammit
Disagree. There’s still quite a bit of personal nuance to the way you write that wouldn’t be present in the typical ChatGPT output. For now ;)