The Bill Watterson one requires me to request black bears attacking a black forest campground at midnight.
Optionally: "...as pixel art".
I have to ask, how does one get hold of any of the programs in this vein? I've seen Gwern's TWDNE, and now your experiments with DALL-E, and I'd love to mess with them myself but have no idea where to go. A bit of googling suggests one can buy GPT-3 time from OpenAI, but I gather that's for text generation, which I can do just fine already.
OpenAI has a waitlist you can sign up for to get early access to DALL-E.
Ah, that put me on the right track. I've been asking Google the wrong questions; I was looking for a downloadable program that I could run, but it looks like some (all?) of the interesting things in this space are server-side-only. Which I guess makes sense; presumably gargantuan hardware is required.
In the case of OpenAI, the server-side-only constraint, IIRC, is intentional, to prevent people from modifying the model, for AI safety reasons. My understanding is that usually running a model isn’t as compute-intensive as training it in the first place, so I’d expect a user-side application to be viable; just not in line with OpenAI’s modus operandi.
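The intuition that running a model is much cheaper than training it can be sketched with the usual rough FLOP approximations (training ≈ 6·N·D, inference ≈ 2·N per token, where N is parameter count and D is training tokens). The GPT-3 figures below are the commonly cited ones, not numbers from this thread:

```python
# Back-of-envelope FLOP comparison for a GPT-3-scale model,
# using the standard ~6*N*D training and ~2*N per-token inference
# approximations (N = parameters, D = training tokens).
N = 175e9  # GPT-3 parameter count (commonly cited)
D = 300e9  # approximate GPT-3 training tokens (commonly cited)

training_flops = 6 * N * D           # one-time cost of training
inference_flops_per_token = 2 * N    # recurring cost per generated token

# How many tokens of inference equal one training run:
ratio = training_flops / inference_flops_per_token  # = 3 * D

print(f"training:          {training_flops:.2e} FLOPs")
print(f"inference/token:   {inference_flops_per_token:.2e} FLOPs")
print(f"tokens per training-run's worth of compute: {ratio:.2e}")
```

So a single training run costs as much compute as generating roughly 9×10^11 tokens, which is why serving a trained model on user-side hardware is far more plausible than training one there.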
I asked about this a while ago: https://www.lesswrong.com/posts/HnD8pqLKGn2bCbXJr/what-s-the-easiest-way-to-currently-generate-images-with
There are a few Google Colab notebooks that you can run online, and whose code you could also run offline if you desire.