Ah, forgot to reply to “What does practical things mean?”
Recently it’s involved optimizing my note-taking process, and atm it involves trying to find a decent generalizable workflow for benefiting from AI assistance. Concretely, this has involved looking through a bunch of GitHub repos and software, trying to understand
➀ what’s currently technologically possible (← AutoCodeRover example),
➁ what might become possible within reasonable time before civilizational deadline,
➂ what is even desirable to introduce into my workflow in the first place.
I want to set myself up such that I can maximally benefit from increasing AI capabilities. I’m excited about low-code platforms for LLM-stacks[1], and LLM-based programming languages. The latter, taken to its limit, could be called something like a “pseudocode interpreter” or “fuzzy programming language”. The idea is to write a very high-level specification for what you wish to do, and have the lower-level details ironed out by LLM agents. I want my code to be degenerate in roughly the biological sense: every subcomponent automatically adjusts itself to fill whatever functional niches the overall system needs (this is a rough explanation, and I know it).
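A minimal sketch of what such a “fuzzy interpreter” loop might look like, under heavy assumptions: `fuzzy_run` and `stub_llm` are hypothetical names I’m inventing here, the LLM is stubbed out as a plain callable rather than a real API client, and a real version would need sandboxing instead of a bare `exec`:

```python
# Sketch of a "pseudocode interpreter": a natural-language spec goes to
# an LLM, which returns Python source; we execute it and read back the
# result. `llm` is any callable str -> str, so a real model client can
# be swapped in where the stub sits below.

def fuzzy_run(spec, llm):
    """Turn a high-level spec into code via the LLM, then run it."""
    prompt = (
        "Write Python code implementing this spec. "
        "Store the final answer in a variable named `result`.\n\n" + spec
    )
    source = llm(prompt)
    namespace = {}
    # WARNING: exec of model output is unsafe; a serious version would
    # sandbox this (subprocess, container, restricted interpreter).
    exec(source, namespace)
    return namespace["result"]


def stub_llm(prompt):
    # Stand-in for a real model call: "knows" exactly one spec.
    return "result = sum(n * n for n in range(1, 11))"


print(fuzzy_run("sum of the squares of 1 through 10", stub_llm))  # → 385
```

The point of the sketch is the shape of the loop (spec → generated code → execution → result), not the stub: the human-facing layer stays at the spec level while the agent owns the lower-level details.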
The immediate next thing on my to-do list is just… finding a decent VS Code extension for integrating AI into whatever I do. I want to be able to say “hey, AI, could you boot up this repository (link) on my PC, and test whether it does thing X?” and have it just do that, with minimal confirmation-checks required on my part.
I took the first baby steps toward a draft of something like this for myself via a plugin for Obsidian Canvas in early 2023[2], but then realized other people were gonna build something like this anyway, and I could benefit from their work whenever they made it available.
I kept thinking at a high level about what this could look like, but left the project bc I don’t actually know how to code (shh), and LLMs were at that point ~useless at fixing my shortcomings for me.