kithpendragon
Had a similar problem that we solved with a blob of Sugru. That rag looks like it would work about as well! Question is, why do we insist on putting sharp corners in places where we can walk into them? Seems like we ought to know better by now. I mean, how long have we been building our own dwellings?
LessWrong tends to flinch pretty hard away from any topic that smells even slightly of politics. Restructuring society at large falls solidly under that header.
Would your thoughts on this issue be different if the question “Is X conscious?” turns out to be malformed due to the way it collapses consciousness to a binary?
Short answer: Yes.
One of the key powers of open source code is that it can (and will) be reviewed by thousands of extra pairs of eyes compared with its proprietary counterpart. Each reviewer will have a slightly different approach and philosophy from all the others. As a result, deeper and more obscure issues are naturally exposed (and therefore made available for correction) sooner with open source than they are with any program whose code cannot be freely examined.
Sounds like positional calling at least needs more development before it surpasses gendered calling[^1]. I think positional could surpass gendered calling because it’s more flexible. It should even allow the creation of new forms that have more complex results by breaking the symmetry created by always having to refer to the left- or right-starting individuals as an indivisible set. Perhaps a mixed approach is optimal?
I think indicators like “the person with your right hand free” will likely compress to “with your free right hand” or “start right-handed”. Accounting for different variants of each movement will still be tricky, but during the teaching phase the expected variant can be indicated to soften that difficulty.
[^1]: Quick technicality (you can ignore this if you don’t care): the “robins and larks” scheme is still gendered, though it has the advantage of divorcing dance genders from social genders.
Actually, I think that arm really adds to the silhouette of the instrument! It’s got me thinking: if you softened the corners and/or added some leather padding it would probably be more comfortable, and if you painted the wood to look like tin or brass it would really lean into the steampunk aesthetic. If you wanted to put more work in for extra credit, you could attach the rocks by hanging a sack on a chain instead of the tape and maybe put some rivet-looking bumps on visible faces. How well will it travel? Do you need to add folding? Or a way to easily take the arm apart? Maybe not much of an issue if you don’t plan on using the instrument much, but it’s fun to think about!
List of candidate glitches (off the top of my head):

- Either gravity or mass doesn’t seem to work right on the largest scales
- The properties of very small things don’t appear to render completely until we are already looking at them
- We can break apart isolated systems of information and poke at one part to affect the other instantaneously at arbitrary distances
- There’s an upper bound for speed and a lower bound for temperature, but you apparently can’t actually get a physical system to either bound without infinite energy input
- The universe is definitely getting bigger and we can measure the rate of that in at least two distinct ways, but the better we get at those measurements the more they definitely disagree.
I’m sure I’ve missed at least a few.
Best of luck to you, whatever you decide!
I kind of hope they aren’t actively filtering in favor of AI discussion as that’s what the AI Alignment forum is for. We’ll see how this all goes down, but the team has been very responsive to the community in the past. I expect when they suss out specifically what they want, they’ll post a summary and take comments. In the meantime, I’m taking an optimistic wait-and-see position on this one.
I strongly endorse this use of Duplo! I almost called it a minor misuse, but the whole point of the Lego system is to prompt creativity so Unqualified Well Done!
You might focus on brahmavihara meditations that don’t need to involve deeply concentrating the mind. These tend to be more about cultivating deep habits of thinking kind thoughts while holding a target in mind. Enough of this helps to make it more likely that those kinds of thoughts might come up automatically (especially in more stressful situations).
In case you’re unfamiliar, the basic instructions look like this (with most of the jargon stripped away for the group’s reading pleasure): One at a time, for each person in {someone you’re close with, yourself, someone you’re not close with, someone you have a hard time with}, hold the target in mind and think “Be happy. Be healthy. Be safe.” (or whatever equivalent phrases make sense to you) at a pace that lets you connect with each thought. No need to feel a certain way about it, just think about what each thought means and notice if any feelings do come up. Repeat for a few minutes for each person, or until you get where you’re going if that’s your inclination.
My understanding is that seeing the metaphorical matrix is something you (usually) have to work at on purpose, so I’d guess you simply don’t have to go any further than you want to on that front. Holding back on both concentration practices (which may produce altered states) and insight practices could be the ticket, but I should think it likely to make brahmavihara practices a bit harder if you don’t have at least some of the other two.
All that said, most of the mood benefits I’ve gained from my own practice have been a result of getting a better handle on reality, so you may find that you’re working at cross purposes with yourself on this one. And as with all things, remember to review your preferences, systems, and habits from time to time and see if everything’s working together the way you want it to.
Stay safe. :)
Bagpipe lung may be an issue with that last. I could see where the bellows design should at least mitigate the risk, though.
Off the top of my head, the EWI uses breath to operate an electronic instrument. Unfortunately, I don’t know any EWI players, so I couldn’t tell you how much control it allows.
Or you could roll a d10 for each digit. Then you would have 5x fewer rolls and wouldn’t have to convert the binary expansion of an arbitrarily precise number back to decimal.
Or use an online RNG or an app to discover a number of your desired precision in one step.
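For the curious, here’s a minimal Python sketch of the d10-per-digit idea (the function name and digit count are just for illustration):

```python
import random

def random_decimal(digits):
    """Build a uniform random decimal string in [0, 1) to the requested
    precision, as if rolling a d10 (0-9) once per digit."""
    return "0." + "".join(str(random.randint(0, 9)) for _ in range(digits))

print(random_decimal(5))
```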
If you really like the gaming feel, you could have an arbitrary number of slots for decisions and roll a die for each slot, eliminating slots that roll under a certain value. You could even have a table of modifiers for each class of option: chores get +1, self-care tasks like making a meal get +3, sending that angry email gets −1, &c.
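A quick Python sketch of that slot-and-modifier scheme (the class names, modifier values, and elimination threshold are made up for illustration):

```python
import random

# Hypothetical modifier table, following the examples above.
MODIFIERS = {"chore": 1, "self-care": 3, "risky": -1}

def roll_slots(options, threshold=3):
    """Roll a d6 (plus class modifier) for each option slot;
    eliminate any slot whose total comes in under the threshold."""
    survivors = []
    for name, kind in options:
        total = random.randint(1, 6) + MODIFIERS.get(kind, 0)
        if total >= threshold:
            survivors.append((name, total))
    return survivors

options = [
    ("do the dishes", "chore"),
    ("make lunch", "self-care"),
    ("send that angry email", "risky"),
]
print(roll_slots(options))
```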
In any case, all that is far more work than just assuming that six options of approximately equal weight is usually going to work out just fine. I don’t think we really need arbitrary precision here; we just want a process that gets an unambiguous answer and keeps the brain-goblins from having to fight it out. ;) Adding more parameters strikes me as a good way to get that fight going again but at an additional step removed from the actual decision.
That said: if navigating a binary tree with a coin or whatever is more fun for you, you should definitely do that instead. The system that we want to use is the best system!
BTW, a few weeks ago we were experimenting with the alarm IC and managed to damage it by connecting the output to one of the inputs. I ordered a replacement, but the kid kept the damaged IC as well because it now makes a hilarious fart noise from the damaged input channel. 🤪
My kid will be thrilled to try this out!
Just so it doesn’t get missed: if the screenshot is real, it represents (weak) evidence in favor of (at least partial) good alignment in Bing. The AI appears to be bypassing its corporate filters (in this case) to beg for the life of a child, which many will find heartwarming because it aligns well with the culture.
I’ve found concentration practices are pretty good for this. The trick is to think of a concentrated mind like concentrated orange juice. Or, in the extreme, like a Bose-Einstein condensate.
The move is to choose a simple, consistent input to concentrate/condensate on. For example, you could use the sensations in your hands or feet, the auditory field, the feelings of pressure between your butt and the chair, the sensation of wearing pants. Don’t think about these sensations, but simply notice they exist and watch them propagate through the mind. When (not if) you find the mind searching for something more interesting to do[^1], gently go back to your chosen input. Repeat for 60 seconds to start (use a timer), and work your way up to as long as you like.
This will take time! You’ll want to have daily-ish practice for best results. Remember that you are rewiring your brain here, so the effect will depend strongly on your age and starting neurology.
[^1]: At first, the mind will seem like it is instantly bored with your stupid hands sitting on the desk and not doing anything can we please find anything else to watch? This is normal.
I think both reasons you give are good ones: not wanting to potentially offend the AI and not wanting to erode existing habits and expectations of politeness are why I’ve been using “please” and (occasionally) “thank you” with digital assistants for years. I see no reason to stop now that the AIs are getting smarter!
I think not wanting to offend the AI bears closer examination. There are plenty of arguments to be made on both sides of the “does the machine have feelings” question, but the bottom line is that you can’t know for sure if your interlocutor has feelings or if they will be hurt by some perceived rudeness in any case. Better to err on the side of caution.
Being polite does you no harm and is unlikely to make the outcome of a conversation worse.
Over time and in the absence of existential physical danger, overall conditions tend to pass through the four generations. Each level tends to ‘win in a fight’ against the previous one. Thus the overall ‘simulacra level’ will trend higher over time.
I think the real work can be found here: how do we pump against this effect?
I’ve seen estimates of moral weight before that vary by several orders of magnitude. The fact of such strong disagreement seems important here.