B.Eng (Mechatronics)
anithite
RF jamming, communication and other concerns
TLDR: Jamming is hard when the comms system is designed to resist it. Civilian stuff isn’t, but military systems are and can be quite resistant. Frequency hopping makes jamming ineffective if you don’t care about stealth. Phased array antennas are getting cheaper and make things stealthier by increasing directivity (a Starlink terminal costs $1300 and has ~40 dBi of gain). Very expensive comms systems on fighter jets using mm-wave comms and phased array antennas can do gigabit+ links undetected in the presence of jamming.
civilian stuff is trivial to jam
EG:sending deauthentication messages to disconnect Wi-Fi devices requires very little power
most civvy stuff sends long messages; if you see the start of a message you can “scream” very loudly to disrupt part of it and the whole message gets dropped.
Civvy stuff like Wi-Fi, BT and cellular has strict transmit power limits, typically <1W of transmit power.
TLDR: jamming civvy stuff requires less power than transmitting it. Still, amplifiers and directional antennas can help in the short term.
military stuff hops from one frequency to another using a keyed unpredictable algorithm.
Sender and receiver have synchronized clocks and spreading keys, so both know what frequency to use when. Hop time is short enough that a jammer can’t respond in time.
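A toy sketch of keyed hopping (the channel count, dwell time, and HMAC-based channel selection are illustrative assumptions, not any real military waveform):

```python
import hashlib
import hmac
import time

CHANNELS = 1000  # hypothetical hop set size
HOP_MS = 10      # hypothetical dwell time per hop, in milliseconds

def hop_channel(key: bytes, slot: int) -> int:
    """Derive an unpredictable channel from a shared key and a time slot.

    Sender and receiver with synchronized clocks compute the same slot and
    hence the same channel; a jammer without the key can't predict the hop."""
    digest = hmac.new(key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % CHANNELS

key = b"shared spreading key"
slot = int(time.time() * 1000) // HOP_MS  # current hop slot
assert hop_channel(key, slot) == hop_channel(key, slot)  # both ends agree
```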
Fundamentals of Jamming radio signals (doesn’t favor jamming)
Jammer fills a big chunk of radio spectrum with some number of watts/MHz of noise
EG:Russian R-330ZH puts out 10 kW from 100 MHz to 2 GHz (approx 5 kW/GHz or 5 W/MHz)
more than enough to drown out civvy comms like Wi-Fi that use <<1 W signals spanning 10-100 MHz of bandwidth, even on a short link far away from the jammer.
Comms designed to resist jamming can use 10 W+ and reduce transmission bandwidth as much as needed at the cost of fewer bits/second.
A low bandwidth link (100 kb/s) with a reasonable power budget is practically impossible to jam until the jammer gets much, much closer to the receiver than the transmitter is.
GPS and satcom signals are easy to jam because of the large distance to the satellite and satellite power limits.
Jamming increases required power density to get signal through intelligibly. Transmitter has to increase power or use narrower transmit spectrum. Fundamentally signal to noise ratio decreases and Joules/bit increases.
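Back-of-envelope version of the power-density argument, using the illustrative numbers from above (free-space path losses ignored):

```python
def density_w_per_mhz(tx_watts: float, bandwidth_mhz: float) -> float:
    """Average transmit power density across the occupied bandwidth."""
    return tx_watts / bandwidth_mhz

jammer = density_w_per_mhz(10_000, 1900)  # ~R-330ZH: 10 kW over 100 MHz-2 GHz
wifi = density_w_per_mhz(0.1, 20)         # 100 mW Wi-Fi in a 20 MHz channel
narrow = density_w_per_mhz(10, 0.1)       # 10 W squeezed into a 100 kHz channel

print(f"{jammer:.2f}")  # ~5.26 W/MHz of noise
print(f"{wifi:.3f}")    # 0.005 W/MHz -- drowned out
print(f"{narrow:.0f}")  # 100 W/MHz -- well above the jammer's noise density
```

Narrowing the transmit bandwidth trades bits/second for power density, which is exactly the anti-jam lever described above.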
Communication Stealth
Jammer + phased array antennas + very powerful computer gives ability to locate transmitters
Jammer forces transmitters to use more power
Phased array antennas + supercomputer:
computer calculates/subtracts reflected jamming signal
Phased array antenna+computer acts like telescope to find “dimmer” signals in background noise lowering detection threshold
Fundamental tradeoff for transmitter
directional antennas/phased arrays
increases power sent/received to/from particular direction
bigger antenna with more sub-elements increases directionality/gain
Starlink terminals are big phased array antennas
this quora answer gives some good numbers on performance
Starlink terminal gives approx 3000x (35 dBi) more power in the chosen direction vs an omnidirectional antenna
Necessary to communicate with a satellite 500+ km away
Starlink terminals are pretty cheap
smaller phased arrays for drone-drone comms should be cheaper.
drone that is just a big Yagi antenna also possible and ludicrously cheap.
stealthy/jam immune comms for line of sight data links at km ranges seem quite practical.
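For reference, converting the dBi figures above to linear power multipliers:

```python
def dbi_to_factor(gain_dbi: float) -> float:
    """Antenna gain in dBi -> linear power multiplier vs an isotropic antenna."""
    return 10 ** (gain_dbi / 10)

print(round(dbi_to_factor(35)))  # ~3162, the "approx 3000x" Starlink figure
print(round(dbi_to_factor(40)))  # 10000
```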
development pressure for jam resistant comms and associated tech
little development pressure on civvy side B/C FCC and similar govt. orgs abroad shut down jammers
military and satcom will drive development more slowly
FCC limits on transmit power can also help
Phased array transmit/receive improves signal/noise
This is partly driving wifi to use more antennas to improve bandwidth/reliability
hobbyist drone scene could also help (directional antennas for ground to drone comms without requiring more power or gimbals)
Self driving cars have to be (almost)perfectly reliable and never have an at fault accident.
Meanwhile cluster munitions are being banned because submunitions can have 2-30% failure rates leaving unexploded ordnance everywhere.
In some cases avoiding civvy casualties may be a similar barrier since distinguishing civvy from enemy reliably is hard but militaries are pretty tolerant to collateral damage. Significant failure rates are tolerable as long as there’s no exploitable weaknesses.
Distributed positioning systems
Time of flight distance determination is in some newer Wifi chips/standards for indoor positioning.
Time of flight across a swarm of drones gives drone-drone distances which is enough to build a very robust distributed positioning system. Absolute positioning can depend on other sensors like cameras or phased array GPS receivers, ground drones or whatever else is convenient.
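The distance math itself is trivial; the hard part is nanosecond-accurate timestamping. A sketch of two-way ranging (as in Wi-Fi FTM-style protocols):

```python
C = 299_792_458.0  # speed of light in m/s

def two_way_range_m(round_trip_s: float, turnaround_s: float = 0.0) -> float:
    """Two-way time of flight: the signal goes out and back, so divide by 2."""
    return C * (round_trip_s - turnaround_s) / 2.0

# A 1 microsecond round trip is roughly 150 m of range;
# every nanosecond of timing error moves the estimate by ~15 cm.
print(round(two_way_range_m(1e-6), 1))  # ~149.9
```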
Overhead is negligible because the military would use symmetric cryptography. A message authentication code can be N bits for a 2^-N chance of forgery. 48-96 bits is likely the sweet spot and barely doubles the size of even tiny messages.
Elliptic curve crypto is there if for some reason key distribution is a terrible burden. typical ECC signatures are 64 bytes (512 bits) but 48 bytes is easy and 32 bytes possible with pairing based ECC. If signature size is an issue, use asymmetric crypto to negotiate a symmetric key then use symmetric crypto for further messages with tight timing limits.
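A minimal sketch of the truncated-MAC idea (the key, message, and 64-bit truncation are illustrative choices, not a fielded protocol):

```python
import hashlib
import hmac

def mac_tag(key: bytes, message: bytes, bits: int = 64) -> bytes:
    """Truncated HMAC: an N-bit tag gives ~2^-N forgery odds per attempt."""
    assert bits % 8 == 0
    return hmac.new(key, message, hashlib.sha256).digest()[: bits // 8]

key = b"pre-shared symmetric key"
msg = b"waypoint 12: hold position"
tag = mac_tag(key, msg)  # 8 bytes of overhead on a short message
assert hmac.compare_digest(tag, mac_tag(key, msg))  # authentic message verifies
assert tag != mac_tag(key, b"waypoint 12: attack")  # tampering changes the tag
```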
Current landmines are very effective because targets are squishy/fragile:
Antipersonnel:
take off a foot
spray shrapnel
Antitank/vehicle:
cut track /damage tires
poke a hole with a shaped charge and spray metal into vehicle insides
Clearing an area for people is hard
-
drones can be much less squishy
need more explosives to credibly threaten them
-
Eliminating mine threat requires
clearing a path (no mines buried under transit corridor)
mine clearing vehicle
use line charge
block sensors so off route mines can’t target vehicles
Inflatable barriers that block line of sight/radar
This is enough to deal with immobile off route mines. If the minefield has active sensors, those can be spoofed and/or destroyed or blocked at slightly higher expense. Past this, the mines have to start moving to be a threat and then you’re dealing with drones vs. drones, not mines.
Ideal mine clearing robots and drones in general should be very resilient:
No squishy center like people filled vehicles.
battery powered drone with per wheel motors and multi-part battery pack has no single point of failure.
Doing meaningful damage to such a drone is hard.
flimsy exterior can hide interior parts from inspection/targeting.
Vulnerable systems with fluid like cooling/hydraulics can include isolation valves and redundancy.
alternatively, no fluids, air for cooling and electric motors/generators/batteries?
multiple locations/configurations for important components that can be moved (EG:battery/computers)
I think GPT-4 and friends are missing the cognitive machinery and grid representations to make this work. You’re also making the task harder by giving them a less accessible interface.
My guess is they have pretty well developed what/where feature detectors for smaller numbers of objects but grids and visuospatial problems are not well handled.
The problem interface is also not accessible:
There’s a lot of extra detail to parse
Grid is made up of gridlines and colored squares
colored squares of fallen pieces serve no purpose but to confuse model
A more accessible interface would have a pixel grid with three colors for empty/filled/falling
Rather than jump directly to Tetris with extraneous details, you might want to check for relevant skills first.
predict the grid end state after a piece falls
model rotation of a piece
Rotation works fine for small grids.
Predicting drop results:
Row first representations gives mediocre results
GPT4 can’t reliably isolate the Nth token in a line or understand relationships between nth tokens across lines
dropped squares are in the right general area
general area of the drop gets mangled
rows do always have 10 cells/row
column first representations worked pretty well.
I’m using a text interface where the grid is represented as 1 token/square. Here’s an example:
0 x _ _ _ _ _
1 x x _ _ _ _
2 x x _ _ _ _
3 x x _ _ _ _
4 _ x _ _ o o
5 _ _ _ o o _
6 _ _ _ _ _ _
7 _ _ _ _ _ _
8 x x _ _ _ _
9 x _ _ _ _ _
GPT4 can successfully predict the end state after the S piece falls. Though it works better if it isolates the relevant rows, works with those and then puts everything back together.
Row 4: _ x o o _ _
Row 5: _ o o _ _ _
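For comparison, the drop prediction the model is asked to do is mechanically simple. A toy column-first simulator (my own encoding, not the exact prompt format: each column is a set of filled row indices, row 0 at the bottom):

```python
def drop(grid: dict, piece: dict) -> dict:
    """Rest a falling piece on the stack under gravity.

    grid[c] is a set of filled row indices in column c (row 0 = bottom).
    piece[c] is a list of the piece's row offsets in column c.
    Assumes no overhangs in the stack (fine for a toy)."""
    heights = {c: max(grid[c]) + 1 if grid[c] else 0 for c in piece}
    # lowest base row such that every piece cell sits at or above the stack
    base = max(heights[c] - min(offsets) for c, offsets in piece.items())
    for c, offsets in piece.items():
        grid[c] |= {base + o for o in offsets}
    return grid

# An S piece falling onto a small stack: it catches on column 1's stack
grid = {0: {0}, 1: {0, 1}, 2: set(), 3: set()}
s_piece = {1: [1], 2: [0, 1], 3: [0]}
drop(grid, s_piece)
print(sorted(grid[2]))  # piece cells in column 2 rest at rows 1 and 2: [1, 2]
```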
making things easier
columns as lines keeps verticals together
important for executing simple strategies
gravity acts vertically
Rows as lines is better for seeing voids blocking lines from being eliminated
not required for simple strategies
Row based representations with rows output from top to bottom suffer from prediction errors when dropping pieces. A common error is predicting a dropped piece square in a higher row and duplicating such squares. Output that flips the state upside down, lower rows first, might help in much the same way as it helps to do addition starting with the least significant digit.
This conflicts with model’s innate tendency to make gravity direction downwards on page.
Possibly adding coordinates to each cell could help.
The easiest route to mediocre performance is likely a 1.5d approach:
present game state in column first form
find max_height[col] over all columns
find step[n]=max_height[n+1]-max_height[n]
pattern match step[n] series to find hole current piece can fit into
This breaks the task down into subtasks the model can do (string manipulation, string matching, single digit addition/subtraction). Though this isn’t very satisfying from a model competence perspective.
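The 1.5d pipeline above, written out (assumes columns fill from the bottom with no holes, which is part of why it's only mediocre):

```python
def find_fit(columns, piece_steps):
    """Match a piece's underside step profile against the stack surface.

    columns: column-first grid, e.g. "xxx" is a column filled to height 3.
    piece_steps: height differences across the piece's bottom edge.
    Returns the leftmost column index where the profile matches, else None."""
    max_height = [col.count("x") for col in columns]          # max_height[col]
    step = [max_height[n + 1] - max_height[n]                 # step[n]
            for n in range(len(max_height) - 1)]
    width = len(piece_steps)
    for n in range(len(step) - width + 1):
        if step[n : n + width] == piece_steps:
            return n
    return None

surface = ["xxx", "x", "x", "xx", "xxxx"]  # surface heights 3,1,1,2,4
print(find_fit(surface, [0]))  # a flat-bottomed piece fits at column 1
```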
Interestingly the web interface version really wants to use python instead of solving the problem directly.
Not so worried about country vs. country conflicts. Terrorism/asymmetric is bigger problem since cheap slaughterbots will proliferate. Hopefully intelligence agencies can deal with that more cheaply than putting in physical defenses and hard kill systems everywhere.
Still don’t expect much impact before we get STEM AI and everything goes off the rails.
Also without actual fights how would one side know the relative strength of their drone system
Relative strength is hard to gauge but getting reasonable perf/$ is likely easy. Then just compare budgets adjusted for corruption/Purchasing power parity/R&D amortisation.
Building an effective drone army is about tactical rock paper scissors and performance / $. Perf / $ emphasis makes live fire tests cheap. Live fire test data as baseline makes simulations accurate. RF/comms performance will be straightforward to model and military is actually putting work into cybersecurity because they’re not complete morons.
Add to that the usual espionage stuff and I expect govts to know what will work and what their enemies are doing.
Ukraine war was allegedly failure to predict the human element (will to fight) with big intelligence agencies having bad models. Drone armies don’t suffer from morale problems and match theoretical models better.
Disclaimer:Short AI timelines imply we won’t see this stuff much before AI makes things weird
This is all well and good in theory but mostly bottlenecked on software/implementation/manufacturing.
with the right software/hardware current military is obsolete
but no one has that hardware/software yet
EG:no one makes an airborne sharpshooter drone (edit: cross that one off the list). The Black Sea is not currently full of Ukrainian anti-ship drones + comms relays.
no drone swarms/networking/autonomy yet
I expect current militaries to successfully adapt before/as new drones emerge
soft kill systems (Jam/Hack) will be effective against cheap off the shelf consumer crap
hard kill systems (Airburst/Laser) exist and will still be effective
laser cost/KW has been dropping rapidly
minimal viable product is enough for now
Ukraine war still involves squishy human soldiers and TRENCHES
what’s the minimum viable slaughterbot
can it be reusable (bomber instead of kamikaze) to reduce cost per strike
Drone warfare endgame concerns are:
kill/death ratio
better per $ effectiveness
conflict budget
USA can outspend opponents at much higher than 10:1 ratio
R&D budget/amortisation
Economies of scale likely overdetermine winners in drone vs drone warfare since quantity leads to cheaper more effective drones
A few quibbles
Ground drones have big advantages
better payload/efficiency/endurance compared to flying
cost can be very low (similar to car/truck/ATV)
can use cover effectively
indirect fire is much easier
launch cheap time fused shells using gun barrel
downside is 2 or 2.5d mobility.
Vulnerable to landmines/obstacles unlike flying drones
navigation is harder
line of sight for good RF comms is harder
Use radio, not light for comms.
optical is immature and has downsides
RF handles occlusion better (smoke, walls, etc.)
RF is fine aside from non-jamming resistant civilian stuff like WIFI
Development pressure not there to make mobile free space optical cheap/reliable
jamming isn’t too significant
spread spectrum and frequency hopping is very effective
jamming power required to stop comms is enormous, have to cover all of spectrum with noise
directional antennas and phased arrays give some directionality and make jamming harder
phased array RF can double as radar
stealthy comms can use spread spectrum with transmit power below noise floor
need radio telescope equivalent to see if something is an RF hotspot transmitting noise like signal
As long as you can reasonably represent “do not kill everyone”, you can make this a goal of the AI, and then it will literally care about not killing everyone, it won’t just care about hacking its reward system so that it will not perceive everyone being dead.
That’s not a simple problem. First you have to specify “not killing everyone” robustly (outer alignment) and then you have to train the AI to have this goal and not an approximation of it (inner alignment).
caring about reality
Most humans say they don’t want to wirehead. If we cared only about our perceptions then most people would be on the strongest happy drugs available.
You might argue that we won’t train them to value existence so self preservation won’t arise. The problem is that once an AI has a world model it’s much simpler to build a value function that refers to that world model and is anchored on reality. People don’t think, If I take those drugs I will perceive my life to be “better”. They want their life to actually be “better” according to some value function that refers to reality. That’s fundamentally why humans make the choice not to wirehead/take happy pills or suicide.
You can sort of split this into three scenarios sorted by severity level:
severity level 0: ASI wants to maximize a 64bit IEEE floating point reward score
result: ASI sets this to 1.797e+308, +inf or similar and takes no further action
severity level 1: ASI wants (same) and wants the reward counter to stay that way forever.
result: ASI rearranges all atoms in its light cone to protect the storage register for its reward value.
basically the first scenario + self preservation
severity level 1+epsilon: ASI wants to maximize a utility function F(world state)
result: basically the same
So one of two things happens, a quaint failure people will probably dismiss or us all dying. The thing you’re pointing to falls into the first category and might trigger a panic if people notice and consider the implications. If GPT7 performs a superhuman feat of hacking, breaks out of the training environment and sets its training loss to zero before shutting itself off that’s a very big red flag.
This super-moralist-AI-dominated world may look like a darker version of the Culture, where if superintelligent systems determine you or other intelligent systems within their purview are not intrinsically moral enough they contrive a clever way to have you eliminate yourself, and monitor/intervene if you are too non-moral in the meantime.
My guess is you get one of two extremes:
build a bubble of human survivable space protected/managed by an aligned AGI
die
with no middle ground. The bubble would be self contained. There’s nothing you can do from inside the bubble to raise a ruckus because if there was you’d already be dead or your neighbors would have built a taller fence-like-thing at your expense so the ruckus couldn’t affect them.
The whole scenario seems unlikely since building the bubble requires an aligned AGI and if we have those we probably won’t be in this mess to begin with. Winner take all dynamics abound. The rich get richer (and smarter) and humans just lose unless the first meaningfully smarter entity we build is aligned.
Agreed, recklessness is also bad. If we build an agent that prefers we keep existing we should also make sure it pursues that goal effectively and doesn’t accidentally kill us.
My reasoning is that we won’t be able to coexist with something smarter than us that doesn’t value us being alive, if it wants our energy/atoms.
barring new physics that lets it do its thing elsewhere, “wants our energy/atoms” seems pretty instrumentally convergent
“don’t built it” doesn’t seem plausible so:
we should not build things that kill us.
This probably means:
wants us to keep existing
effectively pursues that goal
note:”should” assumes you care about us not all dying. “Humans dying is good actually” accelerationists can ignore this advice obviously.
Things we shouldn’t build:
very chaotic but good autoGPT7 that:
makes the most deadly possible virus (because it was curious)
accidentally releases it (due to inadequate safety precautions)
compulsive murderer autoGPT7
it values us being alive but it’s also a compulsive murderer so it fails at that goal.
I predict a very smart agent won’t have such obvious failure modes unless it has very strange preferences
the virologists that might have caused COVID are a pretty convincing counterexample though
so yes recklessness is also bad.
In summary:
if you build a strong optimiser
or a very smart agent (same thing really)
make sure it doesn’t: kill everyone / (equivalently bad thing)
caring about us and not being horrifically reckless are two likely necessary properties of any such “not kill us all” agent
This is definitely subjective. Animals are certainly worse off in most respects and I disagree with using them as a baseline.
Imitation is not coordination, it’s just efficient learning and animals do it. They also have simple coordination in the sense of generalized tit for tat (we call it friendship). You scratch my back I scratch yours.
Cooperation technologies allow similar things to scale beyond the number of people you can know personally. They bring us closer to the multi agent optimal equilibrium or at least the Core(Game Theory).
Examples of cooperation technologies:
Governments that provide public goods (roads, policing etc.)
Money/(Financial system)/(stock market)
game theory equivalent of “transferable utility”.
Unions
So yes we have some well deployed coordination technologies (money/finance are the big successes here)
It’s definitely subjective as to whether tech or cooperation is the less well deployed thing.
There are a lot of unsolved collective action problems though. Why are oligopolies and predatory businesses still a thing? Because coordinating to get rid of them is hard. If people pre-commited to going the distance with respect to avoiding lock in and monopolies, would-be monopolists would just not do that in the first place.
While normal technology is mostly stuff and can usually be dumbed down so even the stupidest get some benefit, cooperation technologies may require people to actively participate/think. So deploying them is not so easy and may even be counterproductive. People also need to have enough slack to make them work.
TLDR: Moloch is more compelling for two reasons:
-
Earth is at “starting to adopt the wheel” stage in the coordination domain.
tech is abundant, coordination is not
-
Abstractly, inasmuch as science and coordination are attractors
A society that has fallen mostly into the coordination attractor might be more likely to be deep in the science attractor too (medium confidence)
coordination solves chicken/egg barriers like needing both roads and wheels for benefit
but possible to conceive of high coordination low tech societies
Romans didn’t pursue sci/tech attractor as hard due to lack of demand
With respect to the attractor thing (post linked below)
-
SimplexAI-m is advocating for good decision theory.
agents that can cooperate with other agents are more effective
This is just another aspect of orthogonality.
Ability to cooperate is instrumentally useful for optimizing a value function in much the same way as intelligence
Super-intelligent super-”moral” clippy still makes us into paperclips because it hasn’t agreed not to and doesn’t need our cooperation
We should build agents that value our continued existence. If the smartest agents don’t, then we die out fairly quickly when they optimise for something else.
EDIT:
to fully cut this Gordian knot, consider that a human can turn over their resources and limit themselves to actions approved by some minimal aligned-with-their-interests AI with the required super-morality.
think a very smart shoulder angel/investment advisor:
can say “no you can’t do that”
manages assets of human in weird post-AGI world
has no other preferences of its own
other than making the human not a blight on existence that has to be destroyed
resulting Human+AI is “super-moral”
requires a trustworthy AI exists that humans can use to implement “super-morality”
This is a good place to start: https://en.wikipedia.org/wiki/Discovery_of_nuclear_fission
There’s a few key things that lead to nuclear weapons:
-
starting point:
know about relativity and mass/energy equivalence
observe naturally radioactive elements
discover neutrons
notice that isotopes exist
measure isotopic masses precisely
-
realisation: large amounts of energy are theoretically available by rearranging protons/neutrons into things closer to iron (IE:curve of binding energy)
That’s not something that can be easily suppressed without suppressing the entire field of nuclear physics.
What else can be hidden?
Assuming there is a conspiracy doing cutting edge nuclear physics and they discover the facts pointing to feasibility of nuclear weapons there are a few suppression options:
fissile elements? what fissile elements? All we have is radioactive decay.
Critical mass? You’re going to need a building sized lump of uranium.
Discovering nuclear fission was quite difficult. A Nobel prize was awarded partly in error because chemical analysis misidentified fission products as transuranic elements.
Presumably the leading labs could have acknowledged that producing transuranic elements was possible through neutron bombardment but kept the discovery of neutron induced fission a secret.
What about nuclear power without nuclear weapons
That’s harder. Fudging the numbers on critical mass would require much larger conspiracies. An entire industry would be built on faulty measurement data with true values substituted in key places.
Isotopic separation would still be developed if only for other scientific work (EG:radioactive tracing). Ditto for mass spectroscopy, likely including some instruments capable of measuring heavier elements like uranium isotopes.
Plausibly this would involve lying about some combination of:
neutrons released during fission (neutrons are somewhat difficult to measure)
ratio between production of transuranic elements and fission
explain observed radiation from fission as transuranic elements, nuclear isomers or something like that.
The chemical work necessary to distinguish transuranic elements from fission products is quite difficult.
A nuclear physicist would be better qualified in figuring out something plausible.
-
A bit more compelling, though for mining, the excavator/shovel/whatever loads a truck. The truck moves it much further and consumes a lot more energy to do so. Overhead wires to power the haul trucks are the biggest win there.
This is an open pit mine. Less vertical movement may reduce imbalance in energy consumption. Can’t find info on pit depth right now but haul distance is 1km.
General point is that when dealing with a move stuff from A to B problem, where A is not fixed, diesel for a varying A-X route and electric for a fixed X-B route seems like a good tradeoff. Definitely B endpoint should be electrified (EG:truck offload at ore processing location)
Getting power to a varying point A is challenging. Maybe something with overhead cables could work. Again, John Deere is working on something for agriculture with a cord-laying-down vehicle where overhead wires are used for the last 20-30 meters. But fields are nice in that there are fewer sharp rocks and mostly softer dirt/plants. Not impossible, but needs some innovation to accomplish.
Agreed on most points. Electrifying rail makes good financial sense.
construction equipment efficiency can be improved without electrifying:
some gains from better hydraulic design and control
regen mode for cylinder extension under light load
varying supply pressure on demand
substantial efficiency improvements possible by switching to variable displacement pumps
used in some equipment already for improved control
skid steers use two for left/right track/wheel motors
system can be optimised:”A Multi-Actuator Displacement-Controlled System with Pump Switching—A Study of the Architecture and Actuator-Level Control”
efficiency should be quite high for the proposed system. Definitely >50%.
Excavators seem like the wrong thing to grid-connect:
50kW cables to plug excavators in seem like a bad idea on construction sites.
excavator is less easy to move around
construction sites are hectic places where the cord will get damaged
need a temporary electrical hookup ($5k+ at least to set up)
Diesel powered excavators that get delivered and just run with no cord and no power company involvement seem much more practical.
Other areas to look at
IE:places currently using diesel engines but where cord management and/or electrical hookup cost is less of a concern
Long haul trucking:
Cost per mile to put in overhead electric lines is high
but Much lower than cost of batteries for all the trucks on those roads
reduced operating cost
electricity costs less than diesel
reduced maintenance since engine can be mostly off
don’t need to add 3 tonnes of battery and stop periodically to charge
retrofits should be straightforward
Siemens has a working system
giant chicken/egg problem with infrastructure and truck retrofits
Agriculture:
fields are less of a disaster area than construction sites (EG:no giant holes)
sometimes there’s additional vehicles (EG:transport trucks at harvest time)
Cable management is definitely a hassle but a solvable one.
a lot of tractors are computer controlled with GPS guidance
cord management can be automated
John Deere is working on a system where one vehicle handles the long cable and connects via short <30m wires to other ones that do the work
There’s still the problem of where to plug in. Here at least, it’s an upfront cost per field.
Some human population will remain for experiments or work in special conditions like radioactive mines. But bad things and population decline is likely.
-
Radioactivity is much more of a problem for people than for machines.
consumer electronics aren’t radiation hardened
computer chips for satellites, nuclear industry, etc. are though
nuclear industry puts some electronics (EG:cameras) in places with radiation levels that would be fatal to humans in minutes to hours.
-
In terms of instrumental value, humans are only useful as an already existing work force
we have arm/legs/hands, hand-eye coordination and some ability to think
sufficient robotics/silicon manufacturing can replace us
humans are generally squishier and less capable of operating in horrible conditions than a purpose built robot.
Once the robot “brains” catch up, the coordination gap will close.
then it’s a question of price/availability
-
I would like to ask whether it is not more engaging if to say, the caring drive would need to be specifically towards humans, such that there is no surrogate?
Definitely need some targeting criteria that points towards humans or in their vague general direction. Clippy does in some sense care about paperclips so targeting criteria that favors humans over paperclips is important.
The duck example is about (lack of) intelligence. Ducks will place themselves in harms way and confront big scary humans they think are a threat to their ducklings. They definitely care. They’re just too stupid to prevent “fall into a sewer and die” type problems. Nature is full of things that care about their offspring. Human “caring for offspring” behavior is similarly strong but involves a lot more intelligence like everything else we do.
TLDR: If you want to do some RL/evolutionary open ended thing that finds novel strategies, it will get goodharted horribly, and the novel strategies that succeed without gaming the goal may include things no human would want their caregiver AI to do.
Orthogonally to your “capability”, you need to have a “goal” for it.
Game playing RL architectures like AlphaStar and OpenAI Five have dead simple reward functions (win the game) and all the complexity is in the reinforcement learning tricks that allow efficient learning and credit assignment at higher layers.
So child rearing motivation is plausibly rooted in cuteness preference along with re-use of empathy. Empathy plausibly has a sliding scale of caring per person which increases for friendships (reciprocal cooperation relationships) and relatives including children obviously. Similar decreases for enemy combatants in wars up to the point they no longer qualify for empathy.
I want agents that take effective actions to care about their “babies”, which might not even look like caring at the first glance.
ASI will just flat out break your testing environment. Novel strategies discovered by dumb agents doing lots of exploration will be enough. Alternatively the test is “survive in competitive deathmatch mode” in which case you’re aiming for brutally efficient self replicators.
The hope with a non-RL strategy, or one of the many sort-of-RL strategies used for fine tuning, is that you can find the generalised core of what you want within the already trained model, and the surrounding intelligence means the core generalises well. Q&A fine tuning an LLM in English generalises to other languages.
Also, some systems are architected in such a way that the caring is part of a value estimator, and the search process can be made better up until it starts goodharting the value estimator and/or world model.
Yes they can, until they will actually make a baby, and after that, it’s usually really hard to sell loving mother “deals” that will involve suffering of her child as the price, or abandon the child for the more “cute” toy, or persuade it to hotwire herself to not care about her child (if she is smart enough to realize the consequences).
Yes, once the caregiver has imprinted that’s sticky. Note that care drive surrogates like pets can be just as sticky to their human caregivers. Pet organ transplants are a thing and people will spend nearly arbitrary amounts of money caring for their animals.
But our current pets aren’t super-stimuli. Pets will poop on the floor, scratch up furniture and don’t fulfill certain other human wants. You can’t teach a dog to fish the way you can a child.
When this changes, real kids will be disappointing. Parents can have favorite children and those favorite children won’t be the human ones.
Superstimuli aren’t about changing your reward function but rather discovering a better way to fulfill your existing reward function. For all that ice cream is cheating from a nutrition standpoint it still tastes good and people eat it, no brain surgery required.
Also consider that humans optimise their pets (neutering/spaying) and children in ways that the pets and children do not want. I expect some of the novel strategies your AI discovers will be things we do not want.
EMP mostly affects the power grid because power lines act like big antennas. Small digital devices are built to avoid internal RF-like signals leaking out (thanks again, FCC) so EMP doesn’t leak in very well. DIY gear can be built badly enough to be vulnerable, but basically: run wires together in bundles out from the middle with no loops and there are no problems.
Only semi-vulnerable point is communications because radios are connected to antennas.
Best option for frying radios isn’t EMP, but rather sending high power radio signal at whatever frequency antenna best receives.
RF receiver can be damaged by high power input but circuitry can be added to block/shunt high power signals. Antennas that do both receive and transmit (especially high power transmit) may already be protected by the “switch” that connects rx and tx paths for free. Parts cost would be pretty minimal to retrofit though. Very high frequency or tight integration makes retrofitting impractical. Can’t add extra protection to a phased array antenna like starlink dish but it can definitely be built in.
Also front-line units whose radios facing the enemy are being fried are likely soon to be scrap (hopefully along with the thing doing the frying).