This is only true if you restrict “nobility” to Great Britain and count as “nobles” only those who hold that status today. That conflates the current British noble titles (specifically, members of the Peerage of Great Britain) with the land-owning rentier class that existed before the industrial revolution. For this discussion, we need to look at the latter.
I do not have specific numbers for the UK, but quoting for Europe from Wikipedia (https://en.wikipedia.org/wiki/Nobility#Europe):
“The countries with the highest proportion of nobles were Polish–Lithuanian Commonwealth (15% of an 18th-century population of 800,000[citation needed]), Castile (probably 10%), Spain (722,000 in 1768 which was 7–8% of the entire population) and other countries with lower percentages, such as Russia in 1760 with 500,000–600,000 nobles (2–3% of the entire population), and pre-revolutionary France where there were no more than 300,000 prior to 1789, which was 1% of the population (although some scholars believe this figure is an overestimate). In 1718 Sweden had between 10,000 and 15,000 nobles, which was 0.5% of the population. In Germany it was 0.01%.[46]
In the Kingdom of Hungary nobles made up 5% of the population.[47] All the nobles in 18th-century Europe numbered perhaps 3–4 million out of a total of 170–190 million inhabitants.[48][49] By contrast, in 1707, when England and Scotland united into Great Britain, there were only 168 English peers, and 154 Scottish ones, though their immediate families were recognised as noble.”
Based on the above, I think expecting 1% of the population to be landed rentiers is a conservative estimate for 18th-century Europe as a whole. Even if we go with one tenth of that, i.e. 0.1% of the population retaining this status today (which would imply their numbers fell while every other class grew dramatically), that would still mean about 68,000 people in the UK and over 700,000 in the whole of Europe.
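The arithmetic behind those figures is a simple percentage of present-day population; a quick sanity check, using rough population numbers that are my own approximations rather than from the quoted source:

```python
# Rough sanity check of the 0.1% figure.
# Population numbers are my own approximations, not from the quoted source.
uk_population = 68_000_000        # UK today, roughly
europe_population = 745_000_000   # Europe today, roughly

RENTIER_SHARE = 0.001  # 0.1%, one tenth of the conservative 1% estimate

uk_rentiers = int(uk_population * RENTIER_SHARE)          # about 68,000
europe_rentiers = int(europe_population * RENTIER_SHARE)  # about 745,000
print(uk_rentiers, europe_rentiers)
```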
AND they are expected to live off land rents. I doubt that living off land rents is true for the majority of the 1,500 current British nobles you referred to.
In my personal experience, LLMs can speed me up 10X only in very specific circumstances, which are (and have always been) a minor part of my job, and I suspect this is true for most developers:
If I need some small, self-contained script to do something relatively simple, e.g. a bash script for file manipulation or an Excel VBA macro. These are easy to describe in a couple of sentences and easy to verify, and I often don’t have deep enough knowledge of the particular language to write the whole thing without looking up syntax. More importantly: because they are small and self-contained, there is no real need to think much about maintainability, such as unit testing, breaking the code into modules, or knowing the business and environment context. (Anything involving regex is a subset of this, where LLMs are a game changer.)
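To make the category concrete, here is a hypothetical example of the kind of throwaway script I mean (the naming convention and regex are invented for illustration): a few lines of Python that normalize file names like “Report 2024-03.txt” to “report_2024_03.txt”, trivially verified by eye, with no module structure or tests needed.

```python
import re

def normalize_names(names):
    """Normalize names like 'Report 2024-03.txt' to 'report_2024_03.txt':
    lowercase the base, replace spaces and the date dash with underscores.
    Names that don't match the pattern are left unchanged."""
    result = []
    for name in names:
        m = re.match(r"(.+?)\s+(\d{4})-(\d{2})(\.\w+)$", name)
        if m:
            base, year, month, ext = m.groups()
            base = base.lower().replace(" ", "_")
            result.append(f"{base}_{year}_{month}{ext}")
        else:
            result.append(name)
    return result
```

The whole point is that correctness is obvious from a couple of sample inputs, so none of the usual maintainability concerns apply.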
If I need to work on a stack I am not too familiar with, LLMs can speed up the learning process a lot by providing working (or almost working) examples. I just type what I need and get a rough solution that shows which classes and methods I can use from which library, or even what general logic/approach can work as a solution. In many cases, even if the provided solution does not actually work, just learning how I can interact with a library is a huge help.
Without this I would have to spend hours or days doing an online course, reading documentation, or experimenting.
The caveat here is that even when the solution works, the provided code is not something I can simply drop into an actual project. Usually it has to be broken up, with parts going into different modules, and refactored to be object oriented and to use consistent abstractions that make sense for the particular project.
Based on the above, I suspect that most people who report a 10X increase in software development either:
Had very low levels of development knowledge, so the bar for 10X is very low. Likely they also have not yet experienced (or do not recognize) that plugging raw LLM code into a project can have the many negative downstream consequences you described. (Though to be fair, if they could not write it before, some code is still better than no code at all, so it is a worthy trade-off.)
Need to write a lot of small, self-contained solutions (I can imagine someone doing a lot of Excel automation, or being the dedicated person who builds shell scripts for devops/operations purposes).
Need to experiment with a lot of different stacks/libraries (although it is difficult to imagine a job where this makes up the majority of long-term tasks).
Were affected by an anchoring effect: they recently used an LLM to solve something quickly, and extrapolated from that particular example rather than from long-term experience.