Correct, except for the reasons why they were assigned private-network addresses.
In the case of end-user networks, the reason is simple: end-user ISPs issued only one IPv4 address per customer, under the assumption that the customer would attach only one host to the network. This assumption was sometimes tacit, but sometimes explicit as a matter of contract or support policy. It grew increasingly out of step with how broadband customers actually used their connections.
Customers worked around this by deploying NAT devices, sometimes against the ISP's wishes. That history is why MAC address cloning (where the NAT device takes on the MAC address of the single host formerly attached directly to the public network) remains a common feature of end-user NAT devices: it originated as a way of fooling the ISP's equipment into believing the NAT device was the same machine as the single host it replaced.
Only later did consumer ISPs abandon the pretense of not supporting multiple hosts in the customer's home, and begin selling or leasing NAT devices themselves as a profit center rather than ignoring or attempting to ban them.
In the case of organizational networks, one typical reason to deploy NAT was (and remains!) address exhaustion: the organization is not able to obtain enough IPv4 addresses from an ISP or regional registry to assign a unique public address to each host they wish to attach to the network. Although the ISP doesn’t intend to disallow more hosts, it is unwilling or unable to provide the address space for them. In some parts of the world, multiple levels of NAT are deployed to cope with address exhaustion, a situation that cannot be explained as a security measure at all.
Sure they are. That is, some of them are, provided the local sysadmin made it so.
Ah. You’re referring to networks that have a “local sysadmin”. I’m also considering networks that don’t.
(Most don’t.)
(But networks with local sysadmins can have default-deny firewalls without needing NAT.)
I’m also considering the situation of developers of end-user networked applications, who have to work with whatever kind of network the user’s host happens to be attached to. Those developers have a lot more flexibility — and a lot fewer workarounds to cope with — under publicly-routable v6 than under pervasively-NATted v4.
Consider a scenario where I’m cloning VMs at a rate of, say, one per second and each lives for a couple of minutes. What will your “money-making web server” do about them?
If blocking a single address doesn’t work, with v6 the natural next step is to block the /64, the unit of stateless address autoconfiguration — since that’s the minimal likely unit of common administration. Yes, that’s analogous to blocking a NAT v4 address … but you don’t have to start there.
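That escalation is cheap to implement. A minimal sketch using Python's stdlib ipaddress module (the set-based blocklist and the function names here are illustrative, not any real server's rate-limiter):

```python
import ipaddress

def covering_64(addr: str) -> ipaddress.IPv6Network:
    """Return the /64 network containing an IPv6 address: the SLAAC
    prefix that one household or LAN segment typically shares."""
    return ipaddress.IPv6Network((ipaddress.IPv6Address(addr), 64), strict=False)

# Illustrative escalating blocklist: block the exact address first,
# then widen to the whole /64 only if abuse keeps coming from that prefix.
blocked_hosts: set = set()
blocked_prefixes: set = set()

def block(addr: str) -> None:
    """Block a single /128 -- the narrow first step v4 NAT never allows."""
    if covering_64(addr) not in blocked_prefixes:
        blocked_hosts.add(ipaddress.IPv6Address(addr))

def escalate(addr: str) -> None:
    """Widen the block to the covering /64 after repeated abuse."""
    blocked_prefixes.add(covering_64(addr))

def is_blocked(addr: str) -> bool:
    host = ipaddress.IPv6Address(addr)
    return host in blocked_hosts or covering_64(host.compressed) in blocked_prefixes
```

The point is that the /64 is trivial to compute and is the natural widening step; under pervasively-NATted v4, the narrow per-host step simply doesn't exist, so you start (and end) with the blunt instrument.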
You’re referring to networks that have a “local sysadmin”. I’m also considering networks that don’t.
You have been making fully general claims about the evilness of NAT, not conditional on whether local networks are (well-)managed or not. I don’t think it is as clear-cut as you make it out to be.
The proliferation of behind-the-NAT machines has many causes. Some are historical (as you pointed out, there was and is a shortage of IPv4 addresses, and ISPs were stingy about allocating them), but others are valid reasons of security, convenience, and so on. There are a LOT of internal networks belonging to organizations, and most of them should stay behind NAT.
Your basic complaint is that NAT makes life hard for developers of network applications. Yes, it does. Suck it up. Reality is complicated, and coding for the real world instead of for an abstract model is messy. Yes, it would be nice if everything were simple. No, it’s not going to happen.