Only if these endpoints service incoming public requests.
As I said, NAT puts encumbrances on application design. One of them is “end-user machines only initiate TCP sessions; they don’t listen for them.” This fits badly with a number of application domains, including peer-to-peer protocols generally, games, chat systems, VoIP, and so on. The workarounds have been extensive and expensive. Ever worked with STUN?
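To make the STUN point concrete, here is a minimal sketch of what a client behind NAT has to do just to learn its own public-facing address (the RFC 5389 message format; the function names are mine, not from any library):

```python
import os
import struct

STUN_MAGIC_COOKIE = 0x2112A442  # fixed 32-bit value from RFC 5389

def build_binding_request() -> bytes:
    """Build a minimal STUN Binding Request: 20-byte header, no attributes."""
    msg_type = 0x0001                # Binding Request
    msg_length = 0                   # no attributes follow the header
    transaction_id = os.urandom(12)  # random per-request ID
    return struct.pack("!HHI", msg_type, msg_length, STUN_MAGIC_COOKIE) + transaction_id

def decode_xor_mapped_ipv4(xored_addr: bytes) -> str:
    """Recover an IPv4 address from the XOR'd 4-byte field of an
    XOR-MAPPED-ADDRESS attribute: the server XORs the client's reflexive
    (public-facing) address with the magic cookie; the client XORs it back."""
    ip_int = struct.unpack("!I", xored_addr)[0] ^ STUN_MAGIC_COOKIE
    return ".".join(str((ip_int >> s) & 0xFF) for s in (24, 16, 8, 0))
```

In a real client, the Binding Request is sent over UDP to a public STUN server, and the XOR-MAPPED-ADDRESS in the response tells the NATted host what address and port the outside world sees for it, information the host cannot learn locally. And that only gets you address discovery; actual hole punching and relaying (TURN, ICE) sit on top of this.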
Yes, but not on internal LANs which is the whole point of the discussion.
Private networks are IP networks that are air-gapped from the public network. We’re talking about networks which have been assigned private-network (RFC 1918) addresses due to IPv4 address exhaustion and ISP market segmentation — but which gateway onto the public network via NAT and expect to access public-network resources. These are not secured from the public network … especially since current client software (e.g. web browsers) promiscuously makes requests to all sorts of endpoints without checking with the user first.
A lot of this actually exists for OSI Layer 8 and 9 reasons (the “financial” and “political” layers of the network design). Justifying NAT on the basis of security is a rationalization, since neither does it provide security that couldn’t be had without it (via a plain firewall), nor was it actually deployed for security reasons.
From the security point of view, I do NOT want the general public to be able to distinguish and target separate machines on an internal ’net (at least not without putting in some effort for it :-/)
If security was the only concern, we’d shut the damn thing down and reimplement it in Haskell. It ain’t.
But on the other hand, it’s a security problem when a security-sensitive service (say, a money-making web server) can’t distinguish between an abusive client and an innocent one because they happen to be located behind the same NAT. Denying service to a NAT address that emits abuse allows the abusive client to dictate whether the innocent client gets any service. This is unacceptable to a for-profit service, especially if the two clients and the NAT are not actually under common administration, which they typically aren’t today. If all hosts are distinguishable by address, then the security-sensitive service can accept traffic from a good client and reject traffic from a bad one. IPv6 helps with that, by abolishing address-exhaustion as a justification for NAT.
end-user machines only initiate TCP sessions; they don’t listen for them
That’s not a misfeature of NAT—it’s adjustable at the router/firewall. Games, chat, etc. work perfectly well given the appropriate configuration of your router.
We’re talking about networks which have been assigned private-network (RFC 1918) addresses due to IPv4 address exhaustion and ISP market segmentation — but which gateway onto the public network via NAT and expect to access public-network resources.
Correct, except for the reasons why they were assigned private-network addresses. In the networks I’m familiar with, the machines were assigned RFC 1918 addresses because it’s convenient (there’s local control over IP assignment), because the network has to deal with machines coming and going (laptops, smartphones), and because many of these machines are not supposed to be accessible from the public internet.
These are not secured from the public network
Sure they are. That is, some of them are, provided the local sysadmin made it so.
To give a trivial example, consider a local database server which does not run any browsers and which responds to (and is supposed to only respond to) just local machines—easily done if the local machines use private-network IP addresses which are not routable over the general internet.
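The address check such a database server would apply can be sketched in a few lines (a sketch only; the function name is mine, and this complements rather than replaces a border filter that drops packets with spoofed RFC 1918 source addresses):

```python
import ipaddress

def is_local_client(client_ip: str) -> bool:
    """True if the address falls in a non-globally-routable range
    (RFC 1918 and similar), i.e. it could only have arrived from the
    local network, absent a misconfigured border router."""
    return ipaddress.ip_address(client_ip).is_private
```

A server consults this on each incoming connection and refuses anything that isn’t local, e.g. `is_local_client("10.0.0.7")` passes while a globally-routable source address does not.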
If all hosts are distinguishable by address, then the security-sensitive service can accept traffic from a good client and reject traffic from a bad one.
That’s a very naive approach. IPv6 is not an immutable GUID given to a piece of hardware once and forever. A MAC address is something close to that and even then it’s trivially spoofed.
Consider a scenario where I’m cloning VMs at a rate of, say, one per second and each lives for a couple of minutes. What will your “money-making web server” do about them?
Correct, except for the reasons why they were assigned private-network addresses.
In the case of end-user networks, the reason is simple: end-user ISPs issued only one IPv4 address per customer, under the assumption that the customer would attach only one host to the network. This assumption was sometimes tacit, but sometimes explicit as a matter of contract or support policy. It became increasingly inappropriate for broadband users’ actual use.
Customers worked around this by deploying NAT devices. This was sometimes against the ISP’s wishes — to the extent that MAC address cloning (where the NAT device takes on the MAC address of the single host formerly attached directly to the public network) remains a common feature of end-user NAT devices; this originated as a way of fooling the ISP’s equipment into believing the NAT device was the same machine as the single host it replaced.
It was only subsequent to this that consumer ISPs abandoned the pretense of not supporting multiple hosts in the customer’s home — and began selling or leasing NAT devices themselves as a profit center, rather than ignoring or attempting to ban them.
In the case of organizational networks, one typical reason to deploy NAT was (and remains!) address exhaustion: the organization is not able to obtain enough IPv4 addresses from an ISP or registrar to assign a unique public address to each host they wish to attach to the network. Although the ISP doesn’t intend to disallow more hosts, it is unwilling or unable to provide the address space for them. In some parts of the world, multiple levels of NAT are deployed to cope with address exhaustion, a situation that cannot be explained as a security measure at all.
Sure they are. That is, some of them are, provided the local sysadmin made it so.
Ah. You’re referring to networks that have a “local sysadmin”. I’m also considering networks that don’t.
(Most don’t.)
(But networks with local sysadmins can have default-deny firewalls without needing NAT.)
I’m also considering the situation of developers of end-user networked applications, who have to work with whatever kind of network the user’s host happens to be attached to. Those developers have a lot more flexibility — and a lot fewer workarounds to cope with — under publicly-routable v6 than under pervasively-NATted v4.
Consider a scenario where I’m cloning VMs at a rate of, say, one per second and each lives for a couple of minutes. What will your “money-making web server” do about them?
If blocking a single address doesn’t work, with v6 the natural next step is to block the /64, the unit of stateless address autoconfiguration — since that’s the minimal likely unit of common administration. Yes, that’s analogous to blocking a NAT v4 address … but you don’t have to start there.
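The escalation logic is simple enough to sketch (assuming Python’s `ipaddress` module; the function name is illustrative):

```python
import ipaddress

def blocking_key(addr: str) -> str:
    """Unit of blocking for an abusive source: the single address for IPv4,
    the covering /64 for IPv6. /64 is the SLAAC prefix size, and thus the
    smallest span an abuser can hop around in without the cooperation of
    whoever administers the prefix."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return str(ipaddress.ip_network((addr, 64), strict=False))
    return str(ipaddress.ip_network(addr))  # a /32, the address itself
```

So the VM-cloning scenario costs the attacker nothing that plain v4 NAT doesn’t also give them: after the first few clones, the whole /64 is blocked, and getting a fresh /64 requires the prefix holder’s cooperation.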
You’re referring to networks that have a “local sysadmin”. I’m also considering networks that don’t.
You have been making fully general claims about the evilness of NAT, not conditional on whether local networks are (well-)managed or not. I don’t think it is as clear-cut as you make it out to be.
The proliferation of behind-the-NAT machines has many reasons—some historical (as you pointed out, there was/is a shortage of IPv4 addresses and ISPs were stingy with allocating them), but also some valid ones: security, convenience, etc. There are a LOT of internal networks belonging to organizations, and most of them should stay behind NAT.
Your basic complaint is that NAT makes life hard for developers of network applications. Yes, it does. Suck it up. Reality is complicated and coding for the real world instead of an abstract model is messy. Yes, it would be nice if everything were simple. No, it’s not going to happen.