Google measures global IPv6 usage at 3.5%, up from 1.5% a year ago and 0.65% a year before that. That’s more than doubling as a percentage year-over-year.
God I hope that continues. Death to IPv4 and the NAT insanity it makes necessary.
(hrm. I’m trying to replace the word “God” in that sentence with something less incoherent but containing the same sense of emphasis, and coming up blank. I blame Monday. Suggestions, anyone?)
Cute Kittens I hope that continues.
(Emphasize the kittens like it’s a curse word, or it will sound ridiculous. You are not trying to avoid cursing, you are trying to introduce it. Also it will sound ridiculous anyways.)
This is interesting because I somehow managed to not recognize that I was trying to curse. I swear all the time in real life and most places online, but not here. It’s not because I’m thinking “I shouldn’t swear because it’s LW,” either; I just don’t even think about swearing because it’s so dramatically out of place, like using a cell phone during a live theater performance.
I don’t understand why NAT is considered bad. Devices on private networks should have private addresses.

Rebuttal: Endpoints that talk to services on public networks are part of the public network, not a private network — even if they are behind middleboxes such as firewalls. Endpoints on the public network should be distinguishable one from another.
Applications should be able to count on an addressing system that distinguishes endpoints, not just networks. That assumption was baked into the design of TCP/IP, allowing the creation of a wide variety of network applications. Many early applications don’t work under NAT without application-specific workarounds. NAT has badly encumbered the design of modern applications, to the point where people now assume that there is a hard distinction between “servers” (machines that have public addresses) and “user machines” (that don’t).
In the TCP/IP design, hosts are distinguished by addresses, and services on those hosts are distinguished by port numbers. In the NAT non-design, the hosts on a “private” (not really private, that is, air-gapped) network cannot be distinguished by addresses. As such, application protocols cannot make intelligent use of addresses, and developers of applications intended to run on hosts located in homes and offices are hampered in what they can offer by having to work around NAT all the time.
NAT conflates several issues, notably security policy and addressing. The ostensible security benefit (disallowing inbound probing of “private” endpoints) can actually be had without losing the benefits of public addressing: it’s called a default-deny firewall, it has existed since before NAT, and you can have it even with public addresses behind it. (Though neither NAT nor default-deny firewalls provide general security, especially in the browser era, where endpoints run nearly-arbitrary software they’ve fetched off the net.)
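The distinction being drawn here can be sketched as a toy connection tracker: outbound flows create state, unsolicited inbound packets are dropped, and no address rewriting happens anywhere — hosts keep their public addresses. (Class and method names below are illustrative, not any real firewall’s API.)

```python
# Toy default-deny stateful filter: outbound flows are recorded, and
# inbound packets are admitted only if they match a tracked flow.
# Note that no address translation occurs -- endpoints stay addressable.

class DefaultDenyFirewall:
    def __init__(self):
        self.flows = set()  # (src, sport, dst, dport) tuples seen outbound

    def outbound(self, src, sport, dst, dport):
        """Record an outbound flow; always permitted."""
        self.flows.add((src, sport, dst, dport))
        return True

    def inbound(self, src, sport, dst, dport):
        """Permit only packets that reply to a tracked outbound flow."""
        return (dst, dport, src, sport) in self.flows


fw = DefaultDenyFirewall()
fw.outbound("2001:db8::10", 50000, "2001:db8:1::1", 443)

print(fw.inbound("2001:db8:1::1", 443, "2001:db8::10", 50000))  # True: reply to tracked flow
print(fw.inbound("2001:db8:2::9", 443, "2001:db8::10", 50000))  # False: unsolicited probe
```

The point of the sketch: the inbound-probing protection comes entirely from the flow table, not from rewriting addresses — so it composes fine with globally-routable addresses.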
NAT requires protocol-specific workarounds — either in the middlebox itself, such as port forwarding, or in the application, such as STUN. These deeply encumber application design, in ways that encourage centralization and discourage distributed protocols.

In gist: https://en.wikipedia.org/wiki/End-to-end_principle
Endpoints that talk to services on public networks are part of the public network, not a private network — even if they are behind middleboxes such as firewalls.
Only if these endpoints service incoming public requests. If my machine, for example, functions solely as an SSH terminal to tunnel into a public server (and has no open ports), I don’t see how it can be counted as a “part of the public network” in any meaningful sense.
Endpoints on the public network should be distinguishable one from another.
Yes, but not on internal LANs, which is the whole point of the discussion. From the security point of view, I do NOT want general public to be able to distinguish and target separate machines on an internal ’net (at least without putting in some effort for it :-/)
NAT has badly encumbered the design of modern applications
I don’t think so. NAT just forced the applications to go up one abstraction layer. That’s not necessarily a bad thing.
Besides, in the world of e.g. load balancers and VMs your desire to have a known physical machine sit at a given IP address seems a bit misguided. The endpoints are shifting and fluid nowadays.
Only if these endpoints service incoming public requests.
As I said, NAT puts encumbrances on application design. One of them is “end-user machines only initiate TCP sessions; they don’t listen for them.” This fits badly to a number of application domains including peer-to-peer protocols generally, games, chat systems, VoIP, and so on. The workarounds have been extensive and expensive. Ever worked with STUN?
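To give a concrete sense of what “worked with STUN” entails, here is a sketch of the STUN wire format (RFC 5389) that NAT-traversal code has to speak: the client sends a Binding Request, and the server echoes back the client’s public address XOR-obscured, so that NATs won’t rewrite it in transit. This builds the request and decodes a synthetic response locally; no network I/O is involved, and the example address is from the TEST-NET range.

```python
# STUN (RFC 5389) message sketch: build a Binding Request header and
# decode an XOR-MAPPED-ADDRESS attribute, entirely offline.
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442

def binding_request():
    """20-byte STUN header: type=0x0001 (Binding), length=0, cookie, txn id."""
    return struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, os.urandom(12))

def decode_xor_mapped_address(attr_value):
    """Un-XOR the IPv4 address and port in an XOR-MAPPED-ADDRESS value."""
    _, family, xport = struct.unpack("!BBH", attr_value[:4])
    assert family == 0x01  # IPv4
    port = xport ^ (MAGIC_COOKIE >> 16)
    (xaddr,) = struct.unpack("!I", attr_value[4:8])
    addr = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
    return addr, port

# Synthetic server response attribute for 203.0.113.7:40000.
xport = 40000 ^ (MAGIC_COOKIE >> 16)
(addr_int,) = struct.unpack("!I", socket.inet_aton("203.0.113.7"))
attr = struct.pack("!BBHI", 0, 0x01, xport, addr_int ^ MAGIC_COOKIE)

print(decode_xor_mapped_address(attr))  # ('203.0.113.7', 40000)
```

And this is just address discovery — a real traversal stack layers keepalives, hole punching, and relay fallback (TURN) on top, all to recover reachability that non-NATted endpoints get for free.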
Yes, but not on internal LANs, which is the whole point of the discussion.
Private networks are IP networks that are air-gapped from the public network. We’re talking about networks which have been assigned private-network (RFC 1918) addresses due to IPv4 address exhaustion and ISP market segmentation — but which gateway onto the public network via NAT and expect to access public-network resources. These are not secured from the public network … especially since current client software (i.e. web browsers) promiscuously makes requests to all sorts of endpoints without checking with the user first.
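For reference, the private-network space in question is three fixed blocks reserved by RFC 1918; checking whether an address falls in them is a few lines with Python’s `ipaddress` module (the helper name is mine, for illustration):

```python
# The three address blocks reserved by RFC 1918. Addresses in these
# ranges are not routable on the public internet -- hence the NAT
# gateway when such hosts want to reach public-network resources.
import ipaddress

RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr):
    a = ipaddress.ip_address(addr)
    return any(a in net for net in RFC1918)

print(is_rfc1918("192.168.1.20"))  # True: typical home-LAN address
print(is_rfc1918("8.8.8.8"))       # False: publicly routable
```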
A lot of this actually exists for OSI Layer 8 and 9 reasons (the “financial” and “political” layers of the network design). Justifying NAT on the basis of security is a rationalization, since neither does it provide security that couldn’t be had without it (via a plain firewall), nor was it actually deployed for security reasons.
From the security point of view, I do NOT want general public to be able to distinguish and target separate machines on an internal ’net (at least without putting in some effort for it :-/)
If security was the only concern, we’d shut the damn thing down and reimplement it in Haskell. It ain’t.
But on the other hand, it’s a security problem when a security-sensitive service (say, a money-making web server) can’t distinguish between an abusive client and an innocent one because they happen to be located behind the same NAT. Denying service to a NAT address that emits abuse allows the abusive client to dictate whether the innocent client gets any service. This is unacceptable to a for-profit service, especially if the two clients and the NAT are not actually under common administration, which they typically aren’t today. If all hosts are distinguishable by address, then the security-sensitive service can accept traffic from a good client and reject traffic from a bad one. IPv6 helps with that, by abolishing address-exhaustion as a justification for NAT.
end-user machines only initiate TCP sessions; they don’t listen for them
That’s not a misfeature of NAT—it’s adjustable at the router/firewall. Games, chat, etc. work perfectly well given the appropriate configuration of your router.
We’re talking about networks which have been assigned private-network (RFC 1918) addresses due to IPv4 address exhaustion and ISP market segmentation — but which gateway onto the public network via NAT and expect to access public-network resources.
Correct, except for the reasons why they were assigned private-network addresses. In the networks I’m familiar with the machines were assigned RFC 1918 addresses because it’s convenient (there’s local control over IP assignment), because the network has to deal with machines coming and going (laptops, smartphones), and because many of these machines are not supposed to be accessible from the public internet.
These are not secured from the public network
Sure they are. That is, some of them are, provided the local sysadmin made it so.
To give a trivial example, consider a local database server which does not run any browsers and which responds to (and is supposed to only respond to) just local machines—easily done if the local machines use private-network IP addresses which are not routable over the general internet.
If all hosts are distinguishable by address, then the security-sensitive service can accept traffic from a good client and reject traffic from a bad one.
That’s a very naive approach. IPv6 is not an immutable GUID given to a piece of hardware once and forever. A MAC address is something close to that and even then it’s trivially spoofed.
Consider a scenario where I’m cloning VMs at a rate of, say, one per second and each lives for a couple of minutes. What will your “money-making web server” do about them?
Correct, except for the reasons why they were assigned private-network addresses.
In the case of end-user networks, the reason is simple: end-user ISPs issued only one IPv4 address per customer, under the assumption that the customer would attach only one host to the network. This assumption was sometimes tacit, but sometimes explicit as a matter of contract or support policy. It became increasingly inappropriate for broadband users’ actual use.
Customers worked around this by deploying NAT devices. This was sometimes against the ISP’s wishes — to the extent that MAC address cloning (where the NAT device takes on the MAC address of the single host formerly attached directly to the public network) remains a common feature of end-user NAT devices; this originated as a way of fooling the ISP’s equipment into believing the NAT device was the same machine as the single host it replaced.
It was only subsequent to this that consumer ISPs abandoned the pretense of not supporting multiple hosts in the customer’s home — and began selling or leasing NAT devices themselves as a profit center, rather than ignoring or attempting to ban them.
In the case of organizational networks, one typical reason to deploy NAT was (and remains!) address exhaustion: the organization is not able to obtain enough IPv4 addresses from an ISP or registrar to assign a unique public address to each host they wish to attach to the network. Although the ISP doesn’t intend to disallow more hosts, it is unwilling or unable to provide the address space for them. In some parts of the world, multiple levels of NAT are deployed to cope with address exhaustion, a situation that cannot be explained as a security measure at all.
Sure they are. That is, some of them are, provided the local sysadmin made it so.
Ah. You’re referring to networks that have a “local sysadmin”. I’m also considering networks that don’t.
(Most don’t.)
(But networks with local sysadmins can have default-deny firewalls without needing NAT.)
I’m also considering the situation of developers of end-user networked applications, who have to work with whatever kind of network the user’s host happens to be attached to. Those developers have a lot more flexibility — and a lot fewer workarounds to cope with — under publicly-routable v6 than under pervasively-NATted v4.
Consider a scenario where I’m cloning VMs at a rate of, say, one per second and each lives for a couple of minutes. What will your “money-making web server” do about them?
If blocking a single address doesn’t work, with v6 the natural next step is to block the /64, the unit of stateless address autoconfiguration — since that’s the minimal likely unit of common administration. Yes, that’s analogous to blocking a NAT v4 address … but you don’t have to start there.
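A blocklist keyed on the covering /64 can be sketched with Python’s `ipaddress` module (function names here are illustrative, not a production rate limiter):

```python
# Block abusive v6 clients at the granularity of the /64 -- the
# stateless-autoconfiguration unit, and hence the minimal likely
# unit of common administration.
import ipaddress

def covering_slaac_prefix(addr):
    """Return the /64 network containing the given IPv6 address."""
    return ipaddress.ip_network(addr + "/64", strict=False)

blocked_prefixes = set()

def block(addr):
    blocked_prefixes.add(covering_slaac_prefix(addr))

def is_blocked(addr):
    a = ipaddress.ip_address(addr)
    return any(a in p for p in blocked_prefixes)

block("2001:db8:abcd:1::5")  # one abusive VM clone observed...

print(is_blocked("2001:db8:abcd:1::1f"))  # True: same /64, same administration
print(is_blocked("2001:db8:abcd:2::5"))   # False: a different /64 is unaffected
```

Note the asymmetry with v4 NAT: here the service chooses how far to widen the block, instead of being forced to treat the whole NATted population as one address from the start.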
You’re referring to networks that have a “local sysadmin”. I’m also considering networks that don’t.
You have been making fully general claims about the evilness of NAT, not conditional on whether local networks are (well-)managed or not. I don’t think it is as clear-cut as you make it out to be.
The proliferation of behind-the-NAT machines has many reasons—some historical (as you pointed out, there was/is a shortage of IPv4 addresses and ISPs were stingy with allocating them), but some valid reasons of security, convenience, etc. There are a LOT of internal networks belonging to organizations, most of them should stay behind NAT.
Your basic complaint is that NAT makes life hard for developers of network applications. Yes, it does. Suck it up. Reality is complicated and coding for the real world instead of an abstract model is messy. Yes, it would be nice if everything were simple. No, it’s not going to happen.
NAT/PAT as a standard is very limited in its ability to protect against incoming messages. It cannot log errors, it does not process unknown responses, and it does not process unknown ‘responses’ that are really attacks. There is no verification that incoming messages have valid formats, and minimal or no handling of DoS. Some configurations (NAT without PAT) will intentionally and directly expose all ports on an internal machine to the outside world. And even from an obscurity perspective, it is trivial (and standard!) for servers to distinguish between and identify different machines inside a NAT/PAT network.
In practice it should, and common home implementations will, provide at least some obscurity; but bad firmware and software configuration has left, and will continue to leave, incoming ports open. Even best-case implementations only provide protection similar to a limited (incoming-only) packet filter, which is really not enough for the average system. You should always run a stateful firewall—and once you have a stateful firewall, NAT/PAT cannot provide any additional security. If you absolutely can’t have a firewall, then NAT/PAT is better than nothing… but it’s still not very good, or even good enough.
IPv6 does have unique local addresses and a NAT-like feature called NPT, which may be the sort of thing you’re thinking about. But NPT avoids many of the worst issues of NAT/PAT configurations, and it exists for ease-of-configuration and ease-of-renumbering purposes, with security left to security-focused tools.
NAT/PAT does provide some utility, both in that most (good) NAT/PAT implementations at least give default-deny behavior and in that it makes configuring a network easier. But there are a huge number of resulting issues. The problem is that there isn’t a distinction between “internal networks” and “public servers” or between “public servers” and home machines. They’re not merely a bad metaphor, but an actively misleading one. The internet /requires/ all devices be addressable. If your home machine can access any website, it does so by making itself distinguishable from others on the same internal network and leaving itself exposed to return messages.
NAT works well enough in a www-focused environment—TCP, short connection duration, limited number of expected simultaneous applications, at least one machine not behind NAT—but that’s not all or even most of the internet. Long-lasting connections, ‘connectionless’ messages, any sort of serve-from-home configuration, all require fairly complicated work-arounds.
NAT hasn’t merely forced folk to go up one abstraction layer; it has forced some vastly suboptimal designs. Connection-oriented protocols like TCP aren’t very good solutions when latency matters, or when you expect to send only short messages—but they’re essentially required for any ‘server’-to-‘host’ communication. Heartbeat messages should be much more specialized than they currently are: they’re common because the typical NAT/PAT will drop a connection mapping if you idle at all. Middleware servers and pseudo-VPNs like Hamachi are about the worst possible way to handle secure communications between ‘internal networks’, but they’re an industry because NAT/PAT makes full configuration of sane tools very complicated. And the less said about STUN or UPnP, the better.
NAT/PAT as a standard is very limited in its ability to protect against incoming messages.
Of course, since it’s not its function. Firewalls exist for a reason.
The problem is that there isn’t a distinction between “internal networks” and “public servers” or between “public servers” and home machines. They’re not merely a bad metaphor, but an actively misleading one.
I disagree. “Home machine” is a silly name which doesn’t mean much, but the distinction between internal networks and public servers is rather obvious to me.
The internet /requires/ all devices be addressable.
No, I don’t think it does. The IP protocol requires an IP address, but that’s not the same thing as requiring that devices be addressable. Network bridges and intrusion-detection boxes, for example, are devices that are commonly set up as non-addressable.
If your home machine can access any website, it does so by making itself distinguishable from others on the same internal network and leaving itself exposed to return messages.
Let’s leave home machines out of it and talk about boxes on an internal LAN. The mapping between IP addresses and machines can be established by middleware and doesn’t have to be long-term or permanent. In some cases (e.g. VMs, high-availability environments) the endpoint of a connection can change without the public server being aware of anything at all.
Endpoints not being able to connect to each other makes some functionality costly or impossible. For example, peer-to-peer distribution systems rely on being able to contact cooperative endpoints. NAT makes that a lot harder, meaning plenty of development and usability costs.
A more mundane example is multiplayer games. When I played Warcraft 3, I had lots of issues testing maps I made, because no one could connect to games I hosted (I was behind a university NAT, out of my control). I had to rely on sending the map to friends and having them host.
For example, peer-to-peer distribution systems rely on being able to contact cooperative endpoints.
Unlike what the TCP/IP designers envisioned, the current internet is basically client/server. A client always initiates the exchange and should be isolated from unsolicited access. If necessary, P2P access is a solved problem, and it is properly done by applications at a level higher than TCP/IP anyway.
no one could connect to games I hosted
Arguably the university’s NAT functioned as intended. They did not provide you with internet access for the purpose of hosting games, even if they weren’t actively against it.
Arguably the university’s NAT functioned as intended. They did not provide you with internet access for the purpose of hosting games, even if they weren’t actively against it.
The NAT/firewall was there for security reasons, not to police gaming. This was when I lived in residence, so gaming was a legitimate recreational use.