TCP and UDP — Transport Layer Protocols and Ports
When data travels across a network, it needs more than just a destination address. It needs to arrive at the right application on the right device, in the right order, and — depending on the application — with a guarantee that it arrived at all. That is the job of the Transport Layer, and the two protocols that dominate it are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
What the Transport Layer Does
The OSI (Open Systems Interconnection) model has seven layers. The Transport Layer is Layer 4. In the TCP/IP (Transmission Control Protocol/Internet Protocol) model — which is how the modern internet is actually built — it is Layer 3 (sitting above the Internet Layer and below the Application Layer).
The Internet Layer, where IP (Internet Protocol) operates, is responsible for getting a packet from one device to another. It handles addressing and routing. But when a packet arrives at your laptop, IP has done its job — it got the packet there. IP has no idea whether that packet belongs to your web browser, your email client, your SSH (Secure Shell) session, or your music streaming app. All of those applications are running simultaneously on the same device.
The Transport Layer adds a second layer of addressing — port numbers — to direct each packet to the correct application. It also adds reliability features (in the case of TCP), error detection, and flow control. Without the Transport Layer, the Internet Layer would be like delivering a package to a large apartment building without specifying the apartment number.
Ports
A port is a 16-bit number, meaning it can range from 0 to 65535. Port numbers are included in the headers of Transport Layer protocols (TCP and UDP). When a packet arrives at a device, the operating system reads the destination port number and delivers the data to whichever application has registered that port.
Think of the IP address as the street address of a building and the port number as the apartment number inside that building. The postal service (IP) delivers the envelope to the right building; the mailroom (the OS — Operating System) delivers it to the right apartment (the application).
Port Ranges
IANA (Internet Assigned Numbers Authority) divides the 65536 possible port numbers into three ranges:
Well-known ports: 0 through 1023
These are reserved for standard, widely-used services and are assigned by IANA. You will see these constantly in networking and security work:
- Port 20 and 21: FTP (File Transfer Protocol) — data and control channels
- Port 22: SSH (Secure Shell) — encrypted remote login and tunneling
- Port 23: Telnet — unencrypted remote login (obsolete and insecure, but still found in old equipment)
- Port 25: SMTP (Simple Mail Transfer Protocol) — sending email between servers
- Port 53: DNS (Domain Name System) — resolving domain names to IP addresses
- Port 67 and 68: DHCP (Dynamic Host Configuration Protocol) — automatic IP address assignment
- Port 80: HTTP (Hypertext Transfer Protocol) — unencrypted web traffic
- Port 110: POP3 (Post Office Protocol version 3) — receiving email
- Port 143: IMAP (Internet Message Access Protocol) — receiving email with folder synchronization
- Port 443: HTTPS (Hypertext Transfer Protocol Secure) — encrypted web traffic
- Port 445: SMB (Server Message Block) — Windows file sharing
Registered ports: 1024 through 49151
These are registered by software vendors and developers for specific applications. They are not as strictly enforced as well-known ports but are documented with IANA:
- Port 1433: Microsoft SQL Server
- Port 3306: MySQL database
- Port 3389: RDP (Remote Desktop Protocol) — Windows remote desktop
- Port 5432: PostgreSQL database
- Port 5900: VNC (Virtual Network Computing) — remote desktop
- Port 6379: Redis in-memory data store
- Port 8080: HTTP alternative (often used for web proxies or development servers)
- Port 8443: HTTPS alternative
Dynamic and ephemeral ports: 49152 through 65535
These are used temporarily by client applications when initiating outbound connections. When your web browser connects to a web server on port 443, it also needs a port of its own — so the operating system assigns it a temporary ephemeral port (for example, port 52847). The server sends its responses to your IP address on that ephemeral port, and your OS delivers them to your browser.
Once the connection closes, that ephemeral port is released and can be reused. This is why you can have dozens of browser tabs open to different websites simultaneously — each connection has a different ephemeral port on your end, making them distinguishable.
Different operating systems define the ephemeral range differently. Linux defaults to 32768–60999 (configurable via /proc/sys/net/ipv4/ip_local_port_range). The IANA-recommended range starts at 49152.
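The IANA ranges above can be checked programmatically, and the OS's ephemeral assignment is easy to observe: binding a socket to port 0 asks the operating system to pick a free port. A minimal Python sketch (the range boundaries are the IANA values from this section; the port your OS picks depends on its configured ephemeral range):

```python
import socket

def classify_port(port: int) -> str:
    """Classify a port number into the three IANA ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0 through 65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/ephemeral"

print(classify_port(443))    # well-known (HTTPS)
print(classify_port(3389))   # registered (RDP)

# Binding to port 0 asks the OS to assign a free port automatically.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.bind(("127.0.0.1", 0))
    _, port = s.getsockname()
    print(f"OS assigned port {port}")
```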
The 5-Tuple: How Connections Are Uniquely Identified
Any single active network conversation is uniquely identified by a combination of five values, called the 5-tuple:
- Protocol (TCP or UDP)
- Source IP address
- Source port
- Destination IP address
- Destination port
If you have two browser tabs open to the same web server (same destination IP and port 443), or two different applications on your machine connecting to the same server, the connections are distinguished by their different source ports. Every 5-tuple must be unique among the simultaneous connections on your device.
Firewalls, load balancers, NAT (Network Address Translation) devices, and intrusion detection systems all track network sessions using the 5-tuple.
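You can observe a connection's 5-tuple directly from the socket API: getsockname() returns the local (IP, port) pair and getpeername() the remote one. A small loopback sketch (illustrative only; the ephemeral port numbers will differ on every run):

```python
import socket

# A listener on an OS-assigned port, and a client connecting to it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()

# The 5-tuple as seen from the client side of the connection.
five_tuple = ("TCP", *client.getsockname(), *client.getpeername())
print(five_tuple)  # e.g. ('TCP', '127.0.0.1', 52847, '127.0.0.1', 41233)

# The server's accepted socket sees the mirror image of the same tuple.
assert conn.getpeername() == client.getsockname()

client.close(); conn.close(); server.close()
```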
TCP — Transmission Control Protocol
TCP is the protocol that makes the internet feel reliable. When you download a file, load a web page, send an email, or connect to a server over SSH, TCP is ensuring that every byte arrives, that they arrive in order, and that the receiving application gets exactly what was sent.
TCP achieves reliability through several mechanisms working together.
What Makes TCP Reliable
Ordered delivery using sequence numbers
Every byte of data that TCP sends is assigned a sequence number. The TCP header includes a 32-bit sequence number field that indicates the position of this segment's data in the overall byte stream. If segments arrive at the destination out of order — which is common since packets can take different paths through the internet — the receiving TCP stack reassembles them into the correct order before passing the data to the application.
Acknowledgment and retransmission
The receiver must acknowledge every segment it receives. The acknowledgment number in the TCP header tells the sender which byte the receiver expects next (meaning everything before that has been received). If the sender does not receive an ACK (acknowledgment) within a timeout period, it retransmits the unacknowledged data. This continues until an ACK is received or the connection is eventually abandoned.
Error detection using checksums
The TCP header includes a 16-bit checksum that covers the header, the data payload, and a pseudo-header derived from the IP header. If any bit is flipped during transmission, the checksum will not match, and the receiver discards the corrupt segment. The sender, receiving no ACK, retransmits.
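The checksum algorithm TCP uses (shared with IP and UDP) is defined in RFC 1071: the 16-bit ones' complement of the ones' complement sum of the data. A minimal Python implementation of just the summing step — the pseudo-header construction is omitted for brevity:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071: ones' complement of the 16-bit ones' complement sum."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # big-endian 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# A segment whose stored checksum is correct verifies to zero when the
# checksum field itself is included in the sum.
payload = b"\x01\x02\x03\x04"
csum = internet_checksum(payload)
assert internet_checksum(payload + csum.to_bytes(2, "big")) == 0
```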
Flow control using window size
The receiver advertises a "window size" — how many bytes of data it can buffer and accept at the current moment. The sender cannot have more unacknowledged data in flight than the receiver's advertised window allows. This prevents a fast sender from overwhelming a slow receiver with data faster than it can process. As the receiver's application reads buffered data, the window opens back up and the sender can send more.
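The sender's constraint can be sketched as an invariant: bytes in flight (sent but not yet acknowledged) never exceed the advertised window. A toy simulation, with made-up window and drain rates purely for illustration:

```python
def simulate_flow_control(data: bytes, window: int, ack_per_round: int):
    """Toy sender: in-flight bytes may never exceed the advertised window."""
    sent = acked = 0
    rounds = []
    while acked < len(data):
        in_flight = sent - acked
        burst = min(window - in_flight, len(data) - sent)  # what the window allows
        sent += burst
        acked = min(acked + ack_per_round, sent)           # receiver drains slowly
        rounds.append((burst, sent, acked))
        assert sent - acked <= window                      # invariant: never overrun
    return rounds

rounds = simulate_flow_control(b"x" * 10, window=4, ack_per_round=2)
for burst, sent, acked in rounds:
    print(f"sent {burst} bytes this round (total sent={sent}, acked={acked})")
```

Note how the first burst fills the whole window, after which the sender is paced by the receiver's acknowledgments — exactly the "fast sender, slow receiver" balance described above.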
Congestion control
TCP also tries to be a good citizen of the network as a whole. If the network between sender and receiver is congested, dropping packets or increasing delays, TCP detects this and reduces its sending rate. When conditions improve, it gradually increases the rate again. The specific algorithms for this include slow start, congestion avoidance, fast retransmit, and fast recovery — all defined in various RFCs (Requests for Comments, the standards documents that define internet protocols).
TCP Header Fields
Understanding the TCP header helps you read packet captures, understand firewall logs, and analyze attacks. The minimum TCP header is 20 bytes.
Source port (16 bits) The port number of the sending application on the sending device.
Destination port (16 bits) The port number of the receiving application on the destination device.
Sequence number (32 bits) The position of the first byte of this segment's data in the overall byte stream. During connection setup, this is the ISN (Initial Sequence Number) — a randomly chosen starting value. Randomizing the ISN is a security measure that prevents a class of attacks where an attacker guesses the sequence number.
Acknowledgment number (32 bits) When the ACK flag is set, this field contains the next sequence number the sender of this segment expects to receive — meaning it has successfully received all bytes up to this number minus one.
Data offset (4 bits) The length of the TCP header in 32-bit words. This is necessary because the header can be extended with options.
Flags (6 bits in the classic definition, extended in modern usage) Control bits that determine what kind of segment this is. These are the most important bits in the header for security analysis. See the detailed breakdown below.
Window size (16 bits) How many bytes of data the sender of this segment can currently accept. This is the flow control mechanism.
Checksum (16 bits) Error detection over the header, data, and IP pseudo-header.
Urgent pointer (16 bits) When the URG (Urgent) flag is set, this field points to the end of urgent data in the segment. Rarely used in modern applications.
Options (variable) Used for extensions like MSS (Maximum Segment Size) negotiation, window scaling, timestamps for RTT (Round-Trip Time) measurement, and SACK (Selective Acknowledgment) — which allows the receiver to acknowledge non-contiguous blocks of data, improving efficiency over lossy links.
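The fixed 20-byte header maps directly onto a struct layout, which is how packet-parsing tools read it. A sketch using Python's struct module (the port, sequence, and window values are arbitrary examples, and the checksum is left at zero rather than computed):

```python
import struct

# The fixed 20-byte TCP header in network byte order: source port, destination
# port, sequence number, acknowledgment number, data offset/reserved byte,
# flags byte, window size, checksum, urgent pointer.
TCP_HEADER = struct.Struct("!HHIIBBHHH")

def build_header(src, dst, seq, ack_num, flags, window):
    offset_byte = 5 << 4    # data offset = 5 words (20 bytes), no options
    return TCP_HEADER.pack(src, dst, seq, ack_num, offset_byte, flags,
                           window, 0, 0)   # checksum left 0 in this sketch

raw = build_header(src=52847, dst=443, seq=1000, ack_num=0,
                   flags=0x02, window=65535)          # 0x02 = SYN
assert len(raw) == 20

src, dst, seq, ack_num, offset_byte, flags, window, csum, urg = \
    TCP_HEADER.unpack(raw)
print(f"{src} -> {dst}, seq={seq}, flags={flags:#04x}, "
      f"header={(offset_byte >> 4) * 4} bytes")
```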
TCP Flags — Each One in Detail
The TCP flags are single bits in the header that turn specific behaviors on or off for a given segment. Knowing what each flag means is fundamental to reading packet captures and understanding TCP-based attacks.
SYN — Synchronize
The SYN flag initiates a connection. When set, it tells the receiving side that the sender wants to open a connection and that the sequence number field contains the sender's ISN. Both sides exchange SYN segments at the start of every TCP connection (as part of the three-way handshake described below). SYN segments consume one sequence number even though they carry no data payload.
In security contexts: large numbers of SYN segments from many source addresses targeting one server indicate a SYN flood attack. A single host sending SYN segments to many ports on the same target indicates a port scan.
ACK — Acknowledge
The ACK flag indicates that the acknowledgment number field is valid — that is, the sender is acknowledging receipt of data. After the initial SYN, essentially every segment in a TCP connection has the ACK flag set. The ACK alone (with no data) consumes no sequence number space.
FIN — Finish
The FIN flag indicates that the sender has finished sending data and wants to close its side of the connection. Like SYN, FIN consumes one sequence number. After sending FIN, the sender can still receive data from the other side — the connection is half-closed. A full close requires both sides to send FIN.
RST — Reset
The RST flag abruptly terminates the connection without the normal teardown procedure. It is used in two main situations: when something has gone wrong (an unexpected segment arrives, the connection is in an invalid state) and when a connection attempt is rejected (a SYN arrives at a port with nothing listening). RST is an immediate close — no data is flushed, no teardown occurs.
In security contexts: unexpected RST segments can indicate TCP session hijacking, where an attacker is injecting packets to tear down legitimate connections. RST injection attacks are used for censorship (the Great Firewall of China historically used RST injection to disrupt connections to blocked content) and for terminating connections during intrusion.
PSH — Push
The PSH flag tells the receiving TCP stack not to buffer the data but to pass it immediately to the receiving application. Without PSH, TCP might buffer small amounts of data waiting to accumulate enough to forward efficiently. With PSH, the data goes straight to the application layer. It is useful for interactive applications like SSH or Telnet where every keypress needs to be delivered immediately.
URG — Urgent
The URG flag indicates that the segment contains urgent data that should be processed before the normal data stream. The urgent pointer field indicates where the urgent data ends. In practice, URG is almost never used in modern applications, but it appears in older protocols. It is sometimes exploited in certain denial-of-service attacks targeting poorly written network stacks.
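Since each flag is a single bit in the flags byte, decoding a segment's flags is a matter of bit tests. A small sketch, using the classic RFC 793 bit positions:

```python
# The six classic TCP flags, lowest bit first (RFC 793 layout).
TCP_FLAGS = [("FIN", 0x01), ("SYN", 0x02), ("RST", 0x04),
             ("PSH", 0x08), ("ACK", 0x10), ("URG", 0x20)]

def decode_flags(byte: int) -> list[str]:
    """Return the names of the flags set in a TCP flags byte."""
    return [name for name, bit in TCP_FLAGS if byte & bit]

print(decode_flags(0x02))  # ['SYN']        -- handshake step 1
print(decode_flags(0x12))  # ['SYN', 'ACK'] -- handshake step 2
print(decode_flags(0x11))  # ['FIN', 'ACK'] -- a teardown segment
```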
The TCP Three-Way Handshake
Before any data can be exchanged over a TCP connection, both sides must agree on the initial sequence numbers and confirm they are both ready. This process is the three-way handshake — three segments are exchanged.
Why Three Steps?
Both sides need to establish their own ISN and have it acknowledged by the other side. A two-way handshake would only allow one side to confirm the other's ISN. Three messages accomplish both confirmations with the minimum number of round trips.
Step-by-Step
Step 1 — SYN (Client to Server)
The client sends a TCP segment with the SYN flag set and its randomly chosen ISN in the sequence number field. No data payload is included. The acknowledgment number field is ignored (ACK flag is not set).
Example: SYN, seq=1000
This segment says: "I want to open a connection, and I am starting my sequence numbering at 1000."
Step 2 — SYN-ACK (Server to Client)
The server responds with both the SYN and ACK flags set. It acknowledges the client's ISN by setting the acknowledgment number to the client's ISN plus one (because SYN itself consumed one sequence number). It also includes its own randomly chosen ISN in the sequence number field.
Example: SYN-ACK, seq=5000, ack=1001
This segment says: "I acknowledge your sequence number (I expect you to send starting at 1001 next), I want to open my side of the connection, and I am starting my sequence numbering at 5000."
Step 3 — ACK (Client to Server)
The client acknowledges the server's ISN by setting the acknowledgment number to the server's ISN plus one.
Example: ACK, ack=5001
This segment says: "I acknowledge your sequence number. I expect you to send starting at 5001 next."
The handshake is complete. Both sides know each other's ISN, both sides have confirmed the other is ready, and data transfer can begin immediately after this third segment. The ACK in step 3 can carry the first data payload — there is no need to wait for a fourth segment.
What the Handshake Establishes
- Mutual confirmation that both sides are ready and reachable
- Synchronized sequence numbers for ordered delivery
- Negotiation of TCP options (MSS, window scaling, SACK support, timestamps)
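The seq/ack arithmetic of the three steps can be checked mechanically: each SYN consumes one sequence number, and every acknowledgment names the next byte expected. A sketch using the example ISNs from the steps above:

```python
def three_way_handshake(client_isn: int, server_isn: int):
    """Return the (flags, seq, ack) values of each handshake segment."""
    syn     = ("SYN",     client_isn,     None)            # ack field unused
    syn_ack = ("SYN-ACK", server_isn,     client_isn + 1)  # SYN consumed one seq
    ack     = ("ACK",     client_isn + 1, server_isn + 1)
    return [syn, syn_ack, ack]

for flags, seq, ack in three_way_handshake(client_isn=1000, server_isn=5000):
    print(f"{flags:8} seq={seq} ack={ack}")
# SYN      seq=1000 ack=None
# SYN-ACK  seq=5000 ack=1001
# ACK      seq=1001 ack=5001
```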
The TCP Four-Way Teardown
Closing a TCP connection gracefully is slightly more involved than opening one. Because TCP connections are full-duplex — both sides can send data simultaneously and independently — each side must close its own sending direction independently.
This requires four segments: each side sends FIN and the other acknowledges it.
Step-by-Step
Step 1 — FIN (Initiating side)
The side that has finished sending data (let's say the client) sends a segment with the FIN flag set. This means: "I am done sending data. I will not send any more."
Step 2 — ACK (Other side)
The server acknowledges receipt of the client's FIN. At this point the connection is half-closed: the client will send no more data, but the server may still have data to send. The server's application may continue sending for a period.
Step 3 — FIN (Other side)
When the server has also finished sending its data, it sends its own FIN segment.
Step 4 — ACK (Initiating side)
The client acknowledges the server's FIN.
TIME_WAIT State
After sending the final ACK (step 4), the initiating side (the client in our example) does not immediately close. It enters a state called TIME_WAIT and waits for a period of 2 × MSL (Maximum Segment Lifetime) before releasing the port. RFC 793 defines MSL as 2 minutes, though implementations commonly use values from 30 to 120 seconds, making the TIME_WAIT period roughly 1 to 4 minutes.
TIME_WAIT exists for two reasons:
First, the final ACK might be lost. If it is lost, the server will retransmit its FIN. If the client had already closed the connection, it would have no record of it and would send an RST, which would confuse the server. By staying in TIME_WAIT, the client can retransmit the final ACK if the server's FIN arrives again.
Second, packets from the previous connection that are delayed in the network — called stale packets — could arrive after the port pair is reused by a new connection. TIME_WAIT ensures those stale packets have expired before the port is reused.
On busy servers, many connections in TIME_WAIT can accumulate and temporarily exhaust available ports. This is a real operational issue for high-traffic services.
TCP Connection States
TCP is a stateful protocol — both endpoints track the current state of every connection. This is important for firewall configuration and troubleshooting.
LISTEN A server application is waiting for incoming SYN segments. A port in LISTEN state is an open, listening port. When you run a web server, it puts ports 80 and 443 into LISTEN state.
SYN_SENT The client has sent a SYN and is waiting for a SYN-ACK from the server. If the connection times out here, the server is unreachable or not listening on that port.
SYN_RECEIVED The server received a SYN, sent a SYN-ACK, and is waiting for the final ACK from the client. A server flooded with SYN packets will have a large number of connections stuck in SYN_RECEIVED — this is the signature of a SYN flood attack.
ESTABLISHED The three-way handshake is complete. Data transfer is in progress. This is the normal operating state of an active connection.
FIN_WAIT_1 The local side has sent FIN and is waiting for the ACK from the remote side.
FIN_WAIT_2 The local side received the ACK of its FIN, waiting for the remote side's FIN.
CLOSE_WAIT The remote side sent FIN (the remote is done sending). The local side acknowledged it. The local application has not yet closed its side — it may still be sending data.
LAST_ACK The local side has sent its own FIN and is waiting for the final ACK.
TIME_WAIT Both FINs have been exchanged and acknowledged. The local side is waiting out the 2 × MSL period before fully closing.
CLOSED The connection is fully closed. No resources are allocated.
Viewing Connection States
On a Linux system:
ss -tn
The ss command (socket statistics) is the modern replacement for netstat. The -t flag shows TCP sockets, and -n shows numeric addresses and ports rather than resolving them to names. You will see columns for State, Recv-Q (data waiting to be received by the application), Send-Q (data waiting to be acknowledged), Local Address:Port, and Peer Address:Port.
netstat -tn
The older netstat command works similarly. Adding -p shows which process owns each connection (requires root privileges).
On Windows:
netstat -an
The -a flag shows all connections and listening ports, and -n shows numeric addresses.
UDP — User Datagram Protocol
If TCP is the careful, reliable, confirmation-seeking protocol, UDP (User Datagram Protocol) is the one that just throws data at the destination and moves on. UDP sacrifices TCP's reliability guarantees in exchange for speed, low overhead, and simplicity.
What UDP Does Not Have
UDP has none of TCP's reliability mechanisms:
- No connection establishment — there is no handshake before data is sent
- No acknowledgment — the sender has no idea whether the data arrived
- No retransmission — if a packet is lost, it is gone
- No ordering — if packets arrive out of order, UDP passes them to the application out of order
- No flow control — UDP will send as fast as the application tells it to, regardless of the receiver's capacity
- No congestion control — UDP does not reduce its rate when the network is congested
What UDP Does Have
UDP is not completely featureless. Its header includes:
- Source port (16 bits): the sending application's port
- Destination port (16 bits): the receiving application's port
- Length (16 bits): the length of the UDP header plus data in bytes
- Checksum (16 bits): error detection (optional in IPv4, required in IPv6)
That is it. The entire UDP header is 8 bytes. A minimum TCP header is 20 bytes, and TCP headers with options can be 60 bytes. UDP's minimal overhead means less processing on both ends and more of each packet's capacity is used for actual data.
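The entire UDP header fits in a single 8-byte struct, which makes the overhead comparison with TCP concrete. A sketch (the ports and payload are arbitrary, and the optional checksum is left at zero):

```python
import struct

# The complete UDP header: source port, destination port, length, checksum.
UDP_HEADER = struct.Struct("!HHHH")

payload = b"hello"
header = UDP_HEADER.pack(52847, 53, UDP_HEADER.size + len(payload), 0)
datagram = header + payload

print(f"UDP header: {UDP_HEADER.size} bytes; whole datagram: {len(datagram)} bytes")
# Compare: a minimum TCP header alone is 20 bytes -- 2.5x the entire UDP header.
assert UDP_HEADER.size == 8
```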
When UDP Is the Right Choice
The absence of reliability might sound like a disadvantage, but it is a deliberate and often correct design decision for certain types of applications.
Real-time applications: VoIP (Voice over IP), video conferencing, and live streaming
In a phone call, if a small chunk of audio is lost, the worst outcome is a brief moment of static or silence. That is tolerable. What is not tolerable is TCP's retransmission behavior: if the lost audio were retransmitted, it would arrive late — the conversation would be paused while the retransmitted data catches up. Old audio played late is worse than slightly degraded audio played on time. So VoIP protocols like RTP (Real-time Transport Protocol) run over UDP and accept occasional loss rather than correcting it.
Online gaming
Games transmit the positions and actions of all players many times per second. If position data from 100 milliseconds ago is lost, sending it again is useless — the game has moved on. New position data will arrive momentarily. Gaming engines either use UDP directly or build thin reliability layers on top for critical events while using raw UDP for position updates.
DNS — Domain Name System
A DNS query is a short, single question ("what is the IP address of google.com?") and the answer is typically short too. Establishing a full TCP connection (three-way handshake, data exchange, four-way teardown) for a small query-response pair would be expensive overhead. With UDP, the client sends the query and the server sends the answer — two packets, done. If the answer does not arrive, the client simply re-sends the query after a short timeout. DNS does fall back to TCP in specific cases: zone transfers always use TCP, and when a response is too large for UDP (traditionally anything over 512 bytes, though EDNS0 raises the UDP limit, commonly to 4096 bytes), the server sets a truncation flag and the client retries the query over TCP.
DHCP — Dynamic Host Configuration Protocol
DHCP is how devices obtain IP addresses when they join a network. The client cannot use TCP because it does not yet have an IP address to put in the source field. DHCP uses UDP broadcast messages, which do not require a pre-existing connection.
SNMP — Simple Network Management Protocol
Network management tools use SNMP to poll device status (CPU usage, interface statistics, error counts) very frequently — sometimes every few seconds. Occasional lost polls are acceptable; the next poll will arrive soon. TCP's overhead for thousands of polls per minute would be wasteful.
WireGuard VPN (Virtual Private Network)
Modern VPN protocols like WireGuard use UDP because the VPN tunnel itself handles reliability for the encrypted traffic inside it. Using TCP inside TCP creates a phenomenon called TCP over TCP meltdown, where the outer TCP's retransmissions interfere with the inner TCP's behavior and performance collapses.
When Applications Need Reliability Over UDP
Some applications need reliability but also need features that TCP does not provide well — like the ability to handle multiple streams on one connection without head-of-line blocking. These applications implement their own reliability mechanisms on top of UDP.
The most important example is QUIC (originally a Google acronym for Quick UDP Internet Connections; in the IETF standard, QUIC is simply the protocol's name), which was developed by Google and standardized by the IETF (Internet Engineering Task Force). QUIC runs over UDP and powers HTTP/3 (Hypertext Transfer Protocol version 3). It provides reliability, ordering, and encryption, but with lower latency than TCP because it integrates the TLS (Transport Layer Security) handshake into the connection setup, reducing round trips.
TCP vs. UDP — Quick Comparison
| Feature | TCP | UDP |
|---|---|---|
| Connection setup | Three-way handshake required | None — send immediately |
| Reliability | Guaranteed delivery via ACKs and retransmission | Best effort — no guarantee |
| Ordering | Segments reassembled in order | Delivered in whatever order they arrive |
| Flow control | Yes — window size mechanism | No |
| Congestion control | Yes — reduces rate under congestion | No |
| Header size | 20 bytes minimum (up to 60 bytes) | 8 bytes |
| Latency | Higher due to handshake and acknowledgment | Lower |
| Throughput | Lower for small transactions; competitive for bulk | Higher for small transactions |
| Typical use cases | HTTP, HTTPS, SSH, FTP, SMTP, database connections | DNS, VoIP, video streaming, online gaming, DHCP, SNMP |
Security Relevance
Understanding TCP and UDP is not just academic. Many of the most common network attacks exploit specific behaviors of these protocols, and defending against them requires understanding why the protocols work the way they do.
TCP-Based Attacks
SYN Flood
A SYN flood exploits the TCP three-way handshake. When a server receives a SYN, it allocates memory to track the half-open connection, sends a SYN-ACK, and waits for the final ACK. It keeps this state in memory for a timeout period (typically 75 seconds).
An attacker sends thousands or millions of SYN packets, often with spoofed source IP addresses. The server creates a half-open connection entry for each one. Since the ACK never arrives (the spoofed source does not know it should send one), the server's connection table fills up. New legitimate connections cannot be established because there is no room in the table. This is a DoS (Denial of Service) attack.
Defense: SYN cookies. When a server uses SYN cookies, it does not allocate any memory when it receives a SYN. Instead, it encodes the essential connection parameters (a coarse timestamp, the client's MSS, and a hash of the addresses and ports keyed with a server secret) into the ISN it sends in the SYN-ACK. When the legitimate ACK arrives, the server validates the echoed value (the ACK's acknowledgment number minus one) and reconstructs the connection state from it. No memory is consumed for half-open connections. SYN flood traffic is harmless — the server only allocates resources when the three-way handshake completes. SYN cookies are supported by Linux, Windows, and most modern operating systems.
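The cookie idea can be sketched as a keyed hash over the connection identifiers plus a coarse timestamp, truncated to fit the 32-bit ISN. This is a deliberately simplified illustration — real implementations such as Linux's also encode an MSS index and use a different bit layout — and the secret and addresses below are hypothetical:

```python
import hashlib
import hmac

SERVER_SECRET = b"example-secret"   # hypothetical key; rotated in practice

def make_cookie(src_ip: str, src_port: int, dst_port: int, minute: int) -> int:
    """Encode the connection identity into a 32-bit, ISN-sized cookie."""
    msg = f"{src_ip}:{src_port}:{dst_port}:{minute}".encode()
    digest = hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def check_cookie(cookie, src_ip, src_port, dst_port, minute) -> bool:
    """Valid if it matches the current or the previous minute's cookie."""
    return any(make_cookie(src_ip, src_port, dst_port, m) == cookie
               for m in (minute, minute - 1))

# The SYN-ACK carries the cookie as its ISN; the returning ACK echoes it.
cookie = make_cookie("203.0.113.7", 52847, 443, minute=12345)
assert check_cookie(cookie, "203.0.113.7", 52847, 443, minute=12345)
assert not check_cookie(cookie, "203.0.113.7", 52847, 443, minute=12350)  # expired
```

Because the server can recompute the hash from the ACK alone, no per-connection state is needed until the handshake actually completes.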
TCP Session Hijacking
In an established TCP connection, either side's segments are authenticated only by the 5-tuple (protocol, source IP, source port, destination IP, destination port) and the correct sequence number. If an attacker can observe the traffic (for example, on an unswitched network segment, or via ARP poisoning) and knows the current sequence numbers, they can inject segments into the connection that appear to come from one of the legitimate parties.
This was a serious threat before widespread adoption of TLS encryption. With TLS, injected segments would not decrypt correctly, making hijacking effectively impossible at the application level. However, session hijacking remains relevant at the TCP level for unencrypted protocols.
RST Injection
An attacker who knows the current sequence numbers of an established connection can send a forged RST segment with the correct sequence number. Both endpoints accept the RST and close the connection. The legitimate parties see an unexplained connection drop.
This technique is used for censorship (governments injecting RST packets to disrupt connections to blocked content), for intrusion prevention (some IDS — Intrusion Detection System — tools send RST packets to terminate suspicious connections), and for attack purposes (terminating sessions between parties).
Port Scanning
Before attacking a system, an attacker needs to know which services are running and on which ports. Port scanning sends connection attempts to every port (or selected ports) on a target and analyzes the responses.
With TCP:
- An open port responds to a SYN with SYN-ACK (the service is listening)
- A closed port responds to a SYN with RST (nothing is listening)
- A filtered port returns nothing (a firewall is dropping the packets)
Nmap (Network Mapper) is the standard tool for port scanning. A basic TCP SYN scan:
nmap -sS 192.168.1.100
Nmap sends SYN packets but does not complete the handshake (it sends RST after receiving SYN-ACK), which is why this is called a "half-open" or "stealth" scan — it generates fewer log entries on the target than a full connect scan.
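The scanning logic itself is simple to sketch with ordinary sockets. Note this is a full connect() scan rather than Nmap's half-open SYN scan (crafting raw SYN packets requires privileged raw sockets), and it is demonstrated only against loopback:

```python
import socket

def tcp_connect_scan(host: str, ports) -> dict[int, str]:
    """Full-connect scan: connect_ex returning 0 means a service accepted."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            err = s.connect_ex((host, port))       # errno instead of exception
            results[port] = "open" if err == 0 else "closed/filtered"
    return results

# Demo against loopback: one real listener, plus a port we know was just freed.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(5)
open_port = listener.getsockname()[1]

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()                                      # nothing listening here now

results = tcp_connect_scan("127.0.0.1", [open_port, closed_port])
print(results)
listener.close()
```

On the closed port, the OS answers the SYN with RST, so connect_ex returns ECONNREFUSED immediately — exactly the open/closed distinction described above.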
UDP-Based Attacks
UDP Flood and Amplification
A basic UDP flood sends large volumes of UDP packets to a target to exhaust bandwidth or processing capacity. Because UDP has no handshake and accepts traffic from any source, source addresses can be spoofed freely.
Amplification attacks are more sophisticated. They exploit protocols where the response to a small request is much larger than the request itself. The attacker sends a small request to a public server (DNS, NTP — Network Time Protocol, or memcached) with the victim's IP address spoofed as the source. The public server sends its large response to the victim. By using many such public servers simultaneously, the attacker can direct a massive flood of traffic at a victim using very little of their own bandwidth.
DNS Amplification
DNS is the most commonly exploited amplification protocol. A small DNS query (around 40 bytes) asking for the full DNS record set for a domain can elicit a response of several kilobytes. An attacker sends thousands of such queries per second to open DNS resolvers worldwide, spoofing the victim's IP as the source. Each resolver sends its large response to the victim. The amplification factor can be 50× to 100× or higher.
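The economics of amplification are simple arithmetic: the factor is response size divided by request size, and the victim-side flood is the attacker's own bandwidth multiplied by that factor. A sketch using rough figures consistent with this section (a 40-byte query, a 3,000-byte response, and an assumed 10 Mbps attacker uplink):

```python
def amplification(request_bytes: int, response_bytes: int,
                  attacker_bps: float) -> tuple[float, float]:
    """Return (amplification factor, victim-side traffic in bits/sec)."""
    factor = response_bytes / request_bytes
    return factor, attacker_bps * factor

factor, victim_bps = amplification(request_bytes=40, response_bytes=3000,
                                   attacker_bps=10e6)   # 10 Mbps uplink
print(f"amplification factor: {factor:.0f}x")     # 75x
print(f"victim receives: {victim_bps / 1e9:.2f} Gbps")  # 0.75 Gbps
```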
Defense: BCP38 (Best Current Practice 38) is a network engineering standard that requires ISPs (Internet Service Providers) to filter outbound packets whose source IP address does not belong to the ISP's address space, making IP address spoofing impossible. Widespread adoption of BCP38 would eliminate amplification attacks. Unfortunately, adoption is still incomplete.
Response rate limiting on DNS servers also helps — if a DNS server receives many queries for the same record from the same source in a short period, it rate-limits responses.
The Bottom Line
TCP and UDP are the two workhorses of the Transport Layer. TCP provides reliable, ordered, error-checked delivery with connection management — at the cost of overhead and latency. UDP provides fast, low-overhead, connectionless delivery — at the cost of reliability. The choice between them depends on whether the application needs reliability more than speed. Most applications you interact with daily use TCP (web browsing, email, remote access), while real-time applications (VoIP, video, gaming) and lightweight query-response protocols (DNS, DHCP) use UDP. From a security standpoint, both protocols have known attack surfaces: TCP's handshake is vulnerable to flooding and hijacking; UDP's connectionless nature enables spoofing and amplification. Understanding ports, flags, the handshake, and connection states gives you the foundation to read packet captures, interpret firewall logs, and understand why attacks work the way they do.
Check Your Understanding
- A firewall administrator sees a large number of connections stuck in the SYN_RECEIVED state on a web server. What type of attack is most likely occurring, and what defense mechanism can the server use to mitigate it without dropping legitimate traffic?
- A developer is building a real-time voice communication application. They are debating whether to use TCP or UDP as the Transport Layer protocol. What are the specific reasons that UDP is the better choice for this use case, and what tradeoff does choosing UDP require the developer to accept?
Something to Think About
- The TCP sequence number field is 32 bits, allowing sequence numbers from 0 to 4,294,967,295. On a modern high-speed connection (say, 10 Gbps — Gigabits per second), how long would it take to exhaust the entire sequence number space? What happens when the sequence number wraps around to zero? What does this imply for the security of TCP session hijacking on high-speed links compared to slow links?
- UDP has no congestion control — it will send at whatever rate the application instructs, regardless of network conditions. TCP, by contrast, reduces its sending rate when it detects congestion. What might happen to TCP flows if a large UDP flow (such as a video stream) is sharing the same network path? Does this give UDP an unfair advantage, and how might network engineers address this at the infrastructure level?
References
- Official Specification. RFC 793 — "Transmission Control Protocol". Internet Engineering Task Force, 1981. Original TCP specification covering the three-way handshake, flags, flow control, and reliable delivery.
- Official Specification. RFC 768 — "User Datagram Protocol". Internet Engineering Task Force, 1980. Original UDP specification; at three pages, it illustrates how simple UDP is by design.
- Official Specification. RFC 9293 — "Transmission Control Protocol (TCP)". Internet Engineering Task Force, 2022. Updated TCP standard that consolidates RFC 793 and its subsequent updates into a single current document.
- Real-World Incident. CISA — "Advisory AA22-249A: #StopRansomware: Vice Society". Cybersecurity and Infrastructure Security Agency. Vice Society ransomware used TCP-based lateral movement across SMB port 445; illustrates how transport-layer knowledge applies to incident analysis.
- Official Registry. IANA — "Service Name and Transport Protocol Port Number Registry". Internet Assigned Numbers Authority. Authoritative registry of all assigned port numbers and their associated protocols.