Network Programming(MSCS203)
Module -05
Client/Server Design Alternatives – TCP Client Alternatives; TCP Test Client; TCP Iterative server; TCP Concurrent
server; TCP preforked server, no locking around accept; TCP preforked server, file locking around accept; TCP
preforked server, thread locking around accept; TCP preforked server, descriptor passing; TCP concurrent server,
one thread per client
Client/Server Design Alternatives:
Client/Server Design Alternatives refer to the different architectural approaches used in designing
and implementing distributed systems where tasks or workloads are divided between service
providers (servers) and service requesters (clients). These models vary in complexity, scalability,
maintainability, and performance. Here’s a detailed look at the primary client/server design
alternatives:
1. Two-Tier Architecture
Overview:
This is the most basic client/server architecture. It divides the application into two parts:
Client: Typically includes the user interface and application logic.
Server: Manages database services and data storage.
Example:
A desktop application that connects directly to a database (like MySQL or SQL Server).
Advantages:
Simple to develop and deploy.
Fast performance for a small number of users.
Direct communication with the database.
Disadvantages:
Poor scalability.
Limited flexibility and maintainability.
Tight coupling between client and server.
Difficult to manage with a large number of clients.
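The two-tier shape is small enough to sketch directly. Below is a minimal Python illustration in which an in-memory SQLite database stands in for the database server; the orders table, its columns, and the fetch_order_summary helper are all invented for this example:

```python
import sqlite3

# The "server" side: a database engine. SQLite stands in here for a
# networked DBMS such as MySQL or SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)")
conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", ("widget", 3))
conn.commit()

# The "client" side: UI plus application logic talking to the database
# directly -- exactly the tight coupling noted in the disadvantages above.
def fetch_order_summary(db):
    rows = db.execute("SELECT item, qty FROM orders").fetchall()
    return [f"{item} x{qty}" for item, qty in rows]

summary = fetch_order_summary(conn)
print(summary)  # ['widget x3']
conn.close()
```

Because the client embeds SQL and connects straight to the database, any schema change ripples into every deployed client, which is why this design stops scaling past small installations.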
2. Three-Tier Architecture
Overview:
Introduces an intermediate layer (application server) between client and database server:
Presentation Tier (Client): User interface.
Application Tier (Middle Tier): Business logic and application services.
Data Tier (Server): Database management.
Example:
Web applications using Java EE, .NET, or Django where the business logic is handled separately from
the UI and data.
Advantages:
Better scalability and maintainability.
Improved security and data integrity.
Easier to manage and update individual layers.
Disadvantages:
More complex to develop.
Higher latency than two-tier for small-scale apps.
Requires more infrastructure.
3. N-Tier (Multi-Tier) Architecture
Overview:
Extends the three-tier model by adding more specialized layers (e.g., caching, messaging, service
layers).
Each layer has a specific responsibility, improving modularity.
Example:
Enterprise applications using microservices and APIs for different functionalities (e.g., authentication,
payments, notifications).
Advantages:
Excellent scalability and flexibility.
Easier integration with external systems.
Suitable for large and distributed systems.
Disadvantages:
High complexity and overhead.
Requires robust network infrastructure.
Difficult to debug and trace issues across layers.
4. Peer-to-Peer (P2P) Model
Overview:
In this model, each node acts as both client and server. No centralized server is required.
Example:
File-sharing systems like BitTorrent, blockchain networks, or VoIP apps like Skype.
Advantages:
High fault tolerance and redundancy.
Good scalability for file sharing or decentralized apps.
No single point of failure.
Disadvantages:
Complex synchronization and data consistency.
Security is harder to enforce.
Performance can vary depending on peer availability.
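A toy illustration of the dual role: the sketch below runs two peers in one Python script, each with its own listener thread (server role) while also connecting out to the other peer (client role). The ports are assigned by the OS and the messages are made up:

```python
import socket
import threading

def start_peer(inbox):
    """Each peer runs its own tiny server thread; there is no central server."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # OS assigns a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        inbox.append(conn.recv(1024).decode())
        conn.close()
        srv.close()

    t = threading.Thread(target=serve)
    t.start()
    return port, t

def send_to_peer(port, message):
    """The same node acting as a client toward another peer."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(message.encode())

inbox_a, inbox_b = [], []
port_a, t_a = start_peer(inbox_a)       # peer A, server role
port_b, t_b = start_peer(inbox_b)       # peer B, server role
send_to_peer(port_b, "hello from A")    # peer A, client role
send_to_peer(port_a, "hello from B")    # peer B, client role
t_a.join()
t_b.join()
print(inbox_a, inbox_b)  # ['hello from B'] ['hello from A']
```

Each node both accepts connections and initiates them, which is the defining property of the P2P model.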
5. Client-Queue-Client (Message-Oriented Middleware)
Overview:
Uses a messaging server or broker to mediate communication between clients. Common in
asynchronous systems.
Example:
Message queues like RabbitMQ, Apache Kafka, or MSMQ used in decoupled systems.
Advantages:
Loose coupling of components.
High reliability and asynchronous communication.
Supports distributed systems and microservices.
Disadvantages:
Increased latency.
Message loss if not configured properly.
More components to manage.
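The decoupling can be demonstrated with Python's standard queue module standing in for a real broker such as RabbitMQ; the producer, consumer, and message names below are illustrative only:

```python
import queue
import threading

# The "broker": a shared queue mediating between clients that never
# talk to each other directly.
broker = queue.Queue()

def producer(messages):
    for m in messages:
        broker.put(m)        # publish without knowing who consumes
    broker.put(None)         # sentinel: no more messages

received = []

def consumer():
    while True:
        m = broker.get()
        if m is None:
            break
        received.append(m.upper())   # stand-in for real processing

t = threading.Thread(target=consumer)
t.start()
producer(["order-1", "order-2"])
t.join()
print(received)  # ['ORDER-1', 'ORDER-2']
```

The producer finishes even if the consumer is slow, which is the asynchronous, loosely coupled behavior described above; a real broker adds persistence and delivery guarantees on top of this shape.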
6. Cloud-Based Client/Server Architecture
Overview:
Cloud services act as the server, clients access via the internet.
Example:
SaaS platforms like Google Workspace or Microsoft 365; mobile apps connecting to cloud-hosted
APIs.
Advantages:
Elastic scalability.
Cost-effective via pay-per-use models.
High availability and disaster recovery.
Disadvantages:
Dependent on internet connectivity.
Data privacy and control concerns.
Possible vendor lock-in.
Comparison Table
Feature Two-Tier Three-Tier N-Tier P2P Queue-Based Cloud-Based
Scalability Low Moderate High High High Very High
Complexity Low Moderate High High Moderate Moderate
Performance High Moderate Variable Variable Variable High
Maintainability Low High Very High Moderate High High
Security Basic Improved Advanced Low Improved Advanced
Scenario                                 Recommended Model
Small, local business system             Two-Tier
Standard enterprise application          Three-Tier
Large-scale distributed system           N-Tier
Decentralized or collaborative system    Peer-to-Peer
Event-driven or asynchronous systems     Queue-Based
Mobile/Web application using cloud       Cloud-Based
TCP Client Alternatives:
TCP Client Alternatives refer to different methods or protocols that can be used instead of—or
alongside—TCP (Transmission Control Protocol) for client-side communication in networking
applications. While TCP is widely used due to its reliability and ordered delivery, there are several
alternatives that might offer benefits in terms of speed, simplicity, scalability, or specific use case
optimization.
Below is a detailed breakdown of TCP client alternatives:
1. UDP (User Datagram Protocol)
✅ Use Case: Real-time applications like VoIP, video streaming, gaming.
How It Works:
UDP is a connectionless protocol. It sends datagrams (packets) without establishing a connection and
does not guarantee delivery, order, or error checking.
Pros:
Low latency due to no connection setup.
Lower overhead than TCP.
Good for time-sensitive applications.
Cons:
No delivery guarantees.
No packet ordering or retransmission.
Less secure by default.
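A minimal sketch of UDP's connectionless exchange in Python: the client sends a datagram with no handshake, and the server simply replies to whatever address the datagram came from. The loopback address and the ping/PING payload are arbitrary choices for the example:

```python
import socket
import threading

def udp_echo_server(sock):
    # One datagram in, one datagram out -- no connection, no stream.
    data, addr = sock.recvfrom(1024)
    sock.sendto(data.upper(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # OS assigns a free port
port = server.getsockname()[1]
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"ping", ("127.0.0.1", port))   # no connect(), no handshake
reply, _ = client.recvfrom(1024)
print(reply)  # b'PING'
client.close()
```

Note that if the datagram were lost, recvfrom() would simply time out; nothing retransmits, which is exactly the trade-off listed under the cons.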
2. QUIC (Quick UDP Internet Connections)
✅ Use Case: Modern web apps, mobile apps (used by Google services, HTTP/3).
How It Works:
Built on top of UDP, QUIC adds TLS encryption, stream multiplexing, and improved congestion
control.
Pros:
Faster connection establishment than TCP.
Built-in encryption (TLS 1.3).
Connection migration (good for mobile devices).
Cons:
Not as widely supported as TCP yet.
Slightly more complex than TCP.
Can be blocked by restrictive firewalls.
3. SCTP (Stream Control Transmission Protocol)
✅ Use Case: Telecom signaling (like in 5G), multistreaming applications.
How It Works:
SCTP is message-oriented (like UDP) but provides reliable, ordered delivery (like TCP) and supports
multi-streaming and multi-homing.
Pros:
Supports multiple streams over one connection.
Better handling of packet loss.
Enhanced resilience (multi-homing).
Cons:
Less widely supported by OSs and routers.
Harder to implement than TCP/UDP.
Limited adoption in general-purpose apps.
4. WebSocket
✅ Use Case: Web-based real-time communication (chat apps, live updates).
How It Works:
WebSockets start with an HTTP Upgrade handshake, after which the underlying TCP connection stays
open for full-duplex, continuous data exchange between client and server.
Pros:
Full-duplex communication.
Works well with web clients (JavaScript).
Maintains persistent connection.
Cons:
Still relies on TCP under the hood.
May not suit high-throughput systems.
Firewall/proxy issues in some cases.
5. HTTP/2 and HTTP/3 (over TCP/QUIC respectively)
✅ Use Case: Modern web services and APIs.
How It Works:
HTTP/2 uses TCP but multiplexes streams to avoid head-of-line blocking.
HTTP/3 uses QUIC over UDP for improved performance and reliability.
Pros:
Stream multiplexing.
Built-in encryption and compression.
Used widely by browsers and APIs.
Cons:
More overhead than raw sockets.
Complexity in handling sessions and states.
6. MQTT (Message Queuing Telemetry Transport)
✅ Use Case: IoT, sensor networks, lightweight messaging systems.
How It Works:
MQTT is a lightweight messaging protocol over TCP (sometimes WebSockets). It uses a
publish/subscribe model.
Pros:
Extremely lightweight and low power.
Supports unreliable networks.
Great for small devices and mobile.
Cons:
Relies on a central broker.
Latency depends on broker performance.
Security must be explicitly configured.
7. CoAP (Constrained Application Protocol)
✅ Use Case: IoT applications with constrained devices and networks.
How It Works:
Runs over UDP and uses a REST-like model similar to HTTP. Supports resource observation and
multicast.
Pros:
Lightweight and efficient.
Designed for low-power, constrained devices.
Can work with DTLS for security.
Cons:
No TCP-style guarantees.
Limited to simple use cases.
Security and reliability require extra layers.
8. gRPC (Google Remote Procedure Call)
✅ Use Case: Microservices, high-performance APIs.
How It Works:
Built on HTTP/2, it uses Protocol Buffers (protobuf) for efficient serialization. Provides features like
streaming and authentication.
Pros:
High-performance communication.
Supports client/server streaming.
Cross-language compatibility.
Cons:
Requires more setup than simple TCP clients.
Overhead from protobuf serialization.
Not ideal for browser clients without proxying.
Comparison Table
Protocol Transport Reliable? Connection-Oriented? Best For
TCP TCP ✅ ✅ General purpose, reliable apps
UDP UDP ❌ ❌ Real-time apps, low latency
QUIC UDP ✅ ✅ Web apps, mobile networks
SCTP IP-based ✅ ✅ Telecom, multi-streaming
WebSocket TCP ✅ ✅ Browser-based real-time apps
HTTP/3 QUIC ✅ ✅ Modern web services
MQTT TCP ✅ ✅ IoT messaging
CoAP UDP ✅ (with retries) ✅ Constrained networks/devices
gRPC HTTP/2 ✅ ✅ Microservices, APIs
🔍 When to Choose an Alternative to TCP
Situation Recommended Alternative
Real-time video/audio UDP, QUIC
Browser-based chat/game WebSocket
Microservice communication gRPC
IoT sensors MQTT, CoAP
Mobile networks with frequent disconnects QUIC
Need for multiple data streams SCTP, HTTP/2
TCP Test Client, TCP Iterative server:
TCP Test Client and TCP Iterative Server: Detailed Explanation
In a TCP-based network communication system, the client initiates a connection to the server, which
listens for incoming requests and responds accordingly. A TCP Iterative Server handles one client at a
time in a sequential manner. A TCP Test Client is typically used to test connectivity and behavior of a
TCP server.
🔹 TCP Test Client
Purpose:
To connect to a TCP server.
Send requests (like messages or commands).
Receive responses from the server.
Useful for testing and debugging server behavior.
How It Works:
1. Create a TCP socket.
2. Connect to the server using IP address and port.
3. Send data to the server.
4. Receive response from the server.
5. Close the connection after communication is complete.
✅ Example in Python:
import socket
# Define server address and port
server_ip = '127.0.0.1'
server_port = 12345
# Create a TCP socket
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Connect to the server
client_socket.connect((server_ip, server_port))
# Send data
message = "Hello Server"
client_socket.sendall(message.encode())
# Receive response
response = client_socket.recv(1024)
print("Received from server:", response.decode())
# Close the connection
client_socket.close()
🔹 TCP Iterative Server
Purpose:
Accept and handle one client at a time, sequentially.
Simple to implement and suitable for low-load environments.
How It Works:
1. Create a socket and bind it to an IP/port.
2. Listen for incoming connections.
3. Accept a connection from a client.
4. Handle the client request (e.g., receive data and respond).
5. Close the connection and wait for the next client.
✅ Example in Python:
import socket
# Define IP and port
server_ip = '127.0.0.1'
server_port = 12345
# Create a TCP socket
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Bind to the address and port
server_socket.bind((server_ip, server_port))
# Start listening for connections (max 5 in queue)
server_socket.listen(5)
print(f"Server listening on {server_ip}:{server_port}")
while True:
    # Accept a new connection
    client_socket, client_addr = server_socket.accept()
    print(f"Connection from {client_addr}")
    # Receive data
    data = client_socket.recv(1024).decode()
    print(f"Received from client: {data}")
    # Send a response
    response = "Hello from server"
    client_socket.sendall(response.encode())
    # Close the client connection
    client_socket.close()
🔍 Key Characteristics of TCP Iterative Server
Feature Description
Concurrency ❌ No — handles one client at a time.
Complexity ✅ Simple and easy to implement.
Scalability ❌ Not suitable for high traffic.
Use Case Useful for testing, educational purposes, or lightweight apps.
Resource Usage Low — no threads or processes needed.
🔁 Typical Flow Diagram
CLIENT SERVER
| |
create socket() create socket()
| |
connect() ---------------------> accept()
| |
send() -----------------------> recv()
| |
recv() <----------------------- send()
| |
close() ---------------------> close()
🔄 Iterative Server vs. Concurrent Server
Feature Iterative Server Concurrent Server
Clients at a time One Multiple
Use of Threads/Processes No Yes (or async)
Code Complexity Low Higher
Performance Lower Higher
✅ Summary
A TCP Test Client is a tool used to connect to and interact with a server.
A TCP Iterative Server is a simple server that handles one client at a time, ideal for testing or
simple use cases.
For real-world applications with multiple clients, a concurrent server (using threads or
asynchronous I/O) is recommended.
TCP Concurrent server:
✅ TCP Concurrent Server — In Detail
A TCP Concurrent Server is designed to handle multiple clients simultaneously. This is typically
achieved using:
Threads
Processes
Asynchronous I/O (non-blocking)
This approach solves the scalability limitation of a TCP Iterative Server.
🔧 How a Concurrent Server Works
1. The server listens for incoming connections.
2. On accepting a connection, it spawns a new thread/process (or coroutine) to handle that
client.
3. The main server continues to listen for new connections.
4. Each client is serviced in parallel, allowing multiple clients to be handled concurrently.
✅ Example 1: Concurrent TCP Server in Python (Threaded)
import socket
import threading
# Server IP and Port
HOST = '127.0.0.1'
PORT = 12345
def handle_client(client_socket, address):
    print(f"[+] New connection from {address}")
    while True:
        try:
            data = client_socket.recv(1024)
            if not data:
                break
            print(f"[{address}] {data.decode()}")
            client_socket.sendall(b"Server received: " + data)
        except OSError:
            break
    print(f"[-] Connection from {address} closed.")
    client_socket.close()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((HOST, PORT))
    server.listen(5)
    print(f"[LISTENING] Server running on {HOST}:{PORT}")
    while True:
        client_socket, addr = server.accept()
        thread = threading.Thread(target=handle_client, args=(client_socket, addr))
        thread.start()
        print(f"[ACTIVE CONNECTIONS] {threading.active_count() - 1}")

if __name__ == "__main__":
    main()
✅ Example 2: Concurrent TCP Server in C (Using fork)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <arpa/inet.h>

#define PORT 12345
#define BUFFER_SIZE 1024

int main() {
    int server_fd, client_fd;
    struct sockaddr_in server_addr, client_addr;
    socklen_t addr_size;
    char buffer[BUFFER_SIZE];

    signal(SIGCHLD, SIG_IGN);   /* Avoid zombie children */

    server_fd = socket(AF_INET, SOCK_STREAM, 0);

    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(PORT);
    server_addr.sin_addr.s_addr = INADDR_ANY;

    bind(server_fd, (struct sockaddr*)&server_addr, sizeof(server_addr));
    listen(server_fd, 5);
    printf("Server listening on port %d...\n", PORT);

    while (1) {
        addr_size = sizeof(client_addr);
        client_fd = accept(server_fd, (struct sockaddr*)&client_addr, &addr_size);

        if (fork() == 0) {            /* Child process */
            close(server_fd);         /* Child doesn't need the listening socket */
            int bytes;
            while ((bytes = recv(client_fd, buffer, BUFFER_SIZE - 1, 0)) > 0) {
                buffer[bytes] = '\0';
                printf("Received: %s\n", buffer);
                send(client_fd, buffer, bytes, 0);
            }
            close(client_fd);
            exit(0);
        } else {
            close(client_fd);         /* Parent doesn't need the client socket */
        }
    }
    return 0;
}
🔍 Key Characteristics of a Concurrent TCP Server
Feature Description
Handles clients Simultaneously
Concurrency model Threads, Processes, or async I/O
Scalability Higher than iterative
Complexity More complex to implement and debug
Performance Better with many clients
🔁 Comparison: Iterative vs Concurrent Server
Feature Iterative Server Concurrent Server
Clients at a time One Many (in parallel)
Implementation Simple Complex
Use of Threads No Yes (or processes)
Real-time responsiveness Poor Good
Suitable for production? ❌ Only for testing/demo ✅ Yes
🔄 Alternatives for Concurrency
Method Language/Tech Stack Notes
Threads Python, Java Lightweight, shared memory
Processes C, Python (multiprocessing), Unix Safer, but more resource-heavy
Async I/O Node.js, Python (asyncio) Non-blocking, scalable
Multiplexing select(), poll() C/Unix-style I/O event handling
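The last row of the table, I/O multiplexing with select()/poll(), can be sketched with Python's selectors module: a single thread serves several clients from one event loop. The pump() helper and the "echo:" prefix are invented for this illustration:

```python
import selectors
import socket

# One event loop, one thread, many clients: the select()/poll() style.
sel = selectors.DefaultSelector()
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # OS assigns a free port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)
port = listener.getsockname()[1]

def pump():
    """Run one batch of ready events: accept new clients or echo their data."""
    for key, _ in sel.select(timeout=0.2):
        sock = key.fileobj
        if sock is listener:
            conn, _ = sock.accept()       # new client
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = sock.recv(1024)
            if data:
                sock.sendall(b"echo:" + data)
            else:                         # client closed its end
                sel.unregister(sock)
                sock.close()

# Two clients served concurrently by the same single thread.
c1 = socket.create_connection(("127.0.0.1", port))
c1.settimeout(2)
c2 = socket.create_connection(("127.0.0.1", port))
c2.settimeout(2)
for _ in range(3):
    pump()                               # accept both pending clients
c1.sendall(b"one")
c2.sendall(b"two")
for _ in range(3):
    pump()                               # echo back to each client
r1 = c1.recv(1024)
r2 = c2.recv(1024)
print(r1, r2)  # b'echo:one' b'echo:two'
c1.close()
c2.close()
```

No threads or processes are created; readiness notification from the kernel drives all the work, which is why this model scales to very large client counts.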
TCP preforked server:
✅ TCP Preforked Server — In Detail
A TCP Preforked Server is a type of concurrent server that pre-creates (forks) multiple child
processes at startup. These processes wait in parallel to handle incoming client connections.
🔹 Why Preforking?
Instead of forking or threading a new process for each client (as in a dynamic concurrent server), the
server creates a fixed number of processes ahead of time. This avoids:
The overhead of creating processes at runtime.
Latency spikes for new connections.
Exhaustion of system resources due to too many child processes.
🔧 Key Concepts
Master Process: Listens for incoming connections.
Child Processes: Accept and handle connections concurrently.
Fixed Pool: Number of child processes is set at server startup.
✅ Advantages
Feature Benefit
Performance Faster response to new connections
Resource Management Controlled number of processes
Predictable Load Easier to tune performance
Stability Fewer forks at runtime = fewer crashes
🔁 Architecture Overview
[Master Process]
-----------------------
| | | | | |
[P1] [P2] [P3] [P4] ... [Pn]
| | | | |
Accepts clients concurrently
✅ Example: TCP Preforked Server in C (POSIX)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/wait.h>
#include <signal.h>

#define PORT 12345
#define BACKLOG 10
#define NUM_CHILDREN 5
#define BUFFER_SIZE 1024

void handle_client(int client_fd) {
    char buffer[BUFFER_SIZE];
    int bytes;
    while ((bytes = recv(client_fd, buffer, sizeof(buffer) - 1, 0)) > 0) {
        buffer[bytes] = '\0';
        printf("Child [%d] received: %s\n", getpid(), buffer);
        send(client_fd, buffer, bytes, 0);   /* Echo back */
    }
    close(client_fd);
}

void child_process(int listen_fd) {
    struct sockaddr_in client_addr;
    socklen_t client_len = sizeof(client_addr);
    while (1) {
        int client_fd = accept(listen_fd, (struct sockaddr*)&client_addr, &client_len);
        if (client_fd >= 0) {
            handle_client(client_fd);
        }
    }
}

int main() {
    int listen_fd;
    struct sockaddr_in server_addr;

    listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(PORT);
    server_addr.sin_addr.s_addr = INADDR_ANY;

    bind(listen_fd, (struct sockaddr*)&server_addr, sizeof(server_addr));
    listen(listen_fd, BACKLOG);
    printf("Preforked server listening on port %d\n", PORT);

    /* Create a fixed number of child processes */
    for (int i = 0; i < NUM_CHILDREN; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            child_process(listen_fd);   /* Child never returns */
            exit(0);                    /* Just in case */
        }
    }

    /* Parent waits for children (or runs indefinitely) */
    while (1) {
        pause();   /* Keep running or implement signal handling */
    }
    close(listen_fd);
    return 0;
}
🔒 Synchronization Note
If all child processes call accept() at the same time, it can lead to race conditions. On most UNIX
systems, accept() is safely serialized using the kernel (often via a socket mutex), so only one child will
succeed.
For portable systems, use:
file locking (flock)
semaphores
or let the kernel handle it
✅ Pros and Cons of Preforked Servers
Pros Cons
Predictable resource usage Hard-coded process count (may under/overuse)
No runtime fork overhead Less flexible than thread pools
Reliable and stable Higher memory use than threads
Each process is isolated (safe) Harder to share resources across clients
🔁 Preforked vs Threaded vs Async Servers
Concurrency
Server Type Pros Cons
Model
Preforked Processes Isolation, stability Higher memory, less scalable
Race conditions, shared
Threaded Threads Lower overhead, faster
memory
Async High scalability, low
Event loop Complex programming model
(select/epoll) resource
📌 When to Use a Preforked Server
When process isolation is important (e.g., security).
When you expect moderate concurrency (not thousands of clients).
In POSIX/Linux systems where fork() is efficient.
For multi-core systems, where each process can use a separate core.
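The preforking idea can be sketched in Python on a POSIX system: the parent creates the listening socket, forks a fixed pool of children, and each child loops on accept() on the shared socket. This is a demo harness, not a production server; the pool size, the reply format, and the parent doubling as a test client are all choices made for the example:

```python
import os
import socket

NUM_CHILDREN = 3   # fixed pool size, chosen for the demo

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # OS assigns a free port
listener.listen(5)
port = listener.getsockname()[1]

children = []
for _ in range(NUM_CHILDREN):
    pid = os.fork()
    if pid == 0:
        # Child: inherits the listening socket and serves clients forever.
        while True:
            conn, _ = listener.accept()
            data = conn.recv(1024)
            conn.sendall(b"pid=%d:" % os.getpid() + data)
            conn.close()
        os._exit(0)   # unreachable; children are killed by the parent below
    children.append(pid)

# Parent doubles as a test client here; a real server would just supervise.
replies = []
for i in range(4):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"req%d" % i)
        replies.append(c.recv(1024))

for pid in children:
    os.kill(pid, 9)
    os.waitpid(pid, 0)
print(replies)
```

Each reply is tagged with the PID of the child that served it; running this repeatedly shows different children picking up connections with no explicit coordination, because the kernel hands each connection to exactly one blocked accept().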
no locking around accept:
When designing a TCP preforked server, one common question is:
"Is it safe to have multiple child processes calling accept() simultaneously without locking?"
✅ Short Answer:
Yes, on most modern UNIX-like systems (Linux, BSD, macOS), it is safe to have multiple processes
block on the same listening socket and call accept() without explicit locking.
🔍 Why It Works — Kernel-Level Synchronization
The operating system kernel internally serializes access to the listening socket.
Only one of the processes will succeed in accept() when a new connection arrives.
The others will remain blocked until the next connection.
💡 Example Behavior (Linux):
Imagine you have 5 child processes all calling:
int client_fd = accept(listen_fd, ...);
When a client connects:
The kernel wakes up one of the blocked processes.
That process returns from accept() with the new client socket.
The others remain blocked.
So the kernel effectively does this for you:
[client connects] → [wake up one process → accept() → handle client]
🚫 When You Do Need Locking
You need explicit locking only if:
You're using platforms that don't handle accept queue properly (rare now).
You want to limit which process handles a connection (e.g., based on custom logic).
You're managing shared resources in userspace after accept() (like database access or
memory structures).
✅ Best Practices Without Locking
1. Use a single shared listening socket:
o Pass the same listen_fd to all child processes via fork().
2. Avoid per-process listen sockets.
o They can cause race conditions or port binding errors.
3. Let the OS handle wake-up.
o Trust the kernel's socket wake-up mechanism to do the right thing.
🔄 Summary
Condition Locking Needed?
Linux, BSD, macOS with shared socket ❌ No
Using forked children ❌ No
Need precise control of accept ✅ Yes
Legacy OS without accept queue fairness ✅ Maybe
🛡️ Final Tip
If you're using preforking in a portable or real-time system and you're not sure about the platform
behavior, use a mutex, file lock, or semaphore around accept() just to be safe; in the vast majority of
modern production Linux/Unix cases, you don't need it.
TCP preforked server, no locking around accept:
✅ TCP Preforked Server — No Locking Around accept()
In a TCP preforked server, you can safely allow multiple child processes to call accept() on the same
listening socket concurrently without any explicit locking on modern UNIX-like systems (e.g., Linux,
BSD, macOS). The kernel handles the coordination.
🔧 What This Means
Each child process blocks on the same shared listen() socket:
int client_fd = accept(listen_fd, ...);
When a client connects:
The kernel wakes up only one process to handle the new connection.
All others remain blocked until the next connection arrives.
✅ Example: TCP Preforked Server (No Locking Around accept())
Here’s a minimal and complete C example of a preforked TCP server with no synchronization
around accept():
🔁 Server Code (C):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/types.h>
#include <sys/socket.h>

#define PORT 12345
#define BACKLOG 10
#define NUM_CHILDREN 5
#define BUFFER_SIZE 1024

void handle_client(int client_fd) {
    char buffer[BUFFER_SIZE];
    int bytes;
    while ((bytes = recv(client_fd, buffer, sizeof(buffer) - 1, 0)) > 0) {
        buffer[bytes] = '\0';
        printf("[PID %d] Received: %s\n", getpid(), buffer);
        send(client_fd, buffer, bytes, 0);   /* Echo back */
    }
    close(client_fd);
    printf("[PID %d] Connection closed\n", getpid());
}

void child_process(int listen_fd) {
    struct sockaddr_in client_addr;
    socklen_t client_len = sizeof(client_addr);
    int client_fd;
    while (1) {
        client_fd = accept(listen_fd, (struct sockaddr*)&client_addr, &client_len);
        if (client_fd < 0) {
            perror("accept");
            continue;
        }
        printf("[PID %d] Accepted connection\n", getpid());
        handle_client(client_fd);
    }
}

int main() {
    int listen_fd;
    struct sockaddr_in server_addr;

    /* Create socket */
    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) {
        perror("socket");
        exit(EXIT_FAILURE);
    }

    /* Bind */
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(PORT);
    server_addr.sin_addr.s_addr = INADDR_ANY;
    if (bind(listen_fd, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
        perror("bind");
        exit(EXIT_FAILURE);
    }

    /* Listen */
    if (listen(listen_fd, BACKLOG) < 0) {
        perror("listen");
        exit(EXIT_FAILURE);
    }
    printf("Server listening on port %d (Preforked, No Accept Locking)\n", PORT);

    /* Prefork children */
    for (int i = 0; i < NUM_CHILDREN; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            child_process(listen_fd);   /* Child process */
            exit(0);                    /* Should never reach here */
        }
    }

    /* Parent waits forever (or could monitor children) */
    while (1) {
        pause();
    }
    close(listen_fd);
    return 0;
}
🧠 How It Works
All child processes inherit the listening socket from the parent via fork().
They all call accept() on the same socket.
The kernel ensures only one process returns from accept() for each connection.
🔍 Is Locking Really Not Needed?
✅ Linux: Uses the "thundering herd" mitigation — only one process is awakened.
✅ BSD/macOS: Similar handling.
❌ Old or exotic systems: Might need explicit locking (rare today).
🛠️ Compile and Run
gcc -o prefork_server prefork_server.c
./prefork_server
In a separate terminal, test it using telnet or a custom TCP client:
telnet 127.0.0.1 12345
# or use netcat
nc 127.0.0.1 12345
You’ll see different child PIDs accepting different clients concurrently.
✅ Summary
Feature Value
Accept locking needed? ❌ No
Concurrent clients ✅ Yes (up to NUM_CHILDREN)
Kernel coordination ✅ Handled automatically
Suitable for production? ✅ With proper process limits
TCP preforked server, file locking around accept:
🔐 TCP Preforked Server with File Locking Around accept()
While modern UNIX systems (Linux, BSD, macOS) allow multiple processes to call accept() safely
without explicit synchronization, some scenarios or older UNIX systems may require manual locking
to avoid a "thundering herd" problem—where multiple processes are awakened, but only one
succeeds.
To force mutual exclusion around accept(), we can use file-based locking using:
flock() — advisory locking
fcntl() — POSIX-compatible locking
Here, we'll use flock() for simplicity and portability.
✅ Use Case for Locking Around accept()
Portability to systems with no kernel-level coordination
Debugging or simulating serialized accept()
Fine-grained control over connection acceptance
Logging or auditing which process handles which connection
✅ Example: TCP Preforked Server with File Locking Around accept() (C)
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <sys/file.h>
#include <arpa/inet.h>
#include <sys/types.h>
#include <sys/socket.h>

#define PORT 12345
#define BACKLOG 10
#define NUM_CHILDREN 5
#define BUFFER_SIZE 1024
#define LOCK_FILE "/tmp/accept.lock"

void handle_client(int client_fd) {
    char buffer[BUFFER_SIZE];
    int bytes;
    while ((bytes = recv(client_fd, buffer, sizeof(buffer) - 1, 0)) > 0) {
        buffer[bytes] = '\0';
        printf("[PID %d] Received: %s\n", getpid(), buffer);
        send(client_fd, buffer, bytes, 0);   /* Echo back */
    }
    close(client_fd);
    printf("[PID %d] Connection closed\n", getpid());
}

void child_process(int listen_fd, int lock_fd) {
    struct sockaddr_in client_addr;
    socklen_t client_len = sizeof(client_addr);
    int client_fd;
    while (1) {
        /* Acquire file lock before calling accept() */
        flock(lock_fd, LOCK_EX);
        client_fd = accept(listen_fd, (struct sockaddr*)&client_addr, &client_len);
        /* Release file lock after accept() */
        flock(lock_fd, LOCK_UN);
        if (client_fd < 0) {
            perror("accept");
            continue;
        }
        printf("[PID %d] Accepted connection\n", getpid());
        handle_client(client_fd);
    }
}

int main() {
    int listen_fd;
    struct sockaddr_in server_addr;

    /* Create listening socket */
    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) {
        perror("socket");
        exit(EXIT_FAILURE);
    }

    /* Allow reuse of address */
    int opt = 1;
    setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));

    /* Bind */
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(PORT);
    server_addr.sin_addr.s_addr = INADDR_ANY;
    if (bind(listen_fd, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
        perror("bind");
        exit(EXIT_FAILURE);
    }

    /* Listen */
    if (listen(listen_fd, BACKLOG) < 0) {
        perror("listen");
        exit(EXIT_FAILURE);
    }

    /* Create the lock file; children reopen it after fork() */
    int lock_fd = open(LOCK_FILE, O_CREAT | O_RDWR, 0666);
    if (lock_fd < 0) {
        perror("lock file open");
        exit(EXIT_FAILURE);
    }

    printf("Server listening on port %d with accept locking\n", PORT);

    /* Prefork child processes */
    for (int i = 0; i < NUM_CHILDREN; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* flock() locks belong to the open file description, which forked
               children share; each child must open() the lock file itself so
               that its lock actually excludes its siblings. */
            close(lock_fd);
            lock_fd = open(LOCK_FILE, O_RDWR);
            child_process(listen_fd, lock_fd);
            exit(0);   /* Should never reach */
        }
    }

    /* Parent process */
    while (1) {
        pause();   /* Wait forever */
    }
    close(listen_fd);
    close(lock_fd);
    return 0;
}
🧠 Test
1. Compile:
gcc -o prefork_server_locked prefork_server_locked.c
2. Run:
./prefork_server_locked
3. Connect clients:
telnet 127.0.0.1 12345
# or
nc 127.0.0.1 12345
Each client will be handled by one of the child processes, with accept() calls serialized via a shared
lock file (/tmp/accept.lock).
📊 Comparison
Feature Without Locking With File Locking
Simplicity Easier Slightly more complex
Portability Good (on modern systems) Better for older/unknown systems
Performance Higher (less overhead) Slight overhead from locking
Fairness OS-controlled Can be manually managed
Use case General production Controlled concurrency, legacy
🧠 Clean-up Tip
Remember to remove the lock file if your server stops:
rm /tmp/accept.lock
✅ Summary
Using file locking around accept() is a way to serialize client handling across forked
processes.
It’s not necessary on most modern UNIX systems, but is useful for control or legacy
compatibility.
flock() is simple and widely supported for this use.
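A Python sketch of the same pattern, using fcntl.flock() (POSIX only). One caveat worth noting: flock() locks belong to the open file description, and forked children share the parent's description, so each child opens the lock file itself after fork() to get an independent lock. The pool size, the temporary lock-file path, and the reply text are illustrative:

```python
import fcntl
import os
import socket
import tempfile

# A temporary lock file stands in for a fixed path like /tmp/accept.lock.
lock_path = tempfile.NamedTemporaryFile(delete=False).name

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # OS assigns a free port
listener.listen(5)
port = listener.getsockname()[1]

children = []
for _ in range(2):
    pid = os.fork()
    if pid == 0:
        # Each child opens its own file description for an independent lock.
        lock_fd = os.open(lock_path, os.O_RDWR)
        while True:
            fcntl.flock(lock_fd, fcntl.LOCK_EX)   # serialize accept()
            try:
                conn, _ = listener.accept()
            finally:
                fcntl.flock(lock_fd, fcntl.LOCK_UN)
            conn.sendall(b"ok from pid %d" % os.getpid())
            conn.close()
    children.append(pid)

# Parent acts as a test client, then tears down the pool and the lock file.
replies = []
for _ in range(3):
    with socket.create_connection(("127.0.0.1", port)) as c:
        replies.append(c.recv(1024))

for pid in children:
    os.kill(pid, 9)
    os.waitpid(pid, 0)
os.unlink(lock_path)
print(replies)
```

Only the child holding the lock sits in accept() at any moment; the others wait on the lock rather than on the socket, which is exactly the serialization the C example above enforces.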
TCP preforked server, thread locking around accept:
✅ TCP Preforked Server with Thread Locking Around accept()
This section looks at a TCP server that uses thread-level synchronization (mutexes) around
the accept() call.
But here's an important distinction:
🔍 "Preforked" vs "Multithreaded"
Term Description
Preforked Multiple processes via fork()
Multithreaded Multiple threads within a single process
🔸 If you're forking child processes, you cannot use thread-level mutexes (like pthread_mutex_t)
across processes — they're not shared by default.
So when you say "thread locking around accept() in a preforked server", you might mean:
✅ Option 1: Multithreaded TCP Server (Single Process, Threads, pthread_mutex_t)
If you want to use pthread_mutex_t to synchronize access to accept(), then the design is
multithreaded, not preforked.
🔁 Design
Main thread creates socket and listens.
Spawns multiple threads.
All threads call accept(), but only one at a time using a mutex lock.
✅ Example: Multithreaded TCP Server with Mutex Lock Around accept() (C)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <arpa/inet.h>
#define PORT 12345
#define BACKLOG 10
#define NUM_THREADS 5
#define BUFFER_SIZE 1024
int listen_fd;
pthread_mutex_t accept_mutex = PTHREAD_MUTEX_INITIALIZER;
void *thread_function(void *arg) {
struct sockaddr_in client_addr;
socklen_t client_len = sizeof(client_addr);
char buffer[BUFFER_SIZE];
int client_fd;
while (1) {
// Lock around accept
pthread_mutex_lock(&accept_mutex);
client_fd = accept(listen_fd, (struct sockaddr*)&client_addr, &client_len);
pthread_mutex_unlock(&accept_mutex);
if (client_fd < 0) {
perror("accept");
continue;
}
printf("[Thread %ld] Accepted connection\n", pthread_self());
int bytes;
while ((bytes = recv(client_fd, buffer, BUFFER_SIZE, 0)) > 0) {
buffer[bytes] = '\0';
printf("[Thread %ld] Received: %s\n", pthread_self(), buffer);
send(client_fd, buffer, bytes, 0);
}
close(client_fd);
printf("[Thread %ld] Connection closed\n", pthread_self());
}
return NULL;
}
int main() {
struct sockaddr_in server_addr;
pthread_t threads[NUM_THREADS];
// Create socket
listen_fd = socket(AF_INET, SOCK_STREAM, 0);
if (listen_fd < 0) {
perror("socket");
exit(EXIT_FAILURE);
}
// Reuse address
int opt = 1;
setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
// Bind
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(PORT);
server_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(listen_fd, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
perror("bind");
exit(EXIT_FAILURE);
}
// Listen
if (listen(listen_fd, BACKLOG) < 0) {
perror("listen");
exit(EXIT_FAILURE);
}
printf("Multithreaded server listening on port %d (with mutex around accept)\n", PORT);
// Start worker threads
for (int i = 0; i < NUM_THREADS; i++) {
pthread_create(&threads[i], NULL, thread_function, NULL);
}
// Join threads (never happens here)
for (int i = 0; i < NUM_THREADS; i++) {
pthread_join(threads[i], NULL);
}
close(listen_fd);
return 0;
}
🧠 Notes
pthread_mutex_lock() ensures only one thread enters accept() at a time.
Prevents the "thundering herd problem" in some systems where multiple threads might
wake up on the same incoming connection.
Most modern OS kernels handle this gracefully, so locking may not be needed — but it's
useful in teaching or older systems.
🔄 Summary
Server Type                    | Can use pthread_mutex? | Notes
Preforked (multi-process)      | ❌ No                  | Use flock() or semaphores instead
Multithreaded (single process) | ✅ Yes                 | Use pthread_mutex_t
🔧 To build a true preforked server with inter-process locking, use one of:
flock() (file locking)
fcntl() (POSIX advisory locking)
sem_open() (POSIX semaphores)
shm_open() + pthread_mutexattr_setpshared()
TCP preforked server, descriptor passing:
✅ TCP Preforked Server with Descriptor Passing
A TCP preforked server with descriptor passing is an advanced server model where:
🔁 The main (parent) process accepts the client connection, then passes the accepted socket
descriptor to one of the child processes via a UNIX domain socket.
This model gives the parent control over load balancing, logging, or supervision — while child
processes focus only on processing connections.
🧠 Why Use Descriptor Passing?
Benefit                | Description
Centralized accept     | Only the parent calls accept(), avoiding the thundering herd
Controlled delegation  | The parent chooses which child handles each connection
Better monitoring      | The parent can log, filter, or queue clients
Avoids race conditions | No locking or accept() serialization is needed
🔧 Key Mechanism: UNIX Domain Sockets + sendmsg() / recvmsg()
To send file descriptors (like socket FDs) between processes, use:
socketpair() or an AF_UNIX socket
sendmsg() + SCM_RIGHTS to transfer the descriptor
recvmsg() in the child to receive it
🛠️ Complete Example (in C)
🗂️ Structure
Parent:
o Creates listening TCP socket
o Accepts client connections
o Sends client socket to child via UNIX socket
Child:
o Waits to receive socket descriptor
o Handles the client connection
✅ prefork_descriptor_passing.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/types.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#define PORT 12345
#define BACKLOG 10
#define NUM_CHILDREN 3
#define UNIX_SOCK_PATH "/tmp/prefork.sock"
#define BUFFER_SIZE 1024
// Send file descriptor over UNIX domain socket
void send_fd(int unix_sock, int fd_to_send) {
struct msghdr msg = {0};
char buf[CMSG_SPACE(sizeof(fd_to_send))];
memset(buf, 0, sizeof(buf));
struct iovec io = { .iov_base = (void*)"F", .iov_len = 1 };
msg.msg_iov = &io;
msg.msg_iovlen = 1;
msg.msg_control = buf;
msg.msg_controllen = sizeof(buf);
struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_RIGHTS;
cmsg->cmsg_len = CMSG_LEN(sizeof(fd_to_send));
memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(fd_to_send));
msg.msg_controllen = cmsg->cmsg_len;
sendmsg(unix_sock, &msg, 0);
}
// Receive file descriptor over UNIX domain socket
int recv_fd(int unix_sock) {
struct msghdr msg = {0};
char m_buffer[1];
struct iovec io = { .iov_base = m_buffer, .iov_len = sizeof(m_buffer) };
char cmsgbuf[CMSG_SPACE(sizeof(int))];
msg.msg_control = cmsgbuf;
msg.msg_controllen = sizeof(cmsgbuf);
msg.msg_iov = &io;
msg.msg_iovlen = 1;
recvmsg(unix_sock, &msg, 0);
struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
int fd;
memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
return fd;
}
void handle_client(int client_fd) {
char buffer[BUFFER_SIZE];
int bytes;
while ((bytes = recv(client_fd, buffer, sizeof(buffer), 0)) > 0) {
buffer[bytes] = '\0';
printf("[Child %d] Received: %s\n", getpid(), buffer);
send(client_fd, buffer, bytes, 0);
}
close(client_fd);
}
int main() {
int tcp_fd, unix_sock[2 * NUM_CHILDREN];
struct sockaddr_in server_addr;
unlink(UNIX_SOCK_PATH);
// Create TCP socket
tcp_fd = socket(AF_INET, SOCK_STREAM, 0);
int opt = 1;
setsockopt(tcp_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(PORT);
server_addr.sin_addr.s_addr = INADDR_ANY;
bind(tcp_fd, (struct sockaddr*)&server_addr, sizeof(server_addr));
listen(tcp_fd, BACKLOG);
printf("[Parent] TCP Server listening on port %d\n", PORT);
// Prefork children and set up UNIX sockets
for (int i = 0; i < NUM_CHILDREN; i++) {
int sv[2];
if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0) {
perror("socketpair");
exit(EXIT_FAILURE);
}
pid_t pid = fork();
if (pid == 0) {
// Child process
close(sv[0]); // Close parent's end
while (1) {
int client_fd = recv_fd(sv[1]);
handle_client(client_fd);
}
exit(0);
}
// Parent process
close(sv[1]); // Close child's end
unix_sock[i] = sv[0];
}
// Parent accepts and distributes connections
int next_child = 0;
while (1) {
struct sockaddr_in client_addr;
socklen_t addrlen = sizeof(client_addr);
int client_fd = accept(tcp_fd, (struct sockaddr*)&client_addr, &addrlen);
if (client_fd < 0) continue;
printf("[Parent] Accepted connection, passing to child %d\n", next_child);
send_fd(unix_sock[next_child], client_fd);
close(client_fd); // Child owns the FD now
next_child = (next_child + 1) % NUM_CHILDREN; // Round-robin
}
close(tcp_fd);
return 0;
}
🧠 How to Compile & Run
gcc -o prefork_passfd prefork_descriptor_passing.c
./prefork_passfd
Then use telnet or nc:
nc 127.0.0.1 12345
Each client will be handled by one of the children, determined by the parent.
✅ Key Takeaways
Feature                 | Description
Parent accepts          | Avoids the thundering herd problem
Child handles           | Load is distributed via file descriptor passing
Descriptor transfer     | sendmsg() with SCM_RIGHTS over UNIX domain sockets
Round-robin dispatch    | Easily changed to weighted or conditional logic
Scalable and controlled | The parent can monitor, queue, or even reject connections
TCP concurrent server, one thread per client:
✅ TCP Concurrent Server — One Thread per Client
In this server model, the main thread listens for incoming connections, and for each accepted client,
it spawns a new thread to handle communication.
This is one of the simplest and most common ways to implement concurrency in a TCP server.
🔁 Overview: "One Thread Per Client"
Main Thread:
- Listens on socket
- For each client: accept() → create thread
Worker Thread:
- Handles communication with assigned client
- Exits when client disconnects
✅ Advantages
Feature           | Benefit
Easy to implement | Each client is handled independently in its own thread
True concurrency  | Multiple clients are served at once
Blocking I/O safe | Each thread blocks independently
⚠️ Disadvantages
Limitation               | Description
High memory/thread usage | Threads are relatively heavy (stack space, scheduling)
Not scalable to 1000s    | The OS may limit the number of concurrent threads
Requires thread safety   | Shared resources must be protected (e.g., with mutexes)
✅ Example: One Thread per Client TCP Server in C (POSIX, pthreads)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <arpa/inet.h>
#define PORT 12345
#define BACKLOG 10
#define BUFFER_SIZE 1024
void *handle_client(void *arg) {
int client_fd = *(int *)arg;
free(arg);
char buffer[BUFFER_SIZE];
int bytes;
printf("[Thread %ld] Client connected\n", pthread_self());
while ((bytes = recv(client_fd, buffer, sizeof(buffer), 0)) > 0) {
buffer[bytes] = '\0';
printf("[Thread %ld] Received: %s", pthread_self(), buffer);
send(client_fd, buffer, bytes, 0); // Echo back
}
printf("[Thread %ld] Client disconnected\n", pthread_self());
close(client_fd);
return NULL;
}
int main() {
int listen_fd, *client_fd;
struct sockaddr_in server_addr, client_addr;
socklen_t client_len = sizeof(client_addr);
// Create listening socket
listen_fd = socket(AF_INET, SOCK_STREAM, 0);
if (listen_fd < 0) {
perror("socket");
exit(EXIT_FAILURE);
}
// Allow reuse of address
int opt = 1;
setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
// Bind
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(PORT);
server_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(listen_fd, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
perror("bind");
close(listen_fd);
exit(EXIT_FAILURE);
}
// Listen
if (listen(listen_fd, BACKLOG) < 0) {
perror("listen");
close(listen_fd);
exit(EXIT_FAILURE);
}
printf("TCP Concurrent Server (1 thread/client) listening on port %d\n", PORT);
// Main accept loop
while (1) {
client_fd = malloc(sizeof(int));
if (!client_fd) {
perror("malloc");
continue;
}
*client_fd = accept(listen_fd, (struct sockaddr*)&client_addr, &client_len);
if (*client_fd < 0) {
perror("accept");
free(client_fd);
continue;
}
pthread_t tid;
if (pthread_create(&tid, NULL, handle_client, client_fd) != 0) {
perror("pthread_create");
close(*client_fd);
free(client_fd);
continue;
}
// Optionally detach to auto-cleanup
pthread_detach(tid);
}
close(listen_fd);
return 0;
}
🧠 How to Compile and Run
gcc -pthread -o thread_server thread_server.c
./thread_server
Then, connect using:
telnet 127.0.0.1 12345
# or
nc 127.0.0.1 12345
🧠 Threading Notes
pthread_detach() makes threads clean themselves up when done (avoids memory leaks).
malloc() is used to pass the client socket safely to the thread function.
For production, implement connection limits, timeouts, and error handling.
✅ Summary
Component       | Role
Main thread     | Accepts incoming connections
New thread      | Created for each client
Thread function | Handles client I/O (recv/send)
Cleanup         | Threads detach and exit on close