
Multi-Region Apache Kafka Deployment Guide: Setting Up KRaft Clusters Across Amsterdam and New York VPS with Zero-Downtime Failover

Building a robust multi-region Kafka infrastructure is essential for organizations requiring high availability and disaster recovery capabilities. This comprehensive guide demonstrates how to deploy Apache Kafka in KRaft mode across Amsterdam and New York VPS instances, using the Confluent Platform 7.6 images (which ship Apache Kafka 3.6), complete with MirrorMaker 2 for cross-region replication, Schema Registry for data governance, and TLS security.

Prerequisites

Before beginning this deployment, ensure you have:

  • 2 VPS instances: one in Amsterdam and one in New York, each with at least 4GB RAM, 4 vCPUs, and 50GB NVMe storage
  • Ubuntu 24.04 LTS installed on both servers
  • Root or sudo access on both instances
  • Docker and Docker Compose installed
  • Basic understanding of Kafka concepts and networking

Resource Requirements: For production workloads, consider using Onidel VPS in Amsterdam and Onidel VPS in New York with high-performance EPYC Milan processors and HA NVMe storage for optimal performance.

Step 1: Initial Server Setup and Security

Start by updating both servers and installing required packages:

# Update system packages
sudo apt update && sudo apt upgrade -y

# Install Docker and Docker Compose
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
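# Log out and back in (or run 'newgrp docker') so the docker group change takes effect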

# Install Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Create working directory
mkdir -p ~/kafka-multi-region && cd ~/kafka-multi-region
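
The KRaft brokers configured in Steps 3 and 4 each need a cluster ID. One way to generate the two values up front is with the kafka-storage tool bundled in the cp-kafka image (the Confluent images accept a command override); run it once per region and note the output:

# Generate a unique KRaft cluster ID (run twice: one ID for Amsterdam, one for New York)
docker run --rm confluentinc/cp-kafka:7.6.0 kafka-storage random-uuid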

Step 2: Generate TLS Certificates for Secure Communication

Create a certificate authority and generate certificates for both regions:

# Generate CA private key
openssl genrsa -out ca-key.pem 4096

# Create CA certificate
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/CN=Kafka-CA"

# Generate server certificate for Amsterdam (include SANs so client hostname verification succeeds)
openssl genrsa -out amsterdam-key.pem 4096
openssl req -subj "/CN=kafka-amsterdam" -sha256 -new -key amsterdam-key.pem -out amsterdam.csr
openssl x509 -req -days 365 -in amsterdam.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out amsterdam.pem \
  -extfile <(printf "subjectAltName=DNS:kafka-amsterdam,IP:YOUR_AMSTERDAM_IP")

# Generate server certificate for New York
openssl genrsa -out newyork-key.pem 4096
openssl req -subj "/CN=kafka-newyork" -sha256 -new -key newyork-key.pem -out newyork.csr
openssl x509 -req -days 365 -in newyork.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out newyork.pem \
  -extfile <(printf "subjectAltName=DNS:kafka-newyork,IP:YOUR_NEWYORK_IP")
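
The Confluent broker images in Steps 3 and 4 expect JKS keystores plus plain-text credential files under ./certs (mounted into the container at /etc/kafka/secrets). A minimal conversion sketch, assuming keytool from a JDK is available on each host and using a placeholder password of changeit that you should replace (Amsterdam shown; repeat on New York with the newyork files):

# Bundle the signed certificate and key into a PKCS12 keystore
mkdir -p certs
openssl pkcs12 -export -in amsterdam.pem -inkey amsterdam-key.pem -certfile ca.pem \
  -name kafka-amsterdam -out certs/kafka.keystore.p12 -password pass:changeit

# Convert the PKCS12 bundle into the JKS keystore referenced by docker-compose
keytool -importkeystore -srckeystore certs/kafka.keystore.p12 -srcstoretype PKCS12 \
  -srcstorepass changeit -destkeystore certs/kafka.keystore.jks -deststorepass changeit -noprompt

# Import the CA into a truststore so each cluster trusts certificates signed by it
keytool -import -trustcacerts -alias kafka-ca -file ca.pem \
  -keystore certs/kafka.truststore.jks -storepass changeit -noprompt

# Plain-text credential files read by the cp-kafka image
echo "changeit" > certs/keystore_creds
echo "changeit" > certs/key_creds
echo "changeit" > certs/truststore_creds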

Step 3: Amsterdam Kafka Cluster Deployment

Create the Docker Compose configuration for the Amsterdam cluster:

# amsterdam-docker-compose.yml
version: '3.8'
services:
  kafka-amsterdam:
    image: confluentinc/cp-kafka:7.6.0
    hostname: kafka-amsterdam
    container_name: kafka-amsterdam
    ports:
      - "9092:9092"
      - "9093:9093"
    environment:
      # Unique per cluster; generate with: kafka-storage random-uuid (see Step 1)
      CLUSTER_ID: 'REPLACE_WITH_AMSTERDAM_CLUSTER_ID'
      KAFKA_NODE_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL
      # PLAINTEXT stays on the internal Docker network; SSL is the public cross-region listener
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-amsterdam:9092,SSL://YOUR_AMSTERDAM_IP:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,CONTROLLER://0.0.0.0:9094
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka-amsterdam:9094
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      # Single-broker cluster: internal topics cannot use the default replication factor of 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.keystore.jks
      KAFKA_SSL_KEYSTORE_CREDENTIALS: keystore_creds
      KAFKA_SSL_KEY_CREDENTIALS: key_creds
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.truststore.jks
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: truststore_creds
    volumes:
      - ./certs:/etc/kafka/secrets
      - kafka-amsterdam-data:/var/lib/kafka/data

  schema-registry-amsterdam:
    image: confluentinc/cp-schema-registry:7.6.0
    hostname: schema-registry-amsterdam
    depends_on:
      - kafka-amsterdam
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry-amsterdam
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'kafka-amsterdam:9092'

volumes:
  kafka-amsterdam-data:
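
Once the stack is running (startup commands are in Step 7), you can confirm the single-node KRaft quorum is healthy with the kafka-metadata-quorum tool shipped with Kafka 3.3+ (assuming it is on the container's PATH, as it is in recent cp-kafka releases):

# Show the KRaft controller quorum status from inside the Amsterdam broker
docker exec kafka-amsterdam kafka-metadata-quorum --bootstrap-server localhost:9092 describe --status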

Step 4: New York Kafka Cluster Deployment

Deploy the New York cluster with a similar configuration, but with its own node ID and cluster ID and the same SSL settings:

# newyork-docker-compose.yml
version: '3.8'
services:
  kafka-newyork:
    image: confluentinc/cp-kafka:7.6.0
    hostname: kafka-newyork
    container_name: kafka-newyork
    ports:
      - "9092:9092"
      - "9093:9093"
    environment:
      # Unique per cluster; generate with: kafka-storage random-uuid (see Step 1)
      CLUSTER_ID: 'REPLACE_WITH_NEWYORK_CLUSTER_ID'
      KAFKA_NODE_ID: 2
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-newyork:9092,SSL://YOUR_NEWYORK_IP:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,CONTROLLER://0.0.0.0:9094
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_QUORUM_VOTERS: 2@kafka-newyork:9094
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      # SSL settings for the public listener, matching the Amsterdam broker
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.keystore.jks
      KAFKA_SSL_KEYSTORE_CREDENTIALS: keystore_creds
      KAFKA_SSL_KEY_CREDENTIALS: key_creds
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.truststore.jks
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: truststore_creds
    volumes:
      - ./certs:/etc/kafka/secrets
      - kafka-newyork-data:/var/lib/kafka/data

  schema-registry-newyork:
    image: confluentinc/cp-schema-registry:7.6.0
    hostname: schema-registry-newyork
    depends_on:
      - kafka-newyork
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry-newyork
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'kafka-newyork:9092'

volumes:
  kafka-newyork-data:

Step 5: MirrorMaker 2 Cross-Region Replication

Configure MirrorMaker 2 for bidirectional replication between regions:

# mm2.properties
clusters = amsterdam, newyork
amsterdam.bootstrap.servers = AMSTERDAM_IP:9093
newyork.bootstrap.servers = NEWYORK_IP:9093

# Cross-region replication flows
amsterdam->newyork.enabled = true
newyork->amsterdam.enabled = true

# Topic replication patterns
amsterdam->newyork.topics = .*
newyork->amsterdam.topics = .*

# Security configuration
amsterdam.security.protocol=SSL
newyork.security.protocol=SSL
amsterdam.ssl.truststore.location=/etc/kafka/secrets/kafka.truststore.jks
amsterdam.ssl.truststore.password=changeit
newyork.ssl.truststore.location=/etc/kafka/secrets/kafka.truststore.jks
newyork.ssl.truststore.password=changeit

# Replication factor for MirrorMaker's internal and mirrored topics.
# Each region runs a single broker, so these must be 1; raise them if you add brokers.
replication.factor=1
offset-syncs.topic.replication.factor=1
heartbeats.topic.replication.factor=1
checkpoints.topic.replication.factor=1
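
The properties file still needs a MirrorMaker 2 process to execute it. One approach, sketched below, is to run the connect-mirror-maker tool bundled in the cp-kafka image on either server (or on both for redundancy); the container name and mount paths are illustrative:

# Run MirrorMaker 2 in a container, mounting the properties file and the truststore
docker run -d --name mirrormaker2 \
  -v "$(pwd)/mm2.properties:/etc/kafka/mm2.properties" \
  -v "$(pwd)/certs:/etc/kafka/secrets" \
  confluentinc/cp-kafka:7.6.0 \
  connect-mirror-maker /etc/kafka/mm2.properties

# Follow the logs to confirm both replication flows start
docker logs -f mirrormaker2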

Step 6: Zero-Downtime Failover Configuration

Implement automated failover using health checks and DNS updates:

# Create failover script
cat > failover-check.sh << 'EOF'
#!/bin/bash

AMSTERDAM_BROKER="AMSTERDAM_IP:9093"
NEWYORK_BROKER="NEWYORK_IP:9093"

# Health check function
check_broker() {
  kafka-broker-api-versions --bootstrap-server $1 --command-config client-ssl.properties >/dev/null 2>&1
  return $?
}

# Primary failover logic
if ! check_broker $AMSTERDAM_BROKER; then
  echo "Amsterdam cluster down, switching to New York"
  # Update DNS or load balancer configuration
  # Notify monitoring systems
fi
EOF

chmod +x failover-check.sh
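
The health check above and the client commands in Step 7 reference a client-ssl.properties file that has not been defined yet. A minimal sketch, reusing the truststore and placeholder password from Step 2 (adjust the path to wherever the truststore lives on the machine running the tools):

# client-ssl.properties
security.protocol=SSL
ssl.truststore.location=/path/to/certs/kafka.truststore.jks
ssl.truststore.password=changeit

To run the check automatically, schedule it with cron, for example every minute:

# crontab entry (adjust the script path and log location to your setup)
* * * * * /home/youruser/kafka-multi-region/failover-check.sh >> /var/log/kafka-failover.log 2>&1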

Step 7: Start and Verify Deployment

Launch both clusters and verify connectivity:

# Start Amsterdam cluster
docker-compose -f amsterdam-docker-compose.yml up -d

# Start New York cluster (on NY server)
docker-compose -f newyork-docker-compose.yml up -d

# Verify cluster health
docker exec kafka-amsterdam kafka-topics --bootstrap-server localhost:9092 --list

# Test cross-region replication (requires the Kafka CLI tools on the client machine, or run them via docker)
kafka-console-producer --bootstrap-server AMSTERDAM_IP:9093 --topic test-topic --producer.config client-ssl.properties
kafka-console-consumer --bootstrap-server NEWYORK_IP:9093 --topic amsterdam.test-topic --from-beginning --consumer.config client-ssl.properties
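
You can also confirm that MirrorMaker 2 has created the mirrored topics on the remote cluster; replicated topics carry the source cluster's name as a prefix:

# Replicated Amsterdam topics show up on the New York cluster with an "amsterdam." prefix
kafka-topics --bootstrap-server NEWYORK_IP:9093 --command-config client-ssl.properties --list | grep "^amsterdam\."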

Best Practices

  • Disable or firewall off the PLAINTEXT listener once everything works; only the SSL listener on 9093 should be reachable from outside the Docker network.
  • Keep the MirrorMaker 2 replication factors aligned with the broker count in each region, and raise them as you add brokers.
  • Run the failover health check from a third location (or from both regions) so a regional outage does not also take out the check itself.
  • Monitor MirrorMaker 2 replication lag and broker disk usage, and rotate the TLS certificates well before their 365-day validity expires.
  • Rehearse failover periodically by stopping one cluster and confirming clients can switch to the mirrored topics in the other region.

Conclusion

This multi-region Kafka deployment provides enterprise-grade reliability with zero-downtime failover capabilities across Amsterdam and New York regions. The combination of KRaft mode, MirrorMaker 2 replication, and TLS security ensures your streaming data infrastructure can handle both planned maintenance and unexpected outages.

For organizations requiring even lower latency or specialized compliance requirements, explore our regional comparison guide to determine the optimal VPS placement for your specific workloads. The high-performance infrastructure and advanced networking features available with premium VPS instances make them ideal for mission-critical Kafka deployments.
