Simple High Availability website with Kubernetes

In the last post I described a simple setup that manually runs two instances of the same server to keep the website available even if one of them fails. Now I have tried the same with Kubernetes. Kubernetes provides many of these things out of the box, which makes parts of the setup simpler, but it seems to require more RAM on the nodes.

What we want is a webserver with storage for user-uploaded files and a database:

But it should be highly available, i.e. distributed over multiple servers so that it continues to run even if one of them fails. With Kubernetes it will look like this:

Now let’s install this step by step:

Linux

We need three Linux nodes with Debian or Ubuntu. They should have 4GB of RAM or more (it might work with 2GB, but then there won’t be much RAM left for your applications). And we need to install wireguard and open-iscsi:

apt-get update
apt install -y wireguard open-iscsi
systemctl enable iscsid
systemctl start iscsid
modprobe iscsi_tcp
echo "iscsi_tcp" | sudo tee /etc/modules-load.d/iscsi_tcp.conf

K3s – a small Kubernetes distribution

Kubernetes can be installed with a single command. K3s can also automatically set up WireGuard encryption between the nodes:

curl -sfL https://get.k3s.io | sh -s - server --cluster-init --flannel-backend=wireguard-native

After it has started, we need the IP and the token of this first node. The token can be retrieved using

cat /var/lib/rancher/k3s/server/node-token

Now we can start it on the other two nodes by running this command on node 2 and 3:

curl -sfL https://get.k3s.io | sh -s - server --server https://<your node1 IP>:6443  --flannel-backend=wireguard-native --token <your node1 token>

This way we have three nodes that can communicate with each other using an encrypted Wireguard connection.
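To verify that all three nodes have joined the cluster, you can run kubectl on node 1; after a short while it should list three nodes in the Ready state:

kubectl get nodes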

Longhorn (Synchronized File System)

For several purposes we want to have the same files on all nodes, e.g. when a user uploads a file, it should be available on all three nodes. For this we install Longhorn. Just run this on node 1; it will be installed on the other nodes automatically:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm repo add longhorn https://charts.longhorn.io
helm repo update
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.5.1/deploy/longhorn.yaml
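To check that Longhorn came up, you can list its pods and the storage class it registers (longhorn-system is the namespace used by the Longhorn manifest):

kubectl get pods -n longhorn-system
kubectl get storageclass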

Traefik

K3s already contains Traefik, which can create Let’s Encrypt certificates and route traffic to our webserver. When a certificate is created, it should be stored in a place where Longhorn can replicate it to the other nodes: a persistent volume. First we need to create this persistent volume. Create a file traefik-pvc.yaml on node 1 with this content:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: traefik
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
  storageClassName: longhorn

Then run “kubectl apply -f traefik-pvc.yaml” to create this persistent volume claim.
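You can verify that Longhorn has bound the volume; the claim should show the status Bound (depending on the storage class settings it might only bind once Traefik actually mounts it):

kubectl get pvc traefik -n kube-system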

Afterward we have to configure Traefik. Because it is already included in K3s, we have to edit the existing configuration using the command

kubectl edit deployment traefik -n kube-system

We have to make three changes:

args

In the “args” section we have to add some arguments to specify that we want to create Let’s Encrypt certificates for our domains:

        - --certificatesresolvers.default.acme.httpchallenge=true
        - --certificatesresolvers.default.acme.httpchallenge.entrypoint=web
        - --certificatesresolvers.default.acme.email=<your email address>     
        - --certificatesresolvers.default.acme.caserver=https://acme-v02.api.letsencrypt.org/directory
        - --certificatesresolvers.default.acme.storage=/data/acme.json  

Then, above the “priorityClassName” section we insert a block that will set the file permissions of the acme.json in the persistent volume correctly so that Traefik can access it:

      initContainers:
      - command:
        - sh
        - -c
        - touch /data/acme.json; chmod -v 600 /data/acme.json
        image: busybox:latest
        imagePullPolicy: Always
        name: volume-permissions
        resources: {}
        securityContext:
          runAsGroup: 65532
          runAsNonRoot: true
          runAsUser: 65532
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
      priorityClassName: system-cluster-critical

Then, in the existing “securityContext” section we add an “fsGroup” line so that the persistent volume is mounted with this group id:

      securityContext:
        fsGroup: 65532
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532

And finally, in the “volumes” section we change the volume “data” to use our new persistent volume:

    volumes:                                                                                                
      - name: data
        persistentVolumeClaim:
          claimName: traefik
      - emptyDir: {}                                                                                        
        name: tmp

After saving these changes, Kubernetes should automatically restart Traefik. You can see all existing “pods” using

kubectl get pods -A

If you want to restart Traefik, you can do this by just deleting it:

kubectl delete pod <traefik pod name> -n kube-system

Container Registry

To be able to deploy our own containers, e.g. our PHP webserver, we need a container registry, i.e. a service from which our nodes can pull container images when they need to start a service. Installing it is quite simple. First we create a separate namespace:

kubectl create namespace registry

Then we create a file registry-all.yaml with its configuration. At the end you have to replace “registry.yourdomain.com” with the real domain of your registry. You also have to create DNS entries with the name “registry.yourdomain.com” pointing to the IPs of all three nodes (i.e. three A records).

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
  namespace: registry
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
  namespace: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: registry-storage
          mountPath: /var/lib/registry
      volumes:
      - name: registry-storage
        persistentVolumeClaim:
          claimName: registry-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: registry-service
  namespace: registry
spec:
  type: NodePort
  ports:
  - port: 5000
    targetPort: 5000
    nodePort: 30500  # Choose a port between 30000-32767
  selector:
    app: registry
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: registry-ingress
  namespace: registry
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.tls.certresolver: default
    traefik.ingress.kubernetes.io/service.serverPort: "5000"
spec:
  tls:
  - hosts:
    - registry.yourdomain.com
  rules:
  - host: registry.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: registry-service
            port:
              number: 5000

Then install it using

kubectl apply -f registry-all.yaml

That’s all. Traefik will now create a Let’s Encrypt certificate for it and you should be able to access it at https://registry.yourdomain.com. In the future you should add authentication to the registry; I will try to add documentation for that here.
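To test it, you can query the registry’s HTTP API. A fresh registry should answer with an empty repository list (something like {"repositories":[]}):

curl https://registry.yourdomain.com/v2/_catalog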

Galera Cluster (Distributed MariaDB database)

Galera Cluster is a MariaDB (MySQL compatible) database where each node is a master, i.e. the nodes sync with each other and you can perform read and write operations on all of them. Thanks to an existing Helm chart from Bitnami, it is easy to install. Just run this on node 1:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
kubectl create namespace galera

Then create a file galera-values.yaml:

## galera-values.yaml

## Global Docker image parameters
##
image:
  registry: docker.io
  repository: bitnami/mariadb-galera
  tag: 10.11.4-debian-11-r0
  pullPolicy: IfNotPresent
  debug: false

## Kubernetes resource requests and limits
##
resources:
  requests:
    memory: 512Mi
    cpu: 250m
  limits:
    memory: 1024Mi
    cpu: 500m

## Persistence configuration
##
persistence:
  enabled: true
  storageClass: longhorn
  accessModes:
    - ReadWriteOnce
  size: 8Gi

## MariaDB configuration
##
auth:
  rootPassword: my-root-password
  replicationUser: repl_user
  replicationPassword: my-repl-password

## Service configuration
##
service:
  type: ClusterIP

## Number of replicas
##
replicaCount: 3

## Galera configuration
##
galera:
  clusterBootstrap: true

Then run the following (the first three commands make the K3s kubeconfig available to Helm):

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config

helm install my-galera bitnami/mariadb-galera \
  --namespace galera \
  --values galera-values.yaml

Now Galera should run on all three nodes. You can check it using

kubectl get pods -A -o wide

It also uses Longhorn persistent volumes, which makes re-syncing a restarted node faster, because that node already has the data from before the failure.

To read the root password for the database from the Kubernetes secret, you can use this command:

kubectl get secret --namespace galera my-galera-mariadb-galera -o jsonpath="{.data.mariadb-root-password}" | base64 --decode

To connect to the database and e.g. create databases and tables, you can use this:

kubectl exec -it my-galera-mariadb-galera-1 -n galera -- bash

mysql -u root -p

Now you can create an example database that we can use for our PHP example program later:

CREATE DATABASE myappdb;
CREATE USER 'myappuser'@'%' IDENTIFIED BY 'myapppassword';
GRANT ALL PRIVILEGES ON myappdb.* TO 'myappuser'@'%';
FLUSH PRIVILEGES;
EXIT;

PHP Webserver

Now we have everything we need to run our PHP webserver. Here is a simple example PHP project that we can deploy. Create a directory “php-app” and save a file index.php into it with this content:

<?php
$servername = getenv('DB_HOST');
$username = getenv('DB_USER');
$password = getenv('DB_PASSWORD');
$dbname = getenv('DB_NAME');

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);

// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
echo "Connected successfully to Galera Cluster";

// Close connection
$conn->close();
?>

And create a Dockerfile in the same directory:

FROM php:8.1-apache
RUN docker-php-ext-install mysqli
COPY . /var/www/html/
RUN chown -R www-data:www-data /var/www/html

Now we have to build a container image from it and upload it to our container registry:

apt-get install -y docker.io
docker build -t registry.yourdomain.com/php-app:latest .
docker push registry.yourdomain.com/php-app:latest
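If the push worked, the registry should now know the image. A quick check against the registry API might look like this (it should list the “latest” tag):

curl https://registry.yourdomain.com/v2/php-app/tags/list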

Next we have to give the database credentials to our PHP script by creating a Kubernetes secret (here with the root user; you could also use the myappuser account created above):

kubectl create secret generic galera-credentials \
  --from-literal=DB_HOST=my-galera-mariadb-galera.galera.svc.cluster.local \
  --from-literal=DB_USER=root \
  --from-literal=DB_PASSWORD=<your password> \
  --from-literal=DB_NAME=myappdb

Now it is time to point www.yourdomain.com to the IPs of all three nodes (i.e. three A records). Add them in your DNS server.

Then create a file php-all.yaml with this content, which uses that secret:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: php-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php-app
  template:
    metadata:
      labels:
        app: php-app
    spec:
      containers:
      - name: php-container
        image: registry.yourdomain.com/php-app:latest
        ports:
        - containerPort: 80
        envFrom:
        - secretRef:
            name: galera-credentials
        volumeMounts:
        - name: php-storage
          mountPath: /var/www/html/uploads
      volumes:
      - name: php-storage
        persistentVolumeClaim:
          claimName: php-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: php-service
spec:
  selector:
    app: php-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: php-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.tls.certresolver: default
spec:
  tls:
  - hosts:
    - www.yourdomain.com
  rules:
  - host: www.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: php-service
            port:
              number: 80

and install it using

kubectl apply -f php-all.yaml

That’s all. Now you should have a working 3-node Kubernetes cluster that will continue to work even if one node fails. And when the node comes back, it should automatically be re-integrated. You can access it using

$ curl https://www.yourdomain.com
Connected successfully to Galera Cluster
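If you want to test the failover, one simple approach (just a sketch, assuming you can safely take a node offline) is to stop K3s on one node and watch how the cluster reacts from another node:

# on node 2 or 3: simulate a failure
systemctl stop k3s

# on another node: the stopped node should show NotReady after a while,
# and its pods will eventually be rescheduled onto the remaining nodes
kubectl get nodes
kubectl get pods -A -o wide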

Cheap high availability website

Because all hosters, even the largest, sometimes have outages, I wanted to find a way to make a website really reliable without much cost. It should use multiple hosters and multiple DNS services so that it even works if a whole hoster fails.

In this first attempt it will be a static website, i.e. DNS and a website with HTTPS (using Let’s Encrypt), built with Docker, Traefik and Nginx.

DNS

First we need a reliable DNS server with an API that Traefik can use for the Let’s Encrypt DNS-01 challenge. PowerDNS is free and supported by Traefik as a DNS-01 provider. It will be the master for two secondary name server clusters at two different hosters:

“Hoster 1 DNS” and “Hoster 2 DNS” will be secondary name servers that receive all changes from our PowerDNS server. We will make the PowerDNS server hidden, so that its IP is not known and it is harder or impossible to attack. Only the name servers of “Hoster 1” and “Hoster 2” are published for the domain.

PowerDNS

For setting up PowerDNS we will use Docker Compose with three services: Traefik, MariaDB and PowerDNS. This is a basic configuration:

version: "3.7"
services:
  reverse-proxy:
    image: traefik:v3.1
    restart: always
    command:
      - "--providers.docker"
      - "--providers.docker.exposedByDefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.mydnsresolver-powerdns.acme.dnschallenge=true"
      - "--certificatesresolvers.mydnsresolver-powerdns.acme.dnschallenge.provider=pdns"
      - "--certificatesresolvers.mydnsresolver-powerdns.acme.email=yourletsencryptemail@mail.com"
      - "--certificatesresolvers.mydnsresolver-powerdns.acme.caserver=https://acme-v02.api.letsencrypt.org/directory"
      - "--certificatesresolvers.mydnsresolver-powerdns.acme.storage=/letsencrypt/acme_pdns.json"
    labels:
      - "traefik.enable=true"
      - "traefik.http.middlewares.ipwhitelist.ipwhitelist.sourcerange=otherwebserver/32,172.22.0.0/16"
    environment:
      - "PDNS_API_KEY=yourkey"
      - "PDNS_API_URL=http://powerdns:8081"
      - "PDNS_PROPAGATION_TIMEOUT=300"
    ports:
      - "80:80"
      - "443:443"
    logging:
      options:
        max-size: "100M"
        max-file: "10"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - letsencrypt_prod:/letsencrypt

  pdns-db:
    build:
      context: ./pdns-db
    environment:
      MYSQL_ROOT_PASSWORD: yourdbrootpassword  # Replace with your root password
      MYSQL_DATABASE: powerdns
      MYSQL_USER: powerdns
      MYSQL_PASSWORD: yourdbuserpassword  # Replace with your database user password
    volumes:
      - pdns-db-data:/var/lib/mysql
    restart: always

  powerdns:
    build:
      context: ./powerdns
      dockerfile: Dockerfile
    volumes:
      - pdns-data:/data
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8081/tcp"
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.services.powerdns.loadbalancer.server.port=8081"
      - "traefik.http.routers.powerdns-https.rule=Host(`nameserver.your-domain.com`)"
      - "traefik.http.routers.powerdns-https.entrypoints=websecure"
      - "traefik.http.routers.powerdns-https.tls=true"
      - "traefik.http.routers.powerdns-https.tls.certresolver=mydnsresolver-powerdns"
      - "traefik.http.routers.powerdns-https.middlewares=ipwhitelist@docker"

volumes:
    letsencrypt_prod:
    pdns-data:
    pdns-db-data:

This docker-compose.yaml file now contains Traefik with the Let’s Encrypt DNS-01 challenge, MariaDB and PowerDNS.

MariaDB

To use MariaDB with PowerDNS, a database and some tables have to be created:

Create a directory pdns-db. Create a “Dockerfile” in it:

FROM mariadb:10.5.19
COPY script/database /docker-entrypoint-initdb.d

Create a directory pdns-db/script/database and copy the file from https://raw.githubusercontent.com/PowerDNS/pdns/master/modules/gmysqlbackend/schema.mysql.sql into it. It will create the necessary tables for PowerDNS when the MariaDB database is started for the first time.

PowerDNS

Create a directory powerdns and save this Dockerfile into it:

FROM powerdns/pdns-auth-49:latest
WORKDIR /etc/powerdns
COPY ./pdns.conf /etc/powerdns/pdns.conf
EXPOSE 53/tcp 53/udp

Create another file called pdns.conf in the same directory:

launch=gmysql

gmysql-host=pdns-db
gmysql-port=3306
gmysql-dbname=powerdns
gmysql-user=powerdns
gmysql-password=yourdbuserpassword

primary=yes

# Here we have to enter the IP addresses of our secondary name servers
also-notify=1.2.3.4
allow-axfr-ips=1.2.3.4

receiver-threads=2

# Activate the API to let Traefik perform the DNS-01 challenge with Let's Encrypt
api=yes
# The PDNS_API_KEY from the docker-compose.yaml
api-key=yourkey
webserver=yes
webserver-address=0.0.0.0
webserver-port=8081
webserver-allow-from=127.0.0.1,::1,172.22.0.0/16

# Logging settings (optional)
loglevel=9

Now you can start your containers with “docker-compose up -d”. Then open a shell into the PowerDNS container, e.g. with “docker exec -it your-powerdns-container-name /bin/bash”.

In the container you will find the tool pdnsutil. It is used to manage the DNS entries. You can now use it to create your DNS zone or import a zone file:

pdnsutil create-zone your-domain.com
(or pdnsutil load-zone your-domain.com /etc/powerdns/zones/your-domain.com.zone)

pdnsutil delete-rrset your-domain.com @ SOA
pdnsutil add-record your-domain.com @ SOA 300 "nameserver.yourhoster.com hostmaster.your-server.com 2024090203 3600 600 604800 1440"
pdnsutil add-record your-domain.com @ A 300 10.1.2.3
pdnsutil add-record your-domain.com nameserver A 300 10.1.2.3
...
pdnsutil rectify-zone your-domain.com
pdnsutil list-zone your-domain.com

Now your domain has been created in PowerDNS and saved in the MariaDB database. Your name server should already work. But we want it to be hidden and synced with one or more secondary name servers.

For this purpose you now have to configure your secondary name server to receive changes from your PowerDNS name server. How this works is described in the DNS documentation of your hoster.

After making changes to your domain you can send them to your secondary name servers using these commands:

pdnsutil increase-serial your-domain.com
pdnsutil rectify-zone your-domain.com
pdns_control notify your-domain.com

Later we want to make changes via the API (for the Let’s Encrypt DNS-01 challenge). The serial number should then be increased automatically. To make this work you have to run these commands once:

pdnsutil set-meta your-domain.com SOA-EDIT-API DEFAULT
pdnsutil set-kind your-domain.com master
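As a rough sketch of what such an API change looks like (zone name, record and key are placeholders; the URL matches the PDNS_API_URL inside the Compose network), you can add or replace a record with a PATCH request:

curl -X PATCH -H "X-API-Key: yourkey" -H "Content-Type: application/json" \
  -d '{"rrsets":[{"name":"test.your-domain.com.","type":"A","ttl":300,"changetype":"REPLACE","records":[{"content":"10.1.2.3","disabled":false}]}]}' \
  http://powerdns:8081/api/v1/servers/localhost/zones/your-domain.com.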

Security

In the docker-compose.yaml we have used a whitelist to limit access to the API to the local containers and one remote server (which we will see later in the Webserver section). Additionally you can limit access to port 53 to the same IPs as “allow-axfr-ips” (which you will get from the documentation of your secondary name servers). So basically the only ports that are open to the public are ports 80 and 443 for the webserver (which is described below).

That’s all for configuring the name server. Now you just have to enter the names of the secondary name servers into the name server form for your domain at your registrar. When you use two hosters as secondary name servers, your DNS will continue to work even if one hoster fails.

Webserver

To have something useful we will now create two webservers that serve the same content but have different IPs. In the DNS we will enter both under the same domain. This way browsers will automatically try both and use the one that is faster, or the one that still responds if the other fails.

This is actually quite simple; the difficult part is only getting the Let’s Encrypt certificate. We can add this to docker-compose.yaml:

  web:
    build:
      context: ./web
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.web-http.rule=Host(`www.your-domain.com`)"
      - "traefik.http.routers.web-http.entrypoints=web"
      - "traefik.http.routers.web-http.tls=false"
      - "traefik.http.routers.web-https.rule=Host(`www.your-domain.com`)"
      - "traefik.http.routers.web-https.entrypoints=websecure"
      - "traefik.http.routers.web-https.tls=true"
      - "traefik.http.routers.web-https.tls.certresolver=mydnsresolver-powerdns"

Then we need to create the “web” directory and place this Dockerfile into it:

FROM nginx:alpine
COPY ./html /usr/share/nginx/html

And in the html subdirectory you can place your content, e.g. an index.html.

Add the IP address of the webserver in your PowerDNS docker container and push the change to the secondary name servers:

pdnsutil add-record your-domain.com www A 300 10.1.2.3

pdnsutil increase-serial your-domain.com
pdnsutil rectify-zone your-domain.com
pdns_control notify your-domain.com

Afterward you can open https://www.your-domain.com and it should work. Traefik should be able to retrieve a certificate. You can see it in the docker logs of the Traefik container.

Second webserver

To make the website highly available even if one server fails, we will start another webserver. Just use the same docker-compose.yaml, but without the pdns-db and powerdns services, i.e. just Traefik and the web service. You also have to make one modification: because PowerDNS does not run on the local server but on a remote server, you have to change the URL:

- "PDNS_API_URL=https://nameserver.your-domain.com"

And we have to add the IP address of the second server to our PowerDNS:

pdnsutil add-record your-domain.com www A 300 10.2.3.4

pdnsutil increase-serial your-domain.com
pdnsutil rectify-zone your-domain.com
pdns_control notify your-domain.com

And you have to whitelist the IP to let it access the PowerDNS API:

- "traefik.http.middlewares.ipwhitelist.ipwhitelist.sourcerange=10.2.3.4/32,172.22.0.0/16"

That’s all. Now you can access https://www.your-domain.com and it will use just one of these servers. If you stop Traefik on one of the servers (to simulate a server failure), you won’t notice a difference when opening the URL. Only when the Traefik instances on both servers are stopped does the website become unavailable.

So for the website to become unavailable, the servers at both hosters and also all name servers of both hosters would have to fail at the same time, i.e. in most cases you should have an uptime of 100% now. One remaining risk is routing problems, so it could be a good idea to host the second server in a different location, e.g. on a different continent, or to use even more than two servers. You can add further servers in the same way as the second server.

Dynamic content

Next it would be good to support dynamic content. First we will use a filesystem that can automatically synchronize changes between the servers. If one of the servers fails and comes up again later, it will automatically sync the changes that happened in the meantime.

We will use Syncthing for this purpose. It is easy to set up and encrypts the data in transit. We can add it to our docker-compose.yaml like this:

  syncthing:
    image: syncthing/syncthing
    hostname: my-syncthing
    volumes:
      - syncthing:/var/syncthing
    ports:
      - 22000:22000/tcp # TCP file transfers
      - 22000:22000/udp # QUIC file transfers
    restart: unless-stopped

If you want to see the web interface, e.g. for debugging purposes, you can also open that port by adding “- 8384:8384 # Web UI” to the ports. But be careful: it uses unencrypted HTTP and has no password by default. It can be very useful for examining sync problems, though. You could also put it behind Traefik to use HTTPS, but then you need a different domain name for each Syncthing instance so that you know which node you are currently working with.

On the first start, the service will generate a device ID and an API key that we will need. Use “docker exec” to open a shell in the container and then use these commands:

# API key:
sed -n 's/.*<apikey>\(.*\)<\/apikey>.*/\1/p' /var/syncthing/config/config.xml

# Device ID:
sed -n 's/.*<device id="\([^"]*\)".*/\1/p' /var/syncthing/config/config.xml | head -n 1

We have to do this on both devices to get the API key and device ID from both. We will call them APIKEY1, DEVICEID1, APIKEY2 and DEVICEID2 in the following commands.

Then we have to connect them with each other and configure a few things. Perform this inside the container:

# To ensure that they are just connecting directly without any relays
curl -X PATCH -H "X-API-Key: APIKEY1" -H "Content-Type: application/json" -d '
  {
  "globalAnnounceEnabled": false,
  "localAnnounceEnabled": false,
  "relaysEnabled": false,
  "natEnabled": false
  }
' http://localhost:8384/rest/config/options

# To let the node know of the other node
curl -X PUT -H "X-API-Key: APIKEY1" -H "Content-Type: application/json" -d '  
  {  
    "deviceID": "DEVICEID2",  
    "addresses": ["tcp://10.2.3.4:22000"],  
    "name": "my-syncthing2",  
    "compression": "metadata",  
    "introducer": false,  
    "paused": false,  
    "allowedNetworks": [],  
    "autoAcceptFolders": false,  
    "maxSendKbps": 0,  
    "maxRecvKbps": 0,  
    "ignoredFolders": []  
  }  
' http://localhost:8384/rest/config/devices/DEVICEID2

# To sync the default folder with the other node
apk add jq

curl -X GET -H "X-API-Key: APIKEY1" http://localhost:8384/rest/config/folders/default | \
jq '.devices += [{"deviceID": "DEVICEID2", "introducedBy": "", "encryptionPassword": ""}]' > /tmp/updated_config.json

curl -X PATCH -H "X-API-Key: APIKEY1" -H "Content-Type: application/json" \
-d @/tmp/updated_config.json http://localhost:8384/rest/config/folders/default

# To reduce the sync interval to 1 second, which means each change will appear within a second on the other device.
curl -X PATCH -H "X-API-Key: APIKEY1" -H "Content-Type: application/json" -d '  
  {  
  "fsWatcherEnabled": true,  
  "fsWatcherDelayS": 1  
  }  
' http://localhost:8384/rest/config/folders/default

You have to do this on both nodes so that they know each other. It will sync the folder /var/syncthing/Sync between both servers. When this is done, we can use it for our nginx server:

We add the volume to the “web” service on both nodes:

  web:
    build:
      context: ./web
    restart: always
[...]
    volumes:
      - syncthing:/var/syncthing

And we create a default.conf file for nginx:

server {
    listen 80;

    server_name localhost;

    root /var/syncthing/Sync/html;

    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Then we change the Dockerfile of the web service:

FROM nginx:alpine
COPY ./default.conf /etc/nginx/conf.d/

Finally you have to rebuild the web container, e.g. like this:

docker-compose up -d --build --force-recreate web

When this is done, you can copy files into /var/syncthing/Sync/html in the Syncthing container on one node and they will appear on the other. Ensure that the owner of the files is “1000:1000”.

When you now make changes to your web server files, they will automatically be available on both servers after about a second. Both nodes will serve the same files, and you can change the files without re-deploying the container, e.g. via PHP scripts.
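A quick way to confirm that the sync works as expected (just a sanity check, the file name is arbitrary) is to create a file on one server and read it on the other:

# on server 1, inside the Syncthing container
echo "hello from server 1" > /var/syncthing/Sync/html/test.html
chown 1000:1000 /var/syncthing/Sync/html/test.html

# on server 2, about a second later
cat /var/syncthing/Sync/html/test.html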

Next Steps

The final step would be a database that can sync between the servers (maybe a MariaDB Galera Cluster).

SPF problem when forwarding emails

Forwarding emails can be a problem when the SPF (Sender Policy Framework) entry of the sender’s domain is set to “-all”. SPF is an email validation system designed to prevent spam and phishing by verifying that an incoming message was sent from an authorized server.

When an SPF record is set to “-all”, it indicates that no other servers are authorized to send email on behalf of the domain. This means that if an email is forwarded from an unauthorized server to another server, the receiving server may reject the message as a potential spam or phishing attempt.

E.g. if someone sends you an email from example.com and you automatically forward emails from your domain myemailaddress.com to e.g. your Gmail account, this won’t work: Gmail won’t accept the email, because myemailaddress.com is not allowed to send “@example.com” emails. Only the mailservers of example.com are allowed to send such emails.
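For illustration, a strict SPF record is just a TXT record on the sender’s domain; with “-all” only the listed servers (here the MX hosts and one example IP) may send mail for the domain:

example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.10 -all"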

A solution can be to use an IMAP forwarder, i.e. a program that connects to both IMAP servers (myemailaddress.com and Gmail in this example) and automatically copies all emails that appear on myemailaddress.com to Gmail.

A free open source solution is the project IMAPTransfer on GitHub.

It is written in Kotlin and can watch the source server for changes and immediately write new emails into the target IMAP server. Or it can write the new emails (or all emails, to create a backup or an archive) into local .eml files. It can even restore these .eml files into another IMAP server.

Switching from altool to notarytool

Apple has deprecated altool, which is used to notarize programs so that they can be run on other people’s computers. The programs are signed with a developer’s key and “virus checked”.

Instead of altool it will soon become necessary to use notarytool. Switching from altool to notarytool is simple. If you have used this before:

xcrun altool --notarize-app --primary-bundle-id com.your.app --username yourmail@yourmail.com --password yourpassword --file YourFile.dmg

You can now just use this command

xcrun notarytool submit YourFile.dmg --apple-id yourmail@yourmail.com --team-id YOURTEAMID --password yourpassword --wait

There is also a way to save the password in a keychain profile using the “store-credentials” command. However, the build script cannot access the keychain profile while the device is locked, i.e. you can only run the build script while actively using the computer.
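For reference, storing the credentials and then submitting with the profile looks roughly like this (the profile name is just an example):

xcrun notarytool store-credentials "MyNotaryProfile" --apple-id yourmail@yourmail.com --team-id YOURTEAMID --password yourpassword

xcrun notarytool submit YourFile.dmg --keychain-profile "MyNotaryProfile" --wait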

Fixing curl’s Let’s Encrypt problem on Linux

One of Let’s Encrypt’s root certificates (“DST Root CA X3”) expired on September 30th, 2021. Now a new root certificate (“ISRG Root X1”) is used. Let’s Encrypt’s intermediate certificate “Let’s Encrypt R3” was signed by both root certificates. Older versions of curl (7.52) cannot handle this correctly and think that the R3 certificate is no longer valid because its root certificate has expired. However, only one of the two root certificates has expired; the other is still valid and thus R3 is valid, too. You can find a diagram of the certificates here: https://letsencrypt.org/certificates/

When you try to use curl 7.52 it can look like this:

# curl --head https://blog.dgunia.de
curl: (60) SSL certificate problem: certificate has expired
More details here: http://curl.haxx.se/docs/sslcerts.html

So one option is to update curl to a newer version, e.g. 7.64 or 7.74; then it works fine. Another option is to remove the expired root certificate (“DST Root CA X3”) from the Linux computer on which you want to use curl.

To remove the certificate, just edit the file /etc/ca-certificates.conf and disable the DST Root CA X3 certificate by writing an exclamation mark in front of it:

!mozilla/DST_Root_CA_X3.crt

Then run update-ca-certificates to read the ca-certificates.conf file and update the system’s certificates. Afterward curl should work fine.
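If you prefer to do this non-interactively, the same change can be scripted, assuming a Debian/Ubuntu-style ca-certificates setup:

sed -i 's|^mozilla/DST_Root_CA_X3.crt|!mozilla/DST_Root_CA_X3.crt|' /etc/ca-certificates.conf
update-ca-certificates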

Update: When I compiled the new version of curl I also had to compile a new version of OpenSSL. It seems that the real problem is in OpenSSL 1.0, not in curl. So it seems to be sufficient to update OpenSSL 1.0 to at least OpenSSL 1.1 or to remove the expired certificate. You can now find a blog post that explains this here:

https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/

Making a Java program available as default email handler on Windows

This page explains how to register a JavaFX Maven program that uses the javafx-maven-plugin as a default email handler on Windows, i.e. users can see that program as an option when selecting the default email program.

To customize the installation process of Inno Setup and the WiX Toolset you need the default configuration files that Java uses. In JDK 8 the template.iss and template.wxs files can be found under com/oracle/tools/packager/windows/ in the jar %JAVA_HOME%\lib\ant-javafx.jar ( https://stackoverflow.com/questions/31862568/how-can-i-customize-the-inno-template-iss-file-to-be-used-in-a-javafx-8-build ). You have to copy these files into src/main/deploy/package/windows and rename them to yourapp.iss and yourapp.wxs. Then you can add additional entries to them to create the needed registry keys.

For Inno Setup you have to add these lines:

[Registry]

Root: HKCU; Subkey: "SOFTWARE\Classes\YourApp.MailTo"; Flags: uninsdeletekey; ValueType: string; ValueName: ""; ValueData: "URL:MailTo Protocol"
Root: HKCU; Subkey: "SOFTWARE\Classes\YourApp.MailTo"; Flags: uninsdeletekey; ValueType: string; ValueName: "URL Protocol"; ValueData: ""
Root: HKCU; Subkey: "SOFTWARE\Classes\YourApp.MailTo\shell"; Flags: uninsdeletekey
Root: HKCU; Subkey: "SOFTWARE\Classes\YourApp.MailTo\shell\open"; Flags: uninsdeletekey
Root: HKCU; Subkey: "SOFTWARE\Classes\YourApp.MailTo\shell\open\command"; Flags: uninsdeletekey; ValueType: string; ValueName: ""; ValueData: "{app}\YourApp.exe ""%1"" ""%2"" ""%3"" ""%4"" ""%5"" ""%6"" ""%7"" ""%8"" ""%9"""

Root: HKCU; Subkey: "SOFTWARE\Classes\Applications\YourApp"; Flags: uninsdeletekey; ValueType: string; ValueName: ""; ValueData: "URL:MailTo Protocol"
Root: HKCU; Subkey: "SOFTWARE\Classes\Applications\YourApp\DefaultIcon"; Flags: uninsdeletekey; ValueType: string; ValueName: "URL Protocol"; ValueData: "{app}\YourApp.exe"
Root: HKCU; Subkey: "SOFTWARE\Classes\Applications\YourApp\FriendlyAppName"; Flags: uninsdeletekey; ValueType: string; ValueName: ""; ValueData: "YourApp"
Root: HKCU; Subkey: "SOFTWARE\Classes\Applications\YourApp\shell"; Flags: uninsdeletekey
Root: HKCU; Subkey: "SOFTWARE\Classes\Applications\YourApp\shell\open"; Flags: uninsdeletekey
Root: HKCU; Subkey: "SOFTWARE\Classes\Applications\YourApp\shell\open\command"; Flags: uninsdeletekey; ValueType: string; ValueName: ""; ValueData: "{app}\YourApp.exe ""%1"" ""%2"" ""%3"" ""%4"" ""%5"" ""%6"" ""%7"" ""%8"" ""%9"""

Root: HKCU; Subkey: "SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\YourApp"; Flags: uninsdeletekey; ValueType: string; ValueName: ""; ValueData: "{app}\YourApp.exe"
Root: HKCU; Subkey: "SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\YourApp"; Flags: uninsdeletekey; ValueType: string; ValueName: "SupportedProtocols"; ValueData: "mailto"

Root: HKCU; Subkey: "SOFTWARE\Clients\StartMenuInternet\YourApp\Capabilities\UrlAssociations"; Flags: uninsdeletekey; ValueType: string; ValueName: "mailto"; ValueData: "YourApp.MailTo"

Root: HKCU; Subkey: "SOFTWARE\RegisteredApplications"; Flags: uninsdeletekey; ValueType: string; ValueName: "YourApp"; ValueData: "SOFTWARE\Clients\StartMenuInternet\YourApp\Capabilities"

These entries first register a new class “YourApp.MailTo” that can start your program with command line parameters. Then they register your program in the “Applications” folder and give it a “FriendlyAppName” so that the system knows how to display it. Afterward they register your program under “App Paths” and tell the system that it supports the “mailto” protocol. Finally they create a UrlAssociation from “mailto” to your “YourApp.MailTo” class and register it in “RegisteredApplications” under the name of your program.

You can do the same in the wxs file for creating MSI files. Just add these lines to the component “CleanupMainApplicationFolder”:

<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\YourApp.MailTo">
<RegistryValue Type="string" Value="URL:MailTo Protocol"/>
<RegistryValue Type="string" Name="URL Protocol" Value=""/>
</RegistryKey>
<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\YourApp.MailTo\DefaultIcon">
<RegistryValue Type="string" Name="URL Protocol" Value="[APPLICATIONFOLDER]YourApp.exe"/>
</RegistryKey>
<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\YourApp.MailTo\shell"></RegistryKey>
<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\YourApp.MailTo\shell\open"></RegistryKey>
<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\YourApp.MailTo\shell\open\command">
<RegistryValue Type="string" Value="[APPLICATIONFOLDER]YourApp.exe &quot;%1&quot; &quot;%2&quot; &quot;%3&quot; &quot;%4&quot; &quot;%5&quot; &quot;%6&quot; &quot;%7&quot; &quot;%8&quot; &quot;%9&quot;"/>
</RegistryKey>

<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\Applications\YourApp">
<RegistryValue Type="string" Value="URL:MailTo Protocol"/>
<RegistryValue Type="string" Name="URL Protocol" Value=""/>
</RegistryKey>
<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\Applications\YourApp\DefaultIcon">
<RegistryValue Type="string" Name="URL Protocol" Value="[APPLICATIONFOLDER]YourApp.exe"/>
</RegistryKey>
<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\Applications\YourApp\FriendlyAppName">
<RegistryValue Type="string" Value="YourApp"/>
</RegistryKey>
<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\Applications\YourApp\shell"></RegistryKey>
<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\Applications\YourApp\shell\open"></RegistryKey>
<RegistryKey Root="HKCU" Key="SOFTWARE\Classes\Applications\YourApp\shell\open\command">
<RegistryValue Type="string" Value="[APPLICATIONFOLDER]YourApp.exe &quot;%1&quot; &quot;%2&quot; &quot;%3&quot; &quot;%4&quot; &quot;%5&quot; &quot;%6&quot; &quot;%7&quot; &quot;%8&quot; &quot;%9&quot;"/>
</RegistryKey>

<RegistryKey Root="HKCU" Key="SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\YourApp">
<RegistryValue Type="string" Value="[APPLICATIONFOLDER]YourApp.exe &quot;%1&quot; &quot;%2&quot; &quot;%3&quot; &quot;%4&quot; &quot;%5&quot; &quot;%6&quot; &quot;%7&quot; &quot;%8&quot; &quot;%9&quot;"/>
<RegistryValue Type="string" Name="SupportedProtocols" Value="mailto"/>
</RegistryKey>

<RegistryKey Root="HKCU" Key="SOFTWARE\Clients\StartMenuInternet\YourApp\Capabilities\UrlAssociations">
<RegistryValue Type="string" Name="mailto" Value="YourApp.MailTo"/>
</RegistryKey>

<RegistryKey Root="HKCU" Key="SOFTWARE\RegisteredApplications">
<RegistryValue Type="string" Name="YourApp" Value="SOFTWARE\Clients\StartMenuInternet\YourApp\Capabilities"/>
</RegistryKey>

Accessing the Android Emulator on a remote computer via VNC

If you are using VNC to access a remote Linux computer and you try to run the Android Emulator, you might see either a blank window or a distorted window:

Blank window, probably due to the “Hardware” graphics driver.
Distorted window, probably due to a wrong display depth.


To fix it you have to select the “Software” graphics driver when creating the emulator:

Additionally when running your VNC server you have to run it in 24 bit mode:

vncserver :2 -depth 24

Then it should work fine:

Signed macOS programs with Java 14

Since February 2020 Apple requires all programs to be signed, hardened and notarized so that Gatekeeper on macOS Catalina allows them to run. Before, it was only necessary to sign them, so Java 8 could still be used. Now they have to be hardened, which requires Xcode 10, and Java 8 cannot be compiled with Xcode 10 yet, so a newer Java version has to be used. Java 14 contains the “jpackage” tool to create a native signed app that can fulfill these requirements (with some additional work).

First you have to build your program, e.g. with Maven:

mvn package

Then you have to create an app image using jpackage e.g. this way:

$JAVA_HOME/bin/jpackage -n MyApp --input target --main-jar MyApp-1.0.jar --main-class com.myapp.MyApp --module-path libfx --add-modules javafx.controls,javafx.fxml,javafx.web,javafx.swing,javafx.media --icon src/main/deploy/package/macosx/MyApp.icns --type app-image --dest appimageoutput --java-options "-Xmx1024m"

Afterward you have to sign all jar files and dylib files in the appimageoutput directory. My “SignPackage.jar” simply searches for all dylib and jar files in the given directory (also dylib files inside jar files) and signs them with “codesign --timestamp --options runtime --entitlements … --deep -vvv -f --sign "Developer (XXX)" file”:

java -jar SignPackage.jar -d appimageoutput -t -r -k "Developer ID Application: John Public (XXXXXXXXXX)" -e "src/main/deploy/package/macosx/MyApp.entitlements"

codesign --timestamp --entitlements src/main/deploy/package/macosx/MyApp.entitlements --options runtime --deep -vvv -f --sign "Developer ID Application: John Public (XXXXXXXXXX)" appimageoutput/MyApp.app/Contents/MacOS/*

codesign --timestamp --entitlements src/main/deploy/package/macosx/MyApp.entitlements --options runtime --deep -vvv -f --sign "Developer ID Application: John Public (XXXXXXXXXX)" appimageoutput/MyApp.app

Then you can create a DMG file:

$JAVA_HOME/bin/jpackage -n MyApp --mac-package-identifier com.myapp --mac-package-name MyApp --mac-sign --mac-signing-key-user-name "John Public (XXXXXXXXXX)" --app-image appimageoutput

Sign the DMG file:

codesign --timestamp --entitlements src/main/deploy/package/macosx/MyApp.entitlements --options runtime --deep -vvv -f --sign "Developer ID Application: John Public (XXXXXXXXXX)" MyApp-1.0.dmg

And notarize it:

xcrun altool --notarize-app --primary-bundle-id com.myapp --username john@public.com --password mypassword --file MyApp-1.0.dmg

The entitlements file should probably at least contain these lines:

<key>com.apple.security.cs.allow-jit</key>
<true/>
<key>com.apple.security.cs.allow-unsigned-executable-memory</key>
<true/>
<key>com.apple.security.cs.disable-executable-page-protection</key>
<true/>
<key>com.apple.security.cs.disable-library-validation</key>
<true/>
<key>com.apple.security.cs.allow-dyld-environment-variables</key>
<true/>

A complete “MyApp.entitlements” file could look like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.security.app-sandbox</key>
<false/>
<key>com.apple.security.network.server</key>
<true/>
<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.files.user-selected.read-write</key>
<true/>
<key>com.apple.security.cs.allow-jit</key>
<true/>
<key>com.apple.security.cs.allow-unsigned-executable-memory</key>
<true/>
<key>com.apple.security.cs.disable-executable-page-protection</key>
<true/>
<key>com.apple.security.cs.disable-library-validation</key>
<true/>
<key>com.apple.security.cs.allow-dyld-environment-variables</key>
<true/>
</dict>
</plist>

Slow MacBook/Notebook?

If you think your MacBook or notebook is slower than when you bought it, you might be right. If you have an Intel CPU, a dirty fan can make your computer slower because the CPU gets too hot and the CPU frequency is reduced. You can examine this by installing the Intel Power Gadget. This is how it looks if the fan cannot cool the CPU enough, so that it stays at 100°C:

What you can see here is that when the CPU is heavily used (“Utilization”), the temperature reaches 100°C. To decrease the temperature, the CPU frequency is reduced, making the CPU slower. In this case it ran at only 1.7 GHz in the end.

After cleaning the fans of the MacBook the cooling worked better and it became much faster again.

Moving a website from one provider to another without interruption

When moving a website from one provider to another you probably want to do this without any downtime, i.e. your users should not notice that the provider has changed. Here are a few tips for this process:

Moving static pages (html, css, …)

For moving static pages you can just copy them from your old webspace onto your new webspace, either using a program on your computer or directly between the servers. My old and new providers both offered SSH access, so I could just use rsync to copy the data from the old server to the new one. I had to run a command like this on the new server:

rsync -rtv --links oldusername@oldserver.com:* www/

Later I could run it again to incrementally copy only changed files.

Creating an SSL certificate for the not-yet connected domain

Let’s say you are moving https://www.mydomain.com from one provider to another. You will need to set up an SSL certificate at the new provider before moving the domain there, because otherwise there would be a downtime between moving the domain and setting up the SSL certificate. If your provider allows you to manually enter a certificate and you cannot download the certificate from your old provider, you can use the “manual” mode of Let’s Encrypt to create the necessary files. First you have to install Certbot. Then you can create a certificate like this:

mkdir certificate
cd certificate

mkdir logs
mkdir etc
mkdir work

certbot certonly -a manual -i apache -d www.mydomain.com --logs-dir logs --config-dir etc --work-dir work

During the process you will have to create a directory .well-known/acme-challenge on your old server and copy a file with a certain content into it. If you have ssh access it can look like this:

mkdir .well-known
cd .well-known
mkdir acme-challenge
cd acme-challenge
cat > thechallengefilename

Then copy&paste the content of the challenge file into the ssh shell and press CTRL-D and then CTRL-C. Afterward you can continue with certbot.

Then you should have four files in etc/live/www.mydomain.com:

cert.pem    chain.pem   fullchain.pem   privkey.pem

You have to copy the contents of these files onto your new provider’s server as your new SSL certificate. It depends on your provider where you have to put it. E.g. on my provider there was a “manual” mode that allowed to copy cert.pem into the CRT field, privkey.pem into the PrivateKey field and chain.pem into the CAT field. Afterward the SSL certificate was successfully installed.

Moving PHP files and databases

If you also have dynamic content, e.g. PHP files and databases, you have the problem that during the transfer of your domain you cannot know which server a user will reach, because the DNS entry of your domain is cached on multiple DNS servers and it can take 24 hours until they all point to the same server, i.e. to your new server. During that time both servers will be used. That could mean that some data is saved into your old database and some into your new one, and you would have to merge both. To prevent this you can just forward all requests from your old server to your new server using a PHP script like this (from StackOverflow):


<?php

// https://stackoverflow.com/questions/22437548/php-how-to-redirect-forward-http-request-with-header-and-body

error_reporting(E_ALL);
ini_set('display_errors', 1);

/* Set it true for debugging. */
$logHeaders = FALSE;

/* Site to forward requests to.  */
$site = 'https://www.newdomain.com/';

/* Domains to use when rewriting some headers. */
$remoteDomain = 'www.newdomain.com';
$proxyDomain = 'www.olddomain.com';

$request = $_SERVER['REQUEST_URI'];

$ch = curl_init();

/* If there was a POST request, then forward that as well.*/
if ($_SERVER['REQUEST_METHOD'] == 'POST')
{
    curl_setopt($ch, CURLOPT_POST, TRUE);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $_POST);
}
curl_setopt($ch, CURLOPT_URL, $site . $request);
curl_setopt($ch, CURLOPT_HEADER, TRUE);

$headers = getallheaders();

/* Translate some headers to make the remote party think we actually browsing that site. */
$extraHeaders = array();
if (isset($headers['Referer']))
{
    $extraHeaders[] = 'Referer: '. str_replace($proxyDomain, $remoteDomain, $headers['Referer']);
}
if (isset($headers['Origin']))
{
    $extraHeaders[] = 'Origin: '. str_replace($proxyDomain, $remoteDomain, $headers['Origin']);
}

/* Forward cookie as it came.  */
curl_setopt($ch, CURLOPT_HTTPHEADER, $extraHeaders);
if (isset($headers['Cookie']))
{
    curl_setopt($ch, CURLOPT_COOKIE, $headers['Cookie']);
}
}
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);

if ($logHeaders)
{
    $f = fopen("headers.txt", "a");
    curl_setopt($ch, CURLOPT_VERBOSE, TRUE);
    curl_setopt($ch, CURLOPT_STDERR, $f);
}

curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$response = curl_exec($ch);

$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
$headers = substr($response, 0, $header_size);
$body = substr($response, $header_size);

$headerArray = explode(PHP_EOL, $headers);

/* Process response headers. */
foreach($headerArray as $header)
{
    $colonPos = strpos($header, ':');
    if ($colonPos !== FALSE)
    {
        $headerName = substr($header, 0, $colonPos);

        /* Ignore content headers, let the webserver decide how to deal with the content. */
        if (trim($headerName) == 'Content-Encoding') continue;
        if (trim($headerName) == 'Content-Length') continue;
        if (trim($headerName) == 'Transfer-Encoding') continue;
        if (trim($headerName) == 'Location') continue;
        /* -- */
        /* Change cookie domain for the proxy */
        if (trim($headerName) == 'Set-Cookie')
        {
            $header = str_replace('domain='.$remoteDomain, 'domain='.$proxyDomain, $header);
        }
        /* -- */

    }
    header($header, FALSE);
}

echo $body;

if ($logHeaders)
{
    fclose($f);
}
curl_close($ch);

?>

And you have to forward all requests to that file by creating a .htaccess file with the following content in the root directory of your old server’s webspace:

RewriteEngine On
RewriteRule .* proxy.php

The problem here is that you need a domain with a different name to forward the requests to (or you could maybe enter your new server’s IP). I just created another domain (a subdomain of an existing domain that I had already transferred), e.g. transfer.myotherdomain.com, and let it host the same files as my main domain. I.e. on the old server I still had the domain www.mydomain.com, but now with the .htaccess and proxy.php files. On the new server I had www.mydomain.com and transfer.myotherdomain.com, which both pointed to the same web directory and had the same contents.

This way the old www.mydomain.com could forward all requests to the new server via transfer.myotherdomain.com, because “www.mydomain.com” was not reachable on the new server yet. When I then transferred the www.mydomain.com domain from the old provider to the new provider, the www.mydomain.com on the new server started to receive requests and replaced the old server.

Before uploading the .htaccess file you have to copy (export/import) your databases from your old server to your new server and enter the new database connections and passwords in your PHP files if necessary.

Moving email accounts

Just set up the same accounts on your new server that you had on your old server. Then use e.g. Thunderbird to connect to both servers and move the emails from one server to the other, or just archive them on your computer.

Moving the domain name

When you move your domain name from one hoster to another, you first have to configure the nameservers at the target hoster with the same IPs as at the original hoster (until you are ready to make the switch). Once you have done that, you should already set the nameservers of your domain in the original hoster’s web interface to the nameservers of your new hoster and wait 24 hours. What can otherwise happen when you actually move your domain is that your original hoster immediately removes your entries from their nameservers, so clients that don’t know your new nameserver yet (because they are using an old cached response that says your nameserver is the original one) will ask the wrong nameserver and get no answer. So you should do this:

  • When you are moving from hoster A to B then configure B’s nameserver in the same way as A’s nameserver (i.e. the same entries and IPs, just different nameservers).
  • Then open the web interface of hoster A and enter the nameservers of hoster B for your domain.
  • Wait 24 hours so that everyone knows the new nameservers (you can check this with dig, as shown below).
  • Move your domain from hoster A to hoster B.
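To check whether the new nameservers are already visible, you can query them with dig (mydomain.com is a placeholder); once propagation is complete it should list hoster B’s nameservers:

dig NS mydomain.com +short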

Finished

After all these steps you should have successfully moved your domain from one provider to another, and your users shouldn’t have noticed it because there was no downtime.