Friday, 9 June 2017

Is this Docker / NGINX / Node setup actually load balancing as expected?

I am setting up a web server using Docker / Node / Nginx. I've been experimenting with the setup in docker-compose and have come up with two working solutions - but with regard to load balancing, one of them may be too good to be true, since it seemingly lets me save space by not having to create additional images/containers. I am looking for confirmation that what I am seeing is actually legitimate, and that multiple images/containers are not a requirement for load balancing.

Solution 1 (no additional images):

docker-compose.yml

version: '3'

volumes:
  node_deps:

services:
  nginx:
    build: ./nginx
    image: nginx_i
    container_name: nginx_c
    ports:
        - '80:80'
        - '443:443'
    links:
        - node
    restart: always
  node:
    build: ./node
    image: node_i
    container_name: node_c
    command: "npm start"
    ports:
      - '5000:5000'
      - '5001:5001'
      - '5500:5000'
      - '5501:5001' 
    volumes:
      - ./node:/src
      - node_deps:/src/node_modules

nginx.conf

http {
  ...

  upstream shopster-node {
    server node:5000 weight=10 max_fails=3 fail_timeout=30s;
    server node:5500 weight=10 max_fails=3 fail_timeout=30s;
    keepalive 64;
  }

  server {
    ...
  }

} 
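
For context, the app behind "npm start" is essentially a plain Node HTTP server. The sketch below is a simplified stand-in, not the real code - the only point is that the process binds to port 5000 inside the container, which is the container port that the published ports 5000 and 5500 both map to.

server.js (simplified stand-in)

// Simplified stand-in for what "npm start" runs - not the actual app.
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello from node\n');
});

// Bind to 0.0.0.0 so nginx can reach it over the compose network.
server.listen(5000, '0.0.0.0', () => {
  console.log('node listening on 5000');
});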

Solution 2 (has additional images):

version: '3'

volumes:
  node_deps:

services:
  nginx:
    build: ./nginx
    image: nginx_i
    container_name: nginx_c
    ports:
        - '80:80'
        - '443:443'
    links:
        - node_one
        - node_two
    restart: always
  node_one:
    build: ./node
    image: node_one_i
    container_name: node_one_c
    command: "npm start"
    ports:
      - '5000:5000'
      - '5001:5001'
    volumes:
      - ./node:/src
      - node_deps:/src/node_modules
  node_two:
    build: ./node
    image: node_two_i
    container_name: node_two_c
    command: "npm start"
    ports:
      - '5500:5000'
      - '5501:5001'
    volumes:
      - ./node:/src
      - node_deps:/src/node_modules

nginx.conf

http {
  ...

  upstream shopster-node {
    server node_one:5000 weight=10 max_fails=3 fail_timeout=30s;
    server node_two:5500 weight=10 max_fails=3 fail_timeout=30s;
    keepalive 64;
  }

  server {
    ...
  }

} 

Both scenarios load the app perfectly, on localhost and on the specified ports. I am confident that scenario 2 is load balancing properly, as it mimics a traditional multi-server setup.

Is there any way I can verify that scenario 1 is actually load balancing as expected? This would be my preferred approach; I just need to know I can trust it.
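
One idea I had (the /whoami-style response and field names below are placeholders I made up, not part of the actual app): have the Node process report which instance handled each request, then curl through nginx on port 80 a number of times and compare the output. If the reported pid/hostname values vary between responses, requests are being spread across more than one Node process; if they never change, a single process is answering everything.

whoami.js (hypothetical verification sketch)

// Hypothetical verification endpoint - names are placeholders.
const http = require('http');
const os = require('os');

const port = process.env.PORT || 5000;

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({
    hostname: os.hostname(), // resolves to the container ID inside Docker
    pid: process.pid,        // differs for each Node process
    port: port               // the port this process is bound to
  }) + '\n');
}).listen(port, '0.0.0.0');

Would hitting something like that repeatedly through the proxy be a trustworthy way to confirm (or rule out) that scenario 1 is actually balancing?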



via dustintheweb
