Sunday, June 30, 2019

Docker Stack, the docker-compose for Swarm


  The benefits that docker-compose gives us can also be applied to Swarm: configuring services through configuration files. We will use the example from the previous lab, where we have a swarm running a visualizer and a simple nginx web service.

The file viz.yml:
version: "3.1"

services:
  viz:
    image: dockersamples/visualizer
    ports:
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints: [node.role == manager]


The file web.yml:
version: "3.1"

services:
    web1:
        image: nginx
        ports:
          - 8081:80
    web2:
        image: httpd
        ports:
          - 8082:80
    web3:
        image: httpd
        ports:
          - 8083:80

Now let's remove the existing viz and web services first, before deploying them again as a stack.

bext@bext-VPCF13WFX:~$ docker stack deploy -c viz.yml viz
Creating network viz_default
Creating service viz_viz
failed to create service viz_viz: Error response from daemon: rpc error: code = InvalidArgument desc = port '8080' is already in use by service 'viz' (1k6akx8sh6yrod7zsi5cvf71a) as an ingress port
bext@bext-VPCF13WFX:~$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                             PORTS
1k6akx8sh6yr        viz                 replicated          1/1                 dockersamples/visualizer:latest   *:8080->8080/tcp
vtuogblw02ak        web                 replicated          2/2                 nginx:latest                      *:80->80/tcp
bext@bext-VPCF13WFX:~$ docker service rm viz
viz
bext@bext-VPCF13WFX:~$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
vtuogblw02ak        web                 replicated          2/2                 nginx:latest        *:80->80/tcp
bext@bext-VPCF13WFX:~$ docker stack deploy -c viz.yml viz
Creating service viz_viz
bext@bext-VPCF13WFX:~$ nano web.yml
bext@bext-VPCF13WFX:~$ docker service rm web
web
bext@bext-VPCF13WFX:~$ docker stack deploy -c web.yml web
Creating network web_default
Creating service web_web2
Creating service web_web3
Creating service web_web1
bext@bext-VPCF13WFX:~$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                             PORTS
uu15oc0t9mc6        viz_viz             replicated          1/1                 dockersamples/visualizer:latest   *:8080->8080/tcp
aszigymy7adn        web_web1            replicated          1/1                 nginx:latest                      *:8081->80/tcp
pqb4wultub03        web_web2            replicated          1/1                 httpd:latest                      *:8082->80/tcp
qpx5b2wyge0d        web_web3            replicated          1/1                 httpd:latest                      *:8083->80/tcp

bext@bext-VPCF13WFX:~$ docker stack ps web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
y61pa7cwj5oa        web_web1.1          nginx:latest        sw-master           Running             Running 40 seconds ago                       
wd0fh80ckwpj        web_web3.1          httpd:latest        sw-worker-2         Running             Running 22 seconds ago                       
zr1uwvrbdw6p        web_web2.1          httpd:latest        sw-worker-1         Running             Running 21 seconds ago                       
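At this point the stack-level commands are handy for inspection; a sketch of a few of them, run against this lab's swarm:

```shell
# List all stacks deployed on this swarm
docker stack ls

# List only the services that belong to the "web" stack
docker stack services web

# Show the published ports of one stack service
docker service inspect web_web1 --format '{{json .Endpoint.Ports}}'
```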

We modify web.yml in order to redeploy it, as webv2.yml:
version: "3.1"

services:
    web1:
        image: nginx
        ports:
          - 8081:80
        deploy:
            replicas: 3
    web2:
        image: httpd
        ports:
          - 8082:80
        deploy:
           replicas: 2

bext@bext-VPCF13WFX:~$ docker stack deploy -c webv2.yml web
Updating service web_web1 (id: aszigymy7adnix8q5lrhj9v1y)
Updating service web_web2 (id: pqb4wultub03vakv0ym2z41c7)
bext@bext-VPCF13WFX:~$ docker container ls
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS                    PORTS               NAMES
171572f3eff1        nginx:latest                      "nginx -g 'daemon of…"   9 minutes ago       Up 9 minutes              80/tcp              web_web1.1.y61pa7cwj5oai3cocmhf686vb
3e340e21423e        dockersamples/visualizer:latest   "npm start"              14 minutes ago      Up 14 minutes (healthy)   8080/tcp            viz_viz.1.zq62g7exoqfpgj8efqy4h76cd
bext@bext-VPCF13WFX:~$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                             PORTS
uu15oc0t9mc6        viz_viz             replicated          1/1                 dockersamples/visualizer:latest   *:8080->8080/tcp
aszigymy7adn        web_web1            replicated          3/3                 nginx:latest                      *:8081->80/tcp
pqb4wultub03        web_web2            replicated          2/2                 httpd:latest                      *:8082->80/tcp
qpx5b2wyge0d        web_web3            replicated          1/1                 httpd:latest                      *:8083->80/tcp
bext@bext-VPCF13WFX:~$ docker service rm web_web_3
Error: No such service: web_web_3
bext@bext-VPCF13WFX:~$ docker service rm qpx5b2wyge0d
qpx5b2wyge0d
bext@bext-VPCF13WFX:~$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                             PORTS
uu15oc0t9mc6        viz_viz             replicated          1/1                 dockersamples/visualizer:latest   *:8080->8080/tcp
aszigymy7adn        web_web1            replicated          3/3                 nginx:latest                      *:8081->80/tcp
pqb4wultub03        web_web2            replicated          2/2                 httpd:latest                      *:8082->80/tcp
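Instead of hunting for service IDs one by one, an entire stack can be removed in a single step; a sketch against this lab's stacks:

```shell
# Remove every service in the "web" stack, plus its web_default network
docker stack rm web

# Confirm that only the viz stack is left
docker stack ls
```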



eot.

Docker Swarm mode, Portainer visualizer

Taking advantage of the environment from the previous lab, we will use a visualizer called Portainer to play with Docker Swarm.

In this case we install Portainer on the swarm's master node.

bext@bext-VPCF13WFX:~$ docker run -d -p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
Unable to find image 'portainer/portainer:latest' locally
latest: Pulling from portainer/portainer
d1e017099d17: Pull complete 
fac26901c311: Pull complete 
Digest: sha256:cc226d8a06b6d5e24b44a4f10d0d1fd701741e84a852adc6d40bef9424a000ec
Status: Downloaded newer image for portainer/portainer:latest
3fa9e3c1e4ce2b42f47ffb21c9adf5174863fa81d542f16876adc3210605e27b
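A note: in Swarm mode Portainer is typically deployed as a service constrained to a manager node, so it can reach the manager's docker.sock and survives reschedules; a sketch using the same flags as the earlier viz service:

```shell
docker service create --name=portainer \
  --publish=9000:9000 \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer
```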



eot

Docker Swarm mode, rebuilding the container environment and testing failover



-- Reconfigure the swarm environment as in the previous lab.
-- Test failover by taking down a worker that is running the service.

Reconfiguration


   In the previous session we stopped the virtual machines and shut down the physical machine. Continuing with the lab, we start the master VM, which automatically runs our visualizer and the test web service. Both run in containers on the master alone; afterwards we start the two workers, but the task placement stays on sw-master.
   To get the web task onto another container, we stop that task and run it again, so it gets scheduled elsewhere.
   The question here is: if we had started the workers first and then the master, would the services have been distributed across all the workers? From what we see at the end of this lab, no automatic redistribution takes place when more virtual machines become available. Perhaps something that valuable is only achieved through configuration?

bext@bext-VPCF13WFX:~$ docker-machine start sw-master
Starting "sw-master"...
(sw-master) Check network to re-create if needed...
(sw-master) Found a new host-only adapter: "vboxnet0"
(sw-master) Waiting for an IP...
Machine "sw-master" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
bext@bext-VPCF13WFX:~$ docker-machine ls
NAME          ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER     ERRORS
sw-master     -        virtualbox   Running   tcp://192.168.99.111:2376           v18.09.7   
sw-worker-1   -        virtualbox   Stopped                                       Unknown    
sw-worker-2   -        virtualbox   Stopped                                       Unknown    
bext@bext-VPCF13WFX:~$ docker-machine start sw-worker-1
Starting "sw-worker-1"...
(sw-worker-1) Check network to re-create if needed...
(sw-worker-1) Waiting for an IP...
Machine "sw-worker-1" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
bext@bext-VPCF13WFX:~$ docker-machine start sw-worker-2
Starting "sw-worker-2"...
(sw-worker-2) Check network to re-create if needed...
(sw-worker-2) Waiting for an IP...
Machine "sw-worker-2" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
bext@bext-VPCF13WFX:~$ docker-machine ssh sw-master
   ( '>')
  /) TC (\   Core is distributed with ABSOLUTELY NO WARRANTY.
 (/-_--_-\)           www.tinycorelinux.net

docker@sw-master:~$ docker swarm init --advertise-addr 192.168.99.111
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
docker@sw-master:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
pdlp62hc1wqzo5wuubp5ailmx *   sw-master           Ready               Active              Leader              18.09.7
qfc9sx1tfe3wms1oera75f80b     sw-worker-1         Ready               Active                                  18.09.7
v28filrne0x5fua7r7rxh6zew     sw-worker-2         Ready               Active                                  18.09.7
docker@sw-master:~$ docker images ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
docker@sw-master:~$ exit
logout
bext@bext-VPCF13WFX:~$ docker images ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
bext@bext-VPCF13WFX:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
bext@bext-VPCF13WFX:~$ docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
bext@bext-VPCF13WFX:~$ docker-machine ssh sw-master
   ( '>')
  /) TC (\   Core is distributed with ABSOLUTELY NO WARRANTY.
 (/-_--_-\)           www.tinycorelinux.net

docker@sw-master:~$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                             PORTS
1k6akx8sh6yr        viz                 replicated          1/1                 dockersamples/visualizer:latest   *:8080->8080/tcp
h5mriknsbuvo        web                 replicated          1/1                 nginx:latest                      *:80->80/tcp
docker@sw-master:~$ curl http://192.168.99.111:8080
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Visualizer</title>
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta name="description" content="">
  <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
  <link href='//fonts.googleapis.com/css?family=Ubuntu+Mono|Open+Sans:400,700,400italic' rel='stylesheet' type='text/css'>
  <style type="text/css">
   .hidden{ display: none; }
  </style>
</head>
<body style='background:#254356'>
  <div class='tabs'>
    <button id='tab-physical'>
      <svg xmlns="http://www.w3.org/2000/svg" width="80" height="80" viewBox="0 0 80 80"><path fill="#FFF" d="M14.752 32.456l-7.72.002v7.553h7.72v-7.554zm9.65 0h-7.72v7.556h7.72v-7.556zm0-9.445h-7.72v7.556h7.72V23.01zm9.65 9.446h-7.72v7.556h7.72v-7.556zm0-9.445h-7.72v7.556h7.72V23.01zm9.648 9.446h-7.72v7.556h7.72v-7.556zm0-9.445h-7.72v7.556h7.72V23.01zm9.65 9.446l-7.72.002v7.553h7.72v-7.554zm-9.65-18.89h-7.72v7.556h7.72v-7.556zm31.938 23.106c-2.51-1.417-5.85-1.61-8.693-.792-.35-2.958-2.337-5.55-4.7-7.41l-.938-.738-.79.89c-1.58 1.79-2.052 4.768-1.838 7.053.16 1.68.697 3.388 1.756 4.737-.805.473-1.717.85-2.53 1.12-1.657.55-3.456.854-5.206.854H3.544l-.105 1.107c-.354 3.7.165 7.402 1.728 10.778l.673 1.343.078.124c4.622 7.68 12.74 10.914 21.584 10.914 17.125 0 31.248-7.48 37.734-23.284 4.335.222 8.77-1.033 10.89-5.082l.54-1.033-1.028-.578zm-57.77 19.982v.002c-2.18 0-3.955-1.735-3.955-3.866 0-2.132 1.774-3.866 3.954-3.866s3.954 1.732 3.954 3.865c0 2.13-1.77 3.864-3.95 3.864zm-.01-5.854c-1.137 0-2.06.9-2.06 2.013 0 1.11.924 2.01 2.06 2.01 1.134 0 2.057-.9 2.057-2.01 0-1.11-.922-2.013-2.057-2.013z"/></svg>
    </button>

  </div>
  <div id="app">
    <!-- content goes here -->
  </div>

  <script type="text/javascript">
    window.MS = '1000';
  </script>
  <script type="text/javascript" src="app.js"></script>
</body>
</html>
docker@sw-master:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
pdlp62hc1wqzo5wuubp5ailmx *   sw-master           Ready               Active              Leader              18.09.7
qfc9sx1tfe3wms1oera75f80b     sw-worker-1         Ready               Active                                  18.09.7
v28filrne0x5fua7r7rxh6zew     sw-worker-2         Ready               Active                                  18.09.7
docker@sw-master:~$ docker ps
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS                   PORTS               NAMES
5378758ca05d        nginx:latest                      "nginx -g 'daemon of…"   7 minutes ago       Up 7 minutes             80/tcp              web.1.7pr8m576poedkkwoi7m1jnlh2
9098949ebfb7        dockersamples/visualizer:latest   "npm start"              8 minutes ago       Up 8 minutes (healthy)   8080/tcp            viz.1.fnp74zgxepf4hj9d15jtokb1x
docker@sw-master:~$ docker container ls
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS                   PORTS               NAMES
5378758ca05d        nginx:latest                      "nginx -g 'daemon of…"   7 minutes ago       Up 7 minutes             80/tcp              web.1.7pr8m576poedkkwoi7m1jnlh2
9098949ebfb7        dockersamples/visualizer:latest   "npm start"              9 minutes ago       Up 8 minutes (healthy)   8080/tcp            viz.1.fnp74zgxepf4hj9d15jtokb1x
docker@sw-master:~$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                             PORTS
1k6akx8sh6yr        viz                 replicated          1/1                 dockersamples/visualizer:latest   *:8080->8080/tcp
h5mriknsbuvo        web                 replicated          1/1                 nginx:latest                      *:80->80/tcp
docker@sw-master:~$ docker service scale web=2
web scaled to 2
overall progress: 2 out of 2 tasks 
1/2: running   
2/2: running   
verify: Service converged 
docker@sw-master:~$ docker service   

Usage: docker service COMMAND

Manage services

Commands:
  create      Create a new service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services
  update      Update a service

Run 'docker service COMMAND --help' for more information on a command.
docker@sw-master:~$ docker service rm web
web
docker@sw-master:~$ docker service create --name=web --publish=80:80 nginx
vtuogblw02akurqjkeyw9oxz0
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service converged 
docker@sw-master:~$ 
docker@sw-master:~$ eval $(docker-machine env -u)
-bash: docker-machine: command not found
docker@sw-master:~$ exit
logout
bext@bext-VPCF13WFX:~$ eval $(docker-machine env -u)
bext@bext-VPCF13WFX:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
kdaljefl1ech729w83jrdmqr9 *   bext-VPCF13WFX      Ready               Active              Leader              18.09.6
bext@bext-VPCF13WFX:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
bext@bext-VPCF13WFX:~$ docker image ls
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
miapp80               latest              0c4a3d184820        3 days ago          131MB
<none>                <none>              2ff52a5267f4        3 days ago          131MB
<none>                <none>              fb93b11a258a        3 days ago          131MB
jalbertomr/lab1       miapp80             58c057288e10        3 days ago          131MB
python_app            latest              68e8339a3346        8 days ago          131MB
jalbertomr/lab1       python_app_1        68e8339a3346        8 days ago          131MB
redis                 <none>              3c41ce05add9        2 weeks ago         95MB
python                2.7-slim            ca96bab3e2aa        2 weeks ago         120MB
portainer/portainer   latest              da2759008147        3 weeks ago         75.4MB
hello-world           latest              fce289e99eb9        6 months ago        1.84kB
bext@bext-VPCF13WFX:~$ 
bext@bext-VPCF13WFX:~$ eval $(docker-machine env sw-master)
bext@bext-VPCF13WFX:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
pdlp62hc1wqzo5wuubp5ailmx *   sw-master           Ready               Active              Leader              18.09.7
qfc9sx1tfe3wms1oera75f80b     sw-worker-1         Ready               Active                                  18.09.7
v28filrne0x5fua7r7rxh6zew     sw-worker-2         Ready               Active                                  18.09.7
 
 

Test Failover


bext@bext-VPCF13WFX:~$ docker-machine stop sw-worker-1
Stopping "sw-worker-1"...
Machine "sw-worker-1" was stopped.

We stop sw-worker-1 and see that the web service automatically fails over to sw-worker-2.
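The reschedule can be observed from the manager: the task history keeps the old task in Shutdown state next to its replacement. A sketch:

```shell
# Full task history: the task that was on sw-worker-1 shows Shutdown,
# with a replacement task Running on sw-worker-2
docker service ps web

# Only the tasks that should currently be running
docker service ps web --filter desired-state=running
```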



bext@bext-VPCF13WFX:~$ docker service scale web=2
web scaled to 2
overall progress: 2 out of 2 tasks 
1/2: running   
2/2: running   
verify: Service converged 


bext@bext-VPCF13WFX:~$ docker service scale web=3
web scaled to 3
overall progress: 3 out of 3 tasks 
1/3: running   
2/3: running   
3/3: running   
verify: Service converged 


We start sw-worker-1 again and see that the web service does not rebalance onto sw-worker-1.
bext@bext-VPCF13WFX:~$ docker-machine start sw-worker-1
Starting "sw-worker-1"...
(sw-worker-1) Check network to re-create if needed...
(sw-worker-1) Waiting for an IP...
Machine "sw-worker-1" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.



Let's stress it by scaling to 7 replicas, to see whether sw-worker-1 gets used.





We observe that the tasks are not redistributed onto the newly available machine.
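A note on getting the redistribution this lab was hoping for: Swarm deliberately avoids moving healthy running tasks, but forcing an update makes the scheduler recreate them, placing the new tasks across the nodes available at that moment:

```shell
# Recreate all tasks of the service without changing its definition;
# the new tasks are spread over the currently available nodes
docker service update --force web
```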

eot

Saturday, June 29, 2019

Docker Swarm Mode

We create three virtual machines.

This one will be the master:
bext@bext-VPCF13WFX:~$ docker-machine create sw-master
Running pre-create checks...
(sw-master) Default Boot2Docker ISO is out-of-date, downloading the latest release...
(sw-master) Latest release for github.com/boot2docker/boot2docker is v18.09.7
(sw-master) Downloading /home/bext/.docker/machine/cache/boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v18.09.7/boot2docker.iso...
(sw-master) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(sw-master) Copying /home/bext/.docker/machine/cache/boot2docker.iso to /home/bext/.docker/machine/machines/sw-master/boot2docker.iso...
(sw-master) Creating VirtualBox VM...
(sw-master) Creating SSH key...
(sw-master) Starting the VM...
(sw-master) Check network to re-create if needed...
(sw-master) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env sw-master

And two more, the workers:

bext@bext-VPCF13WFX:~$ docker-machine create sw-worker-1
Running pre-create checks...
Creating machine...
(sw-worker-1) Copying /home/bext/.docker/machine/cache/boot2docker.iso to /home/bext/.docker/machine/machines/sw-worker-1/boot2docker.iso...
(sw-worker-1) Creating VirtualBox VM...
(sw-worker-1) Creating SSH key...
(sw-worker-1) Starting the VM...
(sw-worker-1) Check network to re-create if needed...
(sw-worker-1) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env sw-worker-1
bext@bext-VPCF13WFX:~$ docker-machine create sw-worker-2
Running pre-create checks...
Creating machine...
(sw-worker-2) Copying /home/bext/.docker/machine/cache/boot2docker.iso to /home/bext/.docker/machine/machines/sw-worker-2/boot2docker.iso...
(sw-worker-2) Creating VirtualBox VM...
(sw-worker-2) Creating SSH key...
(sw-worker-2) Starting the VM...
(sw-worker-2) Check network to re-create if needed...
(sw-worker-2) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env sw-worker-2

Let's look at their details:
bext@bext-VPCF13WFX:~$ docker-machine ls
NAME          ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER     ERRORS
sw-master     -        virtualbox   Running   tcp://192.168.99.111:2376           v18.09.7   
sw-worker-1   -        virtualbox   Running   tcp://192.168.99.112:2376           v18.09.7   
sw-worker-2   -        virtualbox   Running   tcp://192.168.99.113:2376           v18.09.7   

Let's log into the master and make it the swarm manager:
bext@bext-VPCF13WFX:~$ docker-machine ssh sw-master
   ( '>')
  /) TC (\   Core is distributed with ABSOLUTELY NO WARRANTY.
 (/-_--_-\)           www.tinycorelinux.net

docker@sw-master:~$ docker swar init --advertise-addr 192.168.99.111
unknown flag: --advertise-addr
See 'docker --help'.

[... full `docker --help` output omitted ...]

docker@sw-master:~$ docker swarm init --advertise-addr 192.168.99.111
Swarm initialized: current node (pdlp62hc1wqzo5wuubp5ailmx) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1rcp2rvr3e263y7htid8htusfqetzpu5d7r57ueoxzkika14pf-damdrsza3bp1h76nfpbtrf6ph 192.168.99.111:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions. 
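If that join command is ever lost, it can be printed again from any manager node; a sketch:

```shell
# Reprint the full join command (with token) for workers
docker swarm join-token worker

# Or just the token itself
docker swarm join-token -q worker
```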
 
Let's look at its nodes:
docker@sw-master:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
pdlp62hc1wqzo5wuubp5ailmx *   sw-master           Ready               Active              Leader              18.09.7
 
We go back to the host OS and look at its nodes:
docker@sw-master:~$ exit
logout
exit status 127
bext@bext-VPCF13WFX:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
kdaljefl1ech729w83jrdmqr9 *   bext-VPCF13WFX      Ready               Active              Leader              18.09.6

We point our client at the master node:
bext@bext-VPCF13WFX:~$ eval $(docker-machine env sw-master)
bext@bext-VPCF13WFX:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
pdlp62hc1wqzo5wuubp5ailmx *   sw-master           Ready               Active              Leader              18.09.7

We join the workers to the master:
bext@bext-VPCF13WFX:~$ docker-machine ssh sw-worker-1
   ( '>')
  /) TC (\   Core is distributed with ABSOLUTELY NO WARRANTY.
 (/-_--_-\)           www.tinycorelinux.net

docker@sw-worker-1:~$ docker swarm join --token SWMTKN-1-1rcp2rvr3e263y7htid8htusfqetzpu5d7r57ueoxzkika14pf-damdrsza3bp1h76nfpbtrf6ph 192.168.99.111:2377
This node joined a swarm as a worker.
docker@sw-worker-1:~$ exit
logout
bext@bext-VPCF13WFX:~$ docker-machine ssh sw-worker-2
   ( '>')
  /) TC (\   Core is distributed with ABSOLUTELY NO WARRANTY.
 (/-_--_-\)           www.tinycorelinux.net

docker@sw-worker-2:~$ docker swarm join --token SWMTKN-1-1rcp2rvr3e263y7htid8htusfqetzpu5d7r57ueoxzkika14pf-damdrsza3bp1h76nfpbtrf6ph 192.168.99.111:2377
This node joined a swarm as a worker.
docker@sw-worker-2:~$ exit
logout
bext@bext-VPCF13WFX:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
pdlp62hc1wqzo5wuubp5ailmx *   sw-master           Ready               Active              Leader              18.09.7
qfc9sx1tfe3wms1oera75f80b     sw-worker-1         Ready               Active                                  18.09.7
v28filrne0x5fua7r7rxh6zew     sw-worker-2         Ready               Active                                  18.09.7

We install a web visualizer:
bext@bext-VPCF13WFX:~$ docker service create --name=viz --publish=8080:8080 \
--constraint=node.role==manager \
--mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock dockersamples/visualizer
1k6akx8sh6yrod7zsi5cvf71a
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service converged 
bext@bext-VPCF13WFX:~$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                             PORTS
1k6akx8sh6yr        viz                 replicated          1/1                 dockersamples/visualizer:latest   *:8080->8080/tcp
bext@bext-VPCF13WFX:~$ docker service ps viz
ID                  NAME                IMAGE                             NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
n1ulgayr7t0k        viz.1               dockersamples/visualizer:latest   sw-master           Running             Running about a minute ago 
 
The interesting thing is that we can reach the service from any of the three swarm IPs.
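This is Swarm's ingress routing mesh at work: a published port is opened on every node, and requests are routed to a node that actually runs a task. A sketch to verify it from the host, using the IPs shown by docker-machine ls:

```shell
# Each node answers on 8080 even though viz only runs on the manager
for ip in 192.168.99.111 192.168.99.112 192.168.99.113; do
  curl -s -o /dev/null -w "%{http_code} from $ip\n" "http://$ip:8080"
done
```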




Benchmarking the load balancer

We install a simple web service

bext@bext-VPCF13WFX:~$ docker service create --name=web --publish=80:80 nginx
h5mriknsbuvoj3veols35xqcx
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service converged 


bext@bext-VPCF13WFX:~$ curl http://192.168.99.111
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
bext@bext-VPCF13WFX:~$ 

We scale it to two containers

bext@bext-VPCF13WFX:~$ docker service scale web=2
web scaled to 2
overall progress: 2 out of 2 tasks 
1/2: running   
2/2: running   
verify: Service converged 
bext@bext-VPCF13WFX:~$ 
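The scaled service can also be declared in a stack file, in the style of the web.yml shown earlier (a sketch; only the image and port come from this lab, the rest is the standard version 3 syntax):

```yaml
version: "3.1"

services:
  web:
    image: nginx
    ports:
      - 80:80
    deploy:
      replicas: 2
```

Deploying this with `docker stack deploy -c web.yml web` gives the same two-replica service without a separate `docker service scale` step.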



With Apache Benchmark we measure its performance, then scale back down to a single container and compare the results.

bext@bext-VPCF13WFX:~$ ab -n 10000 -c 50 http://192.168.99.111/
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.99.111 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.17.0
Server Hostname:        192.168.99.111
Server Port:            80

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      50
Time taken for tests:   7.719 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      8450000 bytes
HTML transferred:       6120000 bytes
Requests per second:    1295.48 [#/sec] (mean)
Time per request:       38.596 [ms] (mean)
Time per request:       0.772 [ms] (mean, across all concurrent requests)
Transfer rate:          1069.03 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1   16   8.3     15      82
Processing:     5   22   8.5     19      82
Waiting:        5   22   8.4     19      79
Total:         10   38  13.1     34     132

Percentage of the requests served within a certain time (ms)
  50%     34
  66%     39
  75%     42
  80%     43
  90%     52
  95%     68
  98%     79
  99%     86
 100%    132 (longest request)
bext@bext-VPCF13WFX:~$ docker service scale web=1
web scaled to 1
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service converged 
 
 bext@bext-VPCF13WFX:~$ ab -n 10000 -c 50 http://192.168.99.111/
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.99.111 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.17.0
Server Hostname:        192.168.99.111
Server Port:            80

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      50
Time taken for tests:   75.756 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      8450000 bytes
HTML transferred:       6120000 bytes
Requests per second:    132.00 [#/sec] (mean)
Time per request:       378.778 [ms] (mean)
Time per request:       7.576 [ms] (mean, across all concurrent requests)
Transfer rate:          108.93 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1  365 508.8     16    3064
Processing:     1   13   4.3     15      33
Waiting:        1   13   4.5     15      33
Total:          3  379 505.8     33    3085

Percentage of the requests served within a certain time (ms)
  50%     33
  66%   1006
  75%   1019
  80%   1023
  90%   1029
  95%   1034
  98%   1039
  99%   1047
 100%   3085 (longest request)

Looking at the Transfer rate row (1069.03 vs. 108.93 Kbytes/sec) and at Requests per second (1295.48 vs. 132.00), the run against two containers is substantially faster in this test.
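The size of the difference can be read straight off the two ApacheBench reports; a quick check of the ratios (numbers taken from the runs above):

```java
public class Speedup {
    public static void main(String[] args) {
        // Requests per second reported by ab: two replicas vs. one replica
        double twoReplicas = 1295.48;
        double oneReplica = 132.00;
        System.out.printf("requests/sec speedup: %.1fx%n", twoReplicas / oneReplica);

        // Transfer rate (Kbytes/sec) shows the same ratio
        double xfer2 = 1069.03, xfer1 = 108.93;
        System.out.printf("transfer-rate speedup: %.1fx%n", xfer2 / xfer1);
    }
}
```

Both ratios come out close to 10x, consistent with the single-replica run spending most of its time queueing connections (note the 3-second max connect times in the second report).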



eot

Friday, June 28, 2019

Spring Cloud Netflix, Microservice, Load Balance (Ribbon), Java, Part 5

Verifying Balanced Service Assignment

References
https://cloud.spring.io/spring-cloud-static/spring-cloud.html


  To verify which Producer service instance was assigned to each Consumer service, we modify the services so they report the port they are running on and the Producer port they were assigned to.

  To do this we define a static class named Info in the com.bext.model package, which serves as transport within the application, capturing its values when the service starts. The same static Info class is then used by the REST entry point /info, which displays the information for both producer and consumer services.
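A minimal sketch of such an Info holder (field and method names here are hypothetical; the actual code is in the commit linked below):

```java
// Hypothetical sketch of the static Info transport class described above;
// the real field names are in the linked commit.
public class Info {
    // captured at service startup
    public static String serverPort;   // value of server.port, may be null
    public static String producerUri;  // producer instance this consumer resolved

    // what the REST /info endpoint would render
    public static String render() {
        return "server.port:" + serverPort + " producer:" + producerUri;
    }

    public static void main(String[] args) {
        serverPort = "8092";
        producerUri = "http://android-ae23f0022eea:8081";
        System.out.println(render());
    }
}
```

Because the fields are static, the values set at startup are visible to the /info controller without any wiring.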

These changes can be seen here:
https://github.com/jalbertomr/SpringCloudNetflix/commit/a20295bf6deed4e3321d870d1278c093ae4031d0



 Now, to get the port and assignment information for producer and consumer services alike, it is enough to click the service names on our Eureka server's page. Here is a summary of the service information returned.


We can now see two producers on ports 8080 and 8081, and four consumers on ports 8092, 8094, 8093, and 8091, which were bound to different producers.
The reason null appears while the browser address bar shows 8080 is that the REST /info response reads the value assigned to the server.port key in application.properties; since no value was assigned, the service defaults to port 8080 while the property read returns null.
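Assigning the key explicitly avoids the null, e.g. in the producer's application.properties:

```properties
# set the port explicitly so /info reports it instead of null
server.port=8081
```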

Console log of a producer service:

2019-06-28 20:49:12.697  INFO 824 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@35cabb2a: startup date [Fri Jun 28 20:49:12 CDT 2019]; root of context hierarchy
2019-06-28 20:49:13.287  INFO 824 --- [           main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-06-28 20:49:13.380  INFO 824 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$5d0b4f48] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.4.1.RELEASE)

2019-06-28 20:49:14.954  INFO 824 --- [           main] com.bext.SpringBootProducerApplication   : No active profile set, falling back to default profiles: default
2019-06-28 20:49:15.001  INFO 824 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@413f69cc: startup date [Fri Jun 28 20:49:15 CDT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@35cabb2a
2019-06-28 20:49:16.795  WARN 824 --- [           main] o.s.c.a.ConfigurationClassPostProcessor  : Cannot enhance @Configuration bean definition 'refreshScope' since its singleton instance has been created too early. The typical cause is a non-static @Bean method with a BeanDefinitionRegistryPostProcessor return type: Consider declaring such methods as 'static'.
2019-06-28 20:49:17.026  INFO 824 --- [           main] o.s.cloud.context.scope.GenericScope     : BeanFactory id=68326ab6-45be-32d5-b3dd-64da0e63c72e
2019-06-28 20:49:17.042  INFO 824 --- [           main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-06-28 20:49:17.089  INFO 824 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$5d0b4f48] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-06-28 20:49:17.795  INFO 824 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
2019-06-28 20:49:17.827  INFO 824 --- [           main] o.apache.catalina.core.StandardService   : Starting service Tomcat
2019-06-28 20:49:17.834  INFO 824 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/8.5.5
2019-06-28 20:49:18.147  INFO 824 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-06-28 20:49:18.147  INFO 824 --- [ost-startStop-1] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 3146 ms
2019-06-28 20:49:18.696  INFO 824 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean  : Mapping servlet: 'dispatcherServlet' to [/]
2019-06-28 20:49:18.712  INFO 824 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2019-06-28 20:49:18.712  INFO 824 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2019-06-28 20:49:18.712  INFO 824 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2019-06-28 20:49:18.712  INFO 824 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2019-06-28 20:49:20.813  INFO 824 --- [           main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@413f69cc: startup date [Fri Jun 28 20:49:15 CDT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@35cabb2a
2019-06-28 20:49:20.964  INFO 824 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/info],methods=[GET]}" onto public java.lang.String com.bext.controllers.TestController.infoPage()
2019-06-28 20:49:20.964  INFO 824 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/empleado],methods=[GET]}" onto public com.bext.model.Employee com.bext.controllers.TestController.firstPage()
2019-06-28 20:49:20.980  INFO 824 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2019-06-28 20:49:20.980  INFO 824 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
2019-06-28 20:49:21.089  INFO 824 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2019-06-28 20:49:21.089  INFO 824 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2019-06-28 20:49:21.198  INFO 824 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2019-06-28 20:49:21.951  WARN 824 --- [           main] c.n.c.sources.URLConfigurationSource     : No URLs will be polled as dynamic configuration sources.
2019-06-28 20:49:21.951  INFO 824 --- [           main] c.n.c.sources.URLConfigurationSource     : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2019-06-28 20:49:21.957  WARN 824 --- [           main] c.n.c.sources.URLConfigurationSource     : No URLs will be polled as dynamic configuration sources.
2019-06-28 20:49:21.957  INFO 824 --- [           main] c.n.c.sources.URLConfigurationSource     : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2019-06-28 20:49:22.176  INFO 824 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2019-06-28 20:49:22.207  INFO 824 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Bean with name 'refreshScope' has been autodetected for JMX exposure
2019-06-28 20:49:22.207  INFO 824 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Bean with name 'environmentManager' has been autodetected for JMX exposure
2019-06-28 20:49:22.207  INFO 824 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Bean with name 'configurationPropertiesRebinder' has been autodetected for JMX exposure
2019-06-28 20:49:22.226  INFO 824 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Located managed bean 'environmentManager': registering with JMX server as MBean [org.springframework.cloud.context.environment:name=environmentManager,type=EnvironmentManager]
2019-06-28 20:49:22.257  INFO 824 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Located managed bean 'refreshScope': registering with JMX server as MBean [org.springframework.cloud.context.scope.refresh:name=refreshScope,type=RefreshScope]
2019-06-28 20:49:22.289  INFO 824 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Located managed bean 'configurationPropertiesRebinder': registering with JMX server as MBean [org.springframework.cloud.context.properties:name=configurationPropertiesRebinder,context=413f69cc,type=ConfigurationPropertiesRebinder]
2019-06-28 20:49:22.773  INFO 824 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Starting beans in phase 0
2019-06-28 20:49:22.789  INFO 824 --- [           main] o.s.c.n.eureka.InstanceInfoFactory       : Setting initial instance status as: STARTING
2019-06-28 20:49:25.957  INFO 824 --- [           main] c.n.e.EurekaDiscoveryClientConfiguration : Registering application empleado-productor with eureka with status UP
2019-06-28 20:49:26.471  INFO 824 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2019-06-28 20:49:26.487  INFO 824 --- [           main] c.n.e.EurekaDiscoveryClientConfiguration : Updating port to 8080
2019-06-28 20:49:26.502  INFO 824 --- [           main] com.bext.SpringBootProducerApplication   : Started SpringBootProducerApplication in 15.734 seconds (JVM running for 18.119)
server.port:null
2019-06-28 20:50:32.820  INFO 824 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring FrameworkServlet 'dispatcherServlet'
2019-06-28 20:50:32.820  INFO 824 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet 'dispatcherServlet': initialization started
2019-06-28 20:50:32.877  INFO 824 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet 'dispatcherServlet': initialization completed in 57 ms


Console log of a consumer service:

2019-06-28 21:01:46.662  INFO 7740 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@564fabc8: startup date [Fri Jun 28 21:01:46 CDT 2019]; root of context hierarchy
2019-06-28 21:01:47.215  INFO 7740 --- [           main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-06-28 21:01:47.309  INFO 7740 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$464605d8] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.4.1.RELEASE)

2019-06-28 21:01:48.980  INFO 7740 --- [           main] com.bext.SpringBootConsumerApplication   : No active profile set, falling back to default profiles: default
2019-06-28 21:01:49.012  INFO 7740 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@3bd323e9: startup date [Fri Jun 28 21:01:49 CDT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@564fabc8
2019-06-28 21:01:50.484  INFO 7740 --- [           main] o.s.b.f.s.DefaultListableBeanFactory     : Overriding bean definition for bean 'consumerControllerClient' with a different definition: replacing [Generic bean: class [com.bext.controllers.ConsumerControllerClient]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in file [C:\Users\Bext\workspace\employee-consumer\target\classes\com\bext\controllers\ConsumerControllerClient.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=springBootConsumerApplication; factoryMethodName=consumerControllerClient; initMethodName=null; destroyMethodName=(inferred); defined in com.bext.SpringBootConsumerApplication]
2019-06-28 21:01:50.900  WARN 7740 --- [           main] o.s.c.a.ConfigurationClassPostProcessor  : Cannot enhance @Configuration bean definition 'refreshScope' since its singleton instance has been created too early. The typical cause is a non-static @Bean method with a BeanDefinitionRegistryPostProcessor return type: Consider declaring such methods as 'static'.
2019-06-28 21:01:51.166  INFO 7740 --- [           main] o.s.cloud.context.scope.GenericScope     : BeanFactory id=e1c2c950-3593-3497-a05d-b5a8a8d58f26
2019-06-28 21:01:51.213  INFO 7740 --- [           main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-06-28 21:01:51.307  INFO 7740 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$464605d8] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-06-28 21:01:52.153  INFO 7740 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8092 (http)
2019-06-28 21:01:52.182  INFO 7740 --- [           main] o.apache.catalina.core.StandardService   : Starting service Tomcat
2019-06-28 21:01:52.182  INFO 7740 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/8.5.5
2019-06-28 21:01:52.495  INFO 7740 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-06-28 21:01:52.510  INFO 7740 --- [ost-startStop-1] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 3498 ms
2019-06-28 21:01:52.950  INFO 7740 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean  : Mapping servlet: 'dispatcherServlet' to [/]
2019-06-28 21:01:52.966  INFO 7740 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2019-06-28 21:01:52.966  INFO 7740 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2019-06-28 21:01:52.966  INFO 7740 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2019-06-28 21:01:52.966  INFO 7740 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2019-06-28 21:01:53.937  INFO 7740 --- [           main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@3bd323e9: startup date [Fri Jun 28 21:01:49 CDT 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@564fabc8
2019-06-28 21:01:54.089  INFO 7740 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/info],methods=[GET]}" onto public java.lang.String com.bext.controllers.TestController.infoPage()
2019-06-28 21:01:54.105  INFO 7740 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2019-06-28 21:01:54.105  INFO 7740 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
2019-06-28 21:01:54.199  INFO 7740 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2019-06-28 21:01:54.199  INFO 7740 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2019-06-28 21:01:54.324  INFO 7740 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2019-06-28 21:01:56.328  WARN 7740 --- [           main] c.n.c.sources.URLConfigurationSource     : No URLs will be polled as dynamic configuration sources.
2019-06-28 21:01:56.328  INFO 7740 --- [           main] c.n.c.sources.URLConfigurationSource     : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2019-06-28 21:01:56.345  WARN 7740 --- [           main] c.n.c.sources.URLConfigurationSource     : No URLs will be polled as dynamic configuration sources.
2019-06-28 21:01:56.345  INFO 7740 --- [           main] c.n.c.sources.URLConfigurationSource     : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2019-06-28 21:01:56.595  INFO 7740 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2019-06-28 21:01:56.626  INFO 7740 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Bean with name 'refreshScope' has been autodetected for JMX exposure
2019-06-28 21:01:56.626  INFO 7740 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Bean with name 'environmentManager' has been autodetected for JMX exposure
2019-06-28 21:01:56.641  INFO 7740 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Bean with name 'configurationPropertiesRebinder' has been autodetected for JMX exposure
2019-06-28 21:01:56.641  INFO 7740 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Located managed bean 'environmentManager': registering with JMX server as MBean [org.springframework.cloud.context.environment:name=environmentManager,type=EnvironmentManager]
2019-06-28 21:01:56.688  INFO 7740 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Located managed bean 'refreshScope': registering with JMX server as MBean [org.springframework.cloud.context.scope.refresh:name=refreshScope,type=RefreshScope]
2019-06-28 21:01:56.720  INFO 7740 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Located managed bean 'configurationPropertiesRebinder': registering with JMX server as MBean [org.springframework.cloud.context.properties:name=configurationPropertiesRebinder,context=3bd323e9,type=ConfigurationPropertiesRebinder]
2019-06-28 21:01:57.379  INFO 7740 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8092 (http)
2019-06-28 21:01:57.392  INFO 7740 --- [           main] com.bext.SpringBootConsumerApplication   : Started SpringBootConsumerApplication in 12.718 seconds (JVM running for 14.915)
com.bext.controllers.ConsumerControllerClient@70e13fa
server.port:8092
2019-06-28 21:01:57.408  INFO 7740 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@6ff415ad: startup date [Fri Jun 28 21:01:57 CDT 2019]; parent: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@3bd323e9
2019-06-28 21:01:57.502  INFO 7740 --- [           main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-06-28 21:01:57.907  INFO 7740 --- [           main] c.netflix.config.ChainedDynamicProperty  : Flipping property: empleado-productor.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-06-28 21:01:57.954  INFO 7740 --- [           main] c.n.u.concurrent.ShutdownEnabledTimer    : Shutdown hook installed for: NFLoadBalancer-PingTimer-empleado-productor
2019-06-28 21:01:58.016  INFO 7740 --- [           main] c.netflix.loadbalancer.BaseLoadBalancer  : Client:empleado-productor instantiated a LoadBalancer:DynamicServerListLoadBalancer:{NFLoadBalancer:name=empleado-productor,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
2019-06-28 21:01:58.032  INFO 7740 --- [           main] c.n.l.DynamicServerListLoadBalancer      : Using serverListUpdater PollingServerListUpdater
2019-06-28 21:01:58.063  INFO 7740 --- [           main] o.s.c.n.eureka.InstanceInfoFactory       : Setting initial instance status as: STARTING
2019-06-28 21:02:01.495  INFO 7740 --- [           main] c.netflix.config.ChainedDynamicProperty  : Flipping property: empleado-productor.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-06-28 21:02:01.506  INFO 7740 --- [           main] c.n.l.DynamicServerListLoadBalancer      : DynamicServerListLoadBalancer for client empleado-productor initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=empleado-productor,current list of Servers=[android-ae23f0022eea:8081, android-ae23f0022eea:8080],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone;    Instance count:2;    Active connections count: 0;    Circuit breaker tripped count: 0;    Active connections per server: 0.0;]
},Server stats: [[Server:android-ae23f0022eea:8080;    Zone:defaultZone;    Total Requests:0;    Successive connection failure:0;    Total blackout seconds:0;    Last connection made:Wed Dec 31 18:00:00 CST 1969;    First connection made: Wed Dec 31 18:00:00 CST 1969;    Active Connections:0;    total failure count in last (1000) msecs:0;    average resp time:0.0;    90 percentile resp time:0.0;    95 percentile resp time:0.0;    min resp time:0.0;    max resp time:0.0;    stddev resp time:0.0]
, [Server:android-ae23f0022eea:8081;    Zone:defaultZone;    Total Requests:0;    Successive connection failure:0;    Total blackout seconds:0;    Last connection made:Wed Dec 31 18:00:00 CST 1969;    First connection made: Wed Dec 31 18:00:00 CST 1969;    Active Connections:0;    total failure count in last (1000) msecs:0;    average resp time:0.0;    90 percentile resp time:0.0;    95 percentile resp time:0.0;    min resp time:0.0;    max resp time:0.0;    stddev resp time:0.0]
]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList@6fcd31c3
serviceInstance.getUri():http://android-ae23f0022eea:8081
{"empId":"1","name":"emp1","designation":"manager","salary":3000.0}
2019-06-28 21:02:31.498  INFO 7740 --- [erListUpdater-0] c.netflix.config.ChainedDynamicProperty  : Flipping property: empleado-productor.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-06-28 21:02:55.950  INFO 7740 --- [nio-8092-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring FrameworkServlet 'dispatcherServlet'
2019-06-28 21:02:55.950  INFO 7740 --- [nio-8092-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet 'dispatcherServlet': initialization started
2019-06-28 21:02:56.012  INFO 7740 --- [nio-8092-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet 'dispatcherServlet': initialization completed in 62 ms


eot