Ansible Notes - Day 5
-------------------------
LB and reverse proxy
--------------------
How to configure a load balancer and reverse proxy.
Web server
Apache (httpd) -> webserver:80 -> web page (HTML)
-> we have an Apache web server running on port 80.
Use case:
Users within the organization or from remote sites access the site.
One server has limited resources such as CPU/memory, so it can handle only a certain number of connections.
If the user base keeps growing, you have to upgrade the server. But what if you have millions of users/visitors to your site?
This is a big challenge: we have one single server and it is already working at full capacity.
What can we do to increase the limit?
We have already raised the maximum limit on that server, so how else can we solve it?
We can launch new server instances (VMs, containers) with new IPs and configure the web server the same way as the first one.
Challenges?
You now have two servers with two IPs, so you have to tell your users to connect to the new IP as well; they have two different IPs to access the site. As user growth continues, you keep adding new IPs. This does not scale.
Say you have thousands of servers; you cannot hand out every IP to your users. That would be hectic for them. To get rid of this problem, we put one server in front, say with IP .100.
Users connect to .100. Any request that comes to .100 is forwarded to a web server.
The web server thinks .100 is the client and returns the response to .100, and .100 forwards the response back to the user.
.100 is never a web server or a client itself. It is a proxy, acting as both a proxy and a reverse proxy:
a server serving on another's behalf.
Firewall
On the web servers, you don't have to allow all IP addresses; you can allow only .100, since requests always come through .100.
So you enable only .100 on the firewall.
This gives us an extra level of security.
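For example, on each web server you could restrict HTTP to the proxy only. This is a sketch assuming firewalld is in use and the proxy's IP is 192.168.56.5 (the ".100" of this example is generic):

```shell
# On each web server: accept port 80 traffic only from the proxy/load balancer.
# The proxy IP 192.168.56.5 is an assumption based on this lab's addressing.
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.5" port port="80" protocol="tcp" accept'
firewall-cmd --reload
```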
.100 [ proxy server ]
  -- webserver1
  -- webserver2
  -- webserver3
- all requests come to .100, which passes the traffic on to a web server.
- when the load increases, add a new web server behind the reverse proxy.
- the first request goes to webserver1, the second goes to webserver2, and so on.
- so it is balancing the load between the servers.
- if the load increases, we can add a new web server, register it with the proxy, and it is immediately available for service.
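The round-robin behaviour described above can be sketched in shell: request i goes to server i mod N. The server names here are just placeholders:

```shell
# Round-robin: cycle requests through the list of backend servers.
servers=(webserver1 webserver2 webserver3)
for req in 0 1 2 3 4 5; do
  # request N is handled by server (N mod number-of-servers)
  echo "request $req -> ${servers[$((req % ${#servers[@]}))]}"
done
```

Requests 0, 3 land on webserver1; 1, 4 on webserver2; 2, 5 on webserver3. Adding a server to the list automatically spreads the load further.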
For this kind of setup, we can use Ansible to do the configuration.
In a real scenario, the .100 server is given a name, such as a host or domain name.
These names might not point to a real physical server; they may just be load balancers, and they don't go down.
====================
# yum install php
# systemctl restart httpd    (restart so Apache picks up the PHP module)
# cd /var/www/html
# vi index.php
<pre>
<?php
print `ifconfig`;
?>
</pre>
========================
Configure LB and proxy
Step 1. Install haproxy - it comes on the DVD.
# yum install haproxy
Step 2. Configure haproxy.
# vi /etc/haproxy/haproxy.cfg
frontend main
    bind *:5000
Change the port to 8080:
frontend main
    #bind *:5000
    bind *:8080
Scroll down and you will see the backend section:
    default_backend app
backend app
    balance roundrobin
    server webserver1 192.168.56.6:80 check
    server webserver2 192.168.56.7:80 check
Here you define the list of all backend servers.
[root@master ~]# systemctl start haproxy
[root@master ~]# systemctl enable haproxy
Now go to the web servers and create a test page on each.
Get the load balancer's IP address and try to access:
http://192.168.56.5:8080/
Keep refreshing the page; you will see different content as requests alternate between the web servers.
We configured all of this manually. With Ansible, you could target the hosts directly:
- hosts: 192.168.56.6,192.168.56.7   # "all" to install on every host, or list hosts individually
  tasks:
There is an option in the inventory to group hosts and give the group a name, say web or load balancer:
[mylb]
master ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh
[myweb]
worker1 ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh
worker2 ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh
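An optional refinement (same credentials assumed as above): Ansible's INI inventory supports a `[group:vars]` section, so the connection settings don't have to be repeated on every host line:

```ini
[myweb]
worker1
worker2

[myweb:vars]
ansible_user=root
ansible_ssh_pass=changeme
ansible_connection=ssh
```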
# cat lb.yaml
- hosts: myweb    # myweb comes from the inventory group
  tasks:
  - package:
      name: "httpd"

- hosts: mylb
  tasks:
  - package:
      name: "haproxy"
# ansible-playbook lb.yaml
Step 2: push the haproxy configuration.
Step 3: manage the haproxy service.
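Steps 2 and 3 can be sketched by extending the mylb play in lb.yaml. This is a minimal sketch; the file name haproxy.cfg (a prepared config sitting next to the playbook on the control node) is an assumption:

```yaml
- hosts: mylb
  tasks:
  - package:
      name: "haproxy"
  - copy:                            # step 2: push the prepared config
      src: haproxy.cfg               # assumed to sit next to lb.yaml
      dest: /etc/haproxy/haproxy.cfg
    notify: restart haproxy
  - service:                         # step 3: start and enable the service
      name: haproxy
      state: started
      enabled: yes
  handlers:
  - name: restart haproxy
    service:
      name: haproxy
      state: restarted
```

The handler restarts haproxy only when the copied config actually changes.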
Let's say you configure the load balancer on .5 and the web servers on .6/.7.
Say the load goes up and you need to configure a new web server.
The only thing you have to do is add the new IP address under the web group in the inventory, and it will be configured as a web server.
The challenge is that you still have to go to the load balancer and add an entry for the new IP to the backend app.
With the help of Ansible, when a new IP is added to the inventory, the haproxy config can be updated as well.
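One way to do this is to generate the backend section from the myweb inventory group with a Jinja2 template. This is a sketch: the template file name haproxy.cfg.j2 is an assumption, and it relies on the inventory host names (worker1, worker2, ...) resolving to their IPs, as they do via /etc/hosts in this lab. Only the backend section of the template is shown:

```
# haproxy.cfg.j2 (backend section only; the rest of the file stays as-is)
backend app
    balance roundrobin
{% for host in groups['myweb'] %}
    server {{ host }} {{ host }}:80 check
{% endfor %}
```

The play then renders the template and restarts haproxy only when the result changes, so adding a host under [myweb] is all it takes:

```yaml
- hosts: mylb
  tasks:
  - template:
      src: haproxy.cfg.j2
      dest: /etc/haproxy/haproxy.cfg
    notify: restart haproxy
  handlers:
  - name: restart haproxy
    service:
      name: haproxy
      state: restarted
```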
Accessing HAProxy stats page
---------------------------
Configuration of haproxy config file
[root@master ~]# cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# https://www.haproxy.org/download/1.8/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    # utilize system-wide crypto-policies
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main
    bind *:8080
    #bind *:5000
    acl url_static  path_beg  -i /static /images /javascript /stylesheets
    acl url_static  path_end  -i .jpg .gif .png .css .js

    use_backend static      if url_static
    default_backend         app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin
    server webserver1 192.168.56.6:80 check
    server webserver2 192.168.56.7:80 check

frontend stats
    bind *:8084
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if LOCALHOST
[root@master ~]#
Now access the stats page by browsing to the master node's (proxy server's) IP on port 8084 at /stats:
http://master:8084/stats
You will see the web interface.
-----------------------------------------------
[root@master ~]# ansible --version
ansible 2.9.14
config file = /etc/ansible/ansible.cfg
[root@master ~]# more /etc/ansible/ansible.cfg
[defaults]
inventory = /root/myhosts
[root@master ~]# cat myhosts
#[masterserver]
#master ansible_user=sam
#[WebServer]
#worker1
[mylb]
master ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh
[myweb]
worker1 ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh
worker2 ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh
[root@master ~]# cat /etc/hosts
192.168.56.5 master master.expanor.local
192.168.56.6 worker1 worker1.expanor.local
192.168.56.7 worker2 worker2.expanor.local
[root@master ~]# cat lb.yaml
- hosts: myweb    # myweb comes from the inventory group
  tasks:
  - package:
      name: "httpd"

- hosts: mylb
  tasks:
  - package:
      name: "haproxy"