Vertical scalability
Last updated
Vertical scalability, also known as scaling up, refers to the process of increasing the capacity of a single server or system by adding more resources, such as CPU, memory (RAM), or storage. Instead of distributing the workload across multiple machines, vertical scaling enhances the power of one machine to handle more tasks.
Key features of vertical scalability:
Increased capacity: By upgrading the hardware (e.g., faster processors, more memory), a system can handle more data or users.
Single node: All processing and workload are managed on one server or machine.
Simplicity: Vertical scaling is simpler than horizontal scaling (adding more machines), as it avoids the complexity of distributed systems.
Limitations: There's a hardware limit to how much a machine can be upgraded. Once those limits are reached, vertical scaling is no longer feasible.
Vertical scalability is commonly used in scenarios where system architecture relies on a single server, but it can become expensive and has physical resource limitations compared to horizontal scaling.
Here is a simple example of what a vertical scalability setup can look like:
To set up vertical scalability using NGINX as a load balancer and host our WebMap application with Docker, follow the steps below.
Before you begin, ensure you have the following:
Docker installed on your system.
Basic knowledge of Docker and containerization concepts.
The web applications you want to load balance running in separate containers.
Set up docker environment
If you haven't already, install Docker on your system. You can download it from the Docker Engine page. Once Docker is installed, start the Docker daemon.
Create NGINX Load Balancer Container
Before creating the NGINX load balancer container, we need to create a local Docker network so the load balancer and the application containers can reach each other by container name. Execute the following command:
The network name will be `load-balancer`.
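The step above can be sketched as follows (the network name `load-balancer` is taken from the text; a default bridge driver is assumed):

```shell
# Create a user-defined bridge network named "load-balancer".
# Containers attached to it can resolve each other by container name.
docker network create load-balancer

# Verify that the network was created
docker network ls --filter name=load-balancer
```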
In this step, we will create an NGINX container to act as a load balancer. You can use the official NGINX image from Docker Hub.
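Based on the flags explained below, the command likely looks like the following sketch; `<EXTERNAL_PORT>`, `<INTERNAL_PORT>`, and `<NGINX_NAME_CONTAINER>` are the placeholders used in the explanation, and the `--network` flag (an addition here) attaches the container to the `load-balancer` network created earlier:

```shell
# Run the official NGINX image, detached, on the load-balancer network.
# Replace the <...> placeholders with real values before running.
docker run -d \
  -p <EXTERNAL_PORT>:<INTERNAL_PORT> \
  --name <NGINX_NAME_CONTAINER> \
  --network load-balancer \
  nginx
```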
Explanation:
`-d`: Run the container in detached mode.
`-p <EXTERNAL_PORT>:<INTERNAL_PORT>`: Map port <EXTERNAL_PORT> on the host to port <INTERNAL_PORT> in the container.
`--name <NGINX_NAME_CONTAINER>`: Assign a name to the container.
`nginx`: Pull and run the official NGINX image.
Configure NGINX for Load Balancing
NGINX configuration is essential for load balancing. You need to create an NGINX configuration file and mount it into the container.
Create an NGINX configuration file, for example `nginx.conf`, with the load balancing settings. Below is an example sketch for a WebMap app; the backend names match the application containers referenced later, while the ports are assumptions you should adjust:
``` nginx.conf
# Minimal round-robin load balancer sketch.
# Port 80 for the backends is an assumption -- change it to the port
# your WebMap containers actually listen on.
events {}

http {
    upstream webmap_backend {
        server webmap-test-1:80;
        server webmap-test-2:80;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://webmap_backend;
        }
    }
}
```
Now, start the NGINX container with the custom configuration file:
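A sketch of this step, reusing the placeholders from above (removing the earlier container first is an assumption, needed only if it is still running):

```shell
# Remove the previous NGINX container if it is still running
docker rm -f <NGINX_NAME_CONTAINER>

# Start NGINX with the custom configuration mounted read-only
docker run -d \
  -p <EXTERNAL_PORT>:80 \
  --name <NGINX_NAME_CONTAINER> \
  --network load-balancer \
  -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx
```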
- `-v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro`: Mount the configuration file into the container in read-only mode.
Run WebMap Application Containers
Ensure that your backend application containers are running. Replace `webmap-test-1` and `webmap-test-2` in the NGINX configuration with the actual names or IP addresses of your application containers.
You can create a WebMap Container following these steps: Containerizing WebMap app.
Test Load Balancer
Access the NGINX load balancer through your web browser or a tool like `curl`. NGINX will distribute incoming requests to your backend servers in a round-robin fashion.
Test the load balancing:
Open your web browser and navigate to: http://localhost:<EXTERNAL_PORT>
You should see responses from your application containers indicating that the load balancer is working.
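From the command line, the same check can be done with `curl` (the loop is illustrative; replace `<EXTERNAL_PORT>` with the port you mapped):

```shell
# Send several requests; with round-robin balancing, responses should
# alternate between the two backend WebMap containers.
for i in 1 2 3 4; do
  curl -s http://localhost:<EXTERNAL_PORT>/
done
```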
This setup can be easily expanded to include more backend servers and customize load balancing strategies to suit your application's needs.