added docker networking
SteveSayantan committed Aug 29, 2024
1 parent 3fc04b8 commit 913fab9
Showing 23 changed files with 681 additions and 12 deletions.
4 changes: 3 additions & 1 deletion SDLC_DevOps.md
- Testing: Application stored in VCS (say Git) is deployed in a server. It is tested by Quality Assurance Engineers.
- Deployment: The App is pushed to the production environment.

Using DevOps, our aim is to make these three processes faster through automation.

![devops](./assets/Devops1.jpeg)
24 changes: 13 additions & 11 deletions ansible/ansible-usage.md
Now, we can run a command for all the hosts under a particular group.
## Write Playbook
Playbooks are used to perform multiple tasks. In the following playbook, we install and start nginx.

- First, we need to create a YAML file, say *playbook.yml* . It starts with **---** .

- Maintain proper indentation.

```yaml
---
# ... (the play header lines are collapsed in the diff view)

  tasks: # now we specify tasks to be performed

    - name: Install nginx # we can give any name to the task
      apt: # we want to use the apt module
        name: nginx # name of the package
        state: present # to install nginx

    - name: Start nginx # this is the name of our second task
      service: # we want to use the service module
        name: nginx # we are interested in the nginx service
        state: started # to start the service

# we can write multiple playbooks in a single file as shown

- name: Second playbook
  ...
```

- Now, we execute this playbook on the main server using the **ansible-playbook** command:

`ansible-playbook -i ./inventory playbook.yml`

- Check the status of nginx by `systemctl status nginx`

- Stop nginx by `sudo systemctl stop nginx`.
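
The `-i ./inventory` flag above points at an inventory file. A minimal sketch of what it might look like (the group name, IPs, and SSH user below are hypothetical, not from this repo):

```ini
; ./inventory — a minimal static inventory (hypothetical hosts)
[webservers]
192.0.2.10
192.0.2.11

[webservers:vars]
ansible_user=ubuntu
```

With this file, `ansible-playbook -i ./inventory playbook.yml` runs the tasks on every host listed under the targeted group.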

## Ansible Roles
Roles are an efficient way to organize complex playbooks.
Binary file added assets/Devops1.jpeg
Binary file added assets/bridge1.png
Binary file added assets/container&vm.png
Binary file added assets/docker-flow.png
Binary file added assets/docker_arch.JPG
Binary file added assets/docker_components.png
40 changes: 40 additions & 0 deletions cicd-basics/basics.md
- CI stands for Continuous Integration, CD stands for Continuous Delivery.

- CI is the practice of automatically running the set of tools and checks that every change must pass before the delivery of the application.

- CD is the process of deploying our app on a specific platform.

- Before delivering an app, a team typically performs steps such as:
- Unit Testing
- Static Code Analysis : Performs syntactical analysis, checks for code formatting/indentation, unused variables etc.
- Code Quality/ Vulnerability Test
- Reports: Stats about test coverage, code quality checks etc.
- Deployment

CI/CD helps automate all these steps. Otherwise, performing them manually for every change in the code would take very long, thereby delaying the delivery.

- Jenkins Pipeline: Suppose some changes are pushed to our GitHub repository. We can set up Jenkins so that, for any PR/commit on the repo, it runs a set of actions automatically with the help of multiple tools. Hence Jenkins is called an orchestrator.

- e.g., for a Java application, Jenkins can be configured to run Maven for building, JUnit for testing, ALM for reporting etc. whenever there is a PR/commit on our repo.

- Whenever there is some new feature added,

- it is first tested in a Dev environment which consists of a minimal server.
- on success, it is now deployed on a Staging environment which consists of more servers than Dev environment but less than Production.
- finally, it is deployed on the Production environment consisting of lots of servers.

Jenkins can automatically promote our application to be deployed from one env to the other.

- Disadvantages of Jenkins

    - While working with Jenkins, we generally do not put all the load on a single machine. Instead, we create a master node and connect several EC2 instances to it. Using the master node, we configure those instances as worker nodes and schedule pipelines to run on them.

    - But this setup does not scale well: it becomes costly as well as hard to maintain.

- Most of the worker nodes may sit idle for a long time.

- To get automatic scale-up and scale-down, we use GitHub Actions.

For every PR, GitHub Actions will spin up a docker container on a remote server for us and everything is executed in it. Once the run finishes, the container is deleted and the server can be used for another project in a different repo. As a result, no resources are wasted.
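
The on-demand runner model described above can be sketched as a minimal GitHub Actions workflow (the file name, branch, and commands below are illustrative assumptions, not taken from this repo):

```yaml
# .github/workflows/ci.yml — a hypothetical minimal CI workflow
name: CI
on:
  pull_request:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest        # GitHub provisions a fresh runner per job
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: echo "replace with your real test command"   # placeholder step
```

Each job runs in a fresh environment that is discarded when the run completes, which is exactly the scale-to-zero behaviour contrasted with the always-on Jenkins workers.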


39 changes: 39 additions & 0 deletions docker-basics/docker-commands.md
1. `docker run image_name` : creates a new container, pulling the image if needed. Use `--name` flag to assign a name e.g. `docker run --name test node` . Use `-d` flag to run container in background and print container ID.

1. `docker run ubuntu echo Hey` : After creating the container, run *echo Hey* in it.

1. `docker run -p 127.0.0.1:80:8080/tcp nginx:alpine` : Creates a new container from the image (i.e. **nginx:alpine** ) and binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host. You can also specify udp and sctp ports. Not specifying an IP address (i.e., `-p 80:80` instead of `-p 127.0.0.1:80:80`) makes Docker publish the port on all interfaces (address `0.0.0.0`).

1. `docker container ls` : lists the currently running containers. To see all containers, use the `-a` flag.

1. `docker ps` : an alias of `docker container ls`.

1. `docker images` : shows all the images present in the local system.

1. `docker pull image_name:tag` : pulls the image with the specified tag from docker hub, e.g. `docker pull ubuntu:16.04`. The default value of tag is *latest* .

1. `docker run -it image_name:tag` : runs the new container in an interactive environment created from the image. We can optionally specify the tag as `docker run -it ubuntu:16.04` . The default value of tag is *latest* .

1. `docker container exec -it container_id bash` : executes the command *bash* in a running container having id *container_id*. This command fails if the container isn't running.

1. `docker stop container1_id container2_id ...` : To stop one or more running containers. We can also use the assigned name instead of container id.

1. `docker rm container1_id container2_id ...` : To remove one or more containers. We can also use the assigned name instead of the container id.

1. `docker start container1_id container2_id ...` : To start one or more stopped containers. We can also use the assigned name instead of container id.

1. `docker container inspect container1_id container2_id ...` : To display info on one or more containers. We can also use the assigned name instead of container id.

1. `docker logs container_id` : To fetch the logs of a container. We can also use the assigned name instead of container id. `docker logs --since 5s container_id` shows the logs of the last 5s.

1. `docker container prune` : To remove all stopped containers. Use `-f` flag to avoid prompt.

1. `docker rmi image1_name image2_name ...` : To remove one or more images from the host.

1. `docker commit -m "commit message" container_id new_image_name:tag` : Creates a new image with the given name from the changes done in the container. We can also optionally provide a tag. To run the newly created image, we use `docker run new_image_name:tag`.

1. `docker build -t username/repo_name:tag path` : Builds an image with the name *username/repo_name*, using the Dockerfile present at *path*. We can also optionally provide a tag (defaults to *latest* ). `username` refers to the one associated with Docker Hub, and `repo_name` refers to the remote repo on Docker Hub where the image will be uploaded.




25 changes: 25 additions & 0 deletions docker-basics/docker-compose.md
### WHAT
Docker Compose is a tool by Docker, Inc. It is used to manage multi-container apps.

### WHY
Applications that can be set up in one container are easy to handle, as only one Dockerfile is present.

But real-life applications involve multiple micro-services, and each of them is set up in a separate container. E.g., an application could use one container for the database, one for caching, one for payments, one for load-balancing etc. There also exists some internal dependency among them, e.g., the payment service can only run when the DB is running. In such cases, running `docker build` for each of them while respecting the dependencies among them can be troublesome for a large project.

Using Docker Compose, we can do all this very easily with just two commands: `docker-compose up` and `docker-compose down`.

For details, check out the [docs](https://docs.docker.com/compose/)

### HOW
To use Docker Compose, we still need to write Dockerfiles. Additionally, we create a YAML file (compose.yaml) that builds and runs our containers using those Dockerfiles (or sometimes directly from images).

For examples of docker-compose, checkout [this](https://github.com/docker/awesome-compose)
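
As a minimal sketch of what such a compose.yaml might look like (the service names, image tag, and port below are illustrative assumptions):

```yaml
# compose.yaml — a hypothetical two-service app
services:
  web:
    build: .              # built from the Dockerfile in this directory
    ports:
      - "8080:80"
    depends_on:
      - db                # expresses the dependency mentioned above
  db:
    image: postgres:16    # run directly from an image, no Dockerfile needed
```

Running `docker-compose up` builds and starts both services in dependency order; `docker-compose down` tears everything back down.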

### USECASES
The following are some common use cases of Docker Compose:

- Makes local development easier
- For setting up CI/CD at local level
- For testing some changes quickly


18 changes: 18 additions & 0 deletions docker-basics/docker-flow.md
## Docker CLI
Let's break down the command `docker run hello-world`,

- **docker** : It refers to the docker cli. It connects to the docker daemon.
- **run** : this command creates and starts a new container from an image.
- **hello-world** : It is the name of the image.

If the **hello-world** image is not present locally, the docker daemon downloads it from a registry (e.g. Docker Hub). Then it creates a new container from the image and executes it. The container's output is sent back to us by the daemon via the docker CLI.

If the image is already present, the daemon directly creates the container from that image and executes it.

![flow](../assets/docker-flow.png)

## Docker Image

Docker images contain a minimal version of an operating system and all the dependencies of our app. Images are built in layers. Each layer is immutable and is a collection of files and directories.

Each layer receives an ID, calculated via a SHA-256 hash of the layer contents. Thus, if the layer contents change, the SHA-256 hash changes as well. If any layer of an image is already present in the local system, it is not downloaded again.
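
For example, each instruction in a Dockerfile produces one layer, and unchanged layers are reused from cache on rebuild (the file below is an illustrative sketch, not from this repo):

```dockerfile
FROM ubuntu:latest        # layer 1: base image, shared by many images
RUN apt-get update        # layer 2: reused from cache unless this line or the base changes
COPY . /app/              # layer 3: rebuilt whenever our source files change
```

Because the `COPY` layer sits last, editing our source code invalidates only that layer; the base image and the `apt-get` layer keep their hashes and are not rebuilt or re-downloaded.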
39 changes: 39 additions & 0 deletions docker-basics/docker-networking.md
### WHAT
Containers need to communicate with other containers and with the host system. However, sometimes we want some containers totally isolated from the others. Docker networking offers solutions for both of these needs.

### HOW
Useful network drivers provided by Docker:

- Whenever we create a container, it is connected to Docker's default bridge network **docker0**, aka **bridge** . A container is connected to **docker0** using a **veth** interface. One end of the **veth** pair is placed inside the container's network namespace, acting as its network interface, while the other end is attached to **docker0**, enabling communication with other containers and the host.
![default_bridge](../assets/bridge1.png)

- In **host** networking, a container directly shares the networking namespace of the Docker host, and the container doesn't get its own IP address allocated. Anyone having access to the host can access the container; hence it can be a security risk.

- Overlay Networking: This mode enables communication between containers across multiple Docker host machines, allowing containers to be connected to a single network even when they are running on different hosts.

### User-defined Networks
We can create custom, user-defined networks, and connect multiple containers to the same network. Once connected to a user-defined network, containers can communicate with each other using container IP addresses or container names.

Here, we shall create bridge networks. Containers in different bridge networks cannot communicate with each other. A container can be connected to more than one network.

### Useful Commands

- `docker network create network_name` : Creates a new **bridge** network.

- `docker network connect network_name container1_id`: Connects a running container to a network. You can connect a container by name or by ID.

- `docker network disconnect network_name container1_id` : Disconnects container1 from the network. The container must be running to disconnect it from the network. We can also use the container's name.

- `docker network ls` : Lists all the networks.

- `docker network rm network1_id network2_id ...` : Removes one or more networks. To remove a network, we must first disconnect any containers connected to it. We can use the names of the networks as well.

- `docker run -itd --network=my-net busybox` : To add busybox container to the my-net network.

- `docker run -itd --network=host busybox` : To add the busybox container to the host network. The container will not get its own IP address, as it uses the same IP address and network configuration as the host.

### Important

By default, Compose sets up a separate network for our app. Each container for a service joins that network and is both reachable by other containers on that network, and discoverable by the service's name.

Our app's network is given a name based on the project name.
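
For instance, in a hypothetical compose.yaml like the one below, the `web` container can reach the database simply at the hostname `db`, because both services join the app's default network (service names and images are illustrative):

```yaml
# compose.yaml — service discovery by name on the default network
services:
  web:
    image: nginx:alpine
    # inside this container, the hostname "db" resolves to the db container
  db:
    image: postgres:16
```

No `networks:` section is needed for this to work; Compose creates the default network and attaches both services automatically.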
57 changes: 57 additions & 0 deletions docker-basics/dockerVolumes.md
### WHY
When our container stores some data in its writable layer, that data lives only as long as the container does: if the container is removed, the data is destroyed. So, it is a very common requirement to persist the data in a Docker container beyond the lifetime of the container.

### HOW
Docker solves this problem in two different ways:

1. Volumes
2. Bind Directory on a host as a Mount

### Bind Mount
When you use a bind mount, a file or directory on the host machine is attached to a container. The file or directory is referenced by its absolute path on the host machine. The container can read and write to the directory. Bind mounts cannot be managed via the Docker CLI.

For details, check out the [docs](https://docs.docker.com/engine/storage/bind-mounts/)

### Volumes
Volumes are quite similar to bind mounts, but more efficient, and they are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker and can be managed using the Docker CLI.

A volume is a logical partition on the host. When we use a volume, a new directory is created within Docker's storage directory on the host machine, and Docker manages that directory's contents.

For details, check out the [docs](https://docs.docker.com/engine/storage/volumes/)

### Examples
We can attach a volume to a container using either `-v` or `--mount` flag. In general, `--mount` is more explicit and verbose.

Check the syntactical differences, [here](https://docs.docker.com/engine/storage/volumes/#choose-the--v-or---mount-flag)

- `docker volume create volume_name` : To create a volume.

- `docker volume ls` : To list volumes.

- `docker volume inspect volume_name` : To inspect a volume.

- `docker volume rm volume_name` : To remove a volume.

- `docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest` : To mount the volume myvol2 into /app/ in the container.

- `docker run -d --name devtest -v myvol2:/app nginx:latest` : To mount the volume myvol2 into /app/ in the container.

**If we start a container with a volume that doesn't yet exist, Docker creates the volume for us.**

**If we start a container which creates a new volume, and the mount point inside the container already contains files or directories, Docker copies those contents into the volume.**


```
docker container stop devtest
docker container rm devtest
docker volume rm myvol2
```
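
Volumes can also be declared in a compose file; a minimal sketch (the service name, image, and volume name below are illustrative):

```yaml
# compose.yaml — a named volume managed by Docker
services:
  web:
    image: nginx:latest
    volumes:
      - myvol2:/app       # same effect as -v myvol2:/app
volumes:
  myvol2:                 # declared here so Compose creates and manages it
```

This way the volume survives `docker-compose down` and is reattached on the next `docker-compose up`.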

Go through the docs for better understanding.





13 changes: 13 additions & 0 deletions docker-basics/first-docker-image/Dockerfile
# A Dockerfile must begin with a FROM instruction that specifies the base image of our image
FROM ubuntu:latest

# it creates and sets the working directory for following instructions
WORKDIR /app

# this is to copy the source file inside /app (The destination dir must have a trailing slash )
COPY . /app/

# it sets the command to be executed when running a container from an image
CMD ./simple-bash


2 changes: 2 additions & 0 deletions docker-basics/first-docker-image/simple-bash
#!/bin/bash
echo "It works"
79 changes: 79 additions & 0 deletions docker-basics/installation.md
## Installation
You can create an Ubuntu EC2 Instance on AWS and run the below commands to install docker.

```
sudo apt update
sudo apt install docker.io -y
```


#### Start Docker and Grant Access

A very common mistake many beginners make is that, after installing Docker with sudo, they miss the steps of starting the Docker daemon and granting access to the user they want to use to interact with Docker and run docker commands.

Always ensure the docker daemon is up and running.

An easy way to verify your Docker installation is by running the below command:

```
docker run hello-world
```

If the output says:

```
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
```

This can mean one of two things:
1. The Docker daemon is not running.
2. Your user does not have access to run docker commands.


#### Start Docker daemon

Use the below command to verify whether the docker daemon is actually started and active:

```
sudo systemctl status docker
```

If you notice that the docker daemon is not running, you can start the daemon using the below command

```
sudo systemctl start docker
```


#### Grant Access to your user to run docker commands

To grant your user access to run the docker command, you should add the user to the docker Linux group. The docker group is created by default when Docker is installed.

```
sudo usermod -aG docker ubuntu
```

In the above command, `ubuntu` is the name of the user; change the username appropriately.

**NOTE:** You need to log out and log back in for the changes to be reflected.

#### Docker is Installed, up and running 🥳🥳

Use the same command again, to verify that docker is up and running.

```
docker run hello-world
```
<hr>

## Pushing to Docker Hub

1. Create a repository on Docker Hub. The name of every repo starts with the username (e.g. **stevesayantan/my-first-repo** , **stevesayantan/foo** etc.)

1. The name of the image should be the same as that of the repository. Inside a repo, each image is identified by its tag. Hence, every image to be pushed must have a tag.

1. Login to Docker Hub from CLI using `docker login`.

1. Push the image using `docker push repo_name:tag`.
