Commit 46a588f (onpremises)
Committed Oct 25, 2023; 1 parent c90ada0; 7 files changed, +425 -0 lines
on-premises/Glusterfs
1) Install the GlusterFS server and client on every node where your bricks are present.

apt-get install software-properties-common
add-apt-repository ppa:gluster/glusterfs-9
apt update
apt-get install glusterfs-server glusterfs-client -y
systemctl enable glusterd
2) Mount bricks on every node.
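Step 2 has no commands attached; a minimal sketch of preparing one brick mount, assuming the brick disk is /dev/sdb1 (a placeholder device name) and the mount point matches the /bricks/1 paths used in the volume-create commands below:

```shell
# Assumed device and mount point -- adjust per node.
brick_dev=/dev/sdb1
brick_mnt=/bricks/1
# mkfs.xfs "$brick_dev"     # format the brick disk once (XFS is the usual choice)
# mkdir -p "$brick_mnt"
# Emit the /etc/fstab entry, then run `mount -a` to mount it:
echo "$brick_dev $brick_mnt xfs defaults 0 2"
```

Repeat on every node that contributes a brick, so the paths used in step 5 exist everywhere.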
3) Create a trusted storage pool between the nodes.

Example:
gluster peer probe worker1
gluster peer probe worker2
gluster peer probe worker3
gluster peer probe master2

4) Check the peer status:
gluster peer status
5) Create a distributed GlusterFS volume.

============================== Without Replicas ======================================

gluster volume create dockervol transport tcp master1:/bricks/1 master2:/bricks/1 master3:/bricks/1
gluster volume create dockervol transport tcp master2:/bricks/1 worker1:/bricks/1 worker2:/bricks/1 worker3:/bricks/1 force

gluster volume create dockervol transport tcp master2:/bricks/1 force

===================== With Replicas ====================================

gluster volume create dockervol replica 2 transport tcp master1:/data/disk1 master1:/data/disk2 master1:/data/disk3 master1:/data/disk4 force
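With replica 2, gluster expects the brick list length to be a multiple of the replica count, and consecutive bricks form one replica pair. A quick pre-flight sketch of that check (the brick list mirrors the master1 example above):

```shell
# Sanity-check the brick count before `gluster volume create` (replica 2).
replica=2
bricks="master1:/data/disk1 master1:/data/disk2 master1:/data/disk3 master1:/data/disk4"
count=$(echo "$bricks" | wc -w)
if [ $((count % replica)) -eq 0 ]; then
  echo "OK: $count bricks for replica $replica"
else
  echo "ERROR: $count bricks is not a multiple of replica $replica" >&2
fi
```

Note that pairing both replicas on the same host, as in this master1-only example, gives the pair no host redundancy; gluster refuses such layouts unless you append force, which is why these commands end in force.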
6) Start the created volume:
gluster volume start glusterfsVolumeName

7) Mount the volume on the GlusterFS client:
mount -t glusterfs localhost:dockervol /mnt/ravenfs
Detach or re-add peers:
gluster peer detach worker2
gluster peer probe worker1

Check rebalance progress on the "distribute" volume:
gluster volume rebalance distribute status

gluster peer probe worker2
gluster peer probe master1
Detaching a peer prompts for confirmation:

All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) n

Example "gluster peer status" output:

Hostname: master1
Uuid: 3e4d74c3-b083-45a5-8007-99ca0256308a
State: Peer in Cluster (Connected)

Hostname: worker2
Uuid: 077ed051-8a20-4fb2-9a3b-494fd4831019
State: Peer in Cluster (Connected)
Mount the Azure Files backup share over SMB:

sudo mount -t cifs //backupravenserver.file.core.windows.net/backup /home/raven/storage_account -o vers=3.0,credentials=/etc/smbcredentials/backupravenserver.cred,dir_mode=0777,file_mode=0777,serverino
gluster volume create dockervol replica 2 transport tcp worker5:/data/bricks1 worker5:/data/bricks2 worker6:/data/bricks1 worker6:/data/bricks2 worker7:/data/bricks1 worker7:/data/bricks2 force

Measure write performance:
gluster volume top dockervol write-perf bs 256 count 1 brick worker5:/data/bricks1 list-cnt 10

Measure read performance:
gluster volume top dockervol read-perf bs 256 count 1 brick worker5:/data/bricks1 list-cnt 10
Extra commands:

Remove the GlusterFS packages:
apt-get purge glusterfs-server glusterfs-client

Brick list:
worker1:/bricks/disk/1 worker2:/bricks/disk/1 worker3:/bricks/disk/1 worker3:/bricks/disk/2 worker3:/bricks/disk/3 worker4:/bricks/disk/1 worker4:/bricks/disk/2 worker4:/bricks/disk/3 worker5:/bricks/disk/1 worker5:/bricks/disk/2 worker6:/bricks/disk/1 worker6:/bricks/disk/2

gluster volume create ravenvol transport tcp worker1:/bricks/disk/1 worker2:/bricks/disk/1 worker3:/bricks/disk/1 worker3:/bricks/disk/2 worker3:/bricks/disk/3 worker4:/bricks/disk/1 worker4:/bricks/disk/2 worker4:/bricks/disk/3 worker5:/bricks/disk/1 worker5:/bricks/disk/2 worker6:/bricks/disk/1 worker6:/bricks/disk/2 force
===================================================
Add bricks:

gluster volume add-brick dockervol replica 2 worker6:/bricks/1 worker6:/bricks/2

Remove bricks:
gluster volume remove-brick dockervol replica 2 worker4:/bricks/1 worker4:/bricks/2 start
==================================================
DRDO:

Brick1: worker1:/bricks/disk/1
Brick2: worker2:/bricks/disk/1
Brick3: worker3:/bricks/disk/1
Brick4: worker3:/bricks/disk/2
Brick5: worker3:/bricks/disk/3
Brick6: worker4:/bricks/disk/1
Brick7: worker4:/bricks/disk/2
Brick8: worker4:/bricks/disk/3
Brick9: worker5:/bricks/disk/1
Brick10: worker5:/bricks/disk/2
Brick11: worker6:/bricks/disk/1
Brick12: worker6:/bricks/disk/2

gluster volume create ravenvol transport tcp replica 2 worker1:/bricks/disk/1 worker3:/bricks/disk/1 worker2:/bricks/disk/1 worker3:/bricks/disk/2 worker5:/bricks/disk/1 worker3:/bricks/disk/3 worker5:/bricks/disk/2 worker4:/bricks/disk/1 worker6:/bricks/disk/1 worker4:/bricks/disk/2 worker6:/bricks/disk/2 worker4:/bricks/disk/3 force
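The DRDO create command interleaves bricks so that each replica-2 pair spans two different hosts (worker1 paired with a worker3 disk, never with another disk on its own host). A small sketch of that ordering rule over a subset of the brick list (the primary/mirror split chosen here is illustrative):

```shell
# Pair each "primary" brick with a mirror on a different host, so that
# consecutive bricks (one replica pair) never share a server.
primaries="worker1:/bricks/disk/1 worker2:/bricks/disk/1 worker5:/bricks/disk/1"
mirrors="worker3:/bricks/disk/1 worker3:/bricks/disk/2 worker3:/bricks/disk/3"
order=""
for p in $primaries; do
  m=${mirrors%% *}                       # take the first remaining mirror
  mirrors=${mirrors#"$m"}; mirrors=${mirrors# }
  order="$order$p $m "
done
echo "gluster volume create ravenvol replica 2 transport tcp ${order}force"
```

The same interleaving, applied to all twelve bricks, yields the brick order used in the DRDO command above.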
===================================================
Ansible commands
ansible-playbook -i inventory.ini ping.yml -vvv --extra-vars "ansible_user=raven ansible_password=PWD"

ansible-playbook -i hosts.ini playbooks/pull.yaml -v -e "rs=pivotchaindata.com" -e "cn=raven-server" -e "tg=dev" -e "un=pivotchain" -e "pw=devops" --extra-vars "ansible_user=raven ansible_password=PWD" --ask-become-pass

ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml --extra-vars "ansible_user=raven ansible_password=Pivo8$Mirco?Cha9" --ask-become-pass
on-premises/banking-nginx-config
server {
    listen 80;
    server_name pivotchain.in;
    client_max_body_size 100M;
    return 301 https://pivotchain.in$request_uri;
}

upstream eventapp {
    server 10.13.10.12;
    server 10.13.10.13;
}

upstream mobileapp {
    server 10.13.10.12;
    server 10.13.10.13;
}

upstream analytics {
    server 10.13.10.12;
    server 10.13.10.13;
}

upstream ingress {
    server 10.13.10.12;
    server 10.13.10.13;
}

upstream nginx {
    server 10.13.10.12;
    server 10.13.10.13;
}

server {
    listen 443 ssl;
    server_name pivotchain.in;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    client_max_body_size 1024M;

    location / {
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
        add_header X-Content-Type-Options nosniff;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_connect_timeout 300;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
        port_in_redirect off;
        proxy_pass http://ingress/;
        client_max_body_size 1024M;
        proxy_ignore_client_abort on;
    }

    location /event-app/ {
        add_header X-Content-Type-Options nosniff;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_connect_timeout 300;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
        port_in_redirect off;
        proxy_pass http://eventapp;
        client_max_body_size 100M;
    }

    location /mobile-app/ {
        add_header X-Content-Type-Options nosniff;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_connect_timeout 300;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
        port_in_redirect off;
        proxy_pass http://mobileapp;
        client_max_body_size 100M;
        error_log /var/log/nginx/mobileapp_error.log;
        access_log /var/log/nginx/mobileapp_access.log;
    }

    location /analytics-app/ {
        add_header X-Content-Type-Options nosniff;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_connect_timeout 300;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
        port_in_redirect off;
        proxy_pass http://analytics;
        client_max_body_size 1024M;
        proxy_ignore_client_abort on;
    }

    location /nginx/ {
        add_header X-Content-Type-Options nosniff;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_connect_timeout 300;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
        port_in_redirect off;
        proxy_pass http://nginx;
        client_max_body_size 100M;
    }

    location /ISO/ {
        alias /usr/local/ISO_UPDATE/;
        autoindex on;
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/auth/htpasswd;
    }

    location /release/ {
        alias /usr/local/RELEASE/;
        autoindex on;
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/auth/htpasswd;
    }

    location /doc-images/ {
        alias /usr/local/dock-images/;
        autoindex on;
    }
}
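The /ISO/ and /release/ locations reference /etc/nginx/auth/htpasswd. One way to generate such a file is OpenSSL's apr1 password hasher, which nginx's auth_basic accepts; the user name, password, and /tmp output path below are placeholders:

```shell
# Generate an htpasswd-style entry (apr1 hash) for nginx auth_basic.
user=raven            # placeholder user name
pass=changeme         # placeholder password
printf '%s:%s\n' "$user" "$(openssl passwd -apr1 "$pass")" > /tmp/htpasswd
grep -c "^$user:" /tmp/htpasswd
```

Move the file to /etc/nginx/auth/htpasswd and reload nginx to activate it.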
Kubernetes service-account token secret
cat default-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2023-09-11T10:34:45Z"
  name: default
  namespace: default
  resourceVersion: "334"
  uid: 21f43ae8-21de-4c72-8d78-d5cd32d7e9d0
secrets:
- name: sa1-token

root@master1:/home/raven/secret# cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: sa1-token
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
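Once secret.yaml is applied, the token controller fills in the secret's token field, which comes back base64-encoded. A sketch of reading it back, with the kubectl lines commented out and a stand-in encoded value, since no cluster is assumed here:

```shell
# Real retrieval (needs a running cluster):
#   kubectl apply -f secret.yaml
#   kubectl get secret sa1-token -o jsonpath='{.data.token}' | base64 -d
# Stand-in value to show the decode step:
token_b64=$(printf 'example-service-account-token' | base64)
token=$(echo "$token_b64" | base64 -d)
echo "$token"
```

The decoded value is what goes into a kubeconfig user entry or a Bearer header.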
************* AWS-backup server *******************

pem key: raven-server.pem
username: ubuntu

++++++++++++++++++++++++++++++++++++++++++++++++

Public IP:

AWS new backup server
+++++++++++++++++++++++++++++++++++++++++++++++++
Private IPs:

172.31.3.84 master1
172.31.6.116 master2
172.31.8.20 master3

++++++++++++++++++++++++++++++++++++++++++++++++++

host.ini

[all]
master1 ansible_host=172.31.3.84 ansible_user=ubuntu ansible_ssh_private_key_file=/home/ubuntu/k8s/kubespray-2.21.0/raven-server ansible_connection=ssh new_hostname=master1
master2 ansible_host=172.31.6.116 ansible_user=ubuntu ansible_ssh_private_key_file=/home/ubuntu/k8s/kubespray-2.21.0/raven-server ansible_connection=ssh new_hostname=master2
master3 ansible_host=172.31.8.20 ansible_user=ubuntu ansible_ssh_private_key_file=/home/ubuntu/k8s/kubespray-2.21.0/raven-server ansible_connection=ssh new_hostname=master3

[kube-master]
master1
master2
master3

[etcd]
master1
master2
master3

[all]
master1
master2
master3

[kube-node]
master1
master2
master3

[k8s-cluster:children]
kube-master
kube-node
+++++++++++++++++++++++++++++++++++++++++++++++++++++++

Ansible command:

ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
