5G (NR): NG-Based Handover
Introduction:
The basic handover procedure is the same in any network: the UE sends a measurement report with the neighbor cell PCI and signal strength to the source cell, the source cell decides to start the handover procedure toward the best target cell, and the target cell completes the handover procedure.
How to copy a file or directory from a K8s POD to the local machine?
Generally we need some data from K8s PODs on our local machine, for debugging and for future reference. In this blog we will see how to copy data from a K8s POD to a local host directory (i.e., our local machine) and vice versa.
Copy data from PODs to the local machine:
1- If we are already inside the POD:
If you are already logged in to the K8s POD, you can use the scp command and its options to send a file or directory to the local machine.
Use the command below to log in to the POD:
kubectl exec -it <POD-Name> -- /bin/bash
Then go to the path where the file is present and use the command below to scp the file to the local machine:
scp -r <filename/directory-name> root@<local-machine-IP>:<local-machine-path>
For example:
scp -r dummy.pcap root@xx.xx.xx.xx:/home/localdirectory/
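If scp is not available inside the POD (many container images do not ship it), an alternative that runs entirely from the local machine is to stream a tar archive through kubectl exec; the pod name and paths below are placeholders:
kubectl exec <POD-Name> -- tar cf - /path/in/pod | tar xf - -C /local/target/dir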
2- Using the kubectl cp command:
# Copy /tmp/foo from a remote pod to /tmp/bar locally
kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
We can also use the following format to copy a folder from a POD to the local host:
kubectl cp server-cudu-f66785456-agbcp:apps/config /home/name/files/config
Copy a local directory to a remote POD directory:
# Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
kubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
kubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar
We can also use the following format to copy a folder from the local host directory to a POD:
kubectl cp /home/name/files/config server-cudu-f56776c55646-abcde:apps/config
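After copying into the POD, you can verify that the files arrived, for example (using the same pod name and path as above):
kubectl exec server-cudu-f56776c55646-abcde -- ls -l apps/config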
How to Check the CentOS Version?
There are several reasons why we might need to check the CentOS version. For example, if we need to download a software package from the internet and install it on the system, we need to know which OS version and architecture (32- or 64-bit) to download for. Similarly, while debugging an issue we often need to check the OS/SW version.
So here we will see multiple ways to check the CentOS version.
1- Using the rpm command to check the CentOS version:
rpm --query centos-release
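If the rpm query is not convenient, the same release string can usually be read directly from the release file (a quick alternative, assuming a standard CentOS install):
cat /etc/centos-release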
2- using "hostnamectl" command:
hostnamectl command is supported by CentOS 7 and above versions.
This command also display the Machine ID, Boot ID, hostname (server name) and kernel version also.
hostnamectl
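On CentOS 7 and later you can also read the /etc/os-release file, which stores the distribution name and version in simple key=value form (an extra option beyond the two above):
cat /etc/os-release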
5G-SA Call Flow:
In this section of the blog, we have tried to map some of the messages of the 5G-SA call flow to channels (physical/transport/logical), the SRBs over which the messages are transferred, the CORESET and search space associated with the messages, and the RLC mode used during UE attach.
What are ARQ and HARQ?
HARQ does not retransmit the packet/PDU as-is, the way the ARQ technique does; HARQ modifies certain physical parameters before retransmission.
HARQ is a technique in which, when the receiver gets new data with some errors, it tries to correct them if the errors are minor; if the errors are not minor, it sends a retransmission request to the sender. After getting the data again, it combines the newly received data with the previously received erroneous data.
Some packets may pass from HARQ to the upper layer with slight errors, which might be acceptable for some applications. In any case, there is one more mechanism: ARQ, or Automatic Repeat Request. The ARQ mechanism takes care of residual errors that slip past HARQ. If a packet still has an error, it is discarded and a new retransmission is requested from the sender. ARQ is an error control protocol.
HARQ:
1. It works at the physical layer but is controlled by the MAC layer.
2. If the received data has an error, the receiver buffers the data and requests a retransmission from the transmitter.
3. HARQ works for both RLC UM and AM modes.
4. HARQ provides very fast retransmission, which is suitable for high-speed services (e.g., a voice call).
ARQ:
1. It works at the RLC layer, for RLC AM mode only.
2. If received data that passed through HARQ still has an error, it is discarded, and a new retransmission is requested from the transmitter.
3. ARQ is responsible for reliability.
Kubernetes requires an existing Docker installation on all nodes, both master and worker. If you already have Docker installed, skip ahead to Step 2.
1. Update the package list with the command:
sudo apt-get update
2. Next, install Docker with the below command:
sudo apt-get install docker.io
3. After the Docker installation completes, check the installation (and version) by entering the following:
sudo docker version
1. Set Docker to launch at boot by entering the following:
sudo systemctl enable docker
2. Verify Docker is running:
sudo systemctl status docker
To start Docker if it’s not running:
sudo systemctl start docker
Since you are downloading Kubernetes from a non-standard repository, it is essential to ensure that the software is authentic. This is done by adding a signing key.
1. Enter the following to add a signing key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
If you get an error that curl is not installed, install it with:
sudo apt-get install curl
Kubernetes is not included in the default repositories. To add the Kubernetes repository, enter the following:
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Repeat on each server node.
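After adding the repository, refresh the package index so apt can see the new packages; the original steps omit this, but apt needs it before the next install:
sudo apt-get update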
Kubeadm (Kubernetes Admin) is a tool that helps initialize a cluster. It fast-tracks setup by using community-sourced best practices. Kubelet is the work package, which runs on every node and starts containers. Kubectl is the tool that gives you command-line access to clusters.
1. Install Kubernetes tools with the command:
sudo apt-get install kubeadm kubelet kubectl
sudo apt-mark hold kubeadm kubelet kubectl
Allow the process to complete.
2. Verify the installation with:
kubeadm version
3. Repeat for each server node.
Start by disabling the swap memory on each server:
sudo swapoff -a
If you hit an issue like the one below (this error comes from kubeadm's preflight checks):
[ERROR Swap]: running with swap on is not supported. Please disable swap.
1- sudo kubeadm reset
2- Create a file at /etc/systemd/system/kubelet.service.d/20-allow-swap.conf with the content:
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
3- sudo swapoff -a
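Note that swapoff -a only disables swap until the next reboot. A common way to make the change persistent (an extra step beyond the original instructions) is to comment out the swap entry in /etc/fstab, for example:
sudo sed -i '/ swap / s/^/#/' /etc/fstab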
Decide which server to set as the master node. Then enter the command:
sudo hostnamectl set-hostname master-node
Next, set a worker node hostname by entering the following on the worker server:
sudo hostnamectl set-hostname worker01
If you have additional worker nodes, use this process to set a unique hostname on each. For example:
worker node 1:
sudo hostnamectl set-hostname worker01
worker node 2:
sudo hostnamectl set-hostname worker02
Switch to the master server node, and enter the following:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
Once this command finishes, it will display a kubeadm join message at the end. Make a note of the whole entry. This will be used to join the worker nodes to the cluster.
Next, enter the following to create a directory for the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install the Tigera Calico operator and custom resource definitions.
sudo kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
Install Calico by creating the necessary custom resource.
sudo kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
Note: Before creating this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to change the default IP pool CIDR to match your pod network CIDR.
Confirm that all of the pods are running with the following command.
watch kubectl get pods -n calico-system
Wait until each pod has the STATUS of Running.
Confirm that you now have a node in your cluster with the following command.
kubectl get nodes -o wide
Step 10: Join Worker Node to Cluster
As indicated in Step 7, you can enter the kubeadm join command on each worker node to connect it to the cluster.
1- Switch to the worker01 system and enter the command you noted from Step 7:
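For reference, the join command printed by kubeadm init has the general shape below; the IP address, token, and hash are placeholders for the values from your own cluster:
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>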
Repeat for each worker node on the cluster. Wait a few minutes; then you can check the status of the nodes.
2- Switch to the master server, and enter:
kubectl get nodes
The system should display the worker nodes that you joined to the cluster.
Step 11: Install Registry
Use a command like the following to start the registry container:
sudo docker run -d -p 5000:5000 --restart=always --name registry registry:2
To stop the registry and remove the container later, use:
sudo docker container stop registry && sudo docker container rm -v registry
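To sanity-check the registry, you can tag any local image against localhost:5000 and push it; the ubuntu image and my-ubuntu name below are just examples:
sudo docker pull ubuntu
sudo docker tag ubuntu localhost:5000/my-ubuntu
sudo docker push localhost:5000/my-ubuntu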
This procedure configures Docker to entirely disregard security for your registry. This is very insecure and is not recommended. It exposes your registry to trivial man-in-the-middle (MITM) attacks. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment.
1. Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux.
2. If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:
{
"insecure-registries" : ["myregistrydomain.com:5000"]
}
3. Restart Docker for the changes to take effect.
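On a systemd-based host, restarting Docker is typically done with:
sudo systemctl restart docker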
Repeat these steps on every Docker Engine host that needs to access your registry.
Step 12: Install Helm
Members of the Helm community have contributed a helm package for Apt. This package is generally up to date.
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
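Once the package is installed, a quick way to confirm the Helm client works:
helm version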
To clean up a test deployment and its service, first list the current deployments:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
spring-hello 1 1 1 1 22h
spring-world 1 1 1 1 22h
vfe-hello-wrold 1 1 1 1 14m
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
spring-hello NodePort 10.103.27.226 <none> 8081:30812/TCP 23h
spring-world NodePort 10.102.21.165 <none> 8082:31557/TCP 23h
vfe-hello-wrold NodePort 10.101.23.36 <none> 8083:31532/TCP 14m
$ kubectl delete deployments vfe-hello-wrold
deployment.extensions "vfe-hello-wrold" deleted
$ kubectl delete service vfe-hello-wrold
service "vfe-hello-wrold" deleted