5G(NR): Xn Based Handover

 

Introduction:

The basic handover procedure is the same in any network: the UE sends a measurement report with the neighbor cell's PCI and signal strength to the source cell, the source cell decides to start the handover procedure toward the best target cell, and the target cell completes the handover procedure.

 

Important Pointers for Xn Handover:

  1. The signal strength of both the source gNB and the target gNB should be reachable by the UE during the HO.
  2. Xn Handover is similar to X2 Handover in 4G LTE.
  3. The XnAP interface must be established between the source and target gNB.
  4. This type of handover is only applicable for intra-AMF mobility, i.e. Xn handover cannot be used if the source and target gNB are connected to different AMFs.
  5. Xn Handover can be an Intra-Frequency HO or an Inter-Frequency HO.
  6. The source and target gNB can be connected to two different UPFs.
  7. The Tracking Area Code (TAC) need not be the same: a re-registration is required after a successful handover if the source gNB and the target gNB belong to different Tracking Areas.
  8. Xn Handover is faster compared to N2/NGAP Handover because of the shorter signalling path; the 5G Core is involved only in switching the PDU session path.


High level setup diagram:

Both gNBs are served by the same AMF and UPF, and for Xn HO the XnAP interface is active between the source gNB and the target gNB.






Signaling exchange between the source gNB and the target gNB:
 


5G(NR): NG Based Handover

Introduction:

The basic handover procedure is the same in any network: the UE sends a measurement report with the neighbor cell's PCI and signal strength to the source cell, the source cell decides to start the handover procedure toward the best target cell, and the target cell completes the handover procedure.

  • In 5G, NG Handover is very similar to S1 Handover in LTE. NG handover is also called an inter-gNB, intra-AMF handover. NG handover takes place when the Xn interface is not available between the source gNB and the target gNB, or when the Xn interface exists but Xn HO is not permitted by a restriction in the gNB configuration.
  • NG (N2) Handover can be both an Intra-Frequency HO and an Inter-Frequency HO.
  • Below is the NG handover architecture in 5G.
 


Below is the flow diagram of NG(N2) handover.




How to Copy a File or Directory from a K8s Pod to the Local Machine?

Generally we need some data from K8s pods on our local machine, for debugging or for future reference. In this blog we will see how we can copy data from a K8s pod to a local host directory (i.e. to our local machine) and vice versa.

Copy data from the pod to the local machine:

1- If we are inside the pod.

If you are already inside the K8s pod, you can use the scp command and its options to send a file or directory to the local machine (scp must be available inside the pod).

The command below logs you into the pod:

kubectl exec -it <POD-Name> -- /bin/bash

After that, go to the path where the file is present and use the command below to scp the file to the local machine:

scp -r <filename /directory name > root@<local-machine IP >:<local machine path>

For example:

scp -r dummy.pcap root@xx.xx.xx.xx:/home/localdirectory/


2- using kubectl cp command:

# Copy /tmp/foo from a remote pod to /tmp/bar locally

kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar

We can also use the format below to copy a folder from the pod to the local host:

kubectl cp server-cudu-f66785456-agbcp:apps/config  /home/name/files/config
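Note: kubectl cp depends on a tar binary being available inside the container. If tar is missing or kubectl cp misbehaves, a tar pipe over kubectl exec is a common fallback. A minimal sketch, reusing the (hypothetical) pod name and paths from the example above:

kubectl exec server-cudu-f66785456-agbcp -- tar cf - apps/config | tar xf - -C /home/name/files/

Here apps/config is read relative to the container's working directory, and the archive is unpacked under /home/name/files/ on the local machine.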


Copy a local directory to a remote pod directory:

# Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container

kubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>

# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>

kubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar

We can also use the format below to copy a folder from the local host directory to a pod:

kubectl cp /home/name/files/config server-cudu-f56776c55646-abcde:apps/config
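After copying in either direction, it is worth confirming that the file arrived intact. A minimal sketch with md5sum (assuming md5sum exists inside the container; the pod name and paths are the hypothetical ones above). The two checksums should match:

md5sum /home/name/files/config/<file-name>
kubectl exec server-cudu-f66785456-agbcp -- md5sum apps/config/<file-name>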


How to Check the CentOS (Linux OS) Version?

There are several reasons why we need to check the CentOS version. For example, if we need to download a SW package from the internet and install it on the system, we need to know which version and OS bitness (32/64-bit) to download. Sometimes, while debugging an issue, we generally need to check the OS version / SW version.

So here we will see multiple ways to check the CentOS version.

1- Using the rpm command to check the CentOS version:

rpm --query centos-release





2- using "hostnamectl" command:

The hostnamectl command is supported on CentOS 7 and above.

This command also displays the Machine ID, Boot ID, hostname (server name) and kernel version.

hostnamectl



3- using "cat /etc/os-release"

cat /etc/os-release
For the full OS version, use the command below:

cat /etc/centos-release
4- How to check the system info:

By using the "lscpu" command, we can check the number of cores, the number of CPUs, the number of NUMA nodes and the NUMA settings, the CPU model, the CPU architecture, and the flags which are enabled on the system.



5G-SA Call Flow: Messages Mapped with Channels

In this section of the blog, we have tried to map some of the messages of the 5G-SA call flow to the channels (physical/transport/logical) and the SRBs over which the messages are transferred, along with the CORESET and search space associated with the messages and the RLC mode used during UE attach.




What is ARQ and HARQ?

ARQ stands for Automatic Repeat Request. It is a protocol used at the data link layer: an error-control strategy for two-way communication systems, used to achieve reliable data transmission over an unreliable link or service.

It uses a CRC (cyclic redundancy check) to determine whether the received packet is correct. If the packet is received correctly, the receiver sends an ACK to the transmitter; if it is not, the receiver sends a NACK, and after receiving the NACK the transmitter re-transmits the same packet.

 

HARQ does not retransmit the packet/PDU as-is, as the ARQ technique does; HARQ may modify certain physical-layer parameters before retransmission.

  

HARQ is a technique in which the receiver, on getting new data with errors, first tries to correct them; if the errors cannot be corrected, it sends a re-transmission request to the sender. After getting the data again, it combines the newly received data with the previously received erroneous data (soft combining).



Some packets may pass from HARQ to the upper layers with minor residual errors, which might be acceptable for some applications; but in any case there is one more mechanism, ARQ (Automatic Repeat Request). The ARQ mechanism takes care of residual errors that slip through HARQ: if a packet is in error, it is discarded and a new re-transmission is requested from the sender. ARQ is an error-control protocol.



HARQ:
1. It works at the Physical layer but is controlled by the MAC layer.
2. If the received data has an error, the receiver buffers the data and requests a re-transmission from the transmitter.

3. HARQ works for both UM and AM RLC modes.

4. HARQ provides very fast retransmissions, which suits delay-sensitive traffic (e.g. voice calls).

ARQ:
1. It works at the RLC layer, for RLC AM mode only.
2. If the received data has an error that slipped through HARQ, the packet is discarded and a new re-transmission is requested from the transmitter.

3. ARQ is responsible for reliability.

Install Docker - Kubernetes on Ubuntu

 

Steps to Install Docker - Kubernetes on Ubuntu

Set up Docker

Step 1: Install Docker

Kubernetes requires an existing Docker installation on all nodes, both master and worker. If you already have Docker installed, skip ahead to Step 2.

1. Update the package list with the command:

sudo apt-get update

2. Next, install Docker with the below command:

sudo apt-get install docker.io

3. After the Docker installation completes, check the installation (and version) by entering the following:

sudo docker version



Step 2: Start and Enable Docker

1. Set Docker to launch at boot by entering the following:

sudo systemctl enable docker

2. Verify Docker is running:

sudo systemctl status docker

To start Docker if it’s not running:

sudo systemctl start docker



Install Kubernetes

Step 3: Add Kubernetes Signing Key (both master and worker nodes)

Since you are downloading Kubernetes from a non-standard repository, it is essential to ensure that the software is authentic. This is done by adding a signing key.

1. Enter the following to add a signing key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

If you get an error that curl is not installed, install it with:

sudo apt-get install curl







Step 4: Add Software Repositories

Kubernetes is not included in the default repositories. To add them, enter the following:

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Repeat on each server node.
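After adding the repository, refresh the package index so the new repository is picked up before installing anything:

sudo apt-get update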

Step 5: Kubernetes Installation Tools

Kubeadm (Kubernetes Admin) is a tool that helps initialize a cluster; it fast-tracks setup by using community-sourced best practices. Kubelet is the node agent, which runs on every node and starts containers. Kubectl is the tool that gives you command-line access to clusters.

1. Install Kubernetes tools with the command:

sudo apt-get install kubeadm kubelet kubectl
sudo apt-mark hold kubeadm kubelet kubectl

Allow the process to complete.

2. Verify the installation with:

kubeadm version

3. Repeat for each server node.



Kubernetes Deployment

Step 6: Begin Kubernetes Deployment

Start by disabling the swap memory on each server:

sudo swapoff -a

If you see an issue like the one below while executing the above command:

[ERROR Swap]: running with swap on is not supported. Please disable swap.

1- sudo kubeadm reset

2- Create a file in /etc/systemd/system/kubelet.service.d/20-allow-swap.conf with the content:

[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

3- sudo swapoff -a
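Note that swapoff -a only disables swap until the next reboot. To keep swap disabled permanently, a common approach is to comment out the swap entry in /etc/fstab (a sketch; check which line it matches on your system before running it):

sudo sed -i '/ swap / s/^/#/' /etc/fstab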



Step 7: Assign a Unique Hostname to Each Server Node

Decide which server to set as the master node. Then enter the command:

sudo hostnamectl set-hostname master-node

Next, set a worker node hostname by entering the following on the worker server:

sudo hostnamectl set-hostname worker01

If you have additional worker nodes, use this process to set a unique hostname on each. For example:

worker node 1:

sudo hostnamectl set-hostname worker01

worker node 2:

sudo hostnamectl set-hostname worker02

Step 8: Initialize Kubernetes on Master Node only

Switch to the master server node, and enter the following:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Once this command finishes, it will display a kubeadm join message at the end. Make a note of the whole entry. This will be used to join the worker nodes to the cluster.
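The join command generally has the shape below; the token and hash values are placeholders that come from your own kubeadm init output:

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>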



Next, enter the following to create a directory for the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config



Step 9: Install Calico

  1. Install the Tigera Calico operator and custom resource definitions.

    sudo kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
  2. Install Calico by creating the necessary custom resource.

    sudo kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
    Note: Before creating this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to change the default IP pool CIDR to match your pod network CIDR.
  3. Confirm that all of the pods are running with the following command.

    watch kubectl get pods -n calico-system

    Wait until each pod has the STATUS of Running.

  4. Confirm that you now have a node in your cluster with the following command.

      kubectl get nodes -o wide

Step 10: Join Worker Node to Cluster

As indicated in Step 8, you can enter the kubeadm join command on each worker node to connect it to the cluster.

1- Switch to the worker01 system and enter the command you noted from Step 8:

Repeat for each worker node on the cluster. Wait a few minutes; then you can check the status of the nodes.

2- Switch to the master server, and enter:

kubectl get nodes

The system should display the worker nodes that you joined to the cluster.
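The output looks roughly like the following, with the node names set in Step 7 (ages and versions here are illustrative placeholders):

NAME          STATUS   ROLES                  AGE   VERSION
master-node   Ready    control-plane,master   25m   v1.23.x
worker01      Ready    <none>                 3m    v1.23.x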



Step 11: Install Registry

Use a command like the following to start the registry container:

sudo docker run -d -p 5000:5000 --restart=always --name registry registry:2

Note: if you get an error that the registry already exists, remove it with the command below and then recreate it.

sudo docker container stop registry && sudo docker container rm -v registry

After that, check the status of Docker; the registry container should now appear in the status output:

sudo systemctl status docker
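To verify that the registry actually works, you can pull a small public image, re-tag it against the local registry, and push it (hello-world is used here purely as an example image):

sudo docker pull hello-world
sudo docker tag hello-world localhost:5000/hello-world
sudo docker push localhost:5000/hello-world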



Test with insecure registry

This procedure configures Docker to entirely disregard security for your registry. This is very insecure and is not recommended. It exposes your registry to trivial man-in-the-middle (MITM) attacks. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment.

  1. Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux.

  2. If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:

{
  "insecure-registries" : ["myregistrydomain.com:5000"]
}



3. Restart Docker for the changes to take effect.
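For example:

sudo systemctl restart docker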

Repeat Step 11 on every Docker Engine host that needs to access your registry.



Step 12: Install Helm

From Apt (Debian/Ubuntu):

Members of the Helm community have contributed a helm package for Apt. This package is generally up to date.

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
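Once the installation finishes, verify it with:

helm version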









Kubernetes: show and delete deployments

 

show deployment

$ kubectl get deployments

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
spring-hello      1         1         1            1           22h
spring-world      1         1         1            1           22h
vfe-hello-wrold   1         1         1            1           14m

show services

$ kubectl get services

NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP          2d
spring-hello      NodePort    10.103.27.226   <none>        8081:30812/TCP   23h
spring-world      NodePort    10.102.21.165    <none>        8082:31557/TCP   23h
vfe-hello-wrold   NodePort    10.101.23.36     <none>        8083:31532/TCP   14m

delete deployment

$ kubectl delete deployments vfe-hello-wrold

deployment.extensions "vfe-hello-wrold" deleted

delete services

$ kubectl delete service vfe-hello-wrold

service "vfe-hello-wrold" deleted