
Securing Containerized Applications with SSH Tunneling in Azure

by info.odysseyx@gmail.com


As cloud engineers and architects embrace containerization, ensuring secure communication has become paramount. Data transfer and access control are important aspects of security to consider. SSH tunneling is a technology that helps achieve secure communication between multiple components of an application or solution. SSH tunneling creates an encrypted channel over an existing SSH connection, allowing secure data transfer between a local machine (SSH client) and a remote server (SSH server).

This article shows you how to set up SSH tunneling between containers running in the cloud that need to communicate with downstream resources via an SSH server hosted on a cloud VM.

SSH tunneling

Before we dive into the implementation, let’s take a quick look at SSH tunneling. SSH tunneling, also known as SSH port forwarding, allows secure communication between two endpoints by creating an encrypted tunnel over an existing SSH connection. This allows data to be securely transferred between the local machine (SSH client) and the remote server (SSH server) through an intermediary channel. Here’s an overview of the different scenarios in which SSH tunneling can be used:

1. Secure remote access to internal services: Organizations have internal services (e.g. databases, internal web applications) that are not exposed to the public Internet for security reasons. SSH tunneling allows employees to securely connect to these internal services from remote locations without exposing the services to the Internet.

2. Bypass Firewall Restrictions: A developer needs to access specific resources behind a company firewall, but the firewall restricts direct access. By setting up an SSH tunnel, the developer can securely pass traffic through the firewall to access the restricted resources.

3. Protecting sensitive data during transmission: Applications need to transfer sensitive data between different components or services, and there is a risk of data interception. SSH tunneling can be used to encrypt data moving between components, keeping it safe while in transit.

4. Securely access remote databases: A developer needs access to a remote database server for maintenance or development purposes, but direct access is not allowed due to security policies. By setting up an SSH tunnel, the developer can securely connect to the remote database server without exposing it to the public Internet.

5. Use insecure protocols safely: Applications communicate between different services using insecure protocols (e.g. FTP, HTTP). Wrapping the insecure protocols in an SSH tunnel secures the communication, protecting the data from being intercepted.

6. Remote Debugging: Developers need to debug applications running on a remote server, but direct access to the debugging port is restricted. SSH tunneling allows developers to securely debug their applications by forwarding the debugging port from the remote server to their local machine.

7. IoT device communication protection: IoT devices need to communicate with a central server, but that communication is vulnerable to interception or tampering. Establishing an SSH tunnel between the IoT device and the central server encrypts and secures the communication, protecting data in transit.

8. Secure file transfer: Files need to be transferred securely between different systems or locations. SSH tunneling allows you to securely transfer files through an encrypted tunnel, ensuring that your data remains confidential and maintains its integrity.

9. Remote Service Access: Users need to securely access services or resources hosted on remote servers. By setting up an SSH tunnel, users can securely access remote services as if they were running locally, protecting data in transit.

10. Protect your web traffic: When accessing a website or web application over an untrusted network, you need to secure your web traffic. SSH tunneling allows you to create a secure connection to a remote server, encrypting your web traffic and protecting it from eavesdropping or interception.
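Most of these scenarios reduce to the same primitive: local port forwarding with ssh -L. A minimal sketch, with hostnames and ports that are illustrative rather than taken from this article's deployment:

```shell
# Local port forwarding: traffic sent to localhost:8080 on the SSH client
# is carried over the encrypted SSH connection and delivered to
# db.internal:5432 as resolved from the SSH server's network.
LOCAL_PORT=8080
DEST_HOST="db.internal"        # illustrative internal hostname
DEST_PORT=5432
SSH_SERVER="jump.example.com"  # illustrative SSH server
TUNNEL_CMD="ssh -N -L ${LOCAL_PORT}:${DEST_HOST}:${DEST_PORT} azureuser@${SSH_SERVER}"
echo "$TUNNEL_CMD"   # running this command opens the tunnel; -N skips a remote shell
```

With the tunnel open, a client pointed at localhost:8080 reaches the database as if it were local, while everything on the wire between client and SSH server is encrypted.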

Scenario

This document implements the following scenario:

[Figure: architecture diagram — two peered VNets, with the container apps in myAppVNet tunneling over SSH to nginxVM in myInfraVNet]

Architectural Components

  • myInfraVNet: The virtual network where downstream resources are deployed.
  • nginxVM: A virtual machine in myInfraVNet running NGINX as a web server / reverse proxy. It is assigned a private IP address and cannot be accessed directly from the Internet.
  • nginxVM/NSG: The network security group associated with nginxVM, which controls inbound and outbound traffic.
  • myAppVNet: The virtual network where the container apps are deployed.
  • Container app environment: This environment hosts two containerized applications.
    • My Container App: A simple containerized Python application that fetches content from an NGINX server running on a VM and renders it alongside other content.
    • sshclientcontainerapp: Another containerized application, used to establish secure SSH tunnels to other resources.
  • Container Registry: Stores container images that can be deployed to container apps.
  • VNet Peering: Enables resources in myAppVNet and myInfraVNet to communicate with each other. Essentially, it connects the two VNets, enabling low-latency, high-bandwidth interconnectivity.
  • SSH tunnel: sshclientcontainerapp in myAppVNet establishes an SSH tunnel to the nginxVM in myInfraVNet, enabling secure communication between containerized apps and VMs.
  • Network Security Group (NSG): nginxVM/NSG ensures that only allowed traffic can reach the nginxVM. It is important to configure this NSG correctly to allow SSH traffic from sshclientcontainerapp and restrict unwanted access.

Scenario Scripting

Based on the scenario described above, we now script the implementation of the architecture. The script creates the necessary resources, configures the SSH tunnel, and deploys the containerized application.

Prerequisites

Before running the script, make sure you have the following prerequisites:

  • Azure CLI is installed on your local machine.
  • Docker is installed on your local machine.
  • A valid Azure subscription.
  • Basic understanding of Azure Container Apps, Azure Container Registry, and Azure Virtual Networks.

Parameters

Let’s start by defining the parameters that will be used in the script. These parameters include the resource group name, location, virtual network name, subnet name, VM name, VM image, VM size, SSH key, admin username, admin password, container app environment name, container registry name, container app image name, SSH client container image name, SSH port, and NGINX port. A random string is generated and appended to the resource group name, container app environment name, and container registry name to ensure uniqueness.

random=$(echo $RANDOM | tr '[0-9]' '[a-z]')
echo "Random:" $random
export RESOURCE_GROUP=rg-ssh-$(echo $random)
echo "RESOURCE_GROUP:" $RESOURCE_GROUP
export LOCATION="australiaeast"
export INFRA_VNET_NAME="myInfraVNet"
export APP_VNET_NAME="myAppVNet"
export INFRA_SUBNET_NAME="mySubnet"
export APP_SUBNET_NAME="acaSubnet"
export VM_NAME="nginxVM"
export VM_IMAGE="Ubuntu2204"
export VM_SIZE="Standard_DS1_v2"
export VM_KEY=mykey$(echo $random)
export ADMIN_USERNAME="azureuser"
export ADMIN_PASSWORD="Password123$"  # Replace with your actual password
export CONTAINER_APPS_ENV=sshacae$(echo $random)
export REGISTRY_NAME=sshacr$(echo $random)
export REGISTRY_SKU="Basic"
export CONTAINER_APP_IMAGE="mycontainerapp:latest"
export SSH_CLIENT_CONTAINER_IMAGE="sshclientcontainer:latest"
export CONTAINER_APP_NAME="mycontainerapp"
export SSH_CLIENT_CONTAINER_APP_NAME="sshclientcontainerapp"
export SSH_PORT=22
export NGINX_PORT=80
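A quick sanity check on the generated names can save a failed deployment: container registry names, for example, must be lowercase alphanumerics of limited length (an assumed constraint worth verifying against current Azure limits). A sketch, using a deterministic stand-in for $RANDOM so the result is reproducible:

```shell
# Deterministic stand-in for $RANDOM; tr maps each digit to a letter
# exactly as in the parameter block above (1->b, 2->c, ...).
random=$(echo 12345 | tr '[0-9]' '[a-z]')
REGISTRY_NAME=sshacr$random
# ACR name rule assumed here: 5-50 chars, lowercase letters and digits only.
if echo "$REGISTRY_NAME" | grep -Eq '^[a-z0-9]{5,50}$'; then
  echo "registry name ok: $REGISTRY_NAME"
fi
```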

Create a resource group

Create a resource group using the az group create command. The resource group name and location are passed as parameters.

az group create --name $RESOURCE_GROUP --location $LOCATION  

Creating virtual networks and subnets

Create two virtual networks, myInfraVNet and myAppVNet, using the az network vnet create command. An address prefix and a subnet prefix are specified for each virtual network. The az network vnet subnet update command delegates the myAppVNet subnet to Microsoft.App/environments.

az network vnet create --resource-group $RESOURCE_GROUP --name $INFRA_VNET_NAME --address-prefix 10.0.0.0/16 --subnet-name $INFRA_SUBNET_NAME --subnet-prefix 10.0.0.0/24  
az network vnet create --resource-group $RESOURCE_GROUP --name $APP_VNET_NAME --address-prefix 10.1.0.0/16 --subnet-name $APP_SUBNET_NAME --subnet-prefix 10.1.0.0/24  
az network vnet subnet update --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --name $APP_SUBNET_NAME --delegations Microsoft.App/environments

Create VNet Peering

Create VNet peering between myInfraVNet and myAppVNet using the az network vnet peering create command. Two peering connections are created: one from myInfraVNet to myAppVNet, and the other from myAppVNet to myInfraVNet.

az network vnet peering create --name VNet1ToVNet2 --resource-group $RESOURCE_GROUP --vnet-name $INFRA_VNET_NAME --remote-vnet $APP_VNET_NAME --allow-vnet-access  
az network vnet peering create --name VNet2ToVNet1 --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --remote-vnet $INFRA_VNET_NAME --allow-vnet-access  

Create network security groups and rules

Create a network security group (NSG) for nginxVM using the az network nsg create command. Two NSG rules are created to allow SSH traffic on port 22 and HTTP traffic on port 80.

az network nsg create --resource-group $RESOURCE_GROUP --name ${VM_NAME}NSG  
az network nsg rule create --resource-group $RESOURCE_GROUP --nsg-name ${VM_NAME}NSG --name AllowSSH --protocol Tcp --direction Inbound --priority 1000 --source-address-prefixes '*' --source-port-ranges '*' --destination-address-prefixes '*' --destination-port-ranges $SSH_PORT --access Allow  
az network nsg rule create --resource-group $RESOURCE_GROUP --nsg-name ${VM_NAME}NSG --name AllowHTTP --protocol Tcp --direction Inbound --priority 1001 --source-address-prefixes '*' --source-port-ranges '*' --destination-address-prefixes '*' --destination-port-ranges $NGINX_PORT --access Allow  
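The rules above allow SSH and HTTP from any source ('*'). A hedged tightening, assuming the only legitimate SSH client is the container app subnet defined earlier, is to scope the source to that subnet's prefix. The sketch below only assembles the changed arguments; the full az network nsg rule create invocation follows the same shape as above:

```shell
# Scope SSH to the acaSubnet range from this article instead of '*'.
APP_SUBNET_PREFIX="10.1.0.0/24"   # acaSubnet prefix defined earlier
SSH_PORT=22
TIGHT_SSH_ARGS="--source-address-prefixes $APP_SUBNET_PREFIX --destination-port-ranges $SSH_PORT --access Allow"
echo "$TIGHT_SSH_ARGS"
```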

Create a network interface

Create a network interface for the nginxVM using the az network nic create command. The NIC is associated with myInfraVNet, mySubnet, and the NSG created previously.

az network nic create --resource-group $RESOURCE_GROUP --name ${VM_NAME}NIC --vnet-name $INFRA_VNET_NAME --subnet $INFRA_SUBNET_NAME --network-security-group ${VM_NAME}NSG 

Create VM

Create a virtual machine using the az vm create command. The VM is created with the specified image, size, administrator username, and password, and the previously created NIC is attached to it. Make sure you provide a value for the ADMIN_PASSWORD variable.

az vm create --resource-group $RESOURCE_GROUP --name $VM_NAME --image $VM_IMAGE --size $VM_SIZE --admin-username $ADMIN_USERNAME --admin-password $ADMIN_PASSWORD --nics ${VM_NAME}NIC  
export VM_PRIVATE_IP=$(az vm show -d -g $RESOURCE_GROUP -n $VM_NAME --query privateIps -o tsv)  
echo "VM Private IP: $VM_PRIVATE_IP"

Generate an SSH key pair and add the public key to the VM.

Generate an SSH key pair using the ssh-keygen command, then add the public key to the VM using the az vm user update command.

# Generate an SSH key pair  
ssh-keygen -t rsa -b 4096 -f $VM_KEY -N ""  
# Add the public key to the VM  
az vm user update --resource-group $RESOURCE_GROUP --name $VM_NAME --username $ADMIN_USERNAME --ssh-key-value "$(cat $VM_KEY.pub)"  
# Print success message  
echo "SSH key pair generated and public key added to VM $VM_NAME" 

Installing NGINX and SSH Server on a VM

Install NGINX and an SSH server on the VM using the az vm run-command invoke command. This command runs a shell script on the VM to update the package repositories, install NGINX, start the NGINX service, install the SSH server, and start the SSH service.

az vm run-command invoke --command-id RunShellScript --name $VM_NAME --resource-group $RESOURCE_GROUP --scripts "sudo apt-get update && sudo apt-get install -y nginx && sudo systemctl start nginx && sudo apt-get install -y openssh-server && sudo systemctl start ssh"

Create an Azure Container Registry

Create an Azure Container Registry using the az acr create command to store the container images that will be deployed to the container apps.

az acr create --resource-group $RESOURCE_GROUP --name $REGISTRY_NAME --sku $REGISTRY_SKU --location $LOCATION --admin-enabled true  

Log in to Azure Container Registry

Log in to the Azure Container Registry using the az acr login command.

az acr login --name $REGISTRY_NAME 

Create a Dockerfile for mycontainerapp.

Create a Dockerfile for mycontainerapp. The Dockerfile specifies a base image and a working directory, copies files, installs packages, exposes a port, defines an environment variable, and runs the application.

echo "
# Use an official Python runtime as a parent image  
FROM python:3.8-slim  
# Set the working directory in the container  
WORKDIR /app  
# Copy the current directory contents into the container at /app  
COPY . /app  
# Install any needed packages specified in requirements.txt  
RUN pip install --no-cache-dir -r requirements.txt  
# Make port 80 available to the world outside this container  
EXPOSE 80  
# Define environment variable  
ENV NAME World  
# Run app.py when the container launches  
CMD [\"python\", \"app.py\"]
" > Dockerfile.mycontainerapp

Create a Dockerfile for sshclientcontainer.

Create a Dockerfile for sshclientcontainer. The Dockerfile specifies the base image, installs the SSH client, copies the SSH key, sets the working directory, copies files, exposes a port, and runs the SSH client to establish the tunnel.

echo "
# Use an official Ubuntu as a parent image  
FROM ubuntu:20.04  
# Install SSH client  
RUN apt-get update && apt-get install -y openssh-client && apt-get install -y curl  
# Copy SSH key  
COPY ${VM_KEY} /root/.ssh/${VM_KEY}  
RUN chmod 600 /root/.ssh/${VM_KEY}  
# Set the working directory in the container  
WORKDIR /app  
# Copy the current directory contents into the container at /app  
COPY . /app  
# Make port 80 available to the world outside this container  
EXPOSE 80  
# Run the SSH client when the container launches  
CMD [\"bash\", \"-c\", \"ssh -i /root/.ssh/${VM_KEY} -o StrictHostKeyChecking=no -L 0.0.0.0:80:localhost:80 ${ADMIN_USERNAME}@${VM_PRIVATE_IP} -N\"]
" > Dockerfile.sshclientcontainer 
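The forward specification in the CMD line is the heart of the setup. Breaking it into its four fields (values taken from this article's deployment):

```shell
# Fields of the -L forward spec used in the Dockerfile's CMD line:
BIND_ADDR="0.0.0.0"    # listen on all interfaces inside the container,
                       # so other container apps can connect to it
LOCAL_PORT=80          # port the tunnel listens on in the container
DEST_HOST="localhost"  # resolved on the VM, i.e. the VM itself
DEST_PORT=80           # NGINX listening on the VM
FORWARD_SPEC="${BIND_ADDR}:${LOCAL_PORT}:${DEST_HOST}:${DEST_PORT}"
echo "$FORWARD_SPEC"
```

Any HTTP request that reaches sshclientcontainerapp on port 80 is therefore carried through the SSH connection and answered by NGINX on the VM.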

Create an app for mycontainerapp

Create a simple app to be hosted on mycontainerapp. The app.py file contains a simple Flask application that fetches content from the NGINX server running on the VM and renders it alongside other content. A requirements.txt file listing the Python dependencies is also created.

echo "
import requests
from flask import Flask, render_template_string

app = Flask(__name__)

@app.route('/')
def hello_world():
    response = requests.get('http://sshclientcontainerapp:80')
    html_content = \"\"\"
    <html>
    <head>
        <title>Response Page</title>
    </head>
    <body>
        <h1>Response Content - The following response has been received from the NGINX server running on the VM via SSH tunnel</h1>
        <div>{}</div>
        <footer>&copy; 2024 My Flask App</footer>
    </body>
    </html>
    \"\"\".format(response.text)
    return render_template_string(html_content)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
" > app.py

echo "
Flask==2.0.0
Werkzeug==2.2.2
requests==2.25.1
" > requirements.txt
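Stripped of Flask and HTTP, the page assembly in app.py is a single substitution of the fetched NGINX response into an HTML template. A minimal offline sketch of that step, with a stand-in for the real response text:

```shell
# Offline sketch of app.py's page assembly: substitute the NGINX
# response into the HTML template. The response text is a stand-in.
NGINX_RESPONSE="<p>Welcome to nginx!</p>"
PAGE=$(printf '<div>%s</div>' "$NGINX_RESPONSE")
echo "$PAGE"
```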

Build and push Docker images

Build the Docker images for sshclientcontainer and mycontainerapp using the docker build command. Each image is tagged with the Azure Container Registry name and pushed to the registry using the docker push command.

# Build the Docker image for sshclientcontainer  
docker build -t $REGISTRY_NAME.azurecr.io/$SSH_CLIENT_CONTAINER_IMAGE -f Dockerfile.sshclientcontainer .  
   
# Push the Docker image for sshclientcontainer to Azure Container Registry  
docker push $REGISTRY_NAME.azurecr.io/$SSH_CLIENT_CONTAINER_IMAGE  
   
# Build the Docker image for mycontainerapp  
docker build -t $REGISTRY_NAME.azurecr.io/$CONTAINER_APP_IMAGE -f Dockerfile.mycontainerapp .  
   
# Push the Docker image for mycontainerapp to Azure Container Registry  
docker push $REGISTRY_NAME.azurecr.io/$CONTAINER_APP_IMAGE  

Creating an Azure Container App Environment

Create an Azure Container Apps environment using the az containerapp env create command. The environment is associated with the myAppVNet subnet created previously.

# Get the subnet ID for the infrastructure subnet
export INFRA_SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --name $APP_SUBNET_NAME --query id --output tsv)
echo $INFRA_SUBNET_ID
# Create the Azure Container Apps environment
az containerapp env create --name $CONTAINER_APPS_ENV --resource-group $RESOURCE_GROUP --location $LOCATION --infrastructure-subnet-resource-id $INFRA_SUBNET_ID

Deploying container apps

Deploy the container apps to the Azure Container Apps environment using the az containerapp create command. The container images are pulled from Azure Container Registry, and the apps are configured to communicate through the SSH tunnel.

Deploy sshclientcontainerapp

az acr login --name $REGISTRY_NAME
# Deploy sshclientcontainerapp
az containerapp create --name $SSH_CLIENT_CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --environment $CONTAINER_APPS_ENV --image $REGISTRY_NAME.azurecr.io/$SSH_CLIENT_CONTAINER_IMAGE --target-port 80 --ingress 'external' --registry-server $REGISTRY_NAME.azurecr.io

Deploy mycontainerapp

az acr login --name $REGISTRY_NAME
az containerapp create --name $CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --environment $CONTAINER_APPS_ENV --image $REGISTRY_NAME.azurecr.io/$CONTAINER_APP_IMAGE --target-port 80 --ingress 'external' --registry-server $REGISTRY_NAME.azurecr.io

Deployment Testing

After deploying the container apps, you can test the deployment by accessing the public URL of mycontainerapp. The app fetches content from the NGINX server running on the VM via the SSH tunnel and renders it alongside other content.

  1. Retrieve the public URL of mycontainerapp:

    MY_CONTAINER_APP_URL=$(az containerapp show --name $CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --query 'properties.configuration.ingress.fqdn' -o tsv)  
    echo "mycontainerapp URL: http://$MY_CONTAINER_APP_URL"  
  2. Copy and paste the URL you printed in the previous step to open the URL in a web browser.

You should see a web page with the response from the NGINX server running on the VM over the SSH tunnel.
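A scriptable alternative to the browser check, assuming curl is available: an HTTP 200 from the app's FQDN confirms the whole path (ingress to mycontainerapp, then through the sshclientcontainerapp tunnel to NGINX). The FQDN below is a placeholder; in practice it comes from the az containerapp show query in step 1:

```shell
# Build the smoke-test command; MY_CONTAINER_APP_URL would come from the
# az containerapp show query above (placeholder FQDN here).
MY_CONTAINER_APP_URL="mycontainerapp.example.azurecontainerapps.io"
SMOKE_TEST="curl -s -o /dev/null -w %{http_code} http://$MY_CONTAINER_APP_URL"
echo "$SMOKE_TEST"   # a printed status of 200 means the tunnel path works
```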

[Figure: mycontainerapp rendering the NGINX response received through the SSH tunnel]

Clean Up

After testing your deployment, you can clean up your resources by deleting the resource group. This will remove all resources created by the script.

az group delete --name $RESOURCE_GROUP --yes --no-wait

Conclusion

In this article, we covered how to secure a containerized application using SSH tunneling. We covered the steps to set up the necessary infrastructure, create and deploy a containerized application, and set up an SSH tunnel for secure communication between the containerized app and the VM hosting the NGINX server.

Following these steps will ensure that your containerized applications can securely communicate with downstream resources and strengthen the overall security of your cloud-native architecture.

For more information about securing containerized applications, see the Azure Container Apps documentation.

If you have any questions or need further assistance, see the Azure documentation or contact Microsoft support.
