
Modernising Registrar Technology: Implementing EPP with Kotlin, Spring & Azure Container Apps



Introduction

In the domain management industry, technological advancement has often been a slow and cautious process, lagging behind the rapid innovations seen in other tech sectors. This measured pace is understandable given the critical role domain infrastructure plays in the global internet ecosystem. However, as we stand on the cusp of a new era in web technology, it is becoming increasingly clear that modernization should be a priority. This blog post embarks on a journey to demystify one of the most critical yet often misunderstood components of the industry: the Extensible Provisioning Protocol (EPP).

 

Throughout this blog, we will dive deep into the intricacies of EPP, exploring its structure, its commands and how it fits into the broader domain management system. We will walk through the process of building a robust EPP client using Kotlin and Spring Boot. Then, we will take our solution to the next level by containerizing it with Docker and deploying it to Azure Container Apps, showcasing how modern cloud technologies can improve the reliability and scalability of your domain management system. We will also set up a continuous integration and deployment (CI/CD) pipeline, ensuring that your EPP implementation remains up-to-date and easily maintainable.

 

By the end of this blog, you will be able to provision domains programmatically via an endpoint, and have the code foundation ready to create dozens of other domain management commands (e.g. updating nameservers, updating contact info, renewing and transferring domains).

 

Who it is for

This guide is tailored primarily for registrars — services that serve as the crucial intermediary between domain registrants (the end users who wish to claim their piece of internet real estate) and the registry systems that manage those domains. While the concepts we will explore have broad applications across the domain industry, the perspective throughout will be firmly rooted in the registrar’s role. The fundamental goal of this blog is to lower the barrier to entry in the domain management space, making this technology more accessible to smaller registrars, startups and individual developers.

 

What you will need: EPP credentials

The entire tech stack and the development prerequisites are listed below. But before committing to this project, be aware that the cornerstone of this workflow is the registry EPP server. This is non-negotiable and absolutely essential for implementing and testing your EPP client.

 

If you stumbled upon this blog, it is likely you already have accreditation with a registry. In this case, the registry will provide you with EPP credentials (expect a host, port, username and password). Note that some registries enforce an IP whitelist. If you do not have accreditation with a registry, you will need to go through the relevant accreditation process or use a publicly available sandbox.

 

 

The registry I will be using as my case study is the Channel Islands registry. They offer the following TLDs: .gg, .je, .co.gg, .net.gg, .org.gg, .co.je, .net.je, .org.je. Among these, we will concentrate on provisioning .gg domains; the .gg TLD has gained significant popularity, particularly in the gaming community. My own accreditation with the Channel Islands registry involved an application process and a fee, after which they provided live EPP details as well as access to an OTE (Operational Test & Evaluation) environment, which I will be using in this blog so as not to incur unnecessary costs.

 

If you do not have access to any EPP server, this blog will serve as information only. Otherwise, you can follow along in creating the system.

 

Understanding EPP

EPP is short for Extensible Provisioning Protocol. It is a protocol designed to streamline and standardise communication between domain name registries and registrars. Developed to replace older, less efficient protocols, EPP has become the industry standard for domain registration and management operations.

 

More technically, EPP is an XML-based protocol that facilitates the provisioning and management of domain names, host objects and contact information. Key features include:

  • Stateful connections: EPP maintains persistent connections between registrars and registries, reducing overhead and improving performance.
  • Extensibility: As the name suggests, EPP is designed to be extensible. Registries can add custom extensions to support unique features or requirements.
  • Standardization: EPP provides a uniform interface across different registries, simplifying integration for registrars and reducing development costs.

For someone new to this field, it is easy to assume that domain provisioning would be done through a REST API exposed by the registry. In reality, the industry standard is this XML-based protocol, and that is what this blog will cover.
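
To make "XML-based" concrete, a domain check command on the wire looks roughly like the following (modelled on the example in RFC 5731; the domain name and client transaction ID are placeholders). The library we use later generates and parses this XML for us:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
  <command>
    <check>
      <!-- Ask the registry whether one or more domain names are available -->
      <domain:check xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
        <domain:name>example.gg</domain:name>
      </domain:check>
    </check>
    <!-- Client transaction ID, echoed back in the response -->
    <clTrID>ABC-12345</clTrID>
  </command>
</epp>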

 

Choosing the tech stack

We need a combination of technologies that will provide performance, scalability and developer productivity. After careful consideration, I settled on using Kotlin as the programming language, Spring for the REST API and Azure Container Apps for deployment.

 

Kotlin

Kotlin has a unique blend of features which makes it a great choice for our EPP implementation. Its seamless interoperability with Java allows us to leverage existing Java libraries commonly used in other EPP implementations while enjoying Kotlin’s modern syntax. The language’s conciseness and readability results in cleaner, more maintainable code, which is particularly beneficial when dealing with complex EPP commands and responses.

 

Spring

The Spring framework plays a pivotal role in our project. After implementing the EPP functions, we will use Spring to expose them to the outside world as REST endpoints that can be called from outside our Azure Container Apps deployment, for example by a web backend. This is a common registrar pattern: when a registrant attempts to register a domain, the web application performs most of the validation and then sends the instruction to our Spring REST API, which issues the EPP commands.

 

Azure Container Apps (‘ACA’)

One may initially assume that Azure Spring Apps would be perfect for this project. However, it was recently announced that this service is being retired, starting Sep 30th, 2024, and ending March 31st, 2028. The official migration recommendation is to move to Azure Container Apps. Note that there are other migration paths, such as a PaaS solution with Azure App Service or a containerized solution with Azure Kubernetes Service, though we will be using ACA for this blog. Read more on the retirement: https://learn.microsoft.com/en-us/azure/spring-apps/basic-standard/retirement-announcement

 

Azure Container Apps rounds out our tech stack, providing the ideal platform for deploying and scaling our EPP implementation. This fully managed environment allows us to focus on our application logic rather than getting bogged down in infrastructure management. One of the key advantages of ACA is its native support for microservices architecture, which makes it the perfect choice for a Spring application. Spring’s embedded Tomcat server aligns with ACA’s containerised approach, allowing for easy deployment with reduced development time. Moreover, ACA’s built-in ingress and SSL termination capabilities complement Spring’s security features, providing a robust, secure environment for our EPP operations. The platform’s ability to handle multiple revisions of an application also facilitates easy A/B testing and canary deployments, which is particularly useful when rolling out updates to our EPP system.

 

The architecture

Now that we are familiar with the technology, let us look at how this is all going to fit together. The architecture is fairly simple:

 

[Architecture diagram: registrant → website backend (Azure Web App) → EPP API on Azure Container Apps → registry EPP server, with Azure CosmosDB as an optional database]

 

In this blog, we will be making the EPP API and deploying it to an Azure Container App. The EPP API will, of course, need to connect and communicate with a registry server. While out of scope for this blog, I have included Azure CosmosDB to show where a custom user database could fit into this flow, and an Azure Web App to show a common use case for end users. Once we have put together this EPP API which connects to a registry with Kotlin & Spring, and deployed it on ACA, the hard part is out of the way. From there, you can create any sort of user interface that is relevant to your audience (e.g. an Azure Web App) and connect with a database in any way that is relevant to your platform (e.g. Azure CosmosDB for caching).

 

To put this architecture into a real-world context, imagine that you are purchasing a domain from a popular registrar such as Namecheap or GoDaddy; this is the kind of backend system they may have. The typical user journey, in simplified steps as illustrated by the diagram, would be:

  1. Registrant (end user) requests to purchase a domain
  2. Website backend sends instruction to EPP API (what we are making in this blog)
  3. EPP API sends command to the EPP server provided by the registry
  4. Response provided by registry and received by registrant (end user) on website

Setting up the development environment

Prerequisites

For this blog, I will be using the following technologies:

  1. Visual Studio Code (VS Code) as the IDE (integrated development environment). I will be installing some extensions and changing some settings to make it work for our technology. Download: Visual Studio Code – Mac, Linux, Windows
  2. Docker CLI for containerization and local testing. Download: Get Started | Docker
  3. Azure CLI for deployment to Azure Container Registry & Azure Container Apps (you can use the portal if more comfortable). Download: How to install the Azure CLI | Microsoft Learn
  4. Git for version control and pushing to GitHub to set up the CI/CD pipeline. Download: Git – Downloads (git-scm.com)

 

VS Code Extensions

These extensions are optional but will significantly improve the development experience. I would highly recommend installing them. Head to the side panel on the left, click Extensions and install the following:

  1. Kotlin
  2. Spring Initializr Java Support

 

Implementing EPP with Kotlin & Spring

Creating the project

First up, let us create a blank Spring project. We will do this with the Spring Initializr plugin we just installed:

  1. Press CTRL + SHIFT + P to open the command palette
  2. Select Spring Initializr: Create a Gradle project...
  3. Select version (I recommend 3.3.4)
  4. Select Kotlin as project language
  5. Type Group Id (I am using com.stephen)
  6. Type Artifact ID (I am using eppapi)
  7. Select jar as packaging type
  8. Select any Java version (The version choice is yours)
  9. Add Spring Web as a dependency
  10. Choose a folder
  11. Open project

Your project should look like this:

[Screenshot: the generated project structure]

 

We are using the Gradle build tool for this project. Gradle is a powerful, flexible build automation tool that supports multi-language development and offers convenient integration with both Kotlin & Spring. Gradle will handle our dependency management, allowing us to focus on our EPP implementation rather than build configuration intricacies.

 

Adding the EPP dependency

The Spring Initializr has kindly added the required Spring dependencies for us. Therefore, all that is left is our EPP dependency. When exploring how best to connect to a registry through EPP, I discovered the EPP RTK (Registrar Toolkit) library. This library provides a robust implementation of the Extensible Provisioning Protocol, making it an ideal choice for our project. This library is particularly useful because:

  • It handles the low-level details of EPP communication, allowing us to focus on business logic.
  • It is a Java-based implementation, which integrates seamlessly with our Kotlin and Spring setup.
  • It supports all basic EPP commands out of the box, such as domain checks, registrations and transfers.

By using the EPP-RTK, we can significantly reduce the amount of boilerplate code needed to implement EPP functionality.

You can download the library and import it into your project manually, or preferably add the following to the dependencies section of your build.gradle:

implementation 'io.github.mschout:epp-rtk-java:0.9.11'

 

Also, while we are here, I would recommend setting the Spring Boot plugin to version 2.7.18. This version is the most compatible with the APIs we are using, and I have tried and tested it. To do this, in the plugins block, change the dependency to this:

id 'org.springframework.boot' version '2.7.18'

 

 

Modifying the build settings

With that knowledge, there are some specific things we need to change in our build.gradle to support the proper Java version. The version is entirely up to you, though I would personally recommend the latest to stay up to date with security patches. Copy/replace the following into the build.gradle:

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(21)
    }
    sourceCompatibility = JavaVersion.VERSION_21
    targetCompatibility = JavaVersion.VERSION_21
}

kotlin {
    jvmToolchain(21)
}

tasks.withType(org.jetbrains.kotlin.gradle.tasks.KotlinCompile) {
    kotlinOptions {
        jvmTarget = "21"
        freeCompilerArgs = ["-Xjsr305=strict"]
    }
}

tasks.named('test') {
    enabled = false
}

 

At this point, it is good practice to attempt to build the project. It should build comfortably with these new settings; if not, now is the perfect time to deal with errors before we get into the codebase. To do this, either use the built-in Gradle panel on the sidebar and click through Tasks > build > build, or run this command in the terminal:

./gradlew build

After a few seconds, you should be met with BUILD SUCCESSFUL.

 

The structure

Our intention here is to build a REST API which will take in requests and then use EPP-RTK to beam off commands to the targeted EPP registry. I recommend the following steps for a solid project structure:

  1. Rename the main class to EPPAPI.kt (Spring auto generation did not do it justice).
  2. Split the code into two folders: epp and api, with our main class remaining at the root.
  3. Create a class inside the epp folder named EPP.kt – this is where we will connect to and manage the EPPClient soon.
  4. Create a class inside the api folder named API.kt – this is where we will configure and run the Spring API.

Your file structure should now look like this:

EPPAPI.kt
api
└── API.kt
epp
└── EPP.kt

Before we can get to coding, there is one final step: adding environment variables. To connect to the targeted EPP server, we need four variables: host, port, username and password. These will be provided by your chosen registry. It is possible that, as in my case, the registry may also grant you access to an OTE (Operational Test & Evaluation) environment, which is essentially a 1:1 copy of the live EPP server that acts as a sandbox for registrars to test their systems without fear of affecting data on the live registry. I highly recommend hooking up to an OTE during testing, if your registry has provided one, so that you do not incur unnecessary costs.

 

Create a file in the root of your project called .env and populate with the following structure. I have prefilled with the host and port for the registry I am using to show the expected format:

HOST=ote.channelisles.net
PORT=700
USERNAME=X
PASSWORD=X

 

We will use these environment variables while running our project locally in VS Code and then prefill them into Docker when containerizing locally. For Azure Container Apps, we will provide them manually when setting up the environment.
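
As an optional safeguard (not part of the code that follows, and the requireEnv name is my own), you could resolve these variables through a small helper that fails fast with a readable message when one is missing, rather than a NullPointerException at connect time:

// Sketch: read a required environment variable or stop with a clear message.
// Could replace the direct System.getenv() calls used later in the EPP class.
private fun requireEnv(name: String): String =
    System.getenv(name) ?: error("Missing required environment variable: $name")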

 

The code

Now comes the fun part. We have successfully set up our development environment and structured the project, so now let us populate it with some code. Given this project is in Kotlin, I will be writing idiomatic Kotlin syntax as illustrated in the Kotlin docs: https://kotlinlang.org/docs/home.html

 

Firstly, let us tackle our EPP class. The goal with this class is to provide access to an EPPClient which we can use to connect to the EPP server and authenticate with our details. The class will extend the EPPClient provided by the EPP-RTK API and implement a singleton pattern through its companion object. The class uses the environment variables we set earlier for configuration. The create() function serves as a factory method, handling the process of establishing a secure SSL connection, logging in and initializing the client. It employs Kotlin’s apply function for a concise and readable initialization block. The implementation also includes error handling and logging which will help us debug if anything goes wrong. The setupSSLContext() function configures a trust-all certificate strategy, which, while not recommended for production, is useful in development or specific controlled environments. This design will allow for easy extension through Kotlin’s extension functions on the companion object.

 

import com.tucows.oxrs.epprtk.rtk.EPPClient
import java.net.Socket
import java.security.KeyStore
import java.security.cert.X509Certificate
import javax.net.ssl.KeyManagerFactory
import javax.net.ssl.SSLContext
import javax.net.ssl.TrustManager
import javax.net.ssl.X509TrustManager

class EPP private constructor(
    host: String,
    port: Int,
    username: String,
    password: String,
) : EPPClient(host, port, username, password) {
    companion object {
        private val HOST = System.getenv("HOST")
        private val PORT = System.getenv("PORT").toInt()
        private val USERNAME = System.getenv("USERNAME")
        private val PASSWORD = System.getenv("PASSWORD")

        lateinit var client: EPP

        fun create(): EPP {
            println("Creating client with HOST: $HOST, PORT: $PORT, USERNAME: $USERNAME")
            return EPP(HOST, PORT, USERNAME, PASSWORD).apply {
                try {
                    println("Creating SSL socket...")
                    val socket = createSSLSocket()
                    println("SSL socket created. Setting socket to EPP server...")
                    setSocketToEPPServer(socket)
                    println("Socket set. Getting greeting...")
                    val greeting = greeting
                    println("Greeting received: $greeting")
                    println("Connecting...")
                    connect()
                    println("Connected. Logging in...")
                    login(PASSWORD)
                    println("Login successful.")
                    client = this
                } catch (e: Exception) {
                    println("Error during client creation: ${e.message}")
                    e.printStackTrace()
                    throw e
                }
            }
        }

        private fun createSSLSocket(): Socket {
            val sslContext = setupSSLContext()
            return sslContext.socketFactory.createSocket(HOST, PORT) as Socket
        }

        private fun setupSSLContext(): SSLContext {
            val trustAllCerts = arrayOf<TrustManager>(object : X509TrustManager {
                override fun getAcceptedIssuers(): Array<X509Certificate>? = null
                override fun checkClientTrusted(certs: Array<X509Certificate>, authType: String) {}
                override fun checkServerTrusted(certs: Array<X509Certificate>, authType: String) {}
            })
            val keyStore = KeyStore.getInstance(KeyStore.getDefaultType()).apply {
                load(null, null)
            }
            val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm()).apply {
                init(keyStore, "".toCharArray())
            }
            return SSLContext.getInstance("TLS").apply {
                init(kmf.keyManagers, trustAllCerts, java.security.SecureRandom())
            }
        }
    }
}

 

Now that this is configured, let us alter our main class to ensure that we connect and authenticate into the client when our project is run. I have removed the default generated Spring content as we will move this to the dedicated API.kt class shortly. The main class should now look like this:

fun main() {
    EPP.create()
}

 

Now your application is able to connect and authenticate with an EPP server! However, that in itself is not very useful, so next we will focus on creating specific functions that will send off EPP messages to the target server and get a response.

 

Before we continue, it is important to understand the three main objects in domain management: domains, contacts and hosts.

  • Domains: These are the web addresses that users type into their browsers. In EPP, a domain object represents the registration of a domain name.
  • Contacts: These are individuals or entities associated with a domain. There are typically four types of contact: Registrant, Admin, Tech & Billing. ICANN (Internet Corporation for Assigned Names and Numbers) mandates that every provisioned domain must have a valid contact attached to it.
  • Hosts: Also known as nameservers, these are the servers that translate domain names into IP addresses. In EPP, host objects can either be internal (subordinate to a domain in the registry) or external.

Understanding these concepts is crucial because EPP operations involve creating, modifying or querying these objects. For instance, when registering a domain, you need to specify contacts and hosts.

 

With that knowledge, let us create three folders inside our epp folder, named domain, contact and host. And the first EPP command we will make is the simplest: a domain check. Because this relates to domain objects, create a class inside the domain folder named CheckDomain.kt. Your project structure should now look like this:

EPPAPI.kt
api
└── API.kt
epp
├── contact
├── domain
│   └── CheckDomain.kt
├── host
└── EPP.kt

 

Let us go and write our first EPP operation: checking if a domain is available for registration. I am going to create a Kotlin extension function inside our CheckDomain.kt class called checkDomain which can be used on our EPP class. Here’s the code:

import epp.EPP
import com.tucows.oxrs.epprtk.rtk.xml.EPPDomainCheck
import org.openrtk.idl.epprtk.domain.epp_DomainCheckReq
import org.openrtk.idl.epprtk.domain.epp_DomainCheckRsp
import org.openrtk.idl.epprtk.epp_Command

fun EPP.Companion.checkDomain(
    domainName: String,
): Boolean {
    val check = EPPDomainCheck().apply {
        setRequestData(
            epp_DomainCheckReq(
                epp_Command(),
                arrayOf(domainName)
            )
        )
    }

    val response = client.processAction(check) as EPPDomainCheck
    val domainCheck = response.responseData as epp_DomainCheckRsp

    return domainCheck.results[0].avail
}

 

Here is the flow of the function:

  1. We create an EPPDomainCheck object, which represents an EPP domain check command.
  2. We set the request data using epp_DomainCheckReq. This takes an epp_Command (a generic EPP command) and an array of domain names to check. In this case, we are only checking one domain.
  3. We process the action using our EPP client’s processAction function, which sends the request to the EPP server.
  4. We cast the response to EPPDomainCheck and extract the responseData.
  5. Finally, we return whether the domain is available or not from the first (and only) result by checking the avail value.

From an EPP perspective, this function is sending a domain check command to the EPP server. The server responds with information about whether the specified domain is available for registration. Remember, EPP is an XML-based protocol; the RTK parses the registry's XML response into objects, so the raw output for a check of, for example, example.gg, looks like the following:

org.openrtk.idl.epprtk.domain.epp_DomainCheckRsp: { m_rsp [org.openrtk.idl.epprtk.epp_Response: { m_results [[org.openrtk.idl.epprtk.epp_Result: { m_code [1000] m_values [null] m_ext_values [null] m_msg [Command completed successfully] m_lang [] }]] m_message_queue [org.openrtk.idl.epprtk.epp_MessageQueue: { m_count [4] m_queue_date [null] m_msg [null] m_id [916211] }] m_extension_strings [null] m_trans_id [org.openrtk.idl.epprtk.epp_TransID: { m_client_trid [null] m_server_trid [1728106430577] }] }] m_results [[org.openrtk.idl.epprtk.epp_CheckResult: { m_value [example.gg] m_avail [false] m_reason [(00) The domain exists] m_lang [] }]] }

 

This is why we do the casting and filter through to the Boolean to provide back to the calling function. Otherwise, this would be a mess to deal with. It is important to do the validation and casting in this function so that we do not pass the heavy work back upstream.

 

By implementing this as an extension function on our EPP class, we can call it super easily. Let us add it to our main class as a test:

fun main() {
    EPP.create()
    println(EPP.checkDomain("example.gg"))
}

 

As opposed to a long string of XML, our function has made it so that the console simply prints true or false (in this case false). This pattern of creating extension functions for various EPP operations allows us to build a clean, intuitive API for interacting with the EPP server, while keeping our core EPP class focused on connection and authentication.
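
For example, since epp_DomainCheckReq accepts an array of names, the same pattern extends naturally to checking several domains in one round trip. The sketch below could live alongside checkDomain in CheckDomain.kt; note that the value accessor on each check result is an assumption, mirroring the avail accessor we used above:

import com.tucows.oxrs.epprtk.rtk.xml.EPPDomainCheck
import epp.EPP
import org.openrtk.idl.epprtk.domain.epp_DomainCheckReq
import org.openrtk.idl.epprtk.domain.epp_DomainCheckRsp
import org.openrtk.idl.epprtk.epp_Command

// Sketch: same command as checkDomain, but with every requested name in one request.
fun EPP.Companion.checkDomains(domainNames: Array<String>): Map<String, Boolean> {
    val check = EPPDomainCheck().apply {
        setRequestData(epp_DomainCheckReq(epp_Command(), domainNames))
    }

    val response = client.processAction(check) as EPPDomainCheck
    val rsp = response.responseData as epp_DomainCheckRsp

    // Map each domain name to its availability flag.
    return rsp.results.associate { it.value to it.avail }
}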

 

Now that the basic check is done, let us look at what is required to provision a domain. Remember that domains, contacts and hosts can all be used with a number of operations, including creating, updating, deleting and querying. In order to register a domain, we will need to create a domain object, which first requires that a contact and host object be created.

 

Let us start with creating a contact. I have created a CreateContact.kt class under my /epp/contact folder. Here is how it looks:

import com.tucows.oxrs.epprtk.rtk.xml.EPPContactCreate
import epp.EPP
import org.openrtk.idl.epprtk.contact.*
import org.openrtk.idl.epprtk.epp_AuthInfo
import org.openrtk.idl.epprtk.epp_AuthInfoType
import org.openrtk.idl.epprtk.epp_Command

fun EPP.Companion.createContact(
    contactId: String,
    name: String,
    organization: String? = null,
    street: String,
    street2: String? = null,
    street3: String? = null,
    city: String,
    state: String? = null,
    zip: String? = null,
    country: String,
    phone: String,
    fax: String? = null,
    email: String
): Boolean {
    val create = EPPContactCreate().apply {
        setRequestData(
            epp_ContactCreateReq(
                epp_Command(),
                contactId,
                arrayOf(
                    epp_ContactNameAddress(
                        epp_ContactPostalInfoType.INT,
                        name,
                        organization,
                        epp_ContactAddress(street, street2, street3, city, state, zip, country)
                    )
                ),
                phone.let { epp_ContactPhone(null, it) },
                fax?.let { epp_ContactPhone(null, it) },
                email,
                epp_AuthInfo(epp_AuthInfoType.PW, null, "pass")
            )
        )
    }

    val response = client.processAction(create) as EPPContactCreate
    val contactCreate = response.responseData as epp_ContactCreateRsp

    return contactCreate.rsp.results[0].m_code.toInt() == 1000
}

 

In this command, we are using similar logic to domain checking: we create an EPPContactCreate object, which we populate with the data taken in through the function parameters. Some of that data is optional, and I have given default null values to everything that is optional according to the EPP specification. I am then checking the m_code, which is, for all intents and purposes, a code that indicates the result of the operation. According to the EPP specification, a result code of 1000 indicates a successful operation.
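
For reference, result codes are defined in RFC 5730: codes in the 1xxx range indicate success (1000 is "command completed successfully", 1001 is "completed, action pending"), while 2xxx codes indicate failure. If you end up checking codes in several wrappers, a tiny helper of your own (sketched below, not part of the RTK) keeps that knowledge in one place:

// Sketch: RFC 5730 groups result codes into 1xxx (success) and 2xxx (failure).
fun isEppSuccess(code: Int): Boolean = code in 1000..1999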

 

The last step before we can work on provisioning a domain is creating a host object. In EPP, host objects represent the nameservers that will be associated with our domain. Registries require these for two main reasons: to ensure newly registered domains are immediately operational in the DNS, and to create the necessary glue records for internal nameservers (nameservers within the same TLD as the domain). Whether this is required depends on your chosen registry. In my case study with the Channel Isles, there is no requirement that a host object be created on the system before the EPP can provision a domain using external nameservers. However, I will share the code in case your registry's requirements differ. Following on from our previous two commands, I created a CreateHost.kt class in my /epp/host folder with the following code:

import com.tucows.oxrs.epprtk.rtk.xml.EPPHostCreate
import epp.EPP
import org.openrtk.idl.epprtk.epp_Command
import org.openrtk.idl.epprtk.host.epp_HostAddress
import org.openrtk.idl.epprtk.host.epp_HostAddressType
import org.openrtk.idl.epprtk.host.epp_HostCreateReq
import org.openrtk.idl.epprtk.host.epp_HostCreateRsp

fun EPP.Companion.createHost(
    hostName: String,
    ipAddresses: Array<String>?
): Boolean {
    val create = EPPHostCreate().apply {
        setRequestData(
            epp_HostCreateReq(
                epp_Command(),
                hostName,
                ipAddresses?.map { epp_HostAddress(epp_HostAddressType.IPV4, it) }?.toTypedArray()
            )
        )
    }

    val response = client.processAction(create) as EPPHostCreate
    val hostCreate = response.responseData as epp_HostCreateRsp
    
    return hostCreate.rsp.results[0].code.toInt() == 1000
}

 

As before, this function creates the EPP host create request, processes the action, checks the result code and returns true if the code is 1000, and false otherwise. The parameters are particularly important here and can lead to confusion for those not too familiar with how DNS works. The hostName parameter is the fully qualified domain name (FQDN) of the host we are creating. For example, ns1.example.com. The other ask is an array of IP addresses associated with the host. This is more crucial for internal nameservers, and for external nameservers (probably your use case) this can often be left null.
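For illustration, usage might look like the snippet below (assuming the createHost extension from epp.host is imported); the hostnames are placeholders and 192.0.2.1 is a documentation-range address:

// Internal nameserver (subordinate to a domain in this registry): glue record IPs required.
EPP.createHost("ns1.example.gg", arrayOf("192.0.2.1"))

// External nameserver: the registry can already resolve it, so no addresses are needed.
EPP.createHost("ns1.externaldns.com", null)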

 

Now that the one definite prerequisite (a contact) and the other potential prerequisite (a host) to provisioning a domain are in our codebase, let us get to the star of the show. The following function is an EPP command that will provision a domain using the objects we just created. I created the following function in a class called CreateDomain.kt in my /epp/domain folder:

import epp.EPP
import com.tucows.oxrs.epprtk.rtk.xml.EPPDomainCreate
import org.openrtk.idl.epprtk.domain.*
import org.openrtk.idl.epprtk.epp_AuthInfo
import org.openrtk.idl.epprtk.epp_AuthInfoType
import org.openrtk.idl.epprtk.epp_Command

fun EPP.Companion.createDomain(
    domainName: String,
    registrantId: String,
    adminContactId: String,
    techContactId: String,
    billingContactId: String,
    nameservers: Array<String>,
    password: String,
    period: Short = 1
): Boolean {
    val create = EPPDomainCreate().apply {
        setRequestData(
            epp_DomainCreateReq(
                epp_Command(),
                domainName,
                epp_DomainPeriod(epp_DomainPeriodUnitType.YEAR, period),
                nameservers,
                registrantId,
                arrayOf(
                    epp_DomainContact(epp_DomainContactType.ADMIN, adminContactId),
                    epp_DomainContact(epp_DomainContactType.TECH, techContactId),
                    epp_DomainContact(epp_DomainContactType.BILLING, billingContactId)
                ),
                epp_AuthInfo(epp_AuthInfoType.PW, null, password)
            )
        )
    }

    val response = client.processAction(create) as EPPDomainCreate
    val domainCreate = response.responseData as epp_DomainCreateRsp
    
    return domainCreate.rsp.results[0].code.toInt() == 1000
}

 

This createDomain function encapsulates the EPP command for provisioning a new domain. The function brings together all the pieces we have prepared: contacts, hosts and domain-specific information. As before, it constructs an EPP domain create request, associating the domain with its contacts and nameservers. It then processes this request and checks the result code to determine if the request was successful. By returning a Boolean, we can easily pass the response upstream and, if connected to a user interface such as a web application, can inform the end user.

 

With these functions in place, we now have the ability to provision a domain. I will run the following test in my main class:

import epp.EPP
import epp.contact.createContact
import epp.domain.createDomain

fun main() {
    EPP.create()

    val contactResponse = EPP.createContact(
        contactId = "12345",
        name = "Stephen",
        organization = "Test",
        street = "Test Street",
        street2 = "Test Street 2",
        street3 = "Test Street 3",
        city = "Test City",
        state = "Test State",
        zip = "Test Zip",
        country = "GB",
        phone = "1234567890",
        fax = "1234567890",
        email = "test@gg.com"
    )
    if (contactResponse) {
        println("Contact created")
    } else {
        println("Contact creation failed")
        return
    }

    val domainResponse = EPP.createDomain(
        domainName = "randomavailabletestdomain.gg",
        registrantId = "123",
        adminContactId = "123",
        techContactId = "123",
        billingContactId = "123",
        nameservers = arrayOf("ernest.ns.cloudflare.com", "adaline.ns.cloudflare.com"),
        password = "XYZXYZ",
        period = 1
    )
    if (domainResponse) {
        println("Domain created")
    } else {
        println("Domain creation failed")
    }
}

 

In this function, which runs when the application first starts, we first create a contact using our createContact extension function. I have passed every parameter, required or optional, to show how it would look. Then, once the contact is confirmed as created, we create a domain with our createDomain extension function, giving it the required parameters such as the domain name and the nameservers, and providing the ID of the contact created just above in the four contact fields. The contact ID provided must belong to a valid contact that already exists in the registry's system. Therefore, combining these functions should provision a domain.

 

After running it, the output in console should be:

Contact created
Domain created

 

And for humour, here are the raw responses from the EPP server (as parsed into RTK objects) before we did our own filtering in our extension functions:

org.openrtk.idl.epprtk.contact.epp_ContactCreateRsp: { m_rsp [org.openrtk.idl.epprtk.epp_Response: { m_results [[org.openrtk.idl.epprtk.epp_Result: { m_code [1000] m_values [null] m_ext_values [null] m_msg [Command completed successfully] m_lang [] }]] m_message_queue [org.openrtk.idl.epprtk.epp_MessageQueue: { m_count [4] m_queue_date [null] m_msg [null] m_id [916211] }] m_extension_strings [null] m_trans_id [org.openrtk.idl.epprtk.epp_TransID: { m_client_trid [null] m_server_trid [1728110331411] }] }] m_id [123456] m_creation_date [2024-10-05T06:38:51.408Z] }

org.openrtk.idl.epprtk.domain.epp_DomainCreateRsp: { m_rsp [org.openrtk.idl.epprtk.epp_Response: { m_results [[org.openrtk.idl.epprtk.epp_Result: { m_code [1000] m_values [null] m_ext_values [null] m_msg [Command completed successfully] m_lang [] }]] m_message_queue [org.openrtk.idl.epprtk.epp_MessageQueue: { m_count [4] m_queue_date [null] m_msg [null] m_id [916211] }] m_extension_strings [null] m_trans_id [org.openrtk.idl.epprtk.epp_TransID: { m_client_trid [null] m_server_trid [1728110331467] }] }] m_name [randomavailabletestdomain2.gg] m_creation_date [2024-10-05T06:38:51.464Z] m_expiration_date [2025-10-05T06:38:51.493Z] }

 

Both of those objects were created using our extension functions on top of the EPP-RTK which is in contact with my target EPP server. If your registry has a user interface, you should see that these objects have now been created and are usable going forward. For example, one contact can be used for multiple domains. For my case study, you can see that both objects were successfully created on the Channel Isles side through our EPP communication:

 

[Screenshots: the newly created contact and domain shown in the Channel Isles registry portal]

 

In simple terms, this means they have received the instruction and successfully provisioned our domain pointing at the nameservers we provided! This now means that the domain is in my (or my registrant’s) possession and now I am able to control the website showing at that domain.

 

What about all of the other EPP commands? After all, the EPP-RTK supports the following commands:

  • Domain check
  • Domain info
  • Domain create
  • Domain update
  • Domain delete
  • Domain transfer
  • Contact check
  • Contact info
  • Contact create
  • Contact update
  • Contact delete
  • Contact transfer
  • Host check
  • Host info
  • Host create
  • Host update
  • Host delete

We have made four of these in this blog: creating a host, creating a contact, creating a domain and checking a domain. The code for the rest of these commands follows the exact same pattern, and if at any point you get stuck I highly recommend the official documentation of the EPP-RTK API: https://epp-rtk.sourceforge.net/epp-rtk-java-0.4.1/java/doc/epp-rtk-user-guide.html

 

This documentation is where I got all my information from for these commands and for this project as a whole. If you are looking at productionizing this project and intend to implement the remaining commands, you will find that the code is almost identical across the different commands with the only exception being the required parameters for each request.
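
As an illustration of how little changes between commands, here is what a domain info wrapper might look like. I have not tested this one: the EPPDomainInfo, epp_DomainInfoReq and epp_DomainInfoRsp names and the constructor arguments are assumed by analogy with the check and create commands above, so verify them against the RTK user guide before relying on it:

import com.tucows.oxrs.epprtk.rtk.xml.EPPDomainInfo
import epp.EPP
import org.openrtk.idl.epprtk.domain.epp_DomainInfoReq
import org.openrtk.idl.epprtk.domain.epp_DomainInfoRsp
import org.openrtk.idl.epprtk.epp_Command

// Sketch only: class and constructor names assumed from the RTK's conventions.
fun EPP.Companion.infoDomain(domainName: String): epp_DomainInfoRsp {
    val info = EPPDomainInfo().apply {
        setRequestData(epp_DomainInfoReq(epp_Command(), domainName))
    }

    val response = client.processAction(info) as EPPDomainInfo
    return response.responseData as epp_DomainInfoRsp
}

The returned response object would then expose the domain's registry data (status, contacts, expiry date and so on) through the same style of accessors we used earlier.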

 

Now that we have our core EPP functionality implemented, it is time to expose these capabilities through a web API. This is where Spring comes into play. Spring will allow us to create a robust, scalable REST API that will serve as an interface between client interactions and our EPP operations. What we will do here is wrap our EPP functions within Spring controllers, meaning we can create endpoints that external applications can easily consume. This abstraction layer not only makes our EPP functionality more accessible but also allows us to add additional business logic, validation and error handling.

 

Because we know that EPP can process commands related to three object types: hosts, contacts and domains, I am going to create three separate controllers. But let us also split that up from our API.kt by putting them in their own controller folder. I am going to name my controllers HostController.kt, ContactController.kt and DomainController.kt. At this point, the file structure should look like this:

EPPAPI.kt
api
├── controller
│   ├── ContactController.kt
│   ├── DomainController.kt
│   └── HostController.kt
└── API.kt
epp
├── contact
│   └── CreateContact.kt
├── domain
│   ├── CheckDomain.kt
│   └── CreateDomain.kt
├── host
│   └── CreateHost.kt
└── EPP.kt

 

The job of controllers in Spring is to handle incoming HTTP requests, process them and return appropriate responses. In the context of our EPP API, controllers will act as the bridge between the client interface and our EPP functionality. Therefore, it makes logical sense to split up the three major sections into multiple classes so that the code does not become unmaintainable.

 

The simplest example we could write to link our EPP and our Spring API is checking the availability of a domain. Thankfully, we wrote the EPP implementation for this earlier in our CheckDomain.kt class. Now let us make it so that a user can trigger it via an endpoint. Because it is domain related, I will add the new code into the DomainController.kt class.

 

Firstly, every controller class must be annotated with @RestController. Then a mapping is created as below:

import epp.EPP
import epp.domain.checkDomain
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController

@RestController
class DomainController {

    @GetMapping("/domain-check")
    fun helloWorld(@RequestParam name: String): ResponseEntity<Map<String, Boolean>> {
        val check = EPP.checkDomain(name)

        return ResponseEntity.ok(
            mapOf(
                "available" to check
            )
        )
    }

}

 

Let us break down the code and see what is happening:

  1. @GetMapping("/domain-check"): This annotation maps HTTP GET requests to the /domain-check route. When a GET request is made to this URL, Spring will call this function to handle it.
  2. fun helloWorld(@RequestParam name: String): This is the function that will handle the request. The @RequestParam annotation tells Spring to extract the name parameter from the query string of the URL. For example, a request to /domain-check?name=example.gg would set name to example.gg. This allows us to then process the EPP command with the requested domain name.
  3. ResponseEntity<Map<String, Boolean>>: This is the return type of the function. ResponseEntity allows us to have full control over the HTTP response, including status code, headers and body.

  4. val check = EPP.checkDomain(name): This line calls our EPP function to check if the domain is available (remember, it returns true if available and false if not).
  5. return ResponseEntity.ok(mapOf("available" to check)): This creates a response with HTTP status 200 (OK) and a body containing the JSON object with a single key available whose value is the result of the domain check.

The mapping is crucial because it connects HTTP requests to our application logic. When a client makes a GET request to /domain-check with a domain name as a parameter, Spring routes that request to this method, which then uses our EPP implementation to check the domain’s availability and returns the result. This setup allows external applications to easily check domain availability by making a simple HTTP GET request, without needing to know anything about the underlying EPP protocol or implementation. It is a great example of how we are using Spring to create a user-friendly API on top of our more complex EPP operations.

 

The same principle we have applied to the domain check operation can be extended to all other EPP commands we have created. For instance, creating a domain might use a POST request, updating domain information could use PUT, and deleting a domain would naturally fit with the DELETE HTTP method. For domain creation, we could use @PostMapping("/domain") and accept a request body with all necessary information. Domain updates could use @PutMapping("/domain/{domainName}"), where the domain name is part of the path and the updated information is in the request body. For domain deletion, @DeleteMapping("/domain/{domainName}") would be appropriate. Similar patterns can be applied to contact and host operations. By mapping our EPP commands to these standard HTTP methods, we create an intuitive API that follows RESTful conventions. Each of these endpoints would call the corresponding EPP function we have already implemented, process the result, and return an appropriate HTTP response. This approach provides a clean separation between the HTTP interface and the underlying EPP operations, making our system more modular and easier to maintain or extend in the future.
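
To make that concrete, here is a sketch of what a registration endpoint could look like, reusing the createDomain extension from earlier. The DomainCreateRequest data class and the controller name are my own inventions; in practice you would likely add the mapping to the existing DomainController. Deserializing a Kotlin data class like this relies on jackson-module-kotlin, which Spring Initializr normally adds to Kotlin web projects:

import epp.EPP
import epp.domain.createDomain
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestBody
import org.springframework.web.bind.annotation.RestController

// Hypothetical request body for domain registration.
data class DomainCreateRequest(
    val domainName: String,
    val registrantId: String,
    val adminContactId: String,
    val techContactId: String,
    val billingContactId: String,
    val nameservers: List<String>,
    val password: String,
    val period: Short = 1
)

@RestController
class DomainRegistrationController {

    @PostMapping("/domain")
    fun registerDomain(@RequestBody request: DomainCreateRequest): ResponseEntity<Map<String, Boolean>> {
        // Delegate to the EPP extension function we wrote earlier.
        val created = EPP.createDomain(
            domainName = request.domainName,
            registrantId = request.registrantId,
            adminContactId = request.adminContactId,
            techContactId = request.techContactId,
            billingContactId = request.billingContactId,
            nameservers = request.nameservers.toTypedArray(),
            password = request.password,
            period = request.period
        )
        return ResponseEntity.ok(mapOf("created" to created))
    }
}

A PUT mapping for updates and a DELETE mapping for deletions would follow the same shape, each delegating to the corresponding EPP extension function.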

 

The very last step before we can finally run this project is to actually initialise the Spring side of the project like we did for the EPP side. Inside my empty API.kt class, I am going to put the following:

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

@SpringBootApplication
class API {
    companion object {
        fun start() {
            runApplication<API>()
        }
    }
}

 

This code follows the Spring requirements to register our controllers. Our API.kt class serves as the entry point for the Spring application. Inside this class, we have defined a companion object with a start() function. This function calls runApplication() to bootstrap the application, which is a Kotlin-specific way to launch a Spring application. Behind the scenes, Spring’s recognition of controllers happens automatically through a process called component scanning. When the application starts, because we have registered it here, Spring examines the codebase, starting from the package containing the main class and searching through all subpackages. It looks for classes annotated with specific markers, such as the @RestController that we put at the top of our controllers. Spring then inspects these classes, looking for any functions that may be annotated as mappings (e.g. @GetMapping like above), and then uses that information to build a map of URL paths to controller functions. This means that when a request comes in, Spring knows exactly which function in which class should process the result. It would be fair to say that Spring has an unconventional approach to application structure and dependency management. Spring embraces the philosophy of “convention over configuration” and heavily leverages annotations. However, this has helped us to significantly reduce boilerplate code, making it cleaner and more maintainable for future travelers.

 

Now that the entry point to our API is ready, all we need to do is call the start() function we just created in our EPPAPI.kt:

import api.API
import epp.EPP

fun main() {
    EPP.create()
    API.start()
}

 

And that is a wrap for the code. Let us go ahead and run our project. The console output should look something like this:

Creating client with HOST: ote.channelisles.net, PORT: 700, USERNAME: [Redacted]
Creating SSL socket...
SSL socket created. Setting socket to EPP server...
Socket set. Getting greeting...
Greeting received: org.openrtk.idl.epprtk.epp_Greeting: { m_server_id [OTE] m_server_date [2024-10-06T05:47:08.628Z] m_svc_menu [org.openrtk.idl.epprtk.epp_ServiceMenu: { m_versions [[1.0]] m_langs [[en]] m_services [[urn:ietf:params:xml:ns:contact-1.0, urn:ietf:params:xml:ns:domain-1.0, urn:ietf:params:xml:ns:host-1.0]] m_extensions [[urn:ietf:params:xml:ns:rgp-1.0, urn:ietf:params:xml:ns:auxcontact-0.1, urn:ietf:params:xml:ns:secDNS-1.1, urn:ietf:params:xml:ns:epp:fee-1.0]] }] m_dcp [org.openrtk.idl.epprtk.epp_DataCollectionPolicy: { m_access [all] m_statements [[org.openrtk.idl.epprtk.epp_dcpStatement: { m_purposes [[admin, prov]] m_recipients [[org.openrtk.idl.epprtk.epp_dcpRecipient: { m_type [ours] m_rec_desc [null] }, org.openrtk.idl.epprtk.epp_dcpRecipient: { m_type [public] m_rec_desc [null] }]] m_retention [stated] }]] m_expiry [null] }] }
Connecting...
Connected. Logging in...
Login successful.

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::               (v2.7.18)

2024-10-06 06:47:09.531  INFO 43872 --- [           main] com.stephen.eppapi.EPPAPIKt                 : Starting EPPAPIKt using Java 1.8.0_382 on STEPHEN with PID 43872 (D:\IntelliJ Projects\epp-api\build\classes\kotlin\main started by [Redacted] in D:\IntelliJ Projects\epp-api)
2024-10-06 06:47:09.534  INFO 43872 --- [           main] com.stephen.eppapi.EPPAPIKt                 : No active profile set, falling back to 1 default profile: "default"
2024-10-06 06:47:10.403  INFO 43872 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2024-10-06 06:47:10.414  INFO 43872 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2024-10-06 06:47:10.414  INFO 43872 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.83]
2024-10-06 06:47:10.511  INFO 43872 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2024-10-06 06:47:10.511  INFO 43872 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 928 ms
2024-10-06 06:47:11.220  INFO 43872 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2024-10-06 06:47:11.229  INFO 43872 --- [           main] com.stephen.eppapi.EPPAPIKt                 : Started EPPAPIKt in 2.087 seconds (JVM running for 3.574)

It is clear to see that this startup console output is split into two halves: first, the output from our debugging messages when creating and authenticating the EPPClient; then the native Spring output, which shows that the local server has been started on port 8080.

 

Now for the exciting part. Heading to localhost:8080 in the browser should resolve, but throw a fallback error page, because we have not set anything to show at that route. We have, however, created a GET route at /domain-check. If you head to just /domain-check you will be met with a 400 (BAD REQUEST) error. This is because you will need to specify the name parameter as enforced in our function. So, let us try this out with a couple domains…

  1. /domain-check?name=test.gg returns {"available":false}
  2. /domain-check?name=thisshouldprobablybeavailable.gg returns {"available":true}

And that is it! At first it may not seem like a huge technical feat, but remember what is happening: the browser sends a request to our Spring API, which routes it to a specific function; that function runs the code we wrapped around an EPP command, which is sent off to the targeted EPP server; the registry processes the domain check and sends the response back upstream to the user. There is a huge amount happening behind the scenes to power this simple domain check.

 

What we have demonstrated here with the domain check functionality is just the tip of the iceberg. We could expand our API to include endpoints for various domain-related operations. For instance, domain registration could be handled by a POST request to /domain, taking contact details, nameservers, and other required information in the request body. Domain information retrieval could be a GET request to /domain/{domainName}, fetching comprehensive information about a specific domain. Updates to domain information, such as changing contacts or nameservers, could be managed through a PUT request to /domain/{domainName}. The domain deletion process could be initiated with a DELETE request to /domain/{domainName}. Domain transfer operations, including initiating, approving, or rejecting transfers, could also be incorporated into our API. Each of these operations would follow the same pattern we have established: a Spring controller method that takes in the necessary parameters, calls the appropriate EPP function, and returns the result in a user-friendly format.

 

By expanding our API in this way, we are creating a comprehensive abstraction layer over EPP. This approach simplifies complex EPP operations, making them accessible to developers who may not be familiar with the intricacies of the protocol. It presents a consistent, RESTful interface for all domain-related operations, following web development best practices. Our EPP API can be easily consumed by various client applications, from web frontends to mobile apps or other backend services.

 

Deploying to Azure Container Apps

Now that we have our EPP API functioning locally, it is time to think about productionizing our application. Our goal is to run the API as an Azure Container App (ACA), which is a fully managed environment perfect for easy deployment and scaling of our Spring application. However, before deploying to ACA, we will need to containerise our application. This is where Azure Container Registry (ACR) comes into play. ACR will serve as the private Docker registry to store and manage our container images. It provides a centralised repository for our Docker images and integrates seamlessly with ACA, streamlining our CI/CD pipeline.

 

Firstly, let us create a Dockerfile. This step is required to run both locally and in Azure Container Registry. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. It serves as a blueprint for building a Docker container. In our case, our Dockerfile will set up the environment and instructions needed to containerise our Spring application.

 

Create a file named Dockerfile in the root of your project with the following content:

# Use OpenJDK 21 as the base image (use your JDK version)
FROM openjdk:21-jdk-alpine

# Set the working directory in the container
WORKDIR /app

# Copy the JAR file into the container
COPY build/libs/*.jar app.jar

# Expose the port your application runs on
EXPOSE 8080

# Command to run the application
CMD ["java", "-jar", "app.jar"]

 

I have added comments alongside each instruction to explain the flow. This Dockerfile encapsulates our application and its runtime environment, ensuring consistency across different deployment environments. It is a crucial step in our journey from local development to cloud deployment, providing a standardised way to package and run our EPP API.

 

However, before we push to the cloud, it is prudent to test it locally in a Docker container. This approach allows us to catch any containerization-related issues early and save time in the long run. We can verify that all components work correctly in a containerised environment, such as environment variable configurations and network settings. This step will help ensure a smooth transition to ACA, as the local Docker environment closely mimics the container runtime in Azure. Once we are confident that our application runs flawlessly in a local Docker container, we can push the image to ACR and deploy it to ACA, knowing we have minimised the risk of environment-specific issues.

 

This local testing can be done in a simple three step process with the following Gradle & Docker CLI commands:

  1. ./gradlew build – build our application and package into a JAR file found under /build/libs/X.jar.
  2. docker build -t epp-api . – tells Docker to create an image named epp-api based on the instructions in our Dockerfile.
  3. docker run -p 8080:8080 --env-file .env epp-api – start a container from the image, mapping port 8080 of the container to port 8080 on the host machine. We use this port because this is the default port on which Spring exposes endpoints. The -p flag ensures that the application can be accessed through localhost:8080 on your machine. We also specify the .env file we created earlier so that Docker is aware of our EPP login details.
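
With the container running, you can hit the same endpoint as before to confirm that the containerised app behaves identically (this assumes the /domain-check mapping from earlier):

curl "http://localhost:8080/domain-check?name=test.gg"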

If all went well, you should have the exact same console output as above. The key difference is the environment in which our application is running. Previously, we were executing our Spring application directly within our development environment. Now, however, our application is running inside a Docker container. This containerised environment is isolated from our host system, with its own file system, networking, and process space. It is a self-contained unit that includes not just our application, but also its runtime dependencies like the Java Development Kit.

 

Now that we have proven our project is ready to run in a containerised environment, let us start the cloud deployment process. This process involves two main steps: pushing our Docker image to Azure Container Registry and then deploying it to Azure Container Apps. I will be using the Azure CLI as outlined in the prerequisites. Everything I am doing can be done through the portal, but the CLI drastically reduces development time. Run the following commands in this order:

  1. az login – if not already authenticated, be sure to log in through the CLI.
  2. az group create --name registrar --location uksouth – create a resource group if you have not already. I have named mine registrar and chosen the location as uksouth because that is closest to me.
  3. az acr create --resource-group registrar --name registrarcontainers --sku Basic – create an Azure Container Registry resource within our registrar resource group, with the name of registrarcontainers (note that this has to be globally unique) and SKU Basic.
  4. az acr login --name registrarcontainers – login to the Azure Container Registry.
  5. docker tag epp-api registrarcontainers.azurecr.io/epp-api:v1 – tag the local Docker image with the ACR login server name (registrarcontainers.azurecr.io in my case).
  6. docker push registrarcontainers.azurecr.io/epp-api:v1 – push the image to the container registry!

If all went well, you should be met with a console output like this:

The push refers to repository [registrarcontainers.azurecr.io/epp-api]
2111bc7193f6: Pushed
1b04c1ea1955: Pushed
ceaf9e1ebef5: Pushed
9b9b7f3d56a0: Pushed
f1b5933fe4b5: Pushed
v1: digest: sha256:07eba5b555f78502121691b10cd09365be927eff7b2e9db1eb75c072d4bd75d6 size: 1365
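
If you would like an extra sanity check that the image really landed in the registry, the Azure CLI can list what ACR is holding:

# List repositories in the registry – epp-api should appear
az acr repository list --name registrarcontainers --output table

# List the tags for the epp-api repository – expect v1
az acr repository show-tags --name registrarcontainers --repository epp-api --output table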

 

That is the first part done. Now that our image is in ACR, we can deploy it to Azure Container Apps. This step is where we truly leverage the power of Azure’s managed container services. To deploy our EPP API to ACA, I will continue to use the Azure CLI, though some may find it more comfortable to use the portal for this section as a lot of configuration is required. Run the following commands in this order:

  1. az containerapp env create --resource-group registrar --name containers --location uksouth – create the Container App environment within our resource group with name containers and location uksouth.
  2. az acr update -n registrarcontainers --admin-enabled true – ensure ACR allows admin access.
  3. az containerapp create \
       --name epp-api \
       --resource-group registrar \
       --environment containers \
       --image registrarcontainers.azurecr.io/epp-api:v1 \
       --target-port 8080 \
       --ingress external \
       --registry-server registrarcontainers.azurecr.io \
       --env-vars "HOST=your_host" "PORT=your_port" "USERNAME=your_username" "PASSWORD=your_password"

     – creates a new Container App named epp-api within our resource group and the containers environment, using the Docker image stored in ACR. The application inside the container listens on port 8080, which is where our Spring endpoints will be accessible, and the --ingress external flag makes it reachable from the internet. You must also set your environment variables, or the app will crash on start-up.
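
One caveat worth flagging: the command above passes the EPP password as a plain environment variable. Container Apps also supports secrets, and an environment variable can reference a secret instead. A sketch of the same command using a secret reference (the secret name epp-password is just an illustrative choice):

az containerapp create \
  --name epp-api \
  --resource-group registrar \
  --environment containers \
  --image registrarcontainers.azurecr.io/epp-api:v1 \
  --target-port 8080 \
  --ingress external \
  --registry-server registrarcontainers.azurecr.io \
  --secrets "epp-password=your_password" \
  --env-vars "HOST=your_host" "PORT=your_port" "USERNAME=your_username" "PASSWORD=secretref:epp-password"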

After running the containerapp create command, you should be met with a long JSON output confirming the action, ending with the URL you can use to access the app. It should look like:

Container app created. Access your app at https://epp-api.purpledune-772f2e5a.uksouth.azurecontainerapps.io/
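
If you ever need to look that URL up again later, it can be read straight from the app's ingress configuration (a small sketch using the Azure CLI; the property path assumes the standard Container Apps schema):

# Print the fully qualified domain name of the Container App's external ingress
az containerapp show \
  --name epp-api \
  --resource-group registrar \
  --query properties.configuration.ingress.fqdn \
  --output tsv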

 

This means that if we head to that URL and append /domain-check?name=test.gg, just as we did when testing locally, we are met with:

{"available":false}

 

That concludes the deployment process. This means our API is now accessible via the internet!

 

Setting up GitHub CI/CD

Now that our EPP API is successfully running in Azure Container Apps, the next step is to streamline our development and deployment process. This is where CI/CD (Continuous Integration and Continuous Deployment) comes into play: a set of practices that automate the process of building, testing and deploying our application. In simple terms, we are going to make it so that when we push code changes to our GitHub repository, our container gets automatically rebuilt and redeployed. This saves time and allows us to deliver updates and new features more rapidly and reliably. We will walk through setting up this pipeline using GitHub Actions.

 

But first, let us set up our Git repository and push an initial commit to GitHub. Head to GitHub and create a repository; you can create it under your personal account or an organization. I have named mine epp-api. Be sure to copy or remember the URL for this repository, as we will need it to link Git in a moment.

 

Now that you have an empty remote repository, open the terminal in your workspace and run the following commands:

  1. git init – Initialise a new Git repository in your current directory. This creates a hidden .git directory that stores the repository’s metadata.
  2. git add . – Stages all of the files in the current directory and its subdirectories for commit. This means that these files will be included in the next commit.
  3. git commit -m "Initial commit" – Creates a new commit with the staged files and a common initial commit message.
  4. git remote add origin <repository-url> – Adds a remote repository named origin to your local repository, connecting it to the remote repository hosted on GitHub. Replace <repository-url> with the URL you copied earlier.
  5. git push origin master – Uploads the local repository’s content to the remote repository named origin, specifically to the master branch.
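
A quick aside before going further: make sure the .env file holding your EPP credentials never reaches GitHub. If you do not already have a .gitignore, a minimal sketch for a Gradle project might look like the following (adjust to your own setup, and re-commit if the file slipped into the initial commit):

# Build output and Gradle caches
build/
.gradle/

# IDE metadata
.idea/

# Local secrets – never commit EPP credentials
.env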

If you refresh your repository on GitHub, you should see the commit! Now that your code is available outside of your local workspace, let us ask Azure to create the deployment workflow. On the Azure Portal, follow this trail:

  1. Head to your Container App
  2. On the sidebar, hit Settings
  3. Hit Deployment

You should find yourself in the Continuous deployment section. There are two headings; let us start with GitHub settings:

  1. Authenticate with GitHub and grant permission to the repository (if it is published under a GitHub organization, grant access to the organization as well)
  2. Select the organization, or your GitHub username if the repository is under your personal account
  3. Select the repository you just created (for me, epp-api)
  4. Select the main branch (likely either master or main)

Then, under Registry settings:

  1. Ensure Azure Container Registry is selected for Repository source
  2. Select the Container Registry you created earlier (for me, registrarcontainers)
  3. Select the image you created earlier (for me, epp-api) 

It should look something like this:

 

[Screenshot: Continuous deployment settings configured in the Azure Portal]

 

Once these settings have been configured, press Start continuous deployment.

 

If all went to plan, Azure will have created a workflow file in your repository under .github/workflows with the commit message Create an auto-deploy file. From the content of the workflow, we can see that the trigger is a push to master. This means that, moving forward, every change you commit and push to this repository will trigger this workflow, which will in turn build and push a new container image to the registry.

 

However, it is likely that the first build will fail. This is because we need to make a couple of modifications to the workflow file before it will work with our technology stack. You will need to make these changes manually, so open the workflow and edit it as you would any other file (either through GitHub's web editor or locally in VS Code – do not forget to commit and push if you edit locally!). Then, add the following steps after the Checkout to the branch step and before the Azure Login step:

- name: Grant execute permission for gradlew
  run: chmod +x gradlew

- name: Set up JDK 21
  uses: actions/setup-java@v2
  with:
    java-version: '21'
    distribution: 'temurin'

- name: Build with Gradle
  run: ./gradlew build

 

We added three steps:

  1. Grant execute permission for gradlew – gradlew is a wrapper script that manages the Gradle installation. This step grants execute permission to the gradlew file, which allows the build process to run the Gradle commands needed in the next steps.
  2. Set up JDK – This sets up the JDK as the Java environment for the build process. Make sure the version matches the Java version you have chosen for this tutorial; we use the Temurin distribution here, as it provides JDK 21 builds.
  3. Build with Gradle – This executes the Gradle build, which compiles our Kotlin code and packages it into a JAR file that the final deploy step then uses to build and push the container image to the Container Registry.
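
As an optional tweak (not included in the final workflow below), setup-java can also cache Gradle dependencies between runs, which typically shortens the build step; this assumes a version of the action that supports the cache input:

- name: Set up JDK 21
  uses: actions/setup-java@v2
  with:
    java-version: '21'
    distribution: 'temurin'
    cache: 'gradle'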

The final workflow file should look like this:

name: Trigger auto deployment

# When this action will be executed
on:
  # Automatically trigger it when detected changes in repo
  push:
    branches: 
      [ master ]
    paths:
    - '**'
    - '.github/workflows/AutoDeployTrigger-aec369b2-f21b-47f6-8915-0d087617a092.yml'

  # Allow manual trigger 
  workflow_dispatch:      

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions: 
      id-token: write #This is required for requesting the OIDC JWT Token
      contents: read #Required when GH token is used to authenticate with private repo

    steps:
      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Grant execute permission for gradlew
        run: chmod +x gradlew

      - name: Set up JDK 21
        uses: actions/setup-java@v2
        with:
          java-version: '21'
          distribution: 'temurin'

      - name: Build with Gradle
        run: ./gradlew build

      - name: Azure Login
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Build and push container image to registry
        uses: azure/container-apps-deploy-action@v2
        with:
          appSourcePath: ${{ github.workspace }}
          _dockerfilePathKey_: _dockerfilePath_
          registryUrl: registrarcontainers.azurecr.io
          registryUsername: ${{ secrets.REGISTRY_USERNAME }}
          registryPassword: ${{ secrets.REGISTRY_PASSWORD }}
          containerAppName: epp-api
          resourceGroup: registrar
          imageToBuild: registrarcontainers.azurecr.io/epp-api:${{ github.sha }}
          _buildArgumentsKey_: |
            _buildArgumentsValues_

 

Once you have pushed your workflow changes, that push will itself trigger the new workflow, and hopefully you will be met with a green check mark at the top of your repository on GitHub to signify the build was a success. Do not forget that at any point you can click the Actions tab to see the result of all builds, and if a build fails you can explore in detail which step the error occurred on.
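
If you prefer the terminal, the GitHub CLI offers the same visibility (assuming you have gh installed and authenticated against this repository):

# List the most recent workflow runs for the repository
gh run list --limit 5

# Follow a run live and see which step fails, if any
gh run watch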

 

Conclusion

That is it! You have successfully built a robust EPP API using Kotlin and Spring Boot, containerised it with Docker and deployed it to Azure Container Apps. This journey took us from understanding the intricacies of EPP and domain registration, through implementing core EPP operations, to creating a user-friendly RESTful API. We then containerised our application, ensuring consistency across different environments. Finally, we leveraged Azure's cloud services – Azure Container Registry for storing our Docker image, and Azure Container Apps for deploying and running our application in a scalable, managed environment. The result is a fully functional, cloud-hosted API that can handle domain checks, registrations and other EPP operations. This accomplishment not only showcases the technical implementation but also opens up possibilities for more sophisticated domain management tools and services, such as starting a public registrar or managing a domain portfolio internally.

 

I hope this blog was useful, and I am happy to answer any questions in the replies. Well done on bringing this complex system to life!




