Overview
I host my blog using American Cloud’s Kubernetes service. The core container of the cluster is a customized Docker image based on Grav's official Docker image, which is available on their GitHub. The official image has been updated since I created my current Dockerfile, but at the time, I took their Dockerfile and updated it to use the latest stable versions of Apache and PHP. My DNS records are also managed within American Cloud, as they offer a free DNS service.
Background
When conceptualizing this blog, I wanted the hosting to be a self-directed project built with security considerations at every layer. I chose not to host the blog on a traditional cloud VM or a self-hosted server due to the increased overhead for maintenance, patching, and configuration, which would also increase the potential attack surface (to the best of my knowledge). Therefore, I decided on a containerized deployment in the cloud, specifically with American Cloud following a strong recommendation from Raid Owl.
The challenge, however, was that their offerings for cloud-hosted containerized applications are limited to their Kubernetes (K8s) service. Fortunately, they have excellent documentation that was instrumental in getting the environment operational... once the custom Docker container was ready.
Building the Docker Container
While I won't be sharing my full Dockerfile, it largely mirrors the official Dockerfile available on Grav’s GitHub page. I iteratively built the container image by commenting out lines one at a time to see which were actually needed. If the resulting image failed to build or run, I would revert the change and proceed to the next line.
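To give a sense of the shape (this is an illustrative sketch, not my actual Dockerfile), a customized image along these lines starts from the official php/apache base, installs the extensions Grav needs, and copies the site into the web root. The base tag, package list, and paths below are assumptions for demonstration:

# Illustrative sketch only -- not my actual Dockerfile. The base tag,
# extensions, and paths are assumptions for demonstration purposes.
FROM php:8.2-apache

# Enable Apache modules and install PHP extensions Grav commonly relies on
RUN a2enmod rewrite expires headers && \
    apt-get update && \
    apt-get install -y --no-install-recommends unzip libzip-dev libpng-dev && \
    docker-php-ext-install gd zip opcache && \
    rm -rf /var/lib/apt/lists/*

# Copy the Grav core, theme, and configuration into the web root
COPY ./grav/ /var/www/html/
RUN chown -R www-data:www-data /var/www/html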
The final container was ready to push to the cluster. However, because I develop on a laptop with an ARM CPU architecture, but deploy to an x86 architecture on the cloud K8s cluster, the container required a specific cross-platform build command:
docker buildx build --platform linux/amd64,linux/arm64 -t username/myapp:latest --push .
This command uses docker buildx to create a multi-architecture image, ensuring that the correct version for the target architecture is automatically pulled by the K8s cluster.
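One caveat worth noting: if buildx has never been used on the machine, a one-time builder setup is typically required before the multi-architecture build will work. The builder name below is arbitrary:

docker buildx create --name multiarch --use
docker buildx inspect --bootstrap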
Deploying in K8s
Deployment was straightforward, primarily involving following American Cloud’s documentation and replacing the example image in their deployment manifest with my own custom image.
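In essence, the change comes down to the image field of the Deployment. As a rough sketch of what such a manifest looks like (the names, replica count, and LoadBalancer service type below are placeholders, not American Cloud's actual example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - name: blog
        image: username/myapp:latest   # the custom multi-arch image built above
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  type: LoadBalancer
  selector:
    app: blog
  ports:
  - port: 80
    targetPort: 80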
Initially, I encountered errors, but once I realized the necessity of the cross-platform Docker build command, the deployment succeeded immediately. The final step was to configure my DNS records to point to the service endpoint. For additional information on Kubernetes (K8s), this video by NetworkChuck is a good resource.
Security at Every Layer
My deployment involves multiple layers, ordered bottom-up: the K8s cluster, Apache & PHP, and the Grav CMS. My security configuration approach was top-down, focusing on the layers I was most familiar with first (Grav CMS, Apache/PHP) before addressing the cluster layer.
Grav CMS
For the website itself, I opted against raw coding, as web development falls outside the core technology areas I want to focus on. This necessitated using a CMS to handle most of the site creation. I avoided a SaaS solution like Squarespace because I specifically wanted the hands-on experience of self-hosting and maintaining the underlying infrastructure. I also chose not to use WordPress, having had negative experiences with it in the past.
Fortunately, I found the open-source Grav CMS, which was a perfect fit. Grav offers both a GUI-based management option and a flat-file method; I chose the latter. More information on Grav is available here.
In terms of security, Grav is a very complete out-of-the-box solution. My primary strategy was to follow their security documentation step-by-step to configure all appropriate settings on my instance.
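Beyond Grav's own settings, one generic hardening step for any flat-file CMS is keeping file ownership and permissions tight on the web root. A rough sketch, assuming a Debian-based Apache image where the content lives under /var/www/html:

# Assumes a Debian-based Apache image; adjust the path and user to your setup
chown -R www-data:www-data /var/www/html
find /var/www/html -type d -exec chmod 755 {} \;
find /var/www/html -type f -exec chmod 644 {} \;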
PHP
As mentioned earlier, when I built my container image, the official Grav image was using an older PHP version, so I upgraded it to an up-to-date version. This was in line with the official Grav documentation at the time, which recommended using stock PHP for novice users.
Since PHP development is off-track from my primary technology interests, and since I will not be hosting sensitive data (especially not third-party data), I am comfortable with the security risk of using the out-of-the-box PHP configuration.
Apache
This is where the configuration became more complex. As I am not an Apache expert, hardening the deployment against potential threat actors on the public internet required external guidance. Fortunately, many resources are available.
I configured my Apache service by following two guides. The first is the official security tips from the Apache project, though I wasn't able to apply most of its recommendations to my deployment. I followed the second guide, which I found via a basic internet search, up through the section titled "Timeout value configuration"; the subsequent sections were either not applicable to my setup or exceeded my current skill level.
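To give a flavor of what that hardening looks like, the directives involved tend to include hiding version information and tightening timeouts. The snippet below is a hedged sketch of that style of setting, not my exact configuration:

# Typically placed in apache2.conf or a security.conf-style include
ServerTokens Prod        # do not advertise the Apache version in response headers
ServerSignature Off      # omit version details from server-generated error pages
TraceEnable Off          # disable the HTTP TRACE method
Timeout 60               # lower the default connection timeout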
Ubuntu 22.04/Docker/Kubernetes
First, to clarify this subheading: while my current production deployment is running in a K8s cluster, I use an Ubuntu VM in my home lab for development, staging, and testing—especially for the Apache configuration.
The Ubuntu VM in my home lab is a relatively vanilla installation, as that is sufficient for its purpose. I use it to configure and test the Apache settings across the apache2 configuration, hostname, and sites-available files. Furthermore, the version of PHP used in my Dockerfile is a newer version than what is available in the default Ubuntu apt repositories (at least at the time I first built this). I manually build the specific PHP version (to match the containerized environment) on my VM and use it in conjunction with Apache.
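Building PHP from source on the VM follows the standard configure-and-make flow. The sketch below assumes an Apache (apxs2) build with build dependencies such as apache2-dev already installed; the version number is a placeholder:

# Version number is a placeholder; install build dependencies (apache2-dev, libxml2-dev, etc.) first
curl -LO https://www.php.net/distributions/php-8.2.0.tar.gz
tar xzf php-8.2.0.tar.gz && cd php-8.2.0
./configure --with-apxs2=/usr/bin/apxs --enable-mbstring --with-zlib
make -j"$(nproc)"
sudo make install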
Once a stable configuration is achieved, I package the Apache configuration files, the Grav core files, and the theme files to build the final Docker image and test it locally. After local validation, I push the functional image to my Docker Hub repository. My K8s cluster will then pull this latest image (which I occasionally trigger manually by restarting the cluster) to update the live site.
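Because the image is tagged :latest, the cluster does not necessarily re-pull it on its own. One way to force an update (assuming imagePullPolicy: Always and the deployment name from the sketch above) is a rollout restart:

kubectl rollout restart deployment/blog
kubectl rollout status deployment/blog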
The cluster deployment itself was achieved by following this guide from American Cloud and adapting it to my specific requirements. As I continue to learn Kubernetes, I plan to revisit this configuration to further harden the cluster, but the current setup is a sufficient starting point.
Postscript: A Note About AI Usage in This Post
I made use of Google's Gemini AI to proofread this blog post and work through its clarity; however, the contents are all my original thoughts.