Build Your Portfolio Website with DevOps: A Scalable and Secure Approach

Are you ready to showcase your skills as a DevOps engineer with a professional portfolio website? Imagine having a fully functional, cloud-hosted site that highlights your expertise and serves as a hands-on DevOps project to impress recruiters and clients. 🚀
Through this project, I successfully built and deployed a portfolio website using Linux, Apache, MySQL, and PHP (LAMP stack)—an essential combination for web hosting.
This journey was part of the ZTM DevOps Bootcamp, where I learned to apply industry-standard tools to create a scalable, secure, and professional portfolio site. Whether you’re an aspiring DevOps engineer, a developer, or a tech enthusiast, this project will equip you with practical cloud infrastructure skills that are highly valuable in the industry.
Overview
This project walks through the end-to-end process of setting up a cloud-based portfolio website from scratch. Here’s what we’ll cover:
1️⃣ Cloud Server Setup
✅ Deploy a Virtual Private Server (VPS) on DigitalOcean
✅ Configure essential firewall and security settings
2️⃣ Domain & DNS Configuration
✅ Register a custom domain for a professional web presence
✅ Configure BIND9 DNS Server for domain name resolution
3️⃣ Web Server Installation & Configuration
✅ Install and optimize Apache2, one of the most powerful web servers
✅ Enable HTTP/HTTPS protocols and configure SSL certificates
✅ Set up virtual hosting to manage multiple websites efficiently
4️⃣ Implementing the LAMP Stack
✅ Install and configure Linux, Apache, MySQL, and PHP
✅ Manage databases with MySQL
✅ Test and verify PHP functionality
5️⃣ Installing and Securing phpMyAdmin
✅ Install phpMyAdmin for database management
✅ Secure MySQL access with strong user authentication
✅ Restrict access to phpMyAdmin for added security
6️⃣ Deploying WordPress as a CMS
✅ Download and install WordPress
✅ Set up a secure database for WordPress
✅ Manage WordPress settings for optimal performance and security
7️⃣ Securing the Website
✅ Implement firewall configurations for protection
✅ Secure the wp-admin directory and limit login attempts
✅ Regularly update plugins and core files to prevent vulnerabilities
8️⃣ Troubleshooting & Performance Optimization
✅ Identify and fix common web server issues
✅ Optimize server performance and database queries
✅ Ensure scalability for future growth
What I Achieved & Future Scalability
✅ A fully functional, cloud-hosted portfolio website
✅ Hands-on experience with real-world DevOps tools and practices
✅ A secure and scalable infrastructure, ready to grow with my career
With this setup, my portfolio site can scale effortlessly, allowing for additional custom applications, blogs, or even e-commerce integration in the future. By mastering these skills now, I am fully prepared for larger-scale cloud and DevOps projects.
Why You Should Build This Too
What if you could replicate this project and deploy your cloud-hosted portfolio website? This is not just another coding tutorial—this is an industry-relevant DevOps project that gives you hands-on experience deploying real infrastructure.
✅ Showcase your DevOps skills with a working, hosted portfolio
✅ Gain hands-on experience with cloud servers, web hosting, and security
✅ Enhance your resume with a real-world project that recruiters love
✅ Future-proof your site by understanding scalability and security best practices
Ready to Build Your Cloud-Based Portfolio? 🚀
Follow along as I guide you through every step—from setting up your server and domain to securing and deploying your website. Let’s turn theory into practice and build a portfolio site that speaks for itself!
Project Guide
Step 1: Setting Up Your DigitalOcean Droplet
To get started, you’ll need a cloud server to host your website. For this project, I chose DigitalOcean, as its Droplets (virtual private servers) are ideal for small projects like portfolio sites. Creating a Droplet is quick and simple, requiring just a few clicks on DigitalOcean’s platform.
When selecting your server, go for at least 1GB RAM to prevent MySQL issues later. While I initially went with 512MB RAM, it was a bit too tight for my database needs.
After your Droplet is set up, access it via SSH (don’t worry if that sounds complicated – it’s just a command to log into your server):
ssh root@your_droplet_ip
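Once you're logged in, it's worth a quick check that the Droplet has enough memory and disk space for the LAMP stack (these are standard Linux tools, nothing project-specific):
free -h         # available RAM; at least 1GB is recommended before installing MySQL
df -h /         # free disk space on the root filesystem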
Now, go to your DigitalOcean account and take a snapshot of your Droplet before making any major changes, so you can always roll back if something goes wrong.
Step 2: Getting a Domain Name
Next, it’s time to get your domain name. I chose Namecheap for this, but you can go with any domain registrar you like. Once you have your domain (let’s say, example.com), it’s time to set up your DNS to point it to your server.
To check the current nameservers for your domain, run this command:
dig -t ns example.com
This will show the nameservers provided by your registrar:
dns1.registrar-servers.com
dns2.registrar-servers.com
If you want to run your own DNS, you can configure custom nameservers (like ns1.example.com). At Namecheap, log in to your account, open your domain under Advanced DNS, and register ns1.example.com as a personal DNS server pointing to your Droplet's IP address.
This process can take up to 24 hours to update globally.
Note that it is recommended to run at least two DNS servers. In that case, you would also register a second authoritative nameserver (ns2.example.com) and add its record. To keep the process simple and stay within the project budget, we only created ns1.
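Once the registrar change has propagated, you can confirm the custom nameserver from any machine with a couple of dig queries (the exact output depends on your registrar and on propagation time):
dig -t a ns1.example.com        # the nameserver's host record should return your Droplet's IP
dig -t ns example.com           # the domain's NS records should now list ns1.example.com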
Step 3: Installing Your DNS Server (BIND9)
For this project, we use BIND9 as our DNS server. It’s the go-to software for managing DNS. BIND 9 is transparent, open source, and full-featured, providing flexibility during server setup for any application.
To install it on your server, run:
ssh root@your_droplet_ip
sudo apt update && sudo apt install bind9 bind9utils bind9-doc
Once installed, check the status of BIND9:
sudo systemctl status bind9
You’ll also need to configure it to use IPv4. Open the configuration file and update the option. You can use any terminal-based text editor, or stick with Vim like I did in my project. You can install it with:
sudo apt install vim
Open the configuration file:
sudo vim /etc/default/named
Add this line:
OPTIONS="-u bind -4"
Then, restart BIND9:
sudo systemctl reload-or-restart bind9
You can test whether your server is resolving DNS queries correctly by running:
dig -t a @localhost google.com
To send a DNS query specifically for an “A” record for the domain google.com using a different IP address (in our example 1.1.1.1), you can execute the following command:
dig -t a @1.1.1.1 google.com
To verify whether the main configuration file was created in the /etc/bind directory:
cat /etc/bind/named.conf
In my project, I have added two forwarders (8.8.8.8 and 8.8.4.4) to our server. This helps reduce the number of queries to servers on the Internet and allows us to build a large cache of information on the forwarders. By doing so, we can maintain a strict separation between internal and external DNS and prevent exposing internal domains on the open Internet. DNS forwarding aims to increase network efficiency by reducing bandwidth usage and improving the speed at which DNS requests are fulfilled.
cat /etc/bind/named.conf.options
options {
        directory "/var/cache/bind";

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};
Restart the server to apply settings:
sudo systemctl reload-or-restart bind9
sudo systemctl status bind9
Run this command to test your server:
dig @localhost -t a parrotlinux.org
Step 4: Setting Up Your Authoritative DNS Server
Once BIND9 is up and running, it’s time to set up your authoritative DNS server. For large volumes of inbound requests, it’s recommended to use master and slave configurations to balance loads. For now, we’ll keep it simple as a master server (good for fewer than 50,000 requests a day).
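For reference, if you do add a secondary server later, its configuration would declare the same zone as a slave and transfer the zone data from the master. A minimal sketch (the IP below is a placeholder for this Droplet's address):
zone "example.com" {
        type slave;
        masters { 138.0.0.0; };                 // IP of the master (primary) server
        file "/var/cache/bind/db.example.com";  // local copy received via zone transfer
};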
Access your virtual private server, and check the current status of BIND9:
ssh root@your_droplet_ip
sudo systemctl status bind9
Then make sure BIND9 is started and enabled, so the service launches automatically every time your server reboots:
sudo systemctl enable bind9
sudo systemctl start bind9
Create the configuration for your domain (example.com), and set up a master zone for direct DNS resolution:
vim /etc/bind/named.conf.local
Add the master zone in the configuration file:
zone "example.com" {
type master;
file "/etc/bind/db.example.com"; # Zone file for direct resolution
};
// Do any local configuration here
//
// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";
A new zone, example.com, has been created as a master zone, meaning this is the master authoritative DNS server for our domain. We used a zone template file to create our zone file. Just copy the db.empty template to db.example.com:
cp /etc/bind/db.empty /etc/bind/db.example.com
Go to your preferred text editor (in our project, we use the vim editor):
vim /etc/bind/db.example.com
And edit the text:
$TTL    86400
@       IN      SOA     ns1.example.com. root.localhost. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                          86400 )       ; Negative Cache TTL
;
@               IN      NS      ns1.example.com.
;@              IN      NS      ns2.example.com.
ns1             IN      A       138.0.0.0       ; IP address of your DigitalOcean Droplet
;@              IN      MX      10      mail.example.com.
example.com.    IN      A       138.0.0.0       ; IP address of your DigitalOcean Droplet
www             IN      A       138.0.0.0       ; IP address of your DigitalOcean Droplet
mail            IN      A       138.0.0.0       ; IP address of your DigitalOcean Droplet
external        IN      A       91.189.88.181
public-dns      IN      A       8.8.8.8
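Before restarting, you can validate both the main configuration and the new zone file with the checking tools that ship in the bind9utils package installed earlier:
sudo named-checkconf
sudo named-checkzone example.com /etc/bind/db.example.com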
After configuring the primary master server and creating the master zone file, we restarted BIND9 to apply the new configurations:
sudo systemctl restart bind9
To test the DNS server, run these commands:
sudo dig @localhost -t ns example.com
sudo dig @localhost -t a www.example.com
Test your DNS from another machine (note that nameserver changes can take up to 24 hours to propagate after registration):
sudo dig -t ns example.com
sudo dig -t a www.example.com
sudo dig -t a public-dns.example.com
Note that if we add another subdomain, it will also resolve from other machines once the change propagates.
Optionally, you can enable a reverse resolution to translate your IP to your domain.
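As a rough sketch (the in-addr.arpa zone name depends on your Droplet's IP block, and for public IP addresses the PTR record is normally managed by your hosting provider rather than your own BIND server), a reverse zone would be declared in /etc/bind/named.conf.local like this:
zone "0.0.138.in-addr.arpa" {
        type master;
        file "/etc/bind/db.138";        // zone file holding PTR records, e.g. "0  IN  PTR  example.com."
};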
Step 5: Installing a Web Server (Apache2)
Now that our server was up and running, it was time to set up a web server. We had two solid choices: Apache2 or Nginx. Both are excellent, but we decided to go with Apache2 since we had covered it extensively in our course.
Apache2 (often just called “Apache”) is one of the most popular web servers in the world. It’s open-source, highly customizable, and trusted by millions of websites. The reason for its success? A flexible architecture, a huge support community, and a ton of documentation to help troubleshoot any issues.
Installing Apache2 was a straightforward process. We updated our package lists and installed the server with a single command:
sudo apt update && sudo apt install apache2
Once the installation was complete, we checked if Apache2 was running:
systemctl status apache2
If everything was set up correctly, we should see a green “active” status.
To check whether Apache is set to start automatically whenever the server boots up, we ran:
systemctl is-enabled apache2
If it isn't enabled yet, turn it on with:
sudo systemctl enable apache2
If needed, we could also disable it from starting at boot time:
systemctl disable apache2
Next, we checked whether the Uncomplicated Firewall (UFW) was enabled. By default, UFW is turned off on Ubuntu, but it’s always good to confirm:
sudo ufw status
If Apache is running, you can enable both HTTP and HTTPS by running the following command:
sudo ufw allow 'Apache Full'
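If you want to see exactly which ports that profile opens, UFW can list and describe the application profiles Apache registered ('Apache Full' covers both 80 and 443):
sudo ufw app list
sudo ufw app info 'Apache Full'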
Now for the exciting part—testing our installation! To verify Apache2 was working, we opened a browser and typed in our server’s public IP address. If everything was set up correctly, we would see Apache’s default welcome page.
To find our public IP address, we used one of these commands:
ip addr
curl -4 ident.me
Step 6: Setting Up Virtual Hosting
At first, our plan was simple: set up a single website. But after a little brainstorming, we realized that in the future, we might want to host multiple sites on this server. That’s where virtual hosting came in.
Virtual hosting allows a single server to handle multiple websites. Instead of using separate IP addresses for each site, Apache can differentiate between domains and serve the correct content based on the incoming request. There are two types:
- IP-based virtual hosting – Each website gets its own IP address.
- Name-based virtual hosting – Multiple websites share the same IP, but Apache uses domain names to distinguish them.
We chose name-based virtual hosting because it’s easier to manage and doesn’t require multiple IP addresses.
Creating a Virtual Host
First, we created a directory for our domain:
mkdir /var/www/example.com
Then, we made sure Apache had the correct permissions:
ps -ef | grep apache2
chown -R www-data:www-data /var/www/example.com/
Setting permissions of the directory:
chmod 755 /var/www/example.com/
To test our setup, we created a simple web page inside the directory:
vim /var/www/example.com/index.html
We added the following line:
Hello, this is example.com website!
Now, it was time to tell Apache about our new site. We created a configuration file:
vim /etc/apache2/sites-available/example_com.conf
We added the following content:
<VirtualHost *:80>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/example.com
        ServerAdmin webmaster@example.com
        ErrorLog /var/log/apache2/example_com_error.log
        CustomLog /var/log/apache2/example_com_access.log combined

        RewriteEngine on
        RewriteCond %{SERVER_NAME} =www.example.com [OR]
        RewriteCond %{SERVER_NAME} =example.com
        RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
It is important to note that we use the `ServerAlias` directive rather than a separate `<VirtualHost>` block for www.example.com, so both www.example.com and example.com are served by the same virtual host and point to the same content. The `RewriteCond` and `RewriteRule` lines above then redirect all HTTP requests for either name to HTTPS.
To enable mod_rewrite effectively, execute the following command:
sudo a2enmod rewrite
To enable the configuration, we ran:
sudo a2ensite example_com
In Apache2, the management of virtual host configurations is handled through two key directories within the /etc/apache2/ directory: sites-available and sites-enabled. Understanding how these directories work is crucial for effectively managing websites hosted on an Apache server.
- sites-available Directory: This directory contains configuration files for all available virtual hosts. Each file typically represents a separate website or domain that can be served by Apache. However, having a configuration file in sites-available does not mean that it is currently active or being served by Apache.
- sites-enabled Directory: This directory contains symbolic links (symlinks) to the configuration files in sites-available that are currently active and being served by Apache. Only the configurations listed in sites-enabled will be read and used by Apache when it starts or reloads.
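You can see this relationship directly: after enabling a site, the entry in sites-enabled is just a symbolic link back to the file in sites-available.
ls -l /etc/apache2/sites-enabled/    # example_com.conf should appear as a symlink to ../sites-available/example_com.conf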
To disable the default virtual host that ships with Apache, simply run:
sudo a2dissite 000-default
That command removed the symlink to the 000-default.conf file from /etc/apache2/sites-enabled/; the configuration file itself remains in /etc/apache2/sites-available/.
And finally, we reloaded Apache to apply the changes:
sudo systemctl reload apache2
Everything was set! Our virtual host was now live.
Step 7: Securing Apache with OpenSSL and Digital Certificates
Now that our site was running, we needed to make sure it was secure. By default, web servers communicate using HTTP, which sends data in plain text. That means passwords, personal information, and other sensitive data could be intercepted by hackers. To fix this, we needed HTTPS, which encrypts communication using SSL/TLS certificates.
The easiest way to get a certificate is by using Certbot, a tool that automatically configures Let’s Encrypt SSL for Apache.
Connect to our VPS using the command:
ssh root@your_droplet_ip
Install Certbot using the following commands:
sudo apt update
sudo apt install certbot python3-certbot-apache
Request a digital certificate by running:
sudo certbot --apache -d www.example.com -d example.com
After requesting the certificate, make sure to specify the domain in the configuration file:
sudo vim /etc/apache2/sites-available/example_com.conf
Test the settings by reloading the website using the https prefix in your browser. Also, try typing the http prefix and check if it redirects you to https.
For troubleshooting, check the log file:
sudo tail /var/log/letsencrypt/letsencrypt.log
After completing all of those steps, the configuration file `example_com-le-ssl.conf` was created in the `/etc/apache2/sites-available/` directory.
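Let’s Encrypt certificates are valid for 90 days, and Certbot sets up automatic renewal for you. You can confirm the renewal process works, without issuing a real certificate, with a dry run:
sudo certbot renew --dry-run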
Step 8: Enabling HTTP Compression (mod_deflate)
Our site was running smoothly, but we wanted to improve speed. One way to do this was by compressing files before sending them to users. Apache has a built-in module for this called mod_deflate.
Before we set up the compression, we looked at the compression level. The `DeflateCompressionLevel` setting controls how much compression is applied. Higher levels mean better compression but use more CPU power. The level can range from 1 (less compression) to 9 (more compression). For our project, we chose a compression level of 7, which gave us a 29% compression ratio.
We enabled it with:
sudo a2enmod deflate
Then, we configured it to compress text-based files:
sudo vim /etc/apache2/mods-enabled/deflate.conf
To set up the DeflateCompressionLevel to 7:
<IfModule mod_deflate.c>
        <IfModule mod_filter.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript
                AddOutputFilterByType DEFLATE application/x-javascript application/javascript application/ecmascript
                AddOutputFilterByType DEFLATE application/rss+xml
                AddOutputFilterByType DEFLATE application/wasm
                AddOutputFilterByType DEFLATE application/xml
                DeflateCompressionLevel 7
        </IfModule>
</IfModule>
Next, set up a custom log that records the compression ratio. Open the SSL virtual host configuration file:
vim /etc/apache2/sites-available/example_com-le-ssl.conf
Add the following content to the file:
<IfModule mod_ssl.c>
<VirtualHost *:443>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/example.com
        ServerAdmin webmaster@example.com
        ErrorLog /var/log/apache2/example_com_error.log
        #CustomLog /var/log/apache2/example_com_access.log combined

        DeflateFilterNote ratio
        LogFormat '"%r" %b (%{ratio}n%%) "%{User-agent}i"' deflate
        CustomLog "/var/log/apache2/deflate_log" deflate

        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/www.example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/www.example.com/privkey.pem
</VirtualHost>
</IfModule>
Restart the web server:
sudo systemctl restart apache2
To measure the impact, we downloaded a large file from jQuery:
cd /var/www/example.com/
wget https://code.jquery.com/jquery-3.7.1.js
Then, we checked the logs:
tail /var/log/apache2/deflate_log
The result was a 29% compression ratio, meaning our site loaded much faster!
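You can also confirm compression from any client by requesting a gzip-encoded response and inspecting the headers; if mod_deflate handled the file, the response includes a Content-Encoding: gzip header (assuming the file’s MIME type is covered by the AddOutputFilterByType list above):
curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null https://example.com/jquery-3.7.1.js | grep -i content-encoding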
Step 9: Tracking Server Status with Apache’s Status Module
As our web server started handling more traffic, we realized we needed a way to monitor its performance. Was it running smoothly? Were there any slowdowns? Apache’s status module provided real-time data about server activity, helping us troubleshoot any issues, such as incomplete log files or unusual traffic spikes.
Enabling the module was simple. We ran:
sudo a2enmod status
Next, we configured it by editing the status.conf file:
vim /etc/apache2/mods-available/status.conf
Inside the file, we added:
<IfModule mod_status.c>
        # Allow server status reports generated by mod_status,
        # with the URL of http://servername/server-status
        # Uncomment and change the "192.0.2.0/24" to allow access from other hosts.
        <Location /server-status>
                SetHandler server-status
                Require local
                Require ip your_public_ip_address
        </Location>
</IfModule>
To find our public IP address, we used:
curl ident.me
After adding our IP address and restarting Apache, we could now access real-time server statistics at:
http://example.com/server-status
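Because the configuration keeps `Require local`, the status page can also be queried from the server itself over SSH, which is handy for quick checks; appending ?auto returns a machine-readable summary:
curl "http://localhost/server-status?auto"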
With this setup, we had a bird’s-eye view of our server’s performance, making it easier to detect and fix issues before they became major problems.
Step 10: Installing PHP for Dynamic Content
At that point, our web server could serve static HTML files, but we wanted more. We needed dynamic content, database interactions, and scripting capabilities. That’s where PHP came in.
PHP (Hypertext Preprocessor) is a powerful server-side scripting language. Unlike JavaScript, which runs on the user’s browser, PHP executes on the server, generating dynamic web pages before sending them to visitors. It’s widely used for building interactive websites, and it integrates seamlessly with MySQL databases.
Connect to your VPS:
ssh root@your_droplet_ip
To install PHP and its required modules, we ran:
sudo apt update && sudo apt install php php-mysql libapache2-mod-php
sudo systemctl restart apache2
php --version
To test if PHP was working correctly, we created a small script:
sudo vim /var/www/example.com/test.php
Inside the file, we added:
<?php
phpinfo();
?>
Then, we opened our browser and visited https://example.com/test.php.
Seeing the PHP info page confirmed that PHP was up and running!
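One final cleanup: the phpinfo() page reveals details about your server configuration, so once you have confirmed PHP is working it is good practice to remove the test script:
sudo rm /var/www/example.com/test.php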
