There weren't any DevOps engineers on our team, so I took it upon myself to handle CI/CD through GitHub Actions and deploy the projects on a DigitalOcean droplet. It took me a while to figure out, but once I did, it worked pretty smoothly. I created Dockerfiles for every project (developed with Go and React), built images for them on the server, and ran the containers in detached mode.
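I no longer have the original Dockerfiles, but for one of the Go services it would have looked roughly like this multi-stage sketch (the image tags, paths, and port are assumptions, not our actual setup):

```dockerfile
# Stage 1: compile a static Go binary
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: copy only the binary into a small runtime image
FROM alpine:3.19
COPY --from=build /bin/app /bin/app
EXPOSE 9000
ENTRYPOINT ["/bin/app"]

# On the server, the image would then be built and run detached, e.g.:
#   docker build -t project-a .
#   docker run -d -p 9000:9000 --name project-a project-a
```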
Every project was deployed on a separate port within the same server. Say the IP address of the server is "159.98.132.68": I would deploy project "A" on port 9000 and another project "B" on port 8080. That meant I could access project "A" remotely over "http://159.98.132.68:9000" and project "B" over "http://159.98.132.68:8080". (Note that the default HTTP port is actually 80, not 8080, so only a project deployed on port 80 could be reached at plain "http://159.98.132.68".)
Everything was going smoothly. But some of our projects required a secure connection for certain features: Stripe webhooks, webcam access for a QR scanner, and so on. Also, some of our frontend apps were deployed on Netlify while their backends were deployed on our server, which didn't have a secure connection. Since Netlify provides free HTTPS on all sites, the browser would throw a mixed-content error whenever a Netlify-deployed frontend app requested data from our HTTP server:
Mixed Content: The page at 'thisIsFakeSite.netlify.app/login' was loaded over HTTPS, but requested an insecure resource '159.98.132.68:8080/query'. This request has been blocked; the content must be served over HTTPS.
The solution was to either serve the frontend app from our own HTTP server, enable a secure connection to our server, or disable this error in the browser itself.
It was just easier to use the nice domain name Netlify provides for free than to remember an IP address and port to reach the frontend app, and deploying to our own server with CI/CD was a hard setup for beginners. So in the beginning we chose Netlify, though later we started deploying on our own server. Enabling a secure connection was out of the question: I did do my research on mapping a domain name to our server and using a Let's Encrypt client for an SSL certificate, but it would have taken me months just to understand all the mechanisms that come along with it. So the only remaining option was to disable the mixed-content error in the browser itself. The steps for that were pretty easy:
- Click on the lock icon beside the URL name on the browser
- Go to site settings
- Scroll down to "Insecure content", which should be set to "Block (default)"
- Change its value to "Allow"
Though it did work, asking our clients to disable this error was a hassle.
Once a DevOps engineer joined the team, he enabled SSL pretty easily. I won't go into much detail about the mechanics of how he did it, because frankly I have no idea what goes where. But I will try to explain what I learned from him.
We already had a domain name, so we created subdomains from it. We then added a record (an "A" record, I believe, which maps a name to an IPv4 address; apparently there are many record types) pointing to the IP address of our server in the DNS settings, using the domain provider's control panel. Say our IP address is "159.98.132.68": that means we map our subdomains to this IP address. Now, the default HTTP port is actually 80, not 8080 as I had thought. Since we were mapping all our subdomains to the same server, we needed a mechanism to route each request to its respective port. So the DevOps engineer set up an Nginx server listening on port 80, the default HTTP port where the requests for the domain names would arrive. (Remember our project "B" running on port "8080"? I had to redeploy it on another port so that it, too, would sit behind the proxy.) He then configured the Nginx server as a reverse proxy. What this proxy essentially does is forward each incoming request to the port dedicated to its subdomain.
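In zone-file terms, the DNS entries would look something like this sketch (the record values and the second subdomain name are hypothetical; "testProject.testServer.live" is the subdomain used in the example Nginx config):

```
; one "A" record per subdomain, all pointing at the same droplet
testProject.testServer.live.    300  IN  A  159.98.132.68
anotherProject.testServer.live. 300  IN  A  159.98.132.68
```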
The configuration looked something like this:
server {
    listen 80;
    server_name testProject.testServer.live;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        # forward requests for this subdomain to the project on port 9000
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:9000;
    }
}
Once this proxy was set up, all requests to the subdomains were forwarded to their dedicated ports. The only thing remaining was to enable an SSL connection. For that he used "certbot", the Let's Encrypt client, to obtain and install a certificate.
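From what I gathered, the certbot invocation itself is short; with its Nginx plugin it rewrites the matching server block for HTTPS and sets up renewal. A sketch of the commands, assuming certbot and its Nginx plugin are already installed on the server:

```sh
# obtain a certificate for the subdomain and let the Nginx plugin
# reconfigure the server block to listen on 443 with it
sudo certbot --nginx -d testProject.testServer.live

# renewal runs automatically; a dry run confirms the setup works
sudo certbot renew --dry-run
```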
I understand all of these steps aren't so clear; that's because I don't fully understand what's going on behind the scenes myself. But these are the only steps he took to map a domain name to our server and enable a secure connection. Detailed instructions are probably available in the links I've attached. For me, it was enough to understand how things worked, so even if I never work deeply with these things in the future, I'll know what to do, or at least what to search for, to do them myself.