Almost two years ago we embarked on a change to our UI layer that created a complete separation between our presentation and application layers. The goal was to run our front end servers regionally, independent of our application servers, which meant completely decoupling our front end and back end systems. Previously, all API calls were proxied through to the back end services via the front end servers, and this presented a number of challenges with scale, the biggest of which was the tight coupling between our front end layer and our application layer. To change this we needed to implement CORS, allowing our front end servers to serve our UI while the browser talks directly to our API endpoints served by our application servers, without first proxying through our front end servers. This work was completed over a year ago and has proven to be reliable and a big step change in performance and latency. During this time we also experimented with and rolled out trials of SPDY, a protocol developed by Google to speed up the web. This gave us better pipelining and performance, but SPDY has since been deprecated in favour of HTTP/2.
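Our actual server-side implementation is proprietary, but the essence of the CORS change can be sketched in a few lines. The header names below are the standard CORS response headers; the origin list and function shape are purely illustrative assumptions, not Hornbill's configuration:

```python
# Minimal sketch of the CORS headers an API origin must return so the
# browser can call it directly from a UI served on a different host.
# ALLOWED_ORIGINS is a hypothetical example, not our real configuration.

ALLOWED_ORIGINS = {"https://live.hornbill.com"}  # hypothetical UI origin

def cors_headers(request_origin: str, is_preflight: bool) -> dict:
    """Return the CORS response headers for a cross-origin API request."""
    if request_origin not in ALLOWED_ORIGINS:
        return {}  # the browser will block the response for unknown origins
    headers = {
        "Access-Control-Allow-Origin": request_origin,
        "Access-Control-Allow-Credentials": "true",  # allow session cookies cross-origin
        "Vary": "Origin",  # caches must key on the requesting origin
    }
    if is_preflight:  # the OPTIONS request a browser sends before the real call
        headers.update({
            "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type, Authorization",
            "Access-Control-Max-Age": "86400",  # cache the preflight result for a day
        })
    return headers
```

The key point is that the application servers answer the browser's preflight themselves, so the front end servers no longer sit in the request path for API calls.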
Although our own front end servers are distributed globally (Europe, North America and APAC), we are making changes to step up a gear or two. We have been testing a great service called Cloudflare on our front end architecture. Cloudflare is an awesome global service that brings our content closer to the local edges of the network, which basically means better and faster performance for our customers around the globe.
The way this works is simple: the browser is pointed at one of our URLs, let's say live.hornbill.com, and this resolves to Cloudflare's servers. Cloudflare uses Anycast routing to direct your request to your nearest data centre, so you get the best performance possible at your current location. For the most part our UI resources are static, so these are served directly by Cloudflare as close to the end user as possible, reducing latency and load time - this routing and content serving is the magic behind Cloudflare. As well as this distributed caching, Cloudflare brings a host of other great enhancements to our service, including HTTP/2 for everyone (and SPDY for older devices that do not support HTTP/2 yet), in-line header and content compression, and browser-specific optimisations. Cloudflare also filters and deflects threats and common denial-of-service attack vectors using its Web Application Firewall (WAF) before our servers are ever touched, giving us greater reliability and service robustness. By using Cloudflare our edge presence expands from three locations to 100 locations around the globe (see here: https://www.cloudflare.com/network/), which means we can speed up access in every continent around the world - which I have to say is just awesome.
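If you're curious whether a given response actually came through Cloudflare, the response headers give it away: Cloudflare adds a `CF-RAY` header whose suffix identifies the edge data centre that handled the request, and reports `Server: cloudflare`. A small sketch (the header names are Cloudflare's documented ones; the helper functions themselves are just illustrative):

```python
# Inspect response headers (e.g. from `curl -sI https://live.hornbill.com`)
# to see whether, and where, Cloudflare served the request.

def served_by_cloudflare(headers: dict) -> bool:
    """Heuristically detect a Cloudflare-fronted response from its headers."""
    lower = {k.lower(): v for k, v in headers.items()}
    # CF-RAY carries a request id plus the edge location code, e.g.
    # "230b030023ae2822-SJC"; the Server header is reported as "cloudflare".
    return "cf-ray" in lower or lower.get("server", "").lower() == "cloudflare"

def edge_location(headers: dict):
    """Extract the three-letter edge data-centre code from CF-RAY, if present."""
    lower = {k.lower(): v for k, v in headers.items()}
    ray = lower.get("cf-ray", "")
    return ray.rsplit("-", 1)[-1] if "-" in ray else None
```

Run from a few different countries, the edge location code is a nice way to see the Anycast routing in action.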
Another significant advantage we get is improved disaster recovery (DR) times. In the very unlikely event that one of our data centres was taken out of action and we had to spin up instances in another DC, the IP addresses would change. This is not good, because DNS changes can often take many hours to propagate through the internet's domain name system. By using Cloudflare we can simply redirect traffic to other servers in a completely different location instantly. Finally, Cloudflare gives us IPv6 compatibility for all of our front end services, while IPv6 for our back end services/APIs is something we are working on introducing over the coming months too.
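Because Cloudflare sits in front of the traffic, re-pointing a hostname at a new origin is a single call against Cloudflare's v4 API rather than a DNS change that has to propagate. A hedged sketch of building that call - the zone ID, record ID, token and IP address are all placeholders, and a real failover script would of course add error handling:

```python
import json
from urllib import request

API_BASE = "https://api.cloudflare.com/client/v4"

def build_origin_update(zone_id: str, record_id: str, hostname: str,
                        new_ip: str, token: str) -> request.Request:
    """Build the PUT request that re-points a proxied A record at a new origin."""
    payload = {
        "type": "A",
        "name": hostname,
        "content": new_ip,   # the replacement origin server's address
        "ttl": 1,            # 1 means "automatic" in Cloudflare's API
        "proxied": True,     # keep traffic flowing through Cloudflare's edge
    }
    return request.Request(
        f"{API_BASE}/zones/{zone_id}/dns_records/{record_id}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT",
    )

# In a real failover the request would then be sent with
# urllib.request.urlopen(req); only placeholder values are shown here.
```

Because the record stays proxied, visitors keep resolving to the same Cloudflare edge addresses throughout - only Cloudflare's view of the origin changes, which is why the switch takes effect immediately.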
Of course, with all this goodness there has to be some downside, right? Well, surprisingly, nothing really notable. We know that some more archaic network configurations like to *harden* their network by locking down to specific IP addresses/ranges. For Hornbill before Cloudflare that would have been possible, because our front end server endpoints were on fixed static IP addresses. With Cloudflare this is not the case, because of the intelligent network routing that's involved in delivering the service. So for anyone who wants to lock down access to specific services, this should be done by URL matching, which pretty much every modern firewall, proxy and content filter can do without any trouble.
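The exact syntax depends on your firewall or proxy vendor, but the idea of hostname-based allow rules is simple. A minimal sketch, with a purely illustrative pattern list:

```python
from fnmatch import fnmatch
from urllib.parse import urlsplit

# Illustrative allow-list; real rules would live in your firewall/proxy config.
ALLOWED_HOST_PATTERNS = ["*.hornbill.com", "hornbill.com"]

def is_allowed(url: str) -> bool:
    """Allow a request if its hostname matches one of the permitted patterns."""
    host = urlsplit(url).hostname or ""
    return any(fnmatch(host, pattern) for pattern in ALLOWED_HOST_PATTERNS)
```

Matching on the hostname rather than the resolved IP keeps the rule valid no matter which Cloudflare edge address the name resolves to on any given day.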
Having spent much of my own time in the industry designing on-premise deployed software, I have to admit I am really excited by the possibilities that delivering services in the cloud brings, and how interoperable everything is when it's designed right. The changes we made almost two years ago to our front end layer have paved the way for this change, and that's ultimately great news for every single Hornbill customer, who gets better performance completely automatically.
As ever, if you have any comments please leave them in the comments section below and I will respond.