Engineer, Leader, Public Speaker, Career Changer, Enthusiastic learner of any and all new things. I’m a web generalist with a proven track record of solving difficult problems and then teaching others to do the same.
Updated Apr 10, 2019
Recently at GenUI, we put the suite of Azure and .NET Core services to the test to continuously deliver a web application on a rideshare project. Beyond a reliable cloud hosting environment, Azure provides a suite of services for developers, including Azure Functions, Application Insights, Azure Service Bus, Managed Identities, and Key Vault.
That integration, coupled with well established design patterns and readily available sample code, had the effect of providing a more stable, testable codebase.
Other advantages include easily configurable automated API documentation through Swagger and live debugging on the server through Visual Studio. Overall, this switch enabled the team to spend less time reinventing established coding conventions and more time innovating.
It may just be evangelism, or plain excitement, but today I'm talking about Azure and not GCP or AWS. Regardless, the result of this project on the Azure platform has become a case study in mature tooling.
Why Azure? It remains a question we have to ask of ourselves and our clients.
Because Microsoft works well with Microsoft. At first, we believed there would be libraries on the horizon that integrate with Microsoft OAuth and, in turn, would make it easier to integrate with other services. We also thought Application Insights was compelling, since we wanted to log to a file on the server, and we expected plentiful sample code to keep us from ending up stuck in a fruitless hunt for configuration answers.
But we soon realized that .NET is a mature ecosystem, and you don't have to reinvent the wheel. If, as a developer, I want to build a standard API, then I simply follow .NET conventions. If I want API documentation without overthinking it, then I use Swagger (which can be set up in half an hour). Since converting to .NET, we've added five new Azure technologies.
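For reference, that half-hour Swagger setup looks roughly like this in an ASP.NET Core project. This is a sketch assuming the Swashbuckle.AspNetCore NuGet package; the API title is illustrative, not our actual project name.

```csharp
// Startup.cs -- minimal Swashbuckle wiring for auto-generated API docs.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSwaggerGen(c =>
    {
        // Describes the document served at /swagger/v1/swagger.json.
        c.SwaggerDoc("v1", new Info { Title = "Rideshare API", Version = "v1" });
    });
}

public void Configure(IApplicationBuilder app)
{
    app.UseSwagger();      // serves the OpenAPI JSON document
    app.UseSwaggerUI(c =>  // serves the interactive docs page at /swagger
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "Rideshare API v1");
    });
    app.UseMvc();
}
```

From there, every `[HttpGet]`/`[HttpPost]` action on your controllers shows up in the generated docs with no per-endpoint work.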
“There is a standard way: do it this way and get on with your life.”
Next, using Key Vault, we followed a code example to run some commands in the command line. It worked the first time, which never happens. That made life easier, because you don't want to devote brainpower to questions about managing secrets. There is a standard way: do it this way and get on with your life.
The fact that you can live debug what is on the server from your local Visual Studio instance is exciting. Now, not only can we log into Application Insights and see a null reference exception, we can pinpoint a line of code and attach a debugger to the server instance. Then we can set a breakpoint and watch activity on the server in real time.
That way, we know how to fix it.
It's the way you frame it in your mind. Yes, cloud services are hosting the app, but Microsoft tends to focus on its clients as businesses. They're not catering to hobby developers or those who want to put up a little website. Instead, they're providing for businesses, and often very large ones.
With some platforms, if you only want to explore, a paid subscription is still required or, at the least, entry of credit card information. In Azure, by contrast, there's the concept of a business as a "tenant": a business-level entity that manages all of your applications, cloud services, and email addresses.
Another key concept in Azure is called a “resource group”. If you want to provision a little package of things and build them together, then you build them into a resource group. And if you want to spin up a new dev instance and provision all the little cloud services that you need, you can do this as a part of a single resource group.
Then, when you're done, you can delete it all at once. And this is just one way to measure or manage your work in Azure. Without an underlying system for how resources are put together, a project soon threatens to become unmanageable.
Lastly, there's the Azure CLI. The Azure portal is what you see when you log into the website, but you can do much of what's possible in Azure through the command line interface, which is useful if you're writing scripts and trying to dynamically allocate and deallocate resources.
Application Insights offers a lot of benefits for free. When I log into Azure and view an app, I look at the Application Insights instance. It's evident, for example, if something is failing once every five minutes in prod. Here, I can see my average server response time. I can see my server requests and drill into how many requests are made on specific endpoints, even the frequency with which they fail.
Then I’m free to ask questions:
What are the exceptions when they do fail? How many users are on the site and how long do they spend there? What browsers are those users using when they come to our site?
I get all of this by including Application Insights on a project.
By adding custom logs, I can see every call made to each of our server dependencies and whether that call succeeded or failed. Debugging what's happening on the server, without any customization, becomes easy as a result of using Application Insights. Here, "easy" becomes exciting.
And how hard was this to add to our specific rideshare project? In Visual Studio, you right-click, choose "Add Application Insights", and keep clicking until confirmation. That simple.
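Under the hood, that click boils down to roughly one registration call. Here's a sketch of what it amounts to, assuming the Microsoft.ApplicationInsights.AspNetCore package:

```csharp
// Startup.cs -- registers Application Insights telemetry collection.
public void ConfigureServices(IServiceCollection services)
{
    // Picks up the instrumentation key from configuration, e.g. the
    // "ApplicationInsights:InstrumentationKey" setting in appsettings.json.
    // Requests, dependencies, and exceptions are then tracked automatically.
    services.AddApplicationInsightsTelemetry();

    services.AddMvc();
}
```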
Also on this project, if desired, I could create custom stats that track the people who are taking shuttles. In addition to general app usage data, we now know how often folks are canceling reservations. I can run a report, make charts and graphs, then send an automated email once a week to everyone on the project. And I can configure alerts.
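A custom stat like that is emitted through the `TelemetryClient`. This is a sketch; `ReservationService` and the `ReservationCanceled` event name are illustrative, not our actual code:

```csharp
// Emitting a custom event that Application Insights can chart and alert on.
public class ReservationService
{
    private readonly TelemetryClient _telemetry;

    public ReservationService(TelemetryClient telemetry)
    {
        _telemetry = telemetry;  // injected by the App Insights registration
    }

    public void CancelReservation(string reservationId)
    {
        // ... cancel the reservation ...

        // Shows up under "customEvents" in Application Insights analytics.
        _telemetry.TrackEvent("ReservationCanceled",
            new Dictionary<string, string> { ["reservationId"] = reservationId });
    }
}
```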
If our matching algorithm falls down and can't pick itself back up, which has happened, we have a metric for the error-severity level and, based on that severity, a responsive action set up.
Nothing is free in Azure. That notification costs less than $2 every month. That's right: about $2. And for that cost, we know when our service has fallen down and can act on it.
When we built this app in Node, we had a long-running daemon process in the background to run reservation searches when creating rideshares. We moved this to an Azure Function because it allowed us to take advantage of already-integrated security benefits. With our Azure Function running on a timer every five seconds, we could track the many successful runs, and the few that failed, over the previous thirty days.
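A timer-triggered function on a five-second schedule looks roughly like this. It's a sketch against the Azure Functions v2 SDK; the function name and body are illustrative:

```csharp
public static class ReservationMatcher
{
    [FunctionName("MatchReservations")]
    public static void Run(
        // Six-field NCRONTAB expression: fires every five seconds.
        [TimerTrigger("*/5 * * * * *")] TimerInfo timer,
        ILogger log)
    {
        log.LogInformation($"Matching run started at {DateTime.UtcNow:O}");
        // ... search pending reservations and build rideshares ...
    }
}
```

Every invocation, success or failure, is logged to the function app's Application Insights instance for free, which is where that thirty-day success/failure view comes from.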
We were able to run the same query on Application Insights and get even more information. You can also put a serverless function in front of a queue, so it receives messages from an external queue and reacts appropriately. Or you can set up a function as an API endpoint, so you're actually making web requests to your serverless function. One of the primary benefits (other than security) is that you only pay for the compute time you actually use. That then scales with the number of requests, particularly in the case of an API.
A few components of our app take longer than is convenient for an API. As a user seeking a rideshare, your experience needs to be simple. You want to click a button and get a quick “okay” confirmation.
In the API, it takes a few seconds to go to MS Graph, where the user's calendar lives, and update their calendar. Instead, what we do is send a message to a queue, and the queue will spit responses back to a listener. Now, instead of living in the API, the listener can live in an Azure Function in another application.
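That listener can be expressed as a Service-Bus-triggered function. A sketch, with illustrative queue and connection names:

```csharp
public static class CalendarUpdater
{
    [FunctionName("UpdateCalendar")]
    public static void Run(
        // Fires once per message arriving on the "calendar-updates" queue;
        // "ServiceBusConnection" names an app setting holding the connection string.
        [ServiceBusTrigger("calendar-updates", Connection = "ServiceBusConnection")]
        string message,
        ILogger log)
    {
        log.LogInformation($"Processing calendar update: {message}");
        // ... call MS Graph to update the user's calendar ...
    }
}
```

The API can now return its quick "okay" immediately and let this function do the slow MS Graph work out of band.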
When I go into Azure, one tab can tell me how many messages are scheduled and another the number of messages recently sent and received. I can schedule these messages with a queue. So if, for instance, I only want to call a shuttle for the user at the time that they request, then I just tell my queue "Send me this message back again tomorrow afternoon at 4 p.m."
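Scheduling that "send it back tomorrow at 4 p.m." message looks roughly like this with the Microsoft.Azure.ServiceBus client. The queue name and payload are illustrative:

```csharp
// Connect to the queue (connection string comes from configuration).
var client = new QueueClient(connectionString, "shuttle-requests");

// The message body is just bytes; here, a small JSON payload.
var message = new Message(
    Encoding.UTF8.GetBytes("{\"reservationId\":\"abc123\"}"));

// Ask Service Bus to hold the message and deliver it tomorrow at 4 p.m. UTC.
var deliverAt = DateTimeOffset.UtcNow.Date.AddDays(1).AddHours(16);
await client.ScheduleMessageAsync(message, deliverAt);
```

`ScheduleMessageAsync` returns a sequence number you can keep if you later need to cancel the scheduled delivery.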
One downside is that this system still only receives and sends messages one at a time. There is a latency issue: if you need something to happen immediately, queues are perhaps not the best system, because they don't give you finely-tuned control over how quickly you receive those messages back. Yet, for us, it has been a solid system for initiating actions in the background on demand.
Most recently, we worked on enterprise-level security. This requires a few extra steps to grant our application permission to read and write on everybody's calendars.
We also want to make sure that our secrets are kept secret, so we use Key Vault. It's another Azure service, and because everything is integrated in this Microsoft environment, the .NET Core code that runs at application startup pulls most of our configuration settings from our own files. Then we pop some environment variables in and connect Key Vault to throw all of our secrets into the config. This takes around six lines of code. I can use this method in various places where I might keep environment settings, mash them together, and then use them throughout the application, all without having to think about how that works or resort to customization.
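Those half-dozen lines look roughly like this. It's a sketch using the Microsoft.Extensions.Configuration.AzureKeyVault and Microsoft.Azure.Services.AppAuthentication packages; the vault URL is illustrative:

```csharp
// Build configuration from files and environment variables first...
var builder = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables();

// ...then layer Key Vault secrets on top. AzureServiceTokenProvider
// authenticates via the app's managed identity -- no password in code.
var tokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

builder.AddAzureKeyVault(
    "https://our-vault.vault.azure.net/",
    keyVaultClient,
    new DefaultKeyVaultSecretManager());

// Secrets now read like any other configuration key.
var configuration = builder.Build();
```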
Finally, to connect to Key Vault we could use a password, but the whole point of using Key Vault for passwords is so they are not in your code or anywhere else that's easily compromised. So we use something called a "managed service identity" (Microsoft offers a helpful diagram to define this concept).
Our application is unique when it runs in Azure. It lives on a VM and has its own fingerprint. Rather than presenting a key or password to get into Key Vault, it's like offering a fingerprint. That is how we access our secrets in Key Vault. We can also use this approach to access our database, because we are really just trying to remove points of vulnerability and enable secure access.