Can’t Someone Else Do That?
Microservices aren’t new. The concept has been around since at least 2005 as a variant of service-oriented architecture.
The idea is to arrange an application as a federated set of small, independently deployable (and, consequently, independently testable) services. In terms of the SOLID design principles, this fits well with the Single Responsibility Principle: a microservice should fulfill a single, focused slice of the overall application’s required functionality. And in terms of Domain-Driven Design, microservices lend themselves very well to implementing well-defined bounded contexts.
This is the landing page for a project to implement a RESTful Web-API microservice using .NET Core. The approach, design, and tools used herein are all based on my personal experience and should not be considered prescriptive. Some developers might like the approach; others might have different (and even better!) ideas. I’m open to suggestions, so feel free to hit the comments section on each post.
At some point, most large-scale applications run into a need to execute long-running asynchronous operations that run outside the flow of regular user interaction: sending an order confirmation email to an e-commerce customer, analyzing data and generating daily or weekly business report summaries, or automatically locking the comments sections on old, stale blog posts.
These asynchronous operations are usually called some variety of “task”: Action, Activity, Job, Runnable, ScheduledTask, etc. For the purposes of this series, I’ll call them “Jobs”.
Jobs come in many flavors: thread-bound functions, command-line executables, external API service calls, database queries, and more. The details of a Job should be up to its specific handler.
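To make that concrete, here’s a minimal sketch of what a Job abstraction and its handler might look like in C#. The names here (Job, IJobHandler) are my own placeholders for illustration, not types from this project:

```csharp
using System;
using System.Threading.Tasks;

// Abstract Job metadata: what kind of work, plus an opaque payload the handler understands.
public class Job
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Type { get; set; }            // e.g. "SendOrderConfirmationEmail"
    public string Payload { get; set; }         // handler-specific data, often serialized JSON
    public DateTimeOffset CreatedAt { get; set; } = DateTimeOffset.UtcNow;
}

// The Job itself carries no behavior; a handler in the consuming service
// decides what the Type and Payload actually mean and how the work gets done.
public interface IJobHandler
{
    bool CanHandle(string jobType);
    Task ExecuteAsync(Job job);
}
```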
Generally speaking, Jobs are easy to implement for a single application. However, problems arise when Jobs exist inside distributed systems. For example, consider an autoscaled, distributed service that processes order confirmations and sends emails to customers. Under heavy load, multiple instances start up, each reading from a central repository of recently completed but as-yet-unnotified orders. If the query for these orders is naive (and let’s face it, most will be), the result will inevitably be that multiple instances of the service grab the same order and send the customer several emails.
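To illustrate, here’s a hedged sketch of that naive pattern. Everything in it (IOrderRepository, Order, the notifier class) is hypothetical and exists only to show where the race occurs:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public record Order(Guid Id, string CustomerEmail);

public interface IOrderRepository
{
    // Conceptually "SELECT ... WHERE completed = 1 AND notified = 0".
    // Nothing here reserves the returned rows for this particular instance.
    Task<IReadOnlyList<Order>> GetCompletedUnnotifiedOrdersAsync();

    Task MarkNotifiedAsync(Guid orderId);
}

public class NaiveOrderNotifier
{
    private readonly IOrderRepository _orders;

    public NaiveOrderNotifier(IOrderRepository orders) => _orders = orders;

    public async Task RunAsync()
    {
        var pending = await _orders.GetCompletedUnnotifiedOrdersAsync();

        foreach (var order in pending)
        {
            // Between the read above and the update below, a second autoscaled
            // instance can run the same query, see the same order, and send the
            // customer a duplicate email.
            await SendConfirmationEmailAsync(order);
            await _orders.MarkNotifiedAsync(order.Id);
        }
    }

    // Stand-in for the real (and potentially expensive) email-sending work.
    private Task SendConfirmationEmailAsync(Order order) => Task.CompletedTask;
}
```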
Sending several copies of a confirmation email might seem like a small nuisance. However, customers might start complaining if they frequently receive several confirmation emails for the same order. These extra emails also unnecessarily drive up operating costs: generating and sending each order confirmation email can consume considerable compute and network resources, especially if the email contains merchandise images for visual verification. Additionally, each email (and any attachments, such as those images) might count against a quota with an email service provider, at which point unnecessary emails can push you into the “over-limit” pricing tier of your service plan.
So how does one avoid these pitfalls? The answer is to create a centralized Job microservice that knows how to persist abstracted Job metadata. Using appropriate distributed software design patterns, this microservice can create unique Job batches that a horizontally-scaled consuming service can use to ensure that each Job is only ever executed by a single instance.
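Here’s one possible shape for that service’s contract, sketched under my own assumptions rather than as the final design for this series. The key operation is “claim a batch”: the service atomically assigns unclaimed Jobs to a new batch, so two concurrent callers always receive disjoint sets:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public record JobInfo(Guid Id, string Type, string Payload);

public record JobBatch(Guid BatchId, IReadOnlyList<JobInfo> Jobs);

public interface IJobService
{
    // Persist abstract Job metadata; the actual work stays in the consuming service.
    Task<Guid> CreateJobAsync(string type, string payload);

    // Atomically assign up to maxJobs unclaimed Jobs of the given type to a new
    // batch. Concurrent callers always receive disjoint sets of Jobs.
    Task<JobBatch> ClaimBatchAsync(string jobType, int maxJobs);

    // Report the outcome so the Job is never handed out again (or can be retried).
    Task CompleteJobAsync(Guid batchId, Guid jobId, bool succeeded);
}
```

In the duplicate-email example, each autoscaled instance would call something like ClaimBatchAsync and only ever see the orders in its own batch, so no customer hears from more than one instance.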
Now that the problem has been established, all that remains is actually implementing this microservice. Each post – listed below – will represent a different step in the process. This is an ongoing project, so please check back regularly!