Microservices are big in the tech world these days. The evolutionary heir to service-oriented architecture, microservice-based design is the ultimate manifestation of everything you learned about good application design.
Loosely coupled with high cohesion, microservices are the application embodiment of DevOps principles. So why isn’t everything a microservice now? At the LISA Conference, Anders Wallgren and Avantika Mathur from Electric Cloud gave some insight with their talk “The Hard Truths about Microservices and Software Delivery”.
Perhaps the biggest impediment to the adoption of microservices-based application architecture is an organizational culture that is not supportive. Microservices proponents recommend a team per service, but if the organization is too rigid, the result is a silo per service. Adding more silos only adds communication overhead. Organizations with a culture of territorialism and blame assignment will have a hard time using a microservices architecture. Conway’s Law suggests that if the teams can’t communicate well, the application components will have a hard time communicating, too. None of this is new, of course, but it goes to show that architectural decisions are never purely technical.
If the organization is supportive of microservices, the next stumbling block is moving too quickly. The distributed nature of microservices adds extra complexity to the environment. As Corey Quinn quipped in another presentation at LISA, microservices “turn every outage into a murder mystery.” This means a robust and comprehensive testing process is a prerequisite for microservices. This includes not only unit tests but also end-to-end testing and performance testing. Performance testing is perhaps even more important with microservices than with monoliths since hot spots get amplified. Wallgren says organizations without solid automated testing must fix that before working on microservices.
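To make the end-to-end idea concrete, here is a minimal sketch using only the Python standard library: two toy “services” run in threads, one calling the other over HTTP, and a test exercises the full request path across both. The service names, ports, and payloads are hypothetical, invented for illustration; real microservice tests would run against deployed instances.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

GREETER_PORT = 8101    # hypothetical downstream service
FRONTDOOR_PORT = 8102  # hypothetical user-facing service

class Greeter(BaseHTTPRequestHandler):
    """Toy downstream service: returns a greeting as JSON."""
    def do_GET(self):
        body = json.dumps({"greeting": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

class FrontDoor(BaseHTTPRequestHandler):
    """Toy user-facing service that calls the greeter over HTTP --
    the cross-service hop a unit test would miss but an
    end-to-end test must cover."""
    def do_GET(self):
        with urllib.request.urlopen(f"http://localhost:{GREETER_PORT}/") as resp:
            greeting = json.load(resp)["greeting"]
        body = json.dumps({"message": f"{greeting}, world"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

def start(handler, port):
    """Run an HTTP server for the given handler in a background thread."""
    server = HTTPServer(("localhost", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def end_to_end_test():
    """Drive a request through the front door and assert on the
    combined result, exercising both services together."""
    greeter = start(Greeter, GREETER_PORT)
    frontdoor = start(FrontDoor, FRONTDOOR_PORT)
    try:
        with urllib.request.urlopen(f"http://localhost:{FRONTDOOR_PORT}/") as resp:
            assert json.load(resp)["message"] == "hello, world"
    finally:
        greeter.shutdown()
        frontdoor.shutdown()

if __name__ == "__main__":
    end_to_end_test()
```

Even in this toy form, the test would fail if either service were down or if the contract between them drifted, which is precisely the class of failure that turns outages into murder mysteries when it goes untested.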
If the testing process is in place, it’s still possible to put the cart before the horse. Or perhaps the two wheels, axle, bed, and hitch before the horse. It is often beneficial to begin designing an application as a monolith and see where the natural boundaries emerge. The various pieces of a cart can be built separately, but if you plan to stitch your horse together from separately developed pieces, the end result may be disappointing. Similarly, what seems like the right design at the outset may prove to have flaws as it is implemented.
Since microservices require a supportive organizational structure and culture, a robust and automated test environment, and a clear view of the business logic and communication functions, why use microservices at all? As we said above, microservices represent DevOps principles projected onto the application universe. It’s no surprise that the relative growth of “DevOps” as a Google search term tracks closely with searches for “microservices”.
Microservices enable rapid delivery of changes by allowing pieces to be deployed and managed independently. This not only allows for A/B testing and rapid rollback of deployments, but is also critical to hyperscaling: each service can be scaled to the appropriate size regardless of the needs of the other services. As the serverless model gains popularity among public cloud customers, microservices play into that trend.
One area where microservices have not gained much traction is the traditional high performance computing space. Since these applications are often tightly coupled, an architecture pattern that requires loose coupling is an obviously bad fit. It’s hard to imagine how a weather simulation could be redesigned as a set of independent services. But the infrastructure around the simulations – job submission portals, data analysis, and so on – will likely trend toward microservices as well.