
Microservices Hierarchy of Needs

This is a small extract from a longer post I published at The New Stack. Check the original post here.

Maslow's Hierarchy of Needs is a motivational theory in psychology comprising a multitier model of human needs, often depicted as hierarchical levels within a pyramid. Maslow uses terms such as physiological, safety, belongingness and love, esteem, self-actualization, and self-transcendence to describe the stages that human motivation generally moves through.

I thought I should apply it to Microservices too, as there is a clear list of needs that have to be satisfied in order to succeed on the Microservices journey. Once I listed the main Microservices concerns, I couldn't help noticing that Kubernetes covers a big chunk of these needs pretty well. So I added Kubernetes to the diagram as well, and here is the result:

Microservices Hierarchy of Needs

The key takeaway from this diagram is that Kubernetes can automate many of the boring Microservices-related activities such as environment provisioning, resource management, application placement, deployment, health checks, restarts, service discovery, configuration management, auto scaling, etc. With all that taken care of by Kubernetes, developers can focus on what really requires creativity and talent: analyzing the business problem and creating great domain-driven designs, hidden behind beautiful APIs, with well-crafted clean code that is constantly refactored and adapted to change.

Recently, I've been blogging and speaking about why Kubernetes is the best platform for running Microservices-style applications. If you are not convinced yet, give it a try.

PS: I'll be talking at #CloudNativeCon + #KubeCon in Berlin about Cloud Native Design Patterns. Visit here to learn about my talk!



Microservices Deployments Evolution

(This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.)

Microservices Are Here to Stay

A few years back, most software systems had a monolithic architecture and a slow release cycle. In recent years, there has been a clear move towards Microservices architecture, which is optimized for scalability, elasticity, failure, and speed of change. This trend has been further reinforced by the adoption of cloud and containers, which have also enabled practices such as DevOps.
Trends in the IT Industry






All these changes have resulted in a growing number of services to develop and an even bigger number of deployments to perform. It soon became clear that the explosion in the number of deployments cannot be controlled using pre-microservices tools and techniques, and new ways were born. In this article, we will see how Cloud Native platforms such as Kubernetes allow deployment of Microservices at high scale with minimal human intervention. Here, I use Kubernetes as the example, but other container-based Cloud Native platforms (Amazon ECS, Apache Mesos, Docker Swarm) offer similar primitives and capabilities. In the not-so-distant future, the practices described here will become the common way of managing and deploying Microservices at scale.

Cloud Native Deployment Traits

Self-Service Environments

Local, Dev, Test, Int, Perf, Stage, Prod... are all environments, but what is an environment really? Usually, it is a VM or a group of VMs that represents an environment. For example:
• Local is the developer laptop, where the user has full freedom to experiment and break stuff. It still has to be similar to the other environments to avoid the "it works on my machine" syndrome.
• Dev is the very first environment where changes from all developers are integrated into one working application. It runs SNAPSHOT versions of the services, has mocked external dependencies, and most of the time it is broken due to constant change.
• Once a service has been released, it is moved to a more stable environment such as Test. This environment may be slightly more powerful (maybe with more than one VM), may have more real external services available rather than mocks, and testers also access it.
• As the environments get closer to production, such as Int/Stage/Perf, they get bigger, with more VMs and more external dependencies available. At some point, we probably need an environment with resources comparable to the production environment, so we can do performance tests that mean something in relation to production.
The main difficulty with this model is that the concept of an environment is mapped to physical or virtual machines and, as such, is slow to alter. You cannot easily change the resource profile of an existing environment, create new environments, or give an environment per developer on the fly.
Environments managed by Kubernetes
In the Cloud Native world, an environment is just an isolated, controlled, and named resource collection. For example, given a pool of 8 VMs, you can chunk up that resource pool for different environment instances depending on the needs of the various teams. And those chunks don't map to VMs, which means multiple environments may be collocated and share a few of the VMs. Creating, editing, or destroying an environment is achieved with one command, instantly, without requesting VMs and waiting for days or even weeks. This allows teams to change their environment profiles based on their changing needs in a self-service manner, more easily and faster.
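To make this concrete, here is a minimal sketch of how such an environment could be described on Kubernetes: a namespace plus a resource quota carving a slice out of the shared cluster. The environment name and the numbers are made up for illustration, and the exact quota fields you need depend on your setup.

```yaml
# Hypothetical "perf" environment carved out of the shared cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: perf
---
# Cap how much of the cluster this environment may consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: perf-quota
  namespace: perf
spec:
  hard:
    requests.cpu: "8"        # at most 8 CPU cores requested in total
    requests.memory: 16Gi    # at most 16 GiB of memory requested in total
    pods: "50"               # at most 50 pods in this environment
```

Creating, resizing, or deleting the whole environment then becomes a single kubectl apply or kubectl delete of that file.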

Dynamically Placed Applications

We can see how a self-service platform for managing environments can ease the onboarding of new developers, services, and projects, and even enable custom release strategies which may require temporary environments on the fly. Once an environment is ready, the next task is choosing a strategy for placing our Microservices in those environments.
One of my favorite Microservices-related books is Building Microservices by Sam Newman. In the book, Sam approaches Microservices from all possible angles, and one of those is deployments. In the Deployment chapter, the author describes a few strategies for mapping services to hosts and their pros and cons.

Service to Host Mapping Strategies
I have also described this approach in painful detail for Apache Camel based applications in my Camel Design Patterns book. Basically, it comes down to choosing a way to package your services and grouping them on the available hosts considering all kinds of service and host dependencies. Luckily for us, the industry has moved forward quickly, and containers have been accepted as the standard for packaging Microservices. Unless you are Netflix and have over 100K Amazon EC2 instances, you shouldn't look for quick ways of baking EC2 images for each service; instead, just use containers. Even Netflix has moved on: they are experimenting with containers and even developing open source container scheduling software. So no more VM per service, and no more service-to-host mapping strategies. Instead, based on your service requirements/dependencies and the available host resources, your Cloud Native platform should find a host for every service in a predictable manner defined by policies. That requires understanding each service and describing its requirements such as storage, CPU, and memory, but the later benefits are huge. Rather than manually orchestrating and assigning each service to a host in advance, the Kubernetes scheduler performs a choreography of services and places them on hosts dynamically when requested.
As you can see, the concept of a VM/host disappears, with environments spanning multiple hosts and services being placed on hosts dynamically. We don't care, and we don't want to know, the actual hosts where our environments are carved out and where the Microservices are placed (except when predictable performance is critical and shared host and platform resources should be avoided).
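As a sketch of what "describing service requirements" looks like in practice, a container can declare the CPU and memory it needs, and the scheduler uses those values to pick a suitable host. The service name, image, and numbers below are placeholders for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: account-service                  # hypothetical service
spec:
  containers:
  - name: account-service
    image: example/account-service:1.0   # placeholder image
    resources:
      requests:              # what the scheduler uses to place the pod
        cpu: 200m
        memory: 256Mi
      limits:                # hard ceiling enforced on the chosen host
        cpu: 500m
        memory: 512Mi
```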

Declarative Service Deployments

We can provision environments in a self-service manner and have services placed in those environments with minimal effort. But when we have multiple instances of a service, how do we deploy the new instances? Do we first have to stop an instance, then upgrade, and roll back when things go wrong? Cloud Native platforms (and Kubernetes specifically) have thought about this too. Using the concept of a Deployment, Kubernetes allows you to describe how service upgrades should be performed.
Rolling and Fixed Deployment

The Rolling Update strategy of the Deployment updates one pod at a time, rather than taking down the entire service at once, and ensures zero downtime. The Fixed (or Recreate, as it is named) strategy, on the other hand, brings down all service instances and then starts the new versions.
In both cases, the Deployment will declaratively and progressively update the deployed application behind the scenes.
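As a rough illustration, here is how a rolling update could be declared on a Kubernetes Deployment; switching the strategy type to Recreate gives the Fixed behaviour. The service name, image, and apiVersion are assumptions and may need adjusting for your cluster version.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: account-service            # hypothetical service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: account-service
  strategy:
    type: RollingUpdate            # use "Recreate" for the Fixed strategy
    rollingUpdate:
      maxUnavailable: 1            # take down at most one pod at a time
      maxSurge: 1                  # start at most one extra pod during the update
  template:
    metadata:
      labels:
        app: account-service
    spec:
      containers:
      - name: account-service
        image: example/account-service:1.1   # the new version to roll out
```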

Additional Benefits

Having a platform that is capable of managing the full life cycle of services also offers additional deployment-related benefits.

Blue-Green and Canary Releases

Blue-green release is a way of achieving rapid rollback in case anything goes wrong with a new release.
Canary release is a technique for reducing the risk of introducing a new software version in production by introducing change only to a small subset of users before rolling it out to everybody.
Blue-Green and Canary Releases
Both of these techniques can be easily achieved using Kubernetes with minimal human intervention.
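One common way to sketch a blue-green setup on Kubernetes is to run two Deployments, labelled for example version: blue and version: green, and point the Service at one of them; changing the selector flips all traffic at once, and flipping it back is the rollback. The names and labels below are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: account-service       # hypothetical service
spec:
  selector:
    app: account-service
    version: blue             # switch to "green" to cut traffic over,
                              # and back to "blue" to roll back
  ports:
  - port: 80
    targetPort: 8080
```

For a canary, the Service can instead select only on app: account-service while a small canary Deployment with one or two replicas of the new version runs next to the stable one, so only a fraction of requests hit the new code.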

Self Healing

The Cloud Native platform is able not only to detect failures but also to act on and heal them. Kubernetes will regularly perform health checks on your application, and if it detects something wrong, it will restart your service and even go further and move it to a different, healthier host if required.
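The health check is declared on the container itself. A minimal sketch, with an assumed health endpoint and made-up timings, could look like this:

```yaml
# Fragment of a pod/Deployment template, showing only the probe.
containers:
- name: account-service                  # hypothetical service
  image: example/account-service:1.1
  livenessProbe:
    httpGet:
      path: /health                      # assumed health endpoint
      port: 8080
    initialDelaySeconds: 30              # give the service time to start
    periodSeconds: 10                    # check every 10 seconds
    failureThreshold: 3                  # restart after 3 consecutive failures
```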

Auto Scaling

In addition to self healing, the platform is also capable of auto scaling services and even the infrastructure itself. That is a very powerful feature, giving the platform some Antifragile characteristics.
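The scaling rules are again declarative. A sketch of a HorizontalPodAutoscaler, with illustrative names and thresholds, might look like this:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: account-service              # hypothetical service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: account-service
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70 # add pods when average CPU exceeds 70%
```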

DevOps and Antifragile

If we look at all the benefits provided by Kubernetes, I think it is fair to say that it offers primitives and abstractions that are better suited for managing Microservices at scale. With such a tool, it becomes easier for teams and organizations to use practices such as DevOps and focus on improving the business processes towards Antifragility.

Closing Thoughts

Not a Free Lunch

We have seen many of the benefits of Cloud Native platforms with regard to Microservices deployments, but we have deliberately not discussed any of their cons in this article. As you may expect, there is a steep learning curve for Kubernetes, and also a need for a change in the patterns, principles, practices, and processes used when developing such applications. Basically, that is the move to Cloud Native applications.

Change is Inevitable

This may sound too strong, but if you are doing Microservices, and that is Microservices at scale, using a Cloud Native platform (such as Kubernetes) is a must.
Picture from Wikipedia. Wisdom from W. E. Deming
If you are using pre-microservices tools, techniques, and practices for developing Microservices, it will come back to hit you, and Microservices may not work for you.

Microservices are a Commodity

I'm a big fan of Simon Wardley. I'm not able to follow everything he writes about, but even the old writings are very interesting, and somehow it all makes sense in retrospect. If you haven't read anything from him, this video is an excellent start (or go back a few years to the longer talk). In this article I'll try to make sense of what is happening in the Microservices world using Wardley's theory (and diagrams).

How do things evolve?

Any idea, product, system, etc. starts with its genesis, and if it is successful, it evolves: others copy it and create new custom solutions from it. If it is still successful, it diffuses further, others create new products which get improved and extended, and it becomes widespread, available, "ubiquitous", well understood, and more of a commodity. This lifecycle can be observed over the years in many successful products, such as computers, mobile phones, and virtualization/cloud.

If we think about Microservices, the architectural style, the projects that support it, the platforms that were born from it, containers, the DevOps practice, etc., each of them is at a certain stage in the diagram above. But as a whole, the Microservices movement is now a pretty widespread, well-understood concept that is already turning into a commodity. There are many indications confirming that, starting from the number of publications, conferences, books, and confirmed success stories in production. No doubt about it, not any longer.

How did we get here?

The Microservices genesis started 5-6 years ago with Fred George and James Lewis from ThoughtWorks sharing their ideas. In the next few months ThoughtWorks did a lot of thinking, writing, and talking about it, while Netflix did a lot of hacking and created the first generation of Microservices libraries. Most of those libraries were still not very popular or usable by the wider developer community, and only pioneers and startup-minded companies would try them at times. Then SpringSource joined the bandwagon; they wrapped and packaged the Netflix libraries into products and made all the custom-built solutions accessible and easy to consume for Java developers. In the meantime, all this interest in Microservices drove further innovation, and containers were born. That brought in another wave of innovation, more funding, shuffling, and a new set of tools, which made the DevOps theory a practice.



Containers being the primary means for deploying Microservices soon created the need for container orchestration, i.e. Cloud Native platforms. And today, the Cloud Native landscape is in transition, taking its next shape. If you look around, there are multiple Cloud Native platforms, each of which started its journey from a different point in time and with a unique value proposition, but they are slowly converging on a common feature set, similar concepts, and even standards. For example, platforms such as AWS ECS, Kubernetes, Apache Mesos, and Cloud Foundry are approaching feature parity, each being feature rich, used in production, and offering comparable primitives. As you can see from the diagram above, what now becomes important as a technology strategy is to bet on platforms with open standards, open source, a large community, and a high chance of long-term success. That means, for example, choosing a container runtime that is OCI compliant, choosing a tracing tool based on the OpenTracing standard rather than a custom implementation, and supporting industry-standard logging and monitoring solutions, backed by companies that are good at commodity products.

Organization Types

According to Wardley, there are three types of people/teams/organizations, and each is good at certain stages of the evolution:
• Pioneers are good at exploring uncharted territories and undiscovered concepts. They bring crazy ideas to life.
• Settlers are good at turning a half-baked prototype into something useful for a larger audience. They build trust and understanding, and refine the concept. They turn the prototype into a product, make it manufacturable, and make it profitable.
• Town Planners are good at taking something and industrialising it, taking advantage of economies of scale. They build the trusted platforms of the future, which requires immense skill. They find ways to make things faster, better, smaller, more efficient, more economical, and good enough.




With this definition and the above table showing the characteristics of each type of organisation, we can make the following hypothetical classification:
• Netflix are definitely the Pioneers. The creative, pathfinder people they have, the way the company is set up around experimentation and uncertainty, their culture of freedom and responsibility, and everything they have brought into the Microservices world make them pioneers.
• SpringSource for me are more the settler type. They already had a popular Java stack, they managed to spot the trend in Microservices, and they created a good, consumable product in the form of Spring Boot and Spring Cloud.
• Amazon, Google, and Microsoft are the town planners. They may come late, but they come well prepared, with a long-term strategy defined, web-scale solutions, and unbeatable pricing. Platforms such as Kubernetes and ECS (not entirely sure about the latter, as it is pretty closed) are built on over 10 years of experience, intended to last long and become the industry standard.
One important takeaway from this section is that not everything invented by pioneers is meant for general consumption. Pioneers move fast, and unless your organisation has similar characteristics, it might be difficult to follow all the time. On the other hand, town planners create products and services which are interoperable and based on open standards. That in the longer term becomes an important axis of freedom.

    Conclusion

In the Microservices world, things are moving from the uncharted towards the industrialised direction. Most of the activities are no longer that chaotic, uncertain, and unpredictable. It is almost turning into a boring and dull activity to plan, design, and implement Microservices. And since it is an industrialised job with a low margin, the choice of tools and the ability to swap those platforms play a significant role.


Last but not least, a nice side effect of this evolution is that we should hear less about Conway's Law, two-pizza teams, and circuit breakers at conferences, and more about managing Microservices at scale, automation, business value, serverless, and new uncharted ideas from the pioneers in our industry.

    Spring Cloud for Microservices Compared to Kubernetes

    (This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.)
Spring Cloud and Kubernetes both claim to be the best environment for developing and running Microservices, but they are very different in nature and address different concerns. In this article we will look at how each platform helps deliver Microservice-based architectures (MSA), which areas each is good at, and how to use the best of both worlds in order to succeed on the Microservices journey.

    Background Story

Recently I read a great article about building Microservice architectures with Spring Cloud and Docker by A. Lukyanchikov. If you haven't read it, you should, as it gives a comprehensive overview of what it takes to create a simple Microservices-based system using Spring Cloud. In order to build a scalable and resilient Microservices system that could grow to tens or hundreds of services, it must be centrally managed and governed with the help of a tool set that has extensive build-time and runtime capabilities. With Spring Cloud, that involves implementing both functional services (such as a statistics service, account service, and notification service) and supporting infrastructure services (such as log analysis, a configuration server, service discovery, and an auth service). A diagram describing such an MSA using Spring Cloud is below:
    Infrastructure Services
    MSA with Spring Cloud (by A. Lukyanchikov)
    This diagram covers the runtime aspects of the system, but doesn't touch on the packaging, continuous integration, scaling, high availability, and self-healing which are also very important in the MSA world. Assuming that the majority of Java developers are familiar with Spring Cloud, in this article we will draw a parallel and see how Kubernetes relates to Spring Cloud by addressing these additional concerns.

    Microservices Concerns

Rather than doing a feature-by-feature comparison, let's take a look at wider Microservices concerns and see how Spring Cloud and Kubernetes approach them. The good thing about MSA today is that it is an architectural style with well-understood benefits and trade-offs. Microservices enable strong module boundaries, independent deployment, and technology diversity. But they come at the cost of developing distributed systems and significant operational overhead. A key success factor is to surround yourself with tools that will help you address as many MSA concerns as possible. Making the starting process quick and easy is important, but the journey to production is a long one, and you need to be this tall to get there.
    Microservices Concerns
In the diagram above, we can see a list of the most common technical concerns (we are not covering non-technical concerns such as organisation structure, culture, and so on) that have to be addressed in an MSA.

    Technology Mapping

    The two platforms, Spring Cloud and Kubernetes, are very different and there is no direct feature parity between them. If we map each MSA concern to the technology/project used to address it in both platforms, we come up with the following table.
    Spring Cloud and Kubernetes Technologies
    The main takeaways from the above table are:
    • Spring Cloud has a rich set of well integrated Java libraries to address all runtime concerns as part of the application stack. As a result, the Microservices themselves have libraries and runtime agents to do client side service discovery, load balancing, configuration update, metrics tracking, etc. Patterns such as singleton clustered services and batch jobs are managed in the JVM too.
    • Kubernetes is polyglot, doesn't target only the Java platform, and addresses the distributed computing challenges in a generic way for all languages. It provides services for configuration management, service discovery, load balancing, tracing, metrics, singletons, scheduled jobs on the platform level and outside of the application stack. The application doesn't need any library or agents for client side logic and it can be written in any language.
• In some areas both platforms rely on similar third-party tools, for example the ELK and EFK stacks, tracing libraries, etc. Some libraries, such as Hystrix and Spring Boot, are equally useful in both environments. There are areas where the two platforms are complementary and can be combined to create a more powerful solution (KubeFlix and Spring Cloud Kubernetes are such examples).

    Microservices Requirements

    In order to demonstrate the scope of each project, here is a table with (almost) end-to-end MSA requirements starting from the hardware on the bottom, up to the DevOps and self service experience at the top, and how it relates to Spring Cloud and Kubernetes platforms.
    Microservices Requirements
In some cases both projects address the same requirements using different approaches, and in some areas one project may be stronger than the other. But there is also a sweet spot where both platforms are complementary and can be combined for a superior Microservices experience. For example, Spring Boot provides Maven plugins for building single-jar application packages. Combined with Docker and the declarative Deployment and scheduling capabilities of Kubernetes, that makes running Microservices a breeze. Similarly, Spring Cloud has in-application libraries for creating resilient, fault-tolerant Microservices using Hystrix (with bulkhead and circuit breaker patterns) and Ribbon (for load balancing). But that alone is not enough, and when it is combined with Kubernetes health checks, process restarts, and auto-scaling capabilities, it turns Microservices into a truly antifragile system.

    Strengths and Weaknesses

Since the two platforms are not directly comparable feature by feature, rather than digging into each item, here are the (summarized) advantages and disadvantages of each platform.

    Spring Cloud

Spring Cloud provides tools for developers to quickly build some of the common patterns in distributed systems such as configuration management, service discovery, circuit breakers, routing, etc. It is built on top of the Netflix OSS libraries and is written in Java, for Java developers.

    Strengths

• The unified programming model offered by the Spring Platform itself, and the rapid application creation abilities of Spring Boot, give developers a great Microservice development experience. For example, with a few annotations you can create a Config Server, and with a few more annotations you can get the client libraries to configure your services.
• There is a rich selection of libraries covering the majority of runtime concerns. Since all libraries are written in Java, they offer multiple features, greater control, and fine-tuning options.
• The different Spring Cloud libraries are well integrated with one another. For example, a Feign client will also use Hystrix for circuit breaking and Ribbon for load-balancing requests. Everything is annotation driven, which makes development easy for Java developers.

    Weaknesses

• One of the major advantages of Spring Cloud is also its drawback: it is limited to Java only. A strong motivation for MSA is the ability to interchange technology stacks, libraries, and even languages when required. That is not possible with Spring Cloud. If you want to consume Spring Cloud/Netflix OSS infrastructure services such as configuration management, service discovery, or load balancing from other languages, the solution is not elegant: the Netflix Prana project implements the sidecar pattern to expose the Java-based client libraries over HTTP to applications written in non-JVM languages, but it is not very elegant.
• There is too much responsibility placed on Java developers and on the Java applications themselves. Each Microservice needs to run various clients for configuration retrieval, service discovery, and load balancing. It is easy to set those up, but that doesn't hide the build-time and runtime dependencies on the environment. For example, developers can create a Config Server with the @EnableConfigServer annotation easily, but that is only the happy path. Every time developers want to run a single Microservice, they need to have the Config Server up and running. For a controlled environment, developers have to think about making the Config Server highly available, and since it can be backed by Git or SVN, they need a shared file system for it. Similarly, for service discovery, developers need to start a Eureka Server first, and for a controlled environment they need to cluster it with multiple instances in each AZ, and so on. It feels as if Java developers have to build and manage a non-trivial Microservices platform in addition to implementing all the functional services.
• Spring Cloud alone has a narrower scope in the Microservices journey, and developers will also need to consider automated deployments, scheduling, resource management, process isolation, self healing, build pipelines, etc. for a complete Microservices experience. For this point, I think it is not fair to compare Spring Cloud alone to Kubernetes, and a fairer comparison would be between Spring Cloud + Cloud Foundry (or Docker Swarm) and Kubernetes. But that also means that for a complete end-to-end Microservices experience, Spring Cloud must be supplemented with an application platform like Kubernetes itself.

    Kubernetes

    Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It is polyglot and provides primitives for provisioning, running, scaling and managing distributed systems.

    Strengths

    • Kubernetes is a polyglot and language agnostic container management platform that is capable of running both cloud native and traditional containerized applications. The services it provides such as configuration management, service discovery, load balancing, metrics collection, log aggregation are consumable by a variety of languages. This allows having one platform in the organisation that can be used by multiple teams (including Java developers using Spring framework) and serve multiple purposes: application development, testing environments, build environments (to run source control system, build server, artifact repositories), etc.
• When compared to Spring Cloud, Kubernetes addresses a wider set of MSA concerns. In addition to providing runtime services, Kubernetes also lets you provision environments, set resource constraints and RBAC, manage the application lifecycle, and enable autoscaling and self healing (behaving almost like an antifragile platform).
• Kubernetes technology is based on Google's 15 years of R&D and experience of managing containers. In addition, with close to 1000 committers, it is one of the most active open source communities on GitHub.

    Weaknesses

• Kubernetes is polyglot, and as such its services and primitives are generic and not optimised for specific platforms the way Spring Cloud is for the JVM. For example, configurations are passed to applications as environment variables or a mounted file system (see the sketch after this list). It doesn't have the fancy configuration-updating capabilities offered by Spring Cloud Config.
• Kubernetes is not a developer-focused platform. It is intended to be used by DevOps-minded IT personnel. As such, Java developers need to learn some new concepts and be open to learning new ways of solving problems. Despite it being super easy to start a developer instance of Kubernetes using Minikube, there is significant operational overhead in installing a highly available Kubernetes cluster manually.
• Kubernetes is still a relatively new platform (two years old), and it is still actively developed and growing. Therefore, many new features are added with every release, which may be difficult to keep up with. The good news is that this has been anticipated, and the API is extensible and backward compatible.
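To illustrate the configuration point from the first bullet above, here is a minimal sketch of how Kubernetes typically passes configuration to an application: a ConfigMap whose keys are exposed as environment variables (they could equally be mounted as files). The names and values are made up.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: account-service-config         # hypothetical service configuration
data:
  DATASOURCE_URL: jdbc:postgresql://db/accounts
  FEATURE_X_ENABLED: "true"
---
# Fragment of the pod template consuming it: every key becomes an env variable.
containers:
- name: account-service
  image: example/account-service:1.1
  envFrom:
  - configMapRef:
      name: account-service-config
```

Unlike Spring Cloud Config, a change to the ConfigMap is not pushed into a running JVM: environment variables require a pod restart to pick up new values, while mounted files are refreshed only eventually.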

    Best of Both Worlds

As you have seen, both platforms have strengths in certain areas and things to improve upon in others. Spring Cloud is a quick-to-start, developer-friendly platform, whereas Kubernetes is DevOps friendly, with a steeper learning curve, but covers a wider range of Microservices concerns. Here is a summary of those points.
    Strengths and Weaknesses
Both frameworks address a different range of MSA concerns, and they do it in fundamentally different ways. The Spring Cloud approach tries to solve every MSA challenge inside the JVM, whereas the Kubernetes approach tries to make the problems disappear for developers by solving them at the platform level. Spring Cloud is very powerful inside the JVM, and Kubernetes is powerful at managing those JVMs. As such, it feels like a natural progression to combine them and benefit from the best parts of both projects.
    Spring Cloud backed by Kubernetes
With such a combination, Spring provides the application packaging, and Docker and Kubernetes provide the deployment and scheduling. Spring provides in-application bulkheading through Hystrix thread pools, and Kubernetes provides bulkheading through resource, process, and namespace isolation. Spring provides a health endpoint for every Microservice, and Kubernetes performs the health checks and routes traffic only to healthy services. Spring externalizes and updates configurations, and Kubernetes distributes those configurations to every Microservice. And the list goes on.
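As one concrete instance of that hand-over, if a Spring Boot service exposes the Actuator health endpoint, the Kubernetes side of the contract can be a simple readiness probe against it. This is only a sketch; the port, path, and timings are assumptions.

```yaml
# Fragment of a Deployment pod template for a Spring Boot service.
containers:
- name: account-service                 # hypothetical Spring Boot Microservice
  image: example/account-service:1.1
  readinessProbe:
    httpGet:
      path: /health                     # Spring Boot Actuator health endpoint
      port: 8080
    initialDelaySeconds: 20
    periodSeconds: 5                    # traffic is routed only while this passes
```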

    My Favourite Microservices Stack
What about my favourite Microservices platform? I like them both. I like the developer experience offered by the Spring framework: it is all annotation driven, and there are libraries covering all kinds of functional requirements. I also like Apache Camel (rather than Spring Integration) for anything to do with integration, connectors, messaging, routing, resilience, and fault tolerance at the application level. Then, for anything to do with clustering and managing multiple application instances, I prefer the magical Kubernetes powers. And whenever there is an overlap of functionality, such as for service discovery, load balancing, or configuration management, I try to use the polyglot primitives offered by Kubernetes.

    A geeky laptop skin

Recently we had a company meetup where the team leads talked about the Red Hat culture, mission, and values. Inspired by all the talks and the stickers on the table that day, I created my first laptop skin. This was the result:
    Red Hat - the Open Source Catalyst
In case you want to create something similar, here are the steps I took:

1. Pick some nerdy words. I picked all the middleware project names that Red Hat contributes to (I never realized there were that many). I removed some that are not so cool any longer (EJB3) and added a few other hot projects (Kubernetes, Apache Isis, OFBiz).

2. Created the tag cloud using the above words and the shadowman picture at Tagul. You can log in and edit my existing design for a quicker sticker.

3. Exported the image from step 2 and created a Mac skin at wrappz, which costs around £15. (The discount code Bilgin20 should give you 20% off and some royalties for me.)

    Attached is the transparent image used for the skin and the editable version is at Tagul.