Blogroll

Open Source Monetization Ecosystem Review

Open source is a distributed innovation model that lacks distributed funding. It allows individuals with a common passion to collaborate and produce value, but not capture it. It is a production factory without a sales counter. That is why many open source contributors are not getting a fair return. That is why many companies capture value from open source without giving anything back. That is why many independent open source builders use alternative means to fund themselves. That is why open source is not a business model. It is a production model, monetization not included. But there is hope, and there is change.

Open source monetization journey for individuals

Open source is an innovation model, and it is going to innovate its monetization too. There are new ways for fans to support the creative work of open source builders. There are ways to create online courses and monetize knowledge. There are new ways to create digital goods with accompanying services and sell them online for a fiver. Ways to start newsletters and make money from your audience. Ways to measure an open source contributor's merit, incentivize it, and trade it. Decentralized protocols for staking tokens and supporting open source through interest rather than donations.

99 Ways to Make Money with Open Source as an Individual



There is an open source monetization revolution happening right now and I'll explore the whole spectrum of open source monetization projects at Open Core Summit Digital. Join me on December 16th-18th where I will talk about "99 Ways to Make Money with Open Source as an Individual".

Must Read Free Kubernetes Books

There is a rise in offerings of free educational content, free software, and free cloud resources, all with the single goal of capturing the new kingmakers' attention. While this is a change for good, I predict it is not going to be enough, and this trend will only increase in the future. In a similar spirit, here I want to quickly share my favorite Kubernetes-related books offered free of charge. I've read them all and found them very useful at different stages of my learning. The list contains books that are sponsored and offered free of charge, but in most cases that is in exchange for your contact details. If you prefer not to give your details, you can always go to Amazon, buy the data-capture-free book, and support the authors (such as me) at the same time. I believe it is a privilege to work in this industry, and I'm thankful for these authors and companies offering the choice of free-of-charge books. Enjoy the list, and don't come back here saying “Hey, this is not free… why are you saying it is free… blah-blah”. Here are my best free Kubernetes picks, happy learning.

 

Must Read Free Kubernetes Books

There are also other honorable mentions, but these are shorter editions (fewer than 100 pages) that, for me, qualify as books written with marketing in mind. Yet I found them very useful and great value for money ;)

Hope you enjoy the list! Share it, and comment with your free picks!

Free and open source is not good enough

Lack of free

When I was at university, I had to pick between Java and .Net to direct my career. At the time, there were tons of Java books, docs, and even IDEs available for free, but only very few .Net books and tools for Microsoft's environment, and all of them were expensive for me. Living in a third-world country, I made my decision based on the available free learning resources.

Lack of attention

Today, we live in a different world. There is an endless amount of resources for learning, from blog posts to tutorials and online training, all for free. Not only that, but companies, in a desire to establish themselves as leaders around a tech domain, sponsor quality books and swag and give them away for free (some in exchange for your contact details). Software is developed as open source, enabling collaboration, but more importantly, developer adoption and talent recruiting. Meetups and developer conferences (although now virtual) are all about entertaining developers in an attempt to make them try and like a tech, i.e. a product. Development tools and enterprise software packages are offered free to developers. Cloud providers give free-forever resources. All in the name of attracting the new kingmakers' attention.

Lack of incentives

I say free is not good enough. Free books, free tools, free cloud resources are not good enough. Free pizza and drinks, free stickers, and a t-shirt are not good enough either.
You can give your time, but you can never get it back. And if companies want to get the developers’ time, they will have to pay for it. The price of free is your time.
If a company wants their story and vision heard by developers, they have to pay for it. If a company wishes to have their free and open source software tried out, they will have to pay for it. If a company wants their tools and services learned, used, adopted, they have to pay for it.

Free is too expensive

The value and success of a technology are determined, above all, by its adoption. The equivalent of adoption in the open source tech world is the community. Any project that has a large community can take over other projects and enable companies to capture value. You are not building only software; you are building software with a community. And to build a community, to get people's attention and time in the first place, you have to pay for it.

Image by Mona Tootoonchinia from Pixabay 

 
Today, the best example of incentivizing and building communities is the blockchain space. Whether through free token distribution to early users, referral programs, airdrops, bounties for bug fixes, competitions, badges, community tasks, or participation in beta programs, blockchain projects offer a value exchange for community attention. Whether the majority of these blockchain projects are a scam or not is a different topic and irrelevant here. But the fact that blockchain projects value community building and user adoption, and that they know how to bootstrap projects with communities from zero, is undeniable. We need similar mechanisms tailored for developers and general technology adoption. We need to value and pay for the new kingmakers' time and attention. We need to pay them to listen to our vision, to learn, and to try out our products. If you pay, they will come (and at least give your software a try).

Thank you for giving a few minutes of your time and attention, for free.

Choosing Between ActiveMQ and Kafka for Messaging Infrastructure

The term asynchronous means “not occurring at the same time”, and in the context of distributed systems and messaging it implies that the processing of a request occurs at an arbitrary point in time. There are many advantages of asynchronous interactions over synchronous ones, but also new challenges introduced by them. In this post, we will focus on a few specific considerations for choosing a suitable asynchronous messaging infrastructure for implementing event-driven systems. Let’s see a few of the subtle differences between asynchronous interaction styles.

Message Business Value

Not all messages are created equal. Some are valid and valuable only for a short period of time and become obsolete afterward. Some are valuable until they are consumed, regardless of the time passed. And some messages are valid and useful for repeated consumption. Considering the validity and the value of messages relative to time and consumption rate, we can classify the interaction styles between services into the following categories:


Message types by business value

Volatile

These are ephemeral messages whose value is time-bound: valuable now, but not in the near future. There is no point in storing events that will shortly be useless, and a messaging system with such characteristics gives the best performance and the lowest latency possible, as the disk is skipped. In such a scenario, the system is aware of the connected consumers, and an event is disseminated to all consumers online at the time of publication. If a consumer is disconnected, the messaging system forgets about it. What is important in such a system is the ability to handle a large number of dynamic clients with low-latency interaction needs, such as IoT devices.
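
As a sketch of what publishing volatile messages can look like, here is a JMS 2.0 producer that keeps messages off the disk and lets them expire quickly (the topic name and payload are made up for illustration):

```java
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSContext;
import javax.jms.Topic;

public class VolatilePublisher {
    // assumes a ConnectionFactory for your broker, e.g. ActiveMQ Artemis
    void publishReading(ConnectionFactory factory) {
        try (JMSContext context = factory.createContext()) {
            Topic telemetry = context.createTopic("device.telemetry");
            context.createProducer()
                   .setDeliveryMode(DeliveryMode.NON_PERSISTENT) // skip the disk
                   .setTimeToLive(5_000)                         // obsolete after 5 seconds
                   .send(telemetry, "{\"deviceId\": 7, \"temp\": 21.5}");
        }
    }
}
```

Only the consumers subscribed at the moment of publication receive the message; anyone connecting later never sees it.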

Durable

However, in some situations, you want the messaging system to be aware of the consumers and store the messages while a consumer is not available. That is the traditional message broker: it holds on to the messages for the consumers it knows about and allows them to re-connect and consume the events that were produced in their absence. Once an event is consumed by all the interested parties, the broker discards it. Here the broker knows about the registered consumers, and messages are stored durably until read by all of them. The goal is reliable messaging among services with strong ordering and delivery guarantees.
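
A minimal sketch of such a durable subscription with the JMS 2.0 API, assuming an ActiveMQ Artemis broker on its default port (client ID, topic, and subscription names are illustrative):

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class DurableSubscriber {
    void consume() {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext context = factory.createContext()) {
            context.setClientID("order-service"); // an identity the broker remembers
            Topic orders = context.createTopic("orders");
            // the broker stores messages for "orders-sub" while we are offline
            JMSConsumer consumer = context.createDurableConsumer(orders, "orders-sub");
            Message message = consumer.receive(5_000); // includes messages sent in our absence
        }
    }
}
```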

Replayable

Here, the messaging system is not aware of the consumers that are interested in the event. It simply stores the events published to a stream for some time or until capacity is reached. Then a consumer can come along at any time, connect and consume the events and perhaps replay the stream from the beginning. Consumers can move back and forth in the stream as required and replay the messages repeatedly. Here, the driving force is extreme scalability combined with the ability to replay messages for existing or new consumers.
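
With Apache Kafka, for example, a consumer can attach to a partition and rewind to the beginning at will. This sketch uses the standard Kafka client API (broker address, topic, and partition are assumptions of the example):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReplayConsumer {
    void replay() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partition =
                Collections.singletonList(new TopicPartition("orders", 0));
            consumer.assign(partition);          // the broker does not track this consumer
            consumer.seekToBeginning(partition); // replay the whole retained stream
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
        }
    }
}
```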

Message Semantics

Apart from the technical characteristics of the messages, it is important to distinguish the language we use, the semantic aspects, and the intent of the interactions. Some messages are targeted for a specific consumer and demand concrete actions. Some are querying the latest state of a system without requiring a state change. And some notify the world about a change that has happened in the source system. From a messaging semantic perspective, there are the following types of messages:



Message types by semantics

Command

A command is a request for an action that usually leads to a state change on a known target system. Typically there is a response indicating that the action was completed, and there might even be a result associated with it. When a response is expected, commands are typically implemented over synchronous protocols such as HTTP, but it is possible to implement request/response or fire-and-forget style commands over asynchronous messaging systems. With command-based asynchronous messages, there is some coupling between the source and the target systems in the form of command semantics.
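
A rough sketch of an asynchronous command with a request/reply contract over JMS, using a temporary queue for the response (queue names and payload are invented for the example):

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class CommandSender {
    void sendCommand(ConnectionFactory factory) throws JMSException {
        try (JMSContext context = factory.createContext()) {
            Queue commands = context.createQueue("billing.commands");
            TemporaryQueue replyTo = context.createTemporaryQueue();

            TextMessage command =
                context.createTextMessage("{\"action\": \"chargeCard\", \"orderId\": 42}");
            command.setJMSReplyTo(replyTo); // tell the target where to send the result
            context.createProducer().send(commands, command);

            // wait for confirmation of the state change (or time out)
            Message reply = context.createConsumer(replyTo).receive(10_000);
        }
    }
}
```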

Query

A query is like a command, but it is a read-only interaction that does not lead to a state change. By its very nature, a query expects a response, and it is common to see synchronous implementations here. But asynchronous, non-blocking implementations over messaging systems are common too, and even fire-and-forget style interactions for long-running operations where the response is written to a different location.

Event

An event is a notification that something has changed. A system sends event notifications to notify other systems of a change in its domain. An event is different from a command in that the event-emitting system often doesn’t expect an answer at all. In addition to being asynchronous, event messages are not targeted at a specific recipient, and thus they enable even further decoupling. Similar to other asynchronous interactions, events are implemented as messages on queues, which are often called streams. Martin Fowler covers the different types of events in-depth in this talk.
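
Publishing an event is then a pure fire-and-forget operation with no addressee, as in this illustrative Kafka producer snippet (broker, topic, and payload are assumptions):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventPublisher {
    void publish() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // no reply is expected and no recipient is named;
            // interested consumers subscribe to the stream on their own
            producer.send(new ProducerRecord<>("order-events", "order-42",
                    "{\"type\": \"OrderPlaced\", \"orderId\": 42}"));
        }
    }
}
```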

Summary

One approach you can take is to follow the Law of the Instrument approach defined by Maslow as “If the only tool you have is a hammer, treat everything as if it were a nail.” You could certainly use a classic message broker such as Apache ActiveMQ to implement the different interaction styles. It would be a familiar technology to many and easier to start with, but hard to implement some use cases with, such as replayable messaging. Or you could take the other extreme and try to use Apache Kafka for everything. It would require a larger amount of hardware resources and human effort to manage, but it would cover the replayable messaging and extreme scalability needs. While both of the above approaches are fine to start with, when you have a large number of services with different messaging needs, using the right tool for the right job is a better option. We can map the above-described messaging patterns to see which messaging infrastructure is best suited for each.

Mapping messaging subtleties to different messaging infrastructures

We at Red Hat love any open source technology. That is why we included Apache Qpid, Apache ActiveMQ Artemis, and Apache Kafka in our Red Hat AMQ product, letting the customer choose the right tool for the right job. There are many other aspects to consider when choosing the right tool; I hope this post gets you one step closer.
This post was originally published on Red Hat Developers. To read the original post, check here.

What is Application Performance Monitoring (APM)?

This is a guest post by freelance editor and copywriter Laila Mahran.

With Application Performance Monitoring, you can track key metrics about the performance of a web application in production. APM is often thought of as a ‘second wave’ of performance monitoring techniques, preceded by traditional host-based monitoring. Let’s dive in.
Host-based monitoring focuses on indicators such as:
  • Storage
  • Memory
  • CPU
  • Network utilization
Application monitoring goes a step further and focuses on the actual “end-user” metrics of an application in real-time such as:
  • Code-level errors
  • Slowdowns in response times
  • Error rates

How does this APM magic work?

There are multiple ways Application Performance Monitoring tools can function. Let’s look at the most common ways APM is used.
  • An agent process that is deployed alongside a web application that hooks into the application runtime to collect telemetry data from the process
  • Specialized web appliances that inspect Layer 7 traffic to generate telemetry
In combination with either mechanism, an external application can also generate synthetic traffic that is sent to the application at predefined intervals to measure its performance. When comparing APM tools to other monitoring types, the main difference to highlight is that the telemetry data is generated by inspecting the application runtime and the performance metrics it exposes.
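
As an illustration of runtime-generated telemetry, here is a sketch of manual instrumentation with the OpenTelemetry Java API, one common way such data is produced (the tracer and span names are made up):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;

public class CheckoutHandler {
    void handleRequest() {
        Tracer tracer = GlobalOpenTelemetry.getTracer("checkout-service");
        Span span = tracer.spanBuilder("charge-card").startSpan();
        try {
            // ... handle the request; duration is recorded on the span
        } catch (Exception e) {
            span.recordException(e); // code-level errors become telemetry
            throw e;
        } finally {
            span.end();
        }
    }
}
```

In practice, an APM agent typically injects equivalent instrumentation automatically, without code changes.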

Can APM help me?

Traditional host monitoring can make you feel stuck, no step closer to an answer. Application Performance Monitoring is designed to answer the questions that host metrics alone can’t. While understanding the raw resource utilization of your application is useful, it doesn’t give you a lot of information when you’re trying to track down why a specific request has high latency, why a particular transaction against your database is failing, or how your application performs under load.
Let’s take a look at common questions asked on a daily basis.
  1. What are the implications of this issue for the end-user experience?
  2. Where is this high latency coming from?
  3. What caused that outage?
  4. Why are we getting an error here?
  5. Why is this transaction failing?
  6. Can we find the root cause of this substandard user experience?
Have you asked yourself these questions before? If you’re nodding your head furiously, you can look to APM to provide the answer.

Monitoring vs. Management: What’s the difference?

Application Performance Management applies to a suite of applications while Application Performance Monitoring applies to a single application. An application performance management tool is able to aggregate and compare multiple types of metrics across multiple applications and services in order to pinpoint performance issues and regressions in your suite of applications. On the other hand, Application Performance Monitoring looks at the code-level to ensure each step is monitored thoroughly.

Is Network Monitoring different?

Network monitoring focuses on inspecting routers to detect issues with an application, or on collecting telemetry from network devices such as switches. If you’re looking to get a complete picture, network monitoring requires stitching together information from each network link. This approach doesn’t provide sufficient resolution or information for modern applications, however, especially when the application itself may be running behind a variety of proxies or service routers which are themselves running on virtualized networking equipment.

APM vs. Observability: What’s the difference?

You’ve heard the hype around observability, but how is it different from APM? Observability is a holistic approach to fully understanding your application performance, as well as a shared set of practices and terminology to help communicate performance across your organization. While observability helps you navigate from effect to cause, APM falls short of being able to answer “unknown unknowns,” questions that you didn’t think to ask ahead of time. This is why APM is currently being eclipsed by observability.
Observability is unique due to the capability of answering questions about modern, microservice-based application architectures where you will often contend with serverless components, polyglot services, and container-based deployments running on Kubernetes. Circling back, observability provides a shared language to standardize communication around performance. This way you’re able to focus on the measurement of service level objectives and service level indicators that are more broadly applicable and interpretable to your unique application architecture than simple throughput or health checks.

Is Application Performance Monitoring worth it?

Instead of depending on the second or third order metrics about host or network utilization to understand your application’s performance, APM collects real-time performance data from the perspective of an end-user. Another bonus: real-time results of database queries and page load times are provided with APM in a way that’s not possible with host-based monitoring. This information can be invaluable in understanding how your application performs under load or while trying to track down bugs in your software. APM solutions provide alerting systems to IT Operations, Site Reliability Engineers, DevOps, and more to quickly troubleshoot performance issues and slowdowns.

Operators and Sidecars Are the New Model for Software Delivery

Today’s developers are expected to develop resilient and scalable distributed systems. Systems that are easy to patch in the face of security concerns and easy to upgrade incrementally with low risk. Systems that benefit from the software reuse and innovation of the open source model. Achieving all of this, for different languages, using a variety of application frameworks with embedded libraries, is not possible.

Recently I’ve blogged about “Multi-Runtime Microservices Architecture” where I have explored the needs of distributed systems such as lifecycle management, advanced networking, resource binding, state abstraction and how these abstractions have been changing over the years. I also spoke about “The Evolution of Distributed Systems on Kubernetes” covering how Kubernetes Operators and the sidecar model are acting as the primary innovation mechanisms for delivering the same distributed system primitives.

On both occasions, the main takeaway is the prediction that the progression of software application architectures on Kubernetes moves towards the sidecar model managed by operators. Sidecars and operators could become a mainstream software distribution and consumption model, and in some cases even replace software libraries and frameworks as we know them.

The sidecar model allows the composition of applications written in different languages to deliver joint value, faster and without runtime coupling. Let’s see a few concrete examples of sidecars and operators, and then we will explore how this new software composition paradigm could impact us.

Out-of-Process Smarts on the Rise

In Kubernetes, a sidecar is one of the core design patterns achieved easily by organizing multiple containers in a single Pod. The Pod construct ensures that the containers are always placed on the same node and can cooperate by interacting over networking, file system or other IPC methods. And operators allow the automation, management and integration of the sidecars with the rest of the platform. The sidecars represent a language-agnostic, scalable data plane offering distributed primitives to custom applications. And the operators represent their centralized management and control plane.
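
For illustration, the smallest possible expression of the pattern is a Pod declaring two containers (the names and images below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app                # the custom business logic
      image: example/app:1.0
    - name: proxy-sidecar      # out-of-process distributed system smarts
      image: example/proxy:1.0
      # shares the Pod's network namespace, so the app reaches it on localhost
```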

Let’s look at a few popular manifestations of the sidecar model.

Envoy

Service meshes such as Istio, Consul, and others use transparent service proxies such as Envoy to deliver enhanced networking capabilities for distributed systems. Envoy improves security, enables advanced traffic management, improves resilience, and adds deep monitoring and tracing features. Not only that, it understands more and more Layer 7 protocols, such as Redis, MongoDB, MySQL, and most recently Kafka. It has also added response caching capabilities and even WebAssembly support that will enable all kinds of custom plugins. Envoy is an example of how a transparent service proxy adds advanced networking capabilities to a distributed system without including them in the runtime of the distributed application components.

Skupper

In addition to the typical service mesh, there are also projects, such as Skupper, that ship application networking capabilities through an external agent. Skupper solves multicluster Kubernetes communication challenges through a Layer 7 virtual network and offers advanced routing and connectivity capabilities. But rather than being embedded into the business service runtime, Skupper runs as one instance per Kubernetes namespace, acting as a shared sidecar.

Cloudstate

Cloudstate is another example of the sidecar model, this time for providing stateful abstractions for the serverless development model. It offers stateful primitives over gRPC for EventSourcing, CQRS, Pub/Sub, Key/Value stores, and other use cases. Again, it is an example of sidecars and operators in action, but this time for the serverless programming model.

Dapr

Dapr is a relatively young project started by Microsoft, and it also uses the sidecar model to provide developer-focused distributed system primitives. Dapr offers abstractions for state management, service invocation and fault handling, resource bindings, pub/sub, distributed tracing, and others. Even though there is some overlap in the capabilities provided by Dapr and a service mesh, the two are very different in nature. Envoy with Istio is injected and runs transparently to the service and represents an operational tool. Dapr, on the other hand, has to be called explicitly from the application runtime over HTTP or gRPC, and it is an explicit sidecar targeted at developers. It is a library for distributed primitives that is distributed and consumed as a sidecar, a model that may become very attractive for developers consuming distributed capabilities.
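
For a feel of what calling the sidecar explicitly looks like, here is a sketch that saves and reads state through Dapr’s HTTP state API (3500 is Dapr’s default HTTP port; the store name "statestore" and the key are assumptions of the example):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DaprStateClient {
    void saveAndLoad() throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // save state through the sidecar running next to the application
        HttpRequest save = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:3500/v1.0/state/statestore"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(
                "[{\"key\": \"order-42\", \"value\": {\"status\": \"PAID\"}}]"))
            .build();
        client.send(save, HttpResponse.BodyHandlers.discarding());

        // read it back; the sidecar talks to the actual state store for us
        HttpRequest load = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:3500/v1.0/state/statestore/order-42"))
            .build();
        String state = client.send(load, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```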

Camel K

Apache Camel is a mature integration library that is rediscovering itself on Kubernetes. Its subproject Camel K makes heavy use of the operator model to improve the developer experience and integrate deeply with the Kubernetes platform. While Camel K does not rely on a sidecar, through its CLI and operator it is able to reuse the same application container and execute any local code modification in a remote Kubernetes cluster in less than a second. This is another example of developer-targeted software consumption through the operator model.

More to Come

And these are only some of the pioneering projects exploring various approaches through sidecars and operators. There is more work being done to reduce the networking overhead introduced by container-based distributed architectures, such as the Data Plane Development Kit (DPDK), a userspace application that bypasses the layers of the Linux kernel networking stack and accesses the network hardware directly. There is work in the Kubernetes project to create sidecar containers with more granular lifecycle guarantees. There are new Java projects based on GraalVM, such as Quarkus, that reduce resource consumption and application startup time, making more workloads attractive for sidecars. All of these innovations will make the sidecar model more attractive and enable the creation of even more such projects.

Sidecars providing distributed systems primitives

I’d not be surprised to see projects coming up around more specific use cases: stateful orchestration of long-running processes, such as Business Process Model and Notation (BPMN) engines, in sidecars. Job schedulers in sidecars. Stateless integration engines, i.e. Enterprise Integration Patterns implementations, in sidecars. Data abstractions and data federation engines in sidecars. OAuth2/OpenID proxies in sidecars. Scalable database connection pools for serverless workloads in sidecars. Application networks as sidecars, etc. But why would software vendors and developers switch to this model? Let’s see a few of the benefits it provides.

Runtimes with Control Planes over Libraries

If you are a software vendor today, you have probably already considered offering your software to potential users as an API or a SaaS-based solution. This is the fastest software consumption model and a no-brainer to offer, when possible. Depending on the nature of the software, you may also be distributing it as a library or a runtime framework. Maybe it is time to consider whether it can be offered as a container with an operator too. This mechanism of distributing software, and the resulting architecture, have some very unique benefits that the library mechanism cannot offer.

Supporting Polyglot Consumers

By offering libraries that are consumable through open protocols and standards, you open them up to all programming languages. A library that runs as a sidecar and is consumable over HTTP using a text format such as JSON does not require any language-specific client library. Even when gRPC and Protobuf are used for low-latency and high-performance interactions, it is still easier to generate such clients than to include third-party custom libraries in the application runtime and implement certain interfaces.

Application Architecture Agnostic

The explicit sidecar architecture (as opposed to the transparent one) is a way of consuming software capabilities as a separate runtime behind a developer-focused API. It is an orthogonal feature that can be added to any application, whether that is monolithic, microservices, functions-based, actor-based, or anything in between. It can sit next to a monolith in a less dynamic environment, or next to every microservice in a dynamic cloud-based environment. It is trivial to create sidecars on Kubernetes, and doable on many other software orchestration platforms too.

Tolerant to Release Impedance Mismatch

Business logic is always custom and developed in-house. Distributed system primitives are well-known commodity features, consumed off-the-shelf as either platform features or runtime libraries. You might be consuming state abstractions, messaging clients, networking resiliency and monitoring libraries, etc., from third-party open source projects or companies. And these third parties have their own release cycles, critical fixes, and CVE patches that impact your software release cycles too. When third-party libraries are consumed as a separate runtime (sidecar), the upgrade process is simpler, as it is behind an API and is not coupled with your application runtime. The release impedance mismatch between your team and the vendors of the consumed third-party libraries becomes easier to manage.

Control Plane Included Mentality

When a feature is consumed as a library, it is included in your application runtime, and it becomes your responsibility to understand how it works and how to configure, monitor, tune, and upgrade it. That is because the language runtimes (such as the JVM) and the runtime frameworks (such as Spring Boot or application servers) dictate how a third-party library can be included, configured, monitored, and upgraded.
When a software capability is consumed as a separate runtime (such as a sidecar or a standalone container), it comes with its own control plane in the form of a Kubernetes operator.

That has a lot of benefits, as the control plane understands the software it manages (the operand) and comes with all the necessary management intelligence that otherwise would be distributed as documentation and best practices. What’s more, operators also integrate deeply with Kubernetes and offer a unique blend of platform integration and operand management intelligence out-of-the-box. Operators are created by the same developers who create the operands; they understand the internals of the containerized features and know how to operate them best. Operators are executable SREs in containers, and the number of operators and their capabilities is increasing steadily, with more operators and marketplaces coming up.

Software Distribution and Consumption in the Future

Software Distributed as Sidecars with Control Planes

Let’s say you are a software provider of a Java framework. You may distribute it as an archive or a Maven artifact. Maybe you have gone a step further and distribute a container image. In either case, in today’s cloud-native world, that is not good enough. The users still have to know how to patch and upgrade a running application with zero downtime. They have to know what to back up and how to restore its state. They have to know how to configure their monitoring and alerting thresholds. They have to know how to detect and recover from complex failures. They have to know how to tune an application based on the current load profile.

In all of these and similar scenarios, intelligent control planes in the form of Kubernetes operators are the answer. An operator encapsulates platform and domain knowledge of an application in a declaratively configured component to manage the workload.

Sidecars and operators could become a mainstream software distribution and consumption model, and in some cases even replace software libraries and frameworks as we know them.

Let’s assume that you are providing a software library that is included in consumer applications as a dependency. Maybe it is the client-side library of the backend framework described above. If it is in Java, for example, you may have certified it to run on a Java EE server, provided Spring Boot starters, builders, factories, and other implementations that are all hidden behind a clean Java interface. You may have even ported it to .Net too.

With Kubernetes operators and sidecars, all of that is hidden from the consumer. The factory classes are replaced by the operator, and the only configuration interface is a YAML file for the custom resource. The operator is then responsible for configuring the software and the platform so that users can consume it as an explicit sidecar or a transparent proxy. In all cases, your application is available for consumption over a remote API and is fully integrated with the platform features and even other dependent operators. Let’s see how that happens.
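
To make that concrete, consumption could be as small as applying a custom resource like the following, purely hypothetical one; the operator watches it and wires up the sidecar and the platform accordingly (every name in it is invented for illustration):

```yaml
apiVersion: example.com/v1alpha1   # hypothetical API group served by the operator
kind: MessagingClient
metadata:
  name: orders-client
spec:
  mode: sidecar            # or: transparent-proxy
  broker: orders-broker    # resolved and bound by the operator
  tls: enabled
```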

Software Consumed over Remote APIs Rather than Embedded Libraries

One way to think about sidecars is as something similar to the composition-over-inheritance principle in OOP, but in a polyglot context. It is a different way of organizing the application responsibilities: composing capabilities from different processes rather than including them in a single application runtime as dependencies. When you consume software as a library, you instantiate a class and call its methods, passing some value objects. When you consume it as an out-of-process capability, you access a local process. In this model, methods are replaced with APIs, in-process method invocations with HTTP or gRPC calls, and value objects with something like CloudEvents. This is a change from application servers to Kubernetes as the distributed runtime. A change from language-specific interfaces to remote APIs. From in-memory calls to HTTP, from value objects to CloudEvents, etc.
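
As a sketch of what replacing value objects with CloudEvents can look like on the wire, here is a CloudEvents HTTP binary-mode request, where the event attributes travel as ce-* headers (the endpoint URL, event type, and payload are assumptions):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

public class CloudEventEmitter {
    void emit() throws Exception {
        HttpRequest event = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8080/events")) // assumed sidecar endpoint
            .header("ce-specversion", "1.0")
            .header("ce-id", UUID.randomUUID().toString())
            .header("ce-type", "com.example.order.placed")
            .header("ce-source", "/orders")
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString("{\"orderId\": 42}"))
            .build();
        HttpClient.newHttpClient().send(event, HttpResponse.BodyHandlers.discarding());
    }
}
```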

This requires software providers to distribute containers and controllers to operate them. To create IDEs that are capable of building and debugging multiple runtime services locally. CLIs for quickly deploying code changes into Kubernetes and configuring the control planes. Compilers that can decide what to compile into a custom application runtime, what capabilities to consume from a sidecar, and what from the orchestration platform.

Software consumers and providers ecosystem

In the longer term, this will lead to the consolidation of standardized APIs that are used for the consumption of common primitives in sidecars. Rather than language-specific standards and APIs, we will have polyglot APIs. For example, rather than the Java Database Connectivity (JDBC) API, the caching API for Java (JCache), or the Java Persistence API (JPA), we will have polyglot APIs over HTTP using something like CloudEvents. Sidecar-centric APIs for messaging, caching, reliable networking, cron jobs and timer scheduling, resource bindings (connectors to other APIs and protocols), idempotency, Sagas, etc. And all of these capabilities will be delivered with the management layer included, in the form of operators, and even wrapped with self-service UIs. The operators are key enablers here, as they will make this even-more-distributed architecture easy to manage and self-operate on Kubernetes. The management interface of the operator is defined by the CustomResourceDefinition and represents another public-facing API that remains application-specific.

This is a big shift in mentality to a different way of distributing and consuming software, driven by the speed of delivery and operability. It is a shift from a single runtime to multi-runtime application architectures. It is a shift similar to what the hardware industry had to go through from single-core to multicore platforms when Moore’s law ended. It is a shift that is slowly happening by building all the elements of the puzzle: we have uniformly adopted and standardized containers, we have a de facto standard for orchestration through Kubernetes, possibly improved sidecars coming soon, rapid operator adoption, CloudEvents as a widely agreed standard, light runtimes such as Quarkus, etc. With the foundation in place, applications, productivity tools, practices, standardized APIs, and the ecosystem will come too.

This post was originally published at ​The New Stack here.

Multi-Runtime Microservices Architecture

Creating good distributed applications is not an easy task: such systems often follow the 12-factor app and microservices principles. They have to be stateless, scalable, configurable, independently released, containerized, automatable, and sometimes event-driven and serverless. Once created, they should be easy to upgrade and affordable to maintain in the long term. Finding a good balance among these competing requirements with today’s technology is still a difficult endeavor.
 
In this article, I will explore how distributed platforms are evolving to enable such a balance, and more importantly, what else needs to happen in the evolution of distributed systems to ease the creation of maintainable distributed architectures. If you prefer to see my talk on this very same topic, check out my QCon London recording at InfoQ.

Distributed application needs

For this discussion, I will group the needs of modern distributed applications into four categories — lifecycle, networking, state, and binding — and briefly analyze how they have been evolving in recent years.

Distributed application needs

Lifecycle

Let’s start with the foundation. When we write a piece of functionality, the programming language dictates the available libraries in the ecosystem, the packaging format, and the runtime. For example, Java uses the .jar format, all the Maven dependencies as an ecosystem, and the JVM as the runtime. Nowadays, with faster release cycles, what’s more important about lifecycle is the ability to deploy, recover from errors, and scale services in an automated way. This group of capabilities represents, broadly, our application lifecycle needs.

Networking

Almost every application today is a distributed application in some sense, and therefore needs networking. But modern distributed systems need to master networking from a wider perspective: starting with service discovery and error recovery, to enabling modern software release techniques, and all kinds of tracing and telemetry too. For our purpose, we will even include in this category the different message exchange patterns, point-to-point and pub/sub methods, and smart routing mechanisms.

State

When we talk about state, it is typically about the service state and why it is preferable to be stateless. But the platform itself that manages our services needs state. That is required for doing reliable service orchestration and workflows, distributed singletons, temporal scheduling (cron jobs), idempotency, stateful error recovery, caching, etc. All of the capabilities listed here rely on having state under the hood. While the actual state management is not in the scope of this post, the distributed primitives and their abstractions that depend on state are of interest.

Binding

The components of distributed systems not only have to talk to each other, but also integrate with modern or legacy external systems. That requires connectors that can convert various protocols, support different message exchange patterns (such as polling, event-driven, request/reply), transform message formats, and even perform custom error recovery procedures and security mechanisms.

Without going into one-off use cases, the above represent a good collection of common primitives required for creating good distributed systems. Today, many platforms offer such features, but what we are looking for in this article is how the way we use these features has changed in the last decade and how it will look in the next one. For comparison, let’s look at the past decade and see how Java-based middleware addressed these needs.

Traditional middleware limitations

One of the well-known traditional solutions satisfying an older generation of the above-listed needs is the Enterprise Service Bus (ESB) and its variants, such as Message Oriented Middleware, lighter integration frameworks, and others. An ESB is a middleware that enables interoperability among heterogeneous environments using a service-oriented architecture (i.e. classical SOA).

While an ESB would offer you a good feature set, the main challenge with ESBs was the monolithic architecture and tight technological coupling between business logic and platform, which led to technological and organizational centralization. When a service was developed and deployed into such a system, it was deeply coupled with the distributed system framework, which in turn limited the evolution of the service. This often only became apparent later in the life of the software.

Here are a few of the issues and limitations in each category of needs that make ESBs less useful in the modern era.

Lifecycle

In traditional middleware, there is usually a single supported language runtime (such as Java), which dictates how the software is packaged, what libraries are available, how often they have to be patched, etc. The business service has to use these libraries, which tightly couple it with the platform that is written in the same language. In practice, that leads to coordinated service and platform upgrades, which prevent independent and regular service and platform releases.

Networking

While a traditional piece of middleware has an advanced feature set focused around interaction with other internal and external services, it has a few major drawbacks. The networking capabilities are centered around one primary language and its related technologies. For the Java language, that is JMS, JDBC, JTA, etc. More importantly, the networking concerns and semantics are deeply engraved into the business service as well. There are libraries with abstractions to cope with the networking concerns (such as the once-popular Hystrix project), but the library’s abstractions “leak” into the service through its programming model, exchange patterns, error handling semantics, and the library itself. While it is handy to code and read the whole business logic mixed with networking aspects in a single location, this tightly couples both concerns into a single implementation and, ultimately, a joint evolutionary path.

State

To do reliable service orchestration and business process management, and to implement patterns such as the Saga pattern and other slow-running processes, platforms require persistent state behind the scenes. Similarly, temporal actions, such as firing timers and cron jobs, are built on top of state and require a database that is clustered and resilient in a distributed environment. The main constraint here is the fact that the libraries and interfaces interacting with state are not completely abstracted and decoupled from the service runtime. Typically these libraries have to be configured with database details, and they live within the service, leaking the semantics and dependency concerns into the application domain.

Binding

One of the main drivers for using integration middleware is the ability to connect to various other systems using different protocols, data formats, and message exchange patterns. And yet, the fact that these connectors have to live together with the application means the dependencies have to be updated and patched together with the business logic. It means the data types and data formats have to be converted back and forth within the service. It means the code has to be structured and the flow designed according to the message exchange patterns. These are a few examples of how even abstracted endpoints influence the service implementation in traditional middleware.

Cloud-native tendencies

Traditional middleware is powerful. It has all the necessary technical features, but it lacks the ability to change and scale rapidly, which is demanded by modern digital business needs. This is what the microservices architecture and its guiding principles for designing modern distributed applications are addressing.

The ideas behind microservices and their technical requirements contributed to the popularization and widespread use of containers and Kubernetes. That started a new wave of innovation that is going to influence the way we approach distributed applications for years to come. Let’s see how Kubernetes and the related technologies affect each group of requirements.

Lifecycle

Containers and Kubernetes evolved the way we package, distribute, and deploy applications into a language-independent format. There is a lot written about the Kubernetes Patterns and the Kubernetes Effect on developers, so I will keep it short here. Notice, though, that for Kubernetes, the smallest primitive to manage is the container, and it is focused on delivering distributed primitives at the container level and the process model. That means it does a great job of managing the lifecycle aspects of the applications, such as health checks, recovery, deployment, and scaling, but it doesn’t do such a good job of improving the other aspects of distributed applications which live inside the container, such as flexible networking, state management, and bindings.

You may point out that Kubernetes has stateful workloads, service discovery, cron jobs, and other capabilities. That is true, but all of these primitives are at the container level, and inside the container, a developer still has to use a language-specific library to access the more granular capabilities we listed at the beginning of this article. That is what drives projects like Envoy, Linkerd, Consul, Knative, Dapr, Camel-K, and others.

Networking

It turns out that the basic networking functionality around service discovery provided by Kubernetes is a good foundation, but not enough for modern applications. With the increasing number of microservices and the faster pace of deployments, the need for more advanced release strategies, managing security, metrics, tracing, recovery from errors, simulating errors, etc., without touching the service, has become increasingly appealing and created a new category of software on its own, called service mesh.

What is more exciting here is the tendency of moving the networking-related concerns out of the service containing the business logic and into a separate runtime, whether that is a sidecar or a node-level agent. Today, service meshes can do advanced routing, help with testing, handle certain aspects of security, and even speak application-specific protocols (for example, Envoy supports Kafka, MongoDB, Redis, MySQL, etc.). While service mesh, as a solution, might not have wide adoption yet, it has touched a real pain point in distributed systems, and I’m convinced it will find its shape and form of existence.

In addition to the typical service mesh, there are also other projects, such as Skupper, that confirm the tendency of putting networking capabilities into an external runtime agent. Skupper solves multi-cluster communication challenges through a Layer 7 virtual network and offers advanced routing and connectivity capabilities. But rather than being embedded into the business service runtime, it runs as one instance per Kubernetes namespace, acting as a shared sidecar.

To sum up, containers and Kubernetes made a major step forward in the lifecycle management of applications. Service mesh and related technologies hit a real pain point and set the foundation for moving more responsibilities out of the application and into proxies. Let’s see what’s next.

State

We listed earlier the main integration primitives that rely on state. Managing state is hard and should be delegated to specialized storage software and managed services. That is not the topic here; using state through language-neutral abstractions to aid integration use cases is. Today, many efforts try to offer stateful primitives behind language-neutral abstractions. Stateful workflow management is a mandatory capability in cloud-based services, with examples such as AWS Step Functions, Azure Durable Functions, etc. In container-based deployments, Cloudstate and Dapr both rely on the sidecar model to offer better decoupling of the stateful abstractions in distributed applications.

What I look forward to is abstracting away all of the stateful features listed above into a separate runtime as well. That would mean workflow management, singletons, idempotency, transaction management, cron job triggers, and stateful error handling all happening reliably in a sidecar (or a host-level agent), rather than living within the service. The business logic doesn’t need to include such dependencies and semantics in the application; it can declaratively request such behavior from the binding environment. For example, a sidecar can act as a cron job trigger, idempotent consumer, and workflow manager, and the custom business logic can be invoked as a callback or plugged in at certain stages of the workflow, error handling, temporal invocations, unique idempotent requests, etc.

Another stateful use case is caching. Whether that is request caching performed by the service mesh layer, or data caching with something like Infinispan, Redis, Hazelcast, etc., there are examples of pushing the caching capabilities out of the application’s runtime.

Binding

While we are on the topic of decoupling all distributed needs from the application runtime, the tendency continues with bindings too. Connectors, protocol conversions, message transformations, error handling, and security mediation could all move out of the service runtime. We are not there yet, but there are attempts in this direction with projects such as Knative and Dapr. Moving all of these responsibilities out of the application runtime will lead to much smaller, business-logic-focused code. Such code would live in a runtime independent from the distributed system needs, which can be consumed as prepackaged capabilities.

Another interesting approach is taken by the Apache Camel K project. Rather than using an agent runtime to accompany the main application, this project relies on an intelligent Kubernetes operator that builds application runtimes with additional platform capabilities from Kubernetes and Knative. Here, the single agent is the operator, which is responsible for including the distributed system primitives required by the application. The difference is that some of the distributed primitives are added to the application runtime and some are enabled in the platform (which could include a sidecar as well).

Future architecture trends

Looking broadly, we can conclude that the commoditization of distributed applications, by moving features to the platform level, reaches new frontiers. In addition to the lifecycle, now we can observe networking, state abstraction, declarative eventing, and endpoint bindings also available off-the-shelf, and EIPs are next on this list. Interestingly enough, the commoditization is using the out-of-process model (sidecars) for feature extension rather than runtime libraries or pure platform features (such as new Kubernetes features).

We are now coming full circle by moving all of the traditional middleware features (a.k.a. ESBs) into other runtimes, and soon all we will have to do in our service is write the business logic.

Traditional middleware and cloud-native platforms overview

Compared to the traditional ESB era, this architecture decouples the business logic from the platform better, but not yet fully. Many distributed primitives, such as the classic enterprise integration patterns (EIPs): splitter, aggregator, filter, content-based router; and streaming processing patterns: map, filter, fold, join, merge, sliding windows; still have to be included in the business logic runtime, and many others depend on multiple distinct and overlapping platform add-ons.

If we stack up the various cloud-native projects innovating at the different domains, we end up with a picture such as the following:

Multi-runtime microservices

The diagram here is for illustration purposes only; it purposefully picks representative projects and maps them to a category of distributed primitives. In practice, you will not use all of these projects at the same time, as some of them are overlapping and have incompatible workload models. How should we interpret this diagram?

  • Kubernetes and containers made a huge leap in the lifecycle management of polyglot applications and set the foundation for future innovations.
  • Service mesh technologies improved on Kubernetes with advanced networking capabilities and started tapping into the application concerns.
  • While Knative is primarily focused on serverless workloads through rapid scaling, it also addresses service orchestration and event-driven binding needs.
  • Dapr builds on the ideas of Kubernetes, Knative, and Service Mesh and dives into the application runtimes to tackle stateful workloads, binding, and integration needs, acting as a modern distributed middleware.

This diagram is to help you visualize that, most likely in the future, we will end up using multiple runtimes to implement the distributed systems. Multiple runtimes, not because of multiple microservices, but because every microservice will be composed of multiple runtimes, most likely two — the custom business logic runtime and the distributed primitives runtime.

Introducing multi-runtime microservices

Here is a brief description of the multi-runtime microservices architecture that is beginning to form.

Do you remember the movie Avatar and the Amplified Mobility Platform (AMP) "mech suits" developed by scientists to go out into the wilderness to explore Pandora? This multi-runtime architecture is similar to these Mecha-suits that give superpowers to their humanoid drivers. In the movie you have humans putting on suits to gain strength and access destructive weapons. In this software architecture, you have your business logic (referred to as micrologic) forming the core of the application and the sidecar mecha component that offers powerful out-of-the-box distributed primitives. The micrologic combined with the mecha capabilities form a multi-runtime microservice which is using out-of-process features for its distributed system needs. And the best part is, Avatar 2 is coming out soon to help promote this architecture. We can finally replace vintage sidecar motorcycles with awesome mecha pictures at all software conferences ;-). Let’s look at the details of this software architecture next.

This is a two-component model similar to a client-server architecture, where every component is a separate runtime. It differs from a pure client-server architecture in that both components are located on the same host, with reliable networking between them that is not a concern. Both components are equal in importance, and they can initiate actions in either direction and act as the client or the server. One of the components is called Micrologic, and it holds the very minimal business logic, stripped of almost all of the distributed system concerns. The other accompanying component is the Mecha, and it provides all of the distributed system features we have been talking about throughout the article (except lifecycle, which is a platform feature).

Multi-runtime (out-of-process) microservices architecture

There might be a one-to-one deployment of the Micrologic and the Mecha (known as the sidecar model), or one shared Mecha with a few Micrologic runtimes. The first model is most appropriate in environments such as Kubernetes, and the latter in edge deployments.

Micrologic runtime characteristics

Let’s briefly explore some of the characteristics of the Micrologic runtime:

  • The Micrologic component is not a microservice on its own. It contains the business logic that a microservice would have, but that logic can only work in combination with the Mecha component. On the other hand, microservices are self-contained and do not have pieces of the overall functionality or part of the processing flow spread into other runtimes. The combination of a Micrologic and its Mecha counterpart form a Microservice.
  • This is not a function or serverless architecture either. Serverless is mostly known for its managed rapid scaling up and scale-to-zero capabilities. In the serverless architecture, a function implements a single operation as that is the unit of scalability. In that regard, a function is different from a Micrologic which implements multiple operations, but the implementation is not end-to-end. Most importantly, the implementation of the operations is spread over the Mecha and the Micrologic runtimes.
  • This is a specialized form of client-server architecture, optimized for the consumption of well-known distributed primitives without coding. Also, if we assume that the Mecha plays the server role, then each instance has to be specifically configured to work with the individual client(s). It is not a generic server instance aiming to support multiple clients at the same time as a typical client-server architecture.
  • The user code in the Micrologic does not interact directly with other systems and does not implement any distributed system primitives. It interacts with the Mecha over de facto standards, such as HTTP/gRPC, CloudEvents spec, and the Mecha communicates with other systems using enriched capabilities and guided by the configured steps and mechanisms.
  • While the Micrologic is responsible only for implementing the business logic stripped out of distributed system concerns, it still has to implement a few APIs at a minimum. It has to allow the Mecha and the platform to interact with it over predefined APIs and protocols (for example, by following the cloud-native design principles for Kubernetes deployments).

Mecha runtime characteristics

Here are some of the Mecha runtime characteristics:

  • The Mecha is a generic, highly configurable, reusable component offering distributed primitives as off-the-shelf capabilities.
  • Each instance of the Mecha has to be configured to work with one Micrologic component (the sidecar model) or configured to be shared with a few components.
  • The Mecha does not make any assumption about the Micrologic runtime. It works with polyglot microservices or even monolithic systems using open protocols and formats, such as HTTP/gRPC, JSON, Protobuf, CloudEvents.
  • The Mecha is configured declaratively with simple text formats, such as YAML or JSON, which dictate which features to enable and how to bind them to the Micrologic endpoints (see the sketch after this list). For specialized API interactions, the Mecha can additionally be supplied with specs, such as OpenAPI, AsyncAPI, ANSI-SQL, etc. For stateful workflows composed of multiple processing steps, a spec such as the Amazon States Language can be used. For stateless integrations, Enterprise Integration Patterns (EIPs) can be used with an approach similar to the Camel-K YAML DSL. The key point here is that all of these are simple, text-based, declarative, polyglot definitions that the Mecha can fulfill without coding. Notice that these are futuristic predictions; currently, there are no Mechas for stateful orchestration or EIPs, but I expect existing Mechas (Envoy, Dapr, Cloudstate, etc.) to start adding such capabilities soon. The Mecha is an application-level distributed primitives abstraction layer.
  • Rather than depending on multiple agents for different purposes, such as network proxy, cache proxy, binding proxy, there might be a single Mecha providing all of these capabilities. The implementation of some capabilities, such as storage, message persistence, caching, etc., would be plugged in and backed by other cloud or on-premise services.
  • Some distributed system concerns, such as lifecycle management, make more sense provided by the managing platform (such as Kubernetes or other cloud services) rather than by the Mecha runtime, using generic open specifications such as the Open App Model.
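To make this more tangible, here is a minimal sketch of what such a declarative Mecha configuration can look like, modeled on the general shape of a Dapr component definition (the component name, Redis address, and secret reference are illustrative assumptions, not taken from the article):

```yaml
# A hypothetical Mecha configuration: bind a state-store capability
# to the Micrologic, backed by a pluggable Redis service.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore              # illustrative component name
spec:
  type: state.redis             # the distributed primitive to enable
  version: v1
  metadata:
    - name: redisHost
      value: redis-master:6379  # illustrative backing service address
    - name: redisPassword
      secretKeyRef:
        name: redis-secret      # illustrative secret reference
        key: password
```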

What are the main benefits of this architecture?

The main benefit is loose coupling between the business logic and the ever-growing list of distributed system concerns. These two elements of software systems have completely different dynamics. The business logic is always unique, custom code, written in-house. It changes frequently, depending on your organizational priorities and ability to execute. On the other hand, the distributed primitives are the ones addressing the concerns listed in this post, and they are well known. They are developed by software vendors and consumed as libraries, containers, or services. This code changes depending on vendor priorities, release cycles, security patches, open source governing rules, etc. Both groups have little visibility and control over each other.

Business logic and distributed system concerns coupling in application architectures

Microservices principles help decouple the different business domains by bounded contexts where every microservice can evolve independently. But the microservices architecture does not address the difficulties coming from coupling the business logic with middleware concerns. For certain microservices that are light on integration use cases, this might not be a big factor. But if your domain involves complex integrations (which is increasingly becoming the case for everybody), following the microservices principles will not protect you from coupling with the middleware. Even if the middleware is represented as libraries you include in your microservices, the moment you start migrating and changing these libraries, the coupling will become apparent. And the more distributed primitives you need, the more coupled to the integration platform you become. Consuming middleware as a separate runtime/process over a predefined API, rather than as a library, helps achieve loose coupling and enables the independent evolution of each component.

This is also a better way for vendors to distribute and maintain complex middleware software. As long as the interactions with the middleware are over inter-process communication involving open APIs and standards, the software vendors are free to release patches and upgrades at their own pace. And the consumers are free to use their preferred languages, libraries, runtimes, deployment methods, and processes.

What are the main drawbacks of this architecture?

Inter-process communication. Because the business logic and the middleware mechanics (you see where the name comes from) of the distributed system live in different runtimes, interactions require an HTTP or gRPC call rather than an in-process method call. Notice, though, that this is not a network call that is supposed to go to a different machine or datacenter. The Micrologic runtime and the Mecha are supposed to be colocated on the same host, with low latency and a minimal likelihood of network issues.

Complexity. The next question is whether the gained benefits are worth the complexity of developing and maintaining such systems. I think the answer will increasingly lean towards yes. The requirements of distributed systems and the pace of release cycles are increasing, and this architecture optimizes for that. I wrote some time ago that the developers of the future will need hybrid development skills, and this architecture confirms and further reinforces that trend. Part of the application will be written in a higher-level programming language, and part of the functionality will be provided by off-the-shelf components that have to be configured declaratively. Both parts are interconnected not at compile time, nor through in-process dependency injection at startup time, but at deployment time, through inter-process communication. This model enables a higher rate of software reuse combined with a faster pace of change.

What comes after microservices are not functions

Microservices architecture has a clear goal. It optimizes for change. By splitting applications into business domains, this architecture offers the optimal service boundary for software evolution and maintainability through services that are decoupled, managed by independent teams, and released at an independent pace.

If we look at the programming model of the serverless architecture, it is primarily based on functions. Functions are optimized for scalability. With functions, we split every operation into an independent component so that it can scale rapidly, independently, and on demand. In this model, the deployment granularity is a function, and the function is chosen because it is the code construct with an input whose rate correlates directly to the scaling behavior. This is an architecture that is optimized for extreme scalability rather than for the long-term maintainability of complex systems.

What about the other aspect of serverless, which comes from the popularity of AWS Lambda and its fully managed operational nature? In this regard, "AWS Serverless" optimizes for speed of provisioning at the expense of control and lock-in. But the fully managed aspect is not an application architecture; it is a software consumption model. It is an orthogonal functionality, similar to consuming a SaaS-based platform, which in an ideal world should be available for any kind of architecture, whether that is monolithic, microservices, mecha, or functions. In many ways, AWS Lambda resembles a fully managed Mecha architecture with one big difference: the Mecha does not enforce the function model; instead, it allows more cohesive code constructs around the business domain, split from all middleware concerns.

Application architecture optimizations

The Mecha architecture, on the other hand, optimizes microservices for middleware independence. While microservices are independent of each other, they are heavily dependent on embedded distributed primitives. The Mecha architecture splits these two concerns into separate runtimes, allowing their independent release by independent teams. This decoupling improves day-2 operations (such as patching and upgrades) and the long-term maintainability of the cohesive units of business logic. In this regard, the Mecha architecture is a natural progression of the microservices architecture, splitting software along the boundaries that cause the most friction. That optimization provides more benefits in the form of software reuse and evolution than the function model, which optimizes for extreme scalability at the expense of over-distribution of code.

Conclusion

Distributed applications have many requirements. Creating effective distributed systems requires multiple technologies and a good approach to integration. While traditional monolithic middleware provided all of the necessary technical features required by distributed systems, it lacked the ability to change, adapt, and scale rapidly, which was required by the business. This is why the ideas behind microservices-based architectures contributed to the rapid popularization of containers and Kubernetes; with the latest developments in the cloud-native space, we are now coming full circle by moving all of the traditional middleware features into the platform and off-the-shelf auxiliary runtimes.

This commoditization of application features is primarily using the out-of-process model for feature extension, rather than runtime libraries or pure platform features. That means that in the future it is highly likely that we will use multiple runtimes to implement distributed systems. Multiple runtimes, not because of multiple microservices, but because every microservice will be composed of multiple runtimes; a runtime for the custom micro business logic, and an off-the-shelf, configurable runtime for distributed primitives.

This article was originally published on InfoQ here.

Top 10 must-know Kubernetes design patterns

Here are the must-know top 10 design patterns for beginners synthesized from the Kubernetes Patterns book. Getting familiar with these patterns will help you understand foundational Kubernetes concepts, which in turn will help you in discussions and when designing Kubernetes-based applications. There are many important concepts in Kubernetes, but these are the most important ones to start with:

Top 10 most common Kubernetes Patterns

To help you understand, the patterns are organized into a few categories below, inspired by the Gang of Four’s design patterns.

Foundational patterns

These patterns represent the principles and best practices that containerized applications must comply with in order to become good cloud-native citizens. Regardless of the application’s nature, you should aim to follow these guidelines. Adhering to these principles will help ensure that your applications are suitable for automation on Kubernetes.

Health Probe pattern

Health Probe dictates that every container should implement specific APIs to help the platform observe and manage the application in the healthiest way possible. To be fully automatable, a cloud-native application must be highly observable by allowing its state to be inferred so that Kubernetes can detect whether the application is up and ready to serve requests. These observations influence the life-cycle management of Pods and the way traffic is routed to the application.
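As a minimal sketch, this is how the two probe types are typically declared on a container (the image name and endpoint paths below are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
    - name: random-generator
      image: k8spatterns/random-generator:1.0  # placeholder image
      livenessProbe:
        httpGet:                 # Kubernetes checks whether the app is up
          path: /actuator/health
          port: 8080
        initialDelaySeconds: 30
      readinessProbe:
        httpGet:                 # Kubernetes checks whether the app can serve traffic
          path: /readiness
          port: 8080
```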

Predictable Demands pattern

Predictable Demands explains why every container should declare its resource profile and stay confined to the indicated resource requirements. The foundation of successful application deployment, management, and coexistence on a shared cloud environment is dependent on identifying and declaring the application’s resource requirements and runtime dependencies. This pattern describes how you should declare application requirements, whether they are hard runtime dependencies or resource requirements. Declaring your requirements is essential for Kubernetes to find the right place for your application within the cluster.
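For example, a container's resource profile is declared with requests (used by the scheduler) and limits (enforced at runtime); the sketch below uses illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frequency-generator      # illustrative Pod name
spec:
  containers:
    - name: app
      image: example/app:1.0     # placeholder image
      resources:
        requests:                # minimum guaranteed resources, used for scheduling
          cpu: 100m
          memory: 128Mi
        limits:                  # upper bound the container may consume
          cpu: 200m
          memory: 256Mi
```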

Automated Placement pattern

Automated Placement explains how to influence workload distribution in a multi-node cluster. Placement is the core function of the Kubernetes scheduler for assigning new Pods to nodes satisfying container resource requests and honoring scheduling policies. This pattern describes the principles of Kubernetes’ scheduling algorithm and the way to influence the placement decisions from the outside.
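The simplest way to influence placement from the outside is a nodeSelector; here is a minimal sketch using an illustrative node label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-processor           # illustrative Pod name
spec:
  nodeSelector:
    disktype: ssd                # schedule only on nodes labeled disktype=ssd
  containers:
    - name: app
      image: example/app:1.0     # placeholder image
```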

Structural patterns

Having good cloud-native containers is the first step, but not enough. Reusing containers and combining them into Pods to achieve the desired outcome is the next step. The patterns in this category are focused on structuring and organizing containers in a Pod to satisfy different use cases. They result from the forces that affect containers in Pods.

Init Container pattern

Init Container introduces a separate life cycle for initialization-related tasks, distinct from the main application containers, enabling separation of concerns. This pattern introduces a fundamental Kubernetes concept that is used in many other patterns when initialization logic is required.
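A minimal sketch of the two life cycles: the init container below must run to completion before the application container starts (names, images, and the URL are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: download-config             # runs to completion before the app starts
      image: busybox:1.36
      command: ['sh', '-c', 'wget -O /work/config.json http://config-service/app']
      volumeMounts:
        - name: workdir
          mountPath: /work
  containers:
    - name: app                         # starts only after all init containers succeed
      image: example/app:1.0            # placeholder image
      volumeMounts:
        - name: workdir
          mountPath: /config
  volumes:
    - name: workdir
      emptyDir: {}
```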

Sidecar pattern

Sidecar describes how to extend and enhance the functionality of a pre-existing container without changing it. This pattern is one of the fundamental container patterns that allows single-purpose containers to cooperate closely.
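A minimal sketch: the main container serves content while a sidecar keeps that content up to date through a shared volume (images and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: app                      # main container, unchanged by the sidecar
      image: nginx:1.25
      volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html
    - name: git-sync                 # sidecar enhancing the main container
      image: example/git-sync:1.0    # placeholder image
      volumeMounts:
        - name: content
          mountPath: /data
  volumes:
    - name: content
      emptyDir: {}
```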

Behavioral patterns

These patterns describe the life-cycle guarantees of the Pods ensured by the managing platform. Depending on the type of workload, a Pod might run until completion as a batch job or be scheduled to run periodically. It might run as a daemon service or singleton. Picking the right life-cycle management primitive will help you run a Pod with the desired guarantees.

Batch Job pattern

Batch Job describes how to run an isolated, atomic unit of work until completion. This pattern is suited for managing isolated atomic units of work in a distributed environment.
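A minimal Job sketch (the name and counts are illustrative): the Pods run until the work completes, and Kubernetes retries on failure:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: data-import
spec:
  completions: 5            # the unit of work must complete five times
  parallelism: 2            # at most two Pods run concurrently
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: importer
          image: example/importer:1.0   # placeholder image
```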

Stateful Service pattern

Stateful Service describes how to create and manage distributed stateful applications with Kubernetes. Such applications require features such as persistent identity, networking, storage, and ordinality. The StatefulSet primitive provides these building blocks with strong guarantees ideal for the management of stateful applications.
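A minimal StatefulSet sketch showing the building blocks for identity and storage (names and sizes are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database        # headless Service giving each Pod a stable DNS name
  replicas: 3                  # Pods get ordinal identities: database-0, database-1, ...
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: db
          image: example/db:1.0       # placeholder image
  volumeClaimTemplates:               # each replica gets its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```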

Service Discovery pattern

Service Discovery explains how clients can discover and access the instances providing application services. For this purpose, Kubernetes provides multiple mechanisms, depending on whether the service consumers and producers are located on or off the cluster.
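Within the cluster, the primary mechanism is a Service that gives consumers one stable name in front of a changing set of Pods; a minimal sketch with illustrative names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: random-generator      # consumers resolve this DNS name
spec:
  selector:
    app: random-generator     # routes traffic to Pods carrying this label
  ports:
    - port: 80                # port exposed by the Service
      targetPort: 8080        # port the application container listens on
```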

Higher-level patterns

The patterns in this category are more complex and represent higher-level application management patterns. Some of the patterns here (such as Controller) are timeless, and Kubernetes itself is built on top of them.

Controller pattern

Controller is a pattern that actively monitors and maintains a set of Kubernetes resources in a desired state. The heart of Kubernetes itself consists of a fleet of controllers that regularly watch and reconcile the current state of applications with the declared target state. This pattern describes how to leverage this core concept for extending the platform for our own applications.

Operator pattern

An Operator is a Controller that uses CustomResourceDefinitions to encapsulate operational knowledge for a specific application in an algorithmic and automated form. The Operator pattern allows us to extend the Controller pattern for more flexibility and greater expressiveness. There is an increasing number of Operators for Kubernetes, and this pattern is turning into the major form of operating complex distributed systems.
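A minimal sketch of the first half of an Operator: the CustomResourceDefinition that teaches Kubernetes a new resource type, which the Operator's controller then watches and reconciles (the ConfigWatcher resource here is purely illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: configwatchers.example.com    # illustrative CRD name
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: ConfigWatcher               # custom resources of this kind encode operational intent
    plural: configwatchers
    singular: configwatcher
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object                # a real CRD would define the full schema here
```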

Summary

Today, Kubernetes is the most popular container orchestration platform. It is jointly developed and supported by all major software companies and offered as a service by all of the major cloud providers. Kubernetes supports both Linux and Windows systems, plus all major programming languages. This platform can also orchestrate and automate stateless and stateful applications, batch jobs, periodic tasks, and serverless workloads. The patterns described here are the most commonly used ones from a broader set of patterns that come with Kubernetes as shown below.


Kubernetes patterns categorized

Kubernetes is the new application portability layer and the common denominator among everybody on the cloud. If you are a software developer or architect, the odds are that Kubernetes will become part of your life in one form or another. Learning about the Kubernetes patterns described here will change the way you think about this platform. I believe that Kubernetes and the concepts originating from it will become as fundamental as object-oriented programming concepts. The patterns here are an attempt to create the equivalent of the Gang of Four design patterns, but for container orchestration. Reading this article should not be the end, but the beginning of your Kubernetes journey. Happy kubectl-ing!

 This post was originally published on Red Hat Developers. To read the original post, check here.

Camel Rebirth with Subsecond Experiences

This post was originally published here. Check out my new Kubernetes Patterns book and follow me @bibryam for future blog posts.

A look at the past decade

The integration space is in constant change. There are many open source projects and closed source technologies that did not pass the test of time and have disappeared from middleware stacks for good. After a decade, Apache Camel is still here and becoming even stronger for the next decade of integration.

Gang of Four for integration

Apache Camel started life as an implementation of the Enterprise Integration Patterns (EIP) book. Today, these patterns are the equivalent of the object-oriented Gang of Four design patterns, but for the messaging and integration domain. They are agnostic of programming language, platform, and architecture, and provide a universal language, notation, and description of the forces around fundamental messaging primitives.

But the Camel community did not stop with these patterns; it kept evolving and adding newer patterns from the SOA, Microservices, Cloud Native, and Serverless paradigms. As a result, Camel turned into a generic, pattern-based integration framework suitable for multiple architectures.

Universal library consumption model

While the patterns gave the initial spark to Camel, its endpoints quickly became popular and turned into a universal protocol for using Java-based integration libraries as connectors. Today, there are hundreds of Java libraries that can be used as Camel connectors using the Camel endpoint notation. It takes a while to realize that Camel can also be used without the EIPs and the routing engine. It can act as a connector framework where all libraries are consumed as universal URIs, without a need to understand the library-specific factories and configurations that vary widely across Java libraries.
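As a minimal sketch of that endpoint notation, here is a route in Camel's YAML DSL where an FTP server and a Kafka topic are consumed purely through URIs, with no library-specific factory code (the hosts, credentials, and names are illustrative):

```yaml
- from:
    uri: "ftp://orders@ftp.example.com/inbox?password=secret"   # FTP library behind a URI
    steps:
      - log: "Received ${header.CamelFileName}"                 # built-in logging step
      - to: "kafka:orders?brokers=broker:9092"                  # Kafka library behind a URI
```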

The right level of abstraction

When you talk to developers who have not used Camel in anger before, they will tell you that it is possible to do integration without Camel. And they are right about 80% of the easy use cases, but not about the remaining 20% that can turn a project into a multi-year frustrating experience. What they do not realize yet is that without Camel, there are multiple manual ways of doing the same thing, but none are validated by the experience of hundreds of open source developers. And if you fast-forward a few years, with tens of different systems to integrate with, tens of developers coming and going, and hundreds of microservices, an integration project can quickly turn into a bespoke home-grown framework that nobody wants to work on. Doing integration is easy, but doing good integration that will evolve and grow for many years, by many teams, is the hardest part. Camel addresses this challenge with universal patterns and connectors, combined with integration-focused DSLs, that have passed the test of time. Next time, if you think you don't need Camel, you are either thinking of short-term gains or not yet realizing how complex integration can become.

Embracing change

It takes only a couple of painful experiences in large integration projects to start appreciating Camel. But Camel is not great only because it was started by, and built on the works of, great minds; it is great because it evolves thanks to the world's knowledge, shared through the open source model and its networking effects. Camel started as the routing layer in ESBs during the SOA period, with a lot of focus on XML, WS, JBI, OSGI, etc., but then it quickly adapted for REST, Swagger, circuit breakers, Sagas, and Spring Boot in the Microservices era. And the world has not stopped there; with Docker and Kubernetes, and now the Serverless architecture, Camel keeps embracing the change. That is because Camel is written for integrating changing environments, and Camel itself grows and shines on change. Camel is a change-enabling library for integration.

Behind-the-scenes engine

One of Camel's secret sauces is that it is a non-intrusive, unopinionated, small (5MB and getting smaller) integration library without any affinity for where and how you consume it. If you notice, this is the opposite of an ESB, which Camel is commonly confused with because of its extensive capabilities. Over the years, Camel has been used as the internal engine powering projects such as:
  • Apache ServiceMix ESB
  • Apache ActiveMQ
  • Talend ESB
  • JBoss Switchyard
  • JBoss Fuse Service Works
  • Red Hat Fuse
  • Fuse Online/Syndesis 
  • And many other frameworks mentioned here.
You can use Camel standalone, or embed it in Apache Tomcat, Spring Boot (via starters), JBoss WildFly, Apache Karaf, Vert.x, Quarkus, you name it. Camel doesn't care, and it will bring superpowers to your project every time.

Looking to the future

Nobody can tell what the ideal integration stack will look like in a decade, nor can I. But I will tell you about two novelties coming into Apache Camel now (and to Red Hat Fuse later), and why they will have a noticeable positive effect for developers and the business. I call these changes subsecond deployment and subsecond startup of Camel applications.

Subsecond deployments to Kubernetes

There was a time when cloud-native meant different technologies. Today, after a few years of natural selection and consolidation in the industry, cloud-native means applications created specifically for Kubernetes and its ecosystem of projects around the CNCF. Even with this definition, there are many shades of cloud-native, from running a monolithic, non-scalable application in a container to triggering a function that fully embraces cloud-native development and management practices. The Camel community has realized that Kubernetes is the next-generation application runtime, and it is steadily working on making Camel a Kubernetes-native integration engine. In the same way Camel is a first-class citizen in OSGI containers, JEE application servers, and other fat-jar runtimes, Camel is becoming a first-class citizen on Kubernetes, integrating deeply with it and benefiting from the resiliency and scalability offered by the platform.

Here are a few of the many enhancement efforts going on in this direction:
  • Deeper Kubernetes integration - Kubernetes API connector, full health-check API implementation for Camel subsystems, service discovery through a new ServiceCall EIP, and configuration management using ConfigMaps. Then there is a set of application patterns with special handling on Kubernetes, such as clustered singleton routes, scalable XA transactions (because sometimes, you have to), a SAGA pattern implementation, etc.
  • Cloud-native integrations - support for other cloud-native projects such as exposing Camel metrics for Prometheus, tracing Camel routes through Jaeger, JSON formatted logging for log aggregation, etc.
  • Immutable runtimes - whether you use the minimalist immutable Karaf packaging or Spring Boot, Camel is a first-class citizen ready to be put in a container image. There are also Spring Boot starter implementations for all Camel connectors, integration with routes, properties, converters, and whatnot.
  • Camel 3 - a fact and actively progressing. A big theme for Camel 3 is to make it more modular, smaller, with faster startup time, reactive, non-blocking, and triple awesome. This is the groundwork needed to restructure Camel for the future cloud workloads.
  • Knative integration - Knative is an effort started by Google to bring some order and standardization to the serverless world dominated by AWS Lambda. Camel is among the projects that have integrated with Knative primitives from the early days, enhancing the Knative ecosystem with hundreds of connectors acting as generic event sources.
  • And here is a real game-changer initiative: Camel-K (a.k.a. deep Kubernetes integration for Camel) - we have seen that Camel is typically embedded into the latest modern runtime, where it acts as the developer-friendly integration engine behind the scenes. The same way Camel used to benefit from JEE services in the past for hot-deployment, configuration management, transaction management, etc., today Camel-K allows the Camel runtime to benefit from Kubernetes features for high availability, resiliency, self-healing, auto-scaling, and basically distributed application management in general. Camel-K achieves this through a CLI and an Operator, where the latter is able to understand the Camel applications, their build-time dependencies, and runtime needs, and make intelligent choices from the underlying Kubernetes platform and its additional capabilities (from Knative, Istio, Openshift, and others in the future). It can automate everything on the cluster, such as picking the best-suited container image and runtime management model, and updating them when needed. The CLI can automate the tasks on the developer machine, such as observing the code changes, streaming them to the Kubernetes cluster, and printing the logs from the running Pods.
Camel route auto-deployment to Kubernetes with Camel-K

The Camel-K operator understands two domains: Kubernetes and Camel. By combining knowledge of both areas, it can automate tasks that usually require a human operator. The really powerful part is that, with Camel-K, a Camel route can be built and deployed from source code to a running Camel route on Kubernetes in less than a second!

Time to deploy and run a Camel integration (in seconds)
Forget about making a coffee, or even having a sip, while building and deploying a Camel route with Camel-K. As soon as you make changes to your source code and open a browser, the Camel route will be running in Kubernetes. This has a noticeable impact on the way developers write Camel code, compile, drink coffee, deploy, and test. Apart from changing development practices and habits, this toolset will significantly reduce development cycles, which will be noticed by the business stakeholders too. For a live demonstration, check out the following awesome video from the Fuse engineers working on the Camel-K project.
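For a taste of that workflow, here is a minimal sketch of a Camel-K integration in the YAML DSL; assuming a cluster with the Camel-K operator installed, running `kamel run hello.yaml` streams the source to the cluster and materializes it as a running Pod (the route itself is illustrative):

```yaml
# hello.yaml -- deploy with: kamel run hello.yaml
- from:
    uri: "timer:tick?period=3000"          # fire an exchange every three seconds
    steps:
      - setBody:
          constant: "Hello from Camel K"
      - to: "log:info"                     # print the message in the Pod log
```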

Subsecond startup of Camel applications

A typical enterprise integration landscape is composed of stateless services, stateful services, clustered applications, batch jobs, file transfers, messaging, real-time integrations, and maybe even blockchain-based business processes. To that mix, today, we also have to add serverless workloads, which are best suited for event-driven use cases. Historically, the heavy and slow Java runtime had significant drawbacks compared to Go, JavaScript, and other light runtimes in the serverless space. That is one of the main motivations for Oracle to create GraalVM/Substrate VM. Substrate VM is a framework that enables ahead-of-time (AOT) compilation of Java applications into native executables that are light and fast. A recent effort by Red Hat then led to the creation of the Quarkus project, which further improves the resource consumption, startup, and response times of Java applications mind-blowingly (a term not used lightly here).

Supersonic Subatomic Java with Quarkus
 
As you can see from the metrics above, Quarkus combined with Substrate VM is not a gradual evolution. It is a mutation, a revolutionary jump that suddenly changes the perspective on Java's footprint and speed for cloud-native workloads. It makes Java friendly for the serverless architecture. Considering the huge Java ecosystem of developers and libraries, it even turns Java into the best-suited language for serverless applications. And Camel, combined with Quarkus, is the best-placed integration library in this space.

Summary

With the explosion of the microservices architecture, the number of services has increased tenfold, which gave birth to Kubernetes-enabled architectures. These architectures are highly dynamic in nature and are most powerful with light and fast runtimes that enable instant scale-up and higher deployment density.

Camel is the framework to fill the space between disparate systems and services. It offers data consistency guarantees, reliable communication, failover, failure detection and recovery, and so on, in a way that makes developers productive. Now, imagine the same powerful Apache Camel-based integration in 2020 that deploys to Kubernetes in 20ms, starts up in 20ms, requires 20MB of memory, and consumes 20MB on disk, regardless of whether it runs as a stateless application in a container or as a function on Knative. That is 100x faster deployments to Kubernetes, 100x faster startup time, and 10x less resource consumption, allowing real-time scale-up, scale-down, and scale to zero. That is a change that developers will notice during development, users will notice when using the system, and the business will notice in the infrastructure cost and overall delivery velocity. That is the real cloud-native era we have been waiting for.

About Me