
Application modernization patterns with Apache Kafka, Debezium, and Kubernetes

We build our computers the way we build our cities—over time, without a plan, on top of ruins.

Ellen Ullman wrote this in 1998, but it applies just as much today to the way we build modern applications; that is, over time, with short-term plans, on top of legacy software. In this article, I will introduce a few patterns and tools that I believe work well for thoughtfully modernizing legacy applications and building modern event-driven systems.

Note: If you prefer, you can watch my talk on YouTube about application modernization patterns, on which this post is based.

Application modernization in context

Application modernization refers to the process of taking an existing legacy application and modernizing its infrastructure—the internal architecture—to improve the velocity of new feature delivery, improve performance and scalability, expose the functionality for new use cases, and so on. Luckily, there is already a good classification of modernization and migration types, as shown in Figure 1.

Figure 1: Three modernization types and the technologies we might use for them.

Depending on your needs and appetite for change, there are a few levels of modernization:

  • Retention: The easiest thing you can do is to retain what you have and ignore the application's modernization needs. This makes sense if the needs are not yet pressing.
  • Retirement: Another thing you could do is retire and get rid of the legacy application. That is possible if you discover the application is no longer being used.
  • Rehosting: The next thing you could do is to rehost the application, which typically means taking an application as-is and hosting it on new infrastructure such as cloud infrastructure, or even on Kubernetes through something like KubeVirt. This is not a bad option if your application cannot be containerized, but you still want to reuse your Kubernetes skills, best practices, and infrastructure to manage a virtual machine as a container.
  • Replatforming: When changing the infrastructure is not enough and you are doing a bit of alteration at the edges of the application without changing its architecture, replatforming is an option. Maybe you are changing the way the application is configured so that it can be containerized, or moving from a legacy Java EE runtime to an open source runtime. Here, you could use a tool like windup to analyze your application and return a report with what needs to be done.
  • Refactoring: Much application modernization today focuses on migrating monolithic, on-premises applications to a cloud-native microservices architecture that supports faster release cycles. That involves refactoring and rearchitecting your application, which is the focus of this article.

For this article, we will assume we are working with a monolithic, on-premises application, which is a common starting point for modernization. The approach discussed here could also apply to other scenarios, such as a cloud migration initiative.

Challenges of migrating monolithic legacy applications

Deployment frequency is a common challenge for migrating monolithic legacy applications. Another challenge is scaling development so that more developers and teams can work on a common code base without stepping on each other’s toes. Scaling the application to handle an increasing load in a reliable way is another concern. On the other hand, the expected benefits from a modernization include reduced time to market, increased team autonomy on the codebase, and dynamic scaling to handle the service load more efficiently. Each of these benefits offsets the work involved in modernization. Figure 2 shows an example infrastructure for scaling a legacy application for increased load.

Figure 2: Refactoring a legacy application into event-driven microservices.

Envisioning the target state and measuring success

For our use case, the target state is an architectural style that follows microservices principles using open source technologies such as Kubernetes, Apache Kafka, and Debezium. We want to end up with independently deployable services modeled around a business domain. Each service should own its own data, emit its own events, and so on.

When we plan for modernization, it is also important to consider how we will measure the outcomes or results of our efforts. For that purpose, we can use metrics such as lead time for changes, deployment frequency, time to recovery, concurrent users, and so on.

The next sections will introduce three design patterns and three open source technologies—Kubernetes, Apache Kafka, and Debezium—that you can use to migrate from brown-field systems toward green-field, modern, event-driven services. We will start with the Strangler pattern.

The Strangler pattern

The Strangler pattern is the most popular technique used for application migrations. Martin Fowler introduced and popularized this pattern under the name of Strangler Fig Application, which was inspired by a type of fig that seeds itself in the upper branches of a tree and gradually evolves around the original tree, eventually replacing it. The parallel with application migration is that our new service is initially set up to wrap the existing system. In this way, the old and the new systems can coexist, giving the new system time to grow and potentially replace the old system. Figure 3 shows the main components of the Strangler pattern for a legacy application migration.

Figure 3: The Strangler pattern in a legacy application migration.

The key benefit of the Strangler pattern is that it allows low-risk, incremental migration from a legacy system to a new one. Let’s look at each of the main steps involved in this pattern.

Step 1: Identify functional boundaries

The very first question is where to start the migration. Here, we can use domain-driven design to help us identify aggregates and the bounded contexts where each represents a potential unit of decomposition and a potential boundary for microservices. Or, we can use the event storming technique created by Alberto Brandolini to gain a shared understanding of the domain model. Other important considerations here would be how these models interact with the database and what work is required for database decomposition. Once we have a list of these factors, the next step is to identify the relationships and dependencies between the bounded contexts to get an idea of the relative difficulty of the extraction.

Armed with this information, we can proceed with the next question: Do we want to start with the service that has the least amount of dependencies, for an easy win, or should we start with the most difficult part of the system? A good compromise is to pick a service that is representative of many others and can help us build a good technology foundation. That foundation can then serve as a base for estimating and migrating other modules.

Step 2: Migrate the functionality

For the strangler pattern to work, we must be able to clearly map inbound calls to the functionality we want to move. We must also be able to redirect these calls to the new service and back if needed. Depending on the state of the legacy application, client applications, and other constraints, weighing our options for this interception might be straightforward or difficult:

  • The easiest option would be to change the client application and redirect inbound calls to the new service. Job done.
  • If the legacy application uses HTTP, then we’re off to a good start. HTTP is very amenable to redirection and we have a wealth of transparent proxy options to choose from.
  • In practice, it is likely that our application will not use only REST APIs, but will have SOAP, FTP, RPC, or some kind of traditional messaging endpoints, too. In this case, we may need to build a custom protocol translation layer with something like Apache Camel (a minimal route sketch follows this list).
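
To make that last option concrete, here is a minimal sketch of a protocol-translating proxy route using Apache Camel's Java DSL. The endpoint addresses, paths, and the toOrderJson mapping helper are hypothetical placeholders, not something prescribed by the article:

```java
import org.apache.camel.builder.RouteBuilder;

public class LegacyTranslationRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Accept calls from legacy clients on the old HTTP endpoint...
        from("jetty:http://0.0.0.0:8080/legacy/orders")
            // ...translate the legacy XML payload into the JSON the new service expects...
            .process(exchange -> {
                String legacyXml = exchange.getMessage().getBody(String.class);
                exchange.getMessage().setBody(toOrderJson(legacyXml)); // hypothetical mapping helper
                exchange.getMessage().setHeader("Content-Type", "application/json");
            })
            // ...and forward the call to the new microservice's REST API
            .to("http://new-order-service:8080/api/orders");
    }

    private static String toOrderJson(String legacyXml) {
        // Placeholder for real XML-to-JSON mapping logic
        return "{\"payload\": \"...\"}";
    }
}
```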

Interception is a potentially dangerous slippery slope: If we start building a custom protocol translation layer that is shared by multiple services, we risk adding too much intelligence to the shared proxy that services depend on. This would move us away from the "smart endpoints, dumb pipes" mantra. A better option is to use the Sidecar pattern, illustrated in Figure 4.

Figure 4: The Sidecar pattern.

Rather than placing custom proxy logic in a shared layer, make it part of the new service. But rather than embedding the custom proxy in the service at compile-time, we use the Kubernetes sidecar pattern and make the proxy a runtime binding activity. With this pattern, legacy clients use the protocol-translating proxy and new clients are offered the new service API. Inside the proxy, calls are translated and directed to the new service. That allows us to reuse the proxy if needed. More importantly, we can easily decommission the proxy when it is no longer needed by legacy clients, with minimal impact on the newer services.

Step 3: Migrate the database

Once we have identified the functional boundary and the interception method, we need to decide how we will approach database strangulation—that is, separating our legacy database from application services. We have a few paths to choose from.

Database first

In a database-first approach, we separate the schema first, which could potentially impact the legacy application. For example, a SELECT might require pulling data from two databases, and an UPDATE can lead to the need for distributed transactions. This option requires changes to the source application and doesn’t help us demonstrate progress in the short term. That is not what we are looking for.

Code first

A code-first approach lets us get to independently deployed services quickly and reuse the legacy database, but it could give us a false sense of progress. Separating the database later can turn out to be challenging and can hide future performance bottlenecks. Still, it is a move in the right direction and can help us discover data ownership and what needs to be split out at the database layer later.

Code and database together

Working on the code and database together can be difficult to aim for from the get-go, but it is ultimately the end state we want to get to. Regardless of how we do it, we want to end up with a separate service and database; starting with that in mind will help us avoid refactoring later.

 

Figure 4.1: Database strangulation strategies

Having a separate database requires data synchronization. Once again, we can choose from a few common technology approaches.

Triggers

Most databases allow us to execute custom behavior when data is changed. In some cases, that could even be calling a web service and integrating with another system. But how triggers are implemented and what we can do with them varies between databases. Another significant drawback here is that using triggers requires changing the legacy database, which we might be reluctant to do.

Queries

We can use queries to regularly check the source database for changes. The changes are typically detected with implementation strategies such as timestamps, version numbers, or status columns in the source database. Whatever the implementation strategy, polling always involves a dilemma: poll frequently and put overhead on the source database, or poll rarely and miss updates that happen between polls. While queries are simple to implement and use, this approach has significant limitations that make it unsuitable for mission-critical applications with frequent database interactions.
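
To illustrate the dilemma, here is a minimal JDBC polling sketch that detects changes through a timestamp column. The orders table and its updated_at audit column are hypothetical assumptions:

```java
import java.sql.*;
import java.time.Instant;

public class ChangePoller {
    // Remember the timestamp of the last change we processed
    private Timestamp lastSeen = Timestamp.from(Instant.EPOCH);

    public void pollOnce(Connection conn) throws SQLException {
        // 'updated_at' is a hypothetical audit column maintained by the application
        String sql = "SELECT id, status, updated_at FROM orders "
                   + "WHERE updated_at > ? ORDER BY updated_at";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, lastSeen);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    publishChange(rs.getLong("id"), rs.getString("status"));
                    lastSeen = rs.getTimestamp("updated_at");
                }
            }
        }
        // Poll too often and this query loads the source database;
        // poll too rarely and intermediate updates between polls are lost.
    }

    private void publishChange(long id, String status) { /* send to a message broker */ }
}
```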

Log readers

Log readers identify changes by scanning the database transaction log files. Log files exist for database backup and recovery purposes and provide a reliable way to capture all changes, including DELETEs. Using log readers is the least disruptive option because they require no modification to the source database and add no query load. The main downside of this approach is that there is no common standard for the transaction log files, so we'll need specialized tools to process them. This is where Debezium fits in.

 

Figure 4.2: Data synchronization patterns

Before moving on to the next step, let's see how using Debezium with the log reader approach works.

Change data capture with Debezium

When an application writes to the database, changes are first recorded in log files, and then the database tables are updated. For MySQL, the log file is the binlog; for PostgreSQL, it is the write-ahead log; and for MongoDB, it is the oplog. The good news is that Debezium has connectors for different databases, so it does the hard work of understanding the format of all of these log files for us. Debezium can read the log files and produce generic, abstract change events that it sends to a messaging system such as Apache Kafka. Figure 5 shows Debezium connectors as the interface for a variety of databases.

Figure 5: Debezium connectors in a microservices architecture.
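
As a rough sketch of what this looks like in code, here is a minimal example using Debezium's embedded engine with the MySQL connector. The connection details are placeholders, exact property names vary between Debezium versions, and in production Debezium would more typically run as a Kafka Connect connector:

```java
import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LegacyCdcExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("name", "legacy-cdc");
        props.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        // Where the engine remembers its position in the binlog between restarts
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
        // MySQL also needs a record of schema changes over time
        props.setProperty("database.history", "io.debezium.relational.history.FileDatabaseHistory");
        props.setProperty("database.history.file.filename", "/tmp/dbhistory.dat");
        // Placeholder connection details for the legacy database
        props.setProperty("database.hostname", "legacy-db");
        props.setProperty("database.port", "3306");
        props.setProperty("database.user", "debezium");
        props.setProperty("database.password", "secret");
        props.setProperty("database.server.id", "5400");
        props.setProperty("database.server.name", "legacy"); // logical name, prefixes topic names

        // Every committed row-level change arrives here as a JSON change event;
        // a real deployment would forward it to Kafka instead of printing it
        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(event -> System.out.println(event.value()))
                .build();

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }
}
```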

Debezium is the most widely used open source change data capture (CDC) project with multiple connectors and features that make it a great fit for the Strangler pattern.

Why is Debezium a good fit for the Strangler pattern?

One of the most important reasons to consider the Strangler pattern for migrating monolithic legacy applications is reduced risk and the ability to fall back to the legacy application. Similarly, Debezium is completely transparent to the legacy application, and it doesn’t require any changes to the legacy data model. Figure 6 shows Debezium in an example microservices architecture.

Figure 6: Debezium deployment in a hybrid-cloud environment.

With minimal configuration of the legacy database, we can capture all the required data. So, at any point, we can remove Debezium and fall back to the legacy application if we need to.

Debezium features that support legacy migrations

Here are some of Debezium's specific features that support migrating a monolithic legacy application with the Strangler pattern:

  • Snapshots: Debezium can take a snapshot of the current state of the source database, which we can use for bulk data imports. Once a snapshot is completed, Debezium will start streaming the changes to keep the target system in sync.
  • Filters: Debezium lets us pick which databases, tables, and columns to stream changes from. With the Strangler pattern, we are only moving part of the application, so filters let us capture changes for just the functionality being migrated.
  • Single message transformations (SMTs): This feature can act like an anti-corruption layer, protecting our new data model from legacy naming and data formats, and even letting us filter out obsolete data (see the configuration sketch after this list).
  • Using Debezium with a schema registry: We can use a schema registry such as Apicurio with Debezium for schema validation, and also use it to enforce version compatibility checks when the source database model changes. This can prevent changes from the source database from impacting and breaking the new downstream message consumers.
  • Using Debezium with Apache Kafka: There are many reasons why Debezium and Apache Kafka work well together for application migration and modernization. Guaranteed ordering of database changes, message compaction, the ability to re-read changes as many times as needed, and tracking transaction log offsets are all good examples of why we might choose to use these tools together.
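
To make the filters and SMT features a bit more concrete, here is a hedged sketch of the relevant connector options expressed as Java properties. The table and column names are hypothetical, and option names can differ between Debezium versions:

```java
import java.util.Properties;

public class FilteredConnectorConfig {
    public static Properties filters() {
        Properties props = new Properties();
        // Filters: capture only the tables owned by the module being strangled
        props.setProperty("table.include.list", "inventory.orders,inventory.order_lines");
        // Drop legacy columns we do not want leaking into the new data model
        props.setProperty("column.exclude.list", "inventory.orders.legacy_flags");
        // SMT: unwrap Debezium's change-event envelope into a plain record,
        // acting as a small anti-corruption layer for downstream consumers
        props.setProperty("transforms", "unwrap");
        props.setProperty("transforms.unwrap.type", "io.debezium.transforms.ExtractNewRecordState");
        props.setProperty("transforms.unwrap.drop.tombstones", "false");
        return props;
    }
}
```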

Step 4: Release the services

With that quick overview of Debezium, let’s see where we are with the Strangler pattern. Assume that, so far, we have done the following:

  • Identified a functional boundary.
  • Migrated the functionality.
  • Migrated the database.
  • Deployed the service into a Kubernetes environment.
  • Migrated the data with Debezium and kept Debezium running to synchronize ongoing changes.

At this point, there is not yet any traffic routed to the new services, but we are ready to release the new services. Depending on our routing layer's capabilities, we can use techniques such as dark launching, parallel runs, and canary releasing to reduce or remove the risk of rolling out the new service, as shown in Figure 7.

Figure 7: Directing read traffic to the new service.

Initially, we can also direct only read requests to our new service while continuing to send the writes to the legacy system. This is required because we are replicating changes in a single direction only.

When we see that the read operations are going through without issues, we can then direct the write traffic to the new service. At this point, if we still need the legacy application to operate for whatever reason, we will need to stream changes from the new services toward the legacy application database. Next, we'll want to stop any write or mutating activity in the legacy module and stop the data replication from it. Figure 8 illustrates this part of the pattern implementation.

Figure 8: Directing read and write traffic to the new service.

Since we still have legacy read operations in place, we are continuing the replication from the new service to the legacy application. Eventually, we'll stop all operations in the legacy module and stop the data replication. At this point, we will be able to decommission the migrated module.

We've had a broad look at using the Strangler pattern to migrate a monolithic legacy application, but we are not quite done with modernizing our new microservices-based architecture. Next, let’s consider some of the challenges that come later in the modernization process and how Debezium, Apache Kafka, and Kubernetes might help.

After the migration: Modernization challenges

The most important reason to consider using the Strangler pattern for migration is the reduced risk. This pattern gives value steadily and allows us to demonstrate progress through frequent releases. But migration alone, without enhancements or new “business value,” can be a hard sell to some stakeholders. In the longer-term modernization process, we also want to enhance existing services and add new ones. Very often, modernization initiatives are also tasked with setting the foundation and best practices for the modern applications that will follow. By migrating more and more services, adding new ones, and in general transitioning to a microservices architecture, new challenges will come up, including the following:

  • Automating the deployment and operating a large number of services.
  • Performing dual-writes and orchestrating long-running business processes in a reliable and scalable manner.
  • Addressing the analytical and reporting needs.

These are all challenges that might not have existed in the legacy world. Let’s explore how we can address a few of them using a combination of design patterns and technologies.

Challenge 1: Operating event-driven services at scale

While peeling off more and more services from the legacy monolithic application, and creating new services to satisfy emerging business requirements, the need for automated deployments, rollbacks, placements, configuration management, upgrades, and self-healing becomes apparent. These are exactly the features that make Kubernetes a great fit for operating large-scale microservices. Figure 9 illustrates such an architecture.

Figure 9: A sample event-driven architecture on top of Kubernetes.

When we are working with event-driven services, we will quickly find that we need to automate and integrate with an event-driven infrastructure—which is where Apache Kafka and other projects in its ecosystem might come in. Moreover, we can use Kubernetes Operators to help automate the management of Kafka and the following supporting services:

  • Apicurio Registry provides an Operator for managing Apicurio Schema Registry on Kubernetes.
  • Strimzi offers Operators for managing Kafka and Kafka Connect clusters declaratively on Kubernetes.
  • KEDA (Kubernetes Event-Driven Autoscaling) offers workload auto-scalers for scaling up and down services that consume from Kafka. So, if the consumer lag passes a threshold, the Operator will start more consumers up to the number of partitions to catch up with message production.
  • Knative Eventing offers event-driven abstractions backed by Apache Kafka.

Note: Kubernetes not only provides a target platform for application modernization but also allows you to grow your applications on top of the same foundation into a large-scale event-driven architecture. It does that through automation of user workloads, Kafka workloads, and other tools from the Kafka ecosystem. That said, not everything has to run on your Kubernetes cluster. For example, you can use a fully managed Apache Kafka or a schema registry service from Red Hat and automatically bind it to your application using Kubernetes Operators. Creating a multi-availability-zone (multi-AZ) Kafka cluster on Red Hat OpenShift Streams for Apache Kafka takes less than a minute and is completely free during our trial period. Give it a try and help us shape it with your early feedback.

Now, let’s see how we can meet the remaining two modernization challenges using design patterns.

Challenge 2: Avoiding dual-writes

Once you build a couple of microservices, you quickly realize that the hardest part about them is data. As part of their business logic, microservices often have to update their local data store. At the same time, they also need to notify other services about the changes that happened. This challenge is not so obvious in the world of monolithic applications and legacy distributed transactions. How can we avoid or resolve this situation the cloud-native way? The answer is to only modify one of the two resources—the database—and then drive the update of the second one, such as Apache Kafka, in an eventually consistent manner. Figure 10 illustrates this approach.

Figure 10: The Outbox pattern.

Using the Outbox pattern with Debezium lets services execute these two tasks in a safe and consistent manner. Instead of sending a message directly to Kafka when updating the database, the service uses a single transaction to both perform the normal update and insert the message into a specific outbox table within its database. Once the transaction has been written to the database’s transaction log, Debezium can pick up the outbox message from there and send it to Apache Kafka. This approach gives us very nice properties. By synchronously writing to the database in a single transaction, the service benefits from "read your own writes" semantics, where a subsequent query to the service will return the newly persisted record. At the same time, we get reliable, asynchronous propagation to other services via Apache Kafka. The Outbox pattern is a proven approach for avoiding dual-writes in scalable event-driven microservices. It solves the inter-service communication challenge very elegantly, without requiring all participants (including Kafka) to be available at the same time. I believe Outbox will become one of the foundational patterns for designing scalable event-driven microservices.
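
Here is a minimal JDBC sketch of the write side of the pattern, assuming a hypothetical orders table and an outbox table with the columns Debezium's outbox event router expects by default:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

public class OrderService {
    // Persist the business change and the outgoing event in ONE local transaction
    public void placeOrder(Connection conn, String orderId, String payloadJson) throws SQLException {
        conn.setAutoCommit(false);
        try {
            try (PreparedStatement order = conn.prepareStatement(
                    "INSERT INTO orders (id, status) VALUES (?, 'PLACED')")) {
                order.setString(1, orderId);
                order.executeUpdate();
            }
            // The outbox row is never read by the application itself;
            // Debezium picks it up from the transaction log and publishes it to Kafka
            try (PreparedStatement outbox = conn.prepareStatement(
                    "INSERT INTO outbox (id, aggregatetype, aggregateid, type, payload) "
                    + "VALUES (?, 'order', ?, 'OrderPlaced', ?)")) {
                outbox.setString(1, UUID.randomUUID().toString());
                outbox.setString(2, orderId);
                outbox.setString(3, payloadJson);
                outbox.executeUpdate();
            }
            conn.commit(); // both rows hit the transaction log atomically
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```

Because Debezium reads the change from the transaction log rather than from the table, the outbox row can even be deleted right after the insert without losing the event.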

Challenge 3: Long-running transactions

While the Outbox pattern solves the simpler inter-service communication problem, it is not sufficient alone for solving the more complex long-running, distributed business transactions use case. The latter requires executing multiple operations across multiple microservices and applying consistent all-or-nothing semantics. A common example for demonstrating this requirement is the booking-a-trip use case consisting of multiple parts where the flight and accommodation must be booked together. In the legacy world, or with a monolithic architecture, you might not be aware of this problem as the coordination between the modules is done in a single process and a single transactional context. The distributed world requires a different approach, as illustrated in Figure 11.

Figure 11: The Saga pattern implemented with Debezium.

The Saga pattern offers a solution to this problem by splitting an overarching business transaction into a series of multiple local database transactions, which are executed by the participating services. Generally, there are two ways to implement distributed sagas:

  • Choreography: In this approach, one participating service sends a message to the next one after it has executed its local transaction.
  • Orchestration: In this approach, a central coordinating service invokes the participating services and coordinates the overall flow.

Communication between the participating services might be either synchronous, via HTTP or gRPC, or asynchronous, via messaging such as Apache Kafka.

The cool thing here is that you can implement sagas using Debezium, Apache Kafka, and the Outbox pattern. With these tools, it is possible to take advantage of the orchestration approach and have one place to manage the flow of a saga and check the status of the overarching saga transaction. We can also combine orchestration with asynchronous communication to decouple the coordinating service from the availability of participating services and even from the availability of Kafka. That gives us the best of both worlds: orchestration and asynchronous, non-blocking, parallel communication with participating services, without temporal coupling.
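
As a simplified sketch of the orchestration approach, and reusing the hypothetical outbox table from the earlier example, here is how a coordinating service might advance a trip-booking saga when a reply event arrives. The sagas state table, event wiring, and compensation logic are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

public class TripSagaOrchestrator {

    // Invoked for each reply event consumed from Kafka (consumer wiring omitted)
    public void onFlightBooked(Connection conn, String sagaId) throws SQLException {
        conn.setAutoCommit(false);
        try {
            // Advance the saga state and emit the next command in ONE local transaction
            try (PreparedStatement state = conn.prepareStatement(
                    "UPDATE sagas SET current_step = 'BOOK_HOTEL' WHERE id = ?")) {
                state.setString(1, sagaId);
                state.executeUpdate();
            }
            // The command to the next participant goes through the outbox, so the
            // orchestrator never depends on the participant (or Kafka) being up
            insertOutboxCommand(conn, sagaId, "BookHotel");
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e; // on a business failure, a compensating 'CancelFlight' command would be emitted instead
        }
    }

    private void insertOutboxCommand(Connection conn, String sagaId, String type) throws SQLException {
        try (PreparedStatement outbox = conn.prepareStatement(
                "INSERT INTO outbox (id, aggregatetype, aggregateid, type, payload) "
                + "VALUES (?, 'trip-saga', ?, ?, '{}')")) {
            outbox.setString(1, UUID.randomUUID().toString());
            outbox.setString(2, sagaId);
            outbox.setString(3, type);
            outbox.executeUpdate();
        }
    }
}
```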

Combining the Outbox pattern with the Saga pattern is an awesome, event-driven implementation option for long-running business transactions in the distributed services world. See Saga Orchestration for Microservices Using the Outbox Pattern (InfoQ) for a detailed description. There is also an implementation example of this pattern on GitHub.

Conclusion

The Strangler pattern, Outbox pattern, and Saga pattern can help you migrate from brown-field systems, but at the same time, they can help you build green-field, modern, event-driven services that are future-proof.

Kubernetes, Apache Kafka, and Debezium are open source projects that have turned into de facto standards in their respective fields. You can use them to create standardized solutions with a rich ecosystem of supporting tools and best practices.

The one takeaway from this article is the realization that modern software systems are like cities: They evolve over time, on top of legacy systems. Using proven patterns, standardized tools, and open ecosystems will help you create long-lasting systems that grow and change with your needs.

This post was originally published on Red Hat Developers.

Data Gateways in the Cloud Native Era

These days, there is a lot of excitement around 12-factor apps, microservices, and service mesh, but not so much around cloud-native data. The number of conference talks, blog posts, best practices, and purpose-built tools around cloud-native data access is relatively low. One of the main reasons is that most data access technologies are architected in a stack that favors static environments rather than the dynamic nature of cloud environments and Kubernetes.

In this article, we will explore the different categories of data gateways, from monolithic platforms to ones designed for the cloud and Kubernetes. We will see what technical challenges the microservices architecture introduces and how data gateways can complement API gateways to address these challenges in the Kubernetes era.

Application architecture evolutions

Let’s start with what has been changing in the way we manage code and data in the past decade or so. I still remember the time when I started my IT career by creating frontends with Servlets, JSP, and JSF. In the backend, EJBs, SOAP, and server-side session management were the state-of-the-art technologies and techniques. But things changed rather quickly with the introduction of REST and the popularization of JavaScript. REST helped us decouple frontends from backends through a uniform interface and resource-oriented requests. It popularized stateless services, enabled response caching by moving all client session state to the clients, and so forth. This new architecture was the answer to the huge scalability demands of modern businesses.

A similar change happened to backend services through the microservices movement. Decoupling from the frontend was not enough, and the monolithic backend had to be decoupled into bounded contexts, enabling independent, fast-paced releases. These are examples of how architectures, tools, and techniques evolve under pressure from business needs for fast software delivery of planet-scale applications.

That takes us to the data layer. One of the existential motivations for microservices is having independent data sources per service. If multiple microservices touch the same data, that sooner or later introduces coupling and limits independent scaling and releasing. Nor is it only about an independent database: it can also be a heterogeneous one, so every microservice is free to use the database type that best fits its needs.

Application architecture evolution brings new challenges

While decoupling the frontend from the backend and splitting monoliths into microservices gave us the desired flexibility, it created challenges not present before. Service discovery and load balancing, network-level resilience, and observability turned into major areas of technology innovation addressed in the years that followed.

Similarly, creating a database per microservice, with the freedom to choose a different datastore technology for each, is a challenge of its own. It shows itself more and more with the recent explosion of data and the demand for accessing that data, not only from the services but also for real-time reporting and AI/ML needs.

The rise of API gateways

With the increasing adoption of microservices, it became apparent that operating such an architecture is hard. While having every microservice independent sounds great, it requires tools and practices that we didn’t need and didn’t have before. This gave rise to more advanced release strategies such as blue/green deployments, canary releases, and dark launches. Then came fault injection and automatic recovery testing. And finally, advanced network telemetry and tracing. All of these created a whole new layer that sits between the frontend and the backend. This layer is occupied primarily by API management gateways, service discovery, and service mesh technologies, but also by tracing components, application load balancers, and all kinds of traffic management and monitoring proxies. It even includes projects such as Knative, with activation and scale-to-zero features driven by networking activity.

With time, it became apparent that creating microservices at a fast pace and operating them at scale requires tooling we didn’t need before. Something that was once fully handled by a single load balancer had to be replaced with a new, advanced management layer. A new technology layer, a new set of practices and techniques, and a new group of users responsible for it were born.

The case for data gateways

Microservices influence the data layer in two dimensions. First, they demand an independent database per microservice. From a practical implementation point of view, this can range from an independent database instance to independent schemas and logical groupings of tables. The main rule here is that only one microservice owns and touches a dataset, and all data is accessed through the APIs or events of the owning microservice. The second way a microservices architecture influences the data layer is through datastore proliferation. Just as it enables microservices to be written in different languages, the architecture allows every microservices-based system to have a polyglot persistence layer. With this freedom, one microservice can use a relational database, another a document database, and a third an in-memory key-value store.

While microservices allow you all that freedom, it again comes at a cost. It turns out that operating a large number of datastores is something existing tooling and practices were not prepared for. In the modern digital world, storing data in a reliable form is not enough. Data is useful when it turns into insights, and for that it has to be accessible in a controlled form by many. AI/ML experts, data scientists, and business analysts all want to dig into the data, but the application-focused microservices and their data access patterns are not designed for these data-hungry demands.

API and Data gateways offering similar capabilities at different layers

This is where data gateways can help you. A data gateway is like an API gateway, but it understands and acts on the physical data layer rather than the networking layer. Here are a few areas where data gateways differ from API gateways.

Abstraction

An API gateway can hide implementation endpoints and help upgrade and roll back services without affecting service consumers. Similarly, a data gateway can abstract a physical data source and its specifics, and help alter, migrate, or decommission it without affecting data consumers.

Security

An API manager secures resource endpoints based on HTTP methods. A service mesh secures based on network connections. But neither can understand and secure the shape of the data passing through them. A data gateway, on the other hand, understands the different data sources and the data model and acts on them. It can apply RBAC per data row and column, and filter, obfuscate, and sanitize individual data elements whenever necessary. This is a more fine-grained security model than the networking- or API-level security of API gateways.

Scaling

API gateways can do service discovery, load-balancing, and assist the scaling of services through an orchestrator such as Kubernetes. But they cannot scale data. Data can scale only through replication and caching. Some data stores can do replication in cloud-native environments but not all. Purpose-built tools, such as Debezium, can perform change data capture from the transaction logs of data stores and enable data replication for scaling and other use cases.

A data gateway, on the other hand, can speed up access to all kinds of data sources by caching data and providing materialized views. It can understand the queries, optimize them based on the capabilities of the data source, and produce the most performant execution plan. The combination of materialized views and the streaming nature of change data capture would be the ultimate data scaling technique, but there are no known cloud-native implementations of this yet.

Federation

In API management, response composition is a common technique for aggregating data from multiple different systems. In the data space, the same technique is referred to as heterogeneous data federation. Heterogeneity is the degree of differentiation in various data sources such as network protocols, query languages, query capabilities, data models, error handling, transaction semantics, etc. A data gateway can accommodate all of these differences as a seamless, transparent data-federation layer.

Schema-first

API gateways allow contract-first service and client development with specifications such as OpenAPI. Data gateways allow schema-first data consumption based on the SQL standard; a SQL schema for the data model is the equivalent of an OpenAPI specification for APIs.

Many shades of data gateways

In this article, I use the terms API gateway and data gateway loosely to refer to sets of capabilities. There are many types of API gateways, such as API managers, load balancers, service meshes, and service registries. It is similar with data gateways: they range from huge monolithic data virtualization platforms that want to do everything, to data federation libraries, purpose-built cloud services, and end-user query tools.

Let’s explore the different types of data gateways and see which fit the definition of “a cloud-native data gateway.” When I say a cloud-native data gateway, I mean a containerized, first-class Kubernetes citizen. I mean a gateway that is open source and uses open standards; a component that can be deployed on hybrid/multi-cloud infrastructures, works with different data sources and data formats, and is applicable to many use cases.

Classic data virtualization platforms

In the very first category of data gateways are the traditional data virtualization platforms such as Denodo and TIBCO/Composite. While these are the most feature-laden data platforms, they tend to do too much and want to be everything, from API management to metadata management, data cataloging, environment management, deployment, configuration management, and whatnot. From an architectural point of view, they are very much like the old ESBs, but for the data layer. You may manage to put them into a container, but it is hard to put them into the cloud-native citizen category.

Databases with data federation capabilities

Another emerging trend is that databases, in addition to storing data, are also starting to act as data federation gateways and allow access to external data.

For example, PostgreSQL implements the ANSI SQL/MED specification for a standardized way of handling access to remote objects from SQL databases. That means remote data stores, such as SQL, NoSQL, File, LDAP, Web, Big Data, can all be accessed as if they were tables in the same PostgreSQL database. SQL/MED stands for Management of External Data, and it is also implemented by MariaDB CONNECT engine, DB2, Teiid project discussed below, and a few others.
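
As an illustration of SQL/MED in practice, here is a sketch that sets up PostgreSQL's postgres_fdw over JDBC. The server names, credentials, and schema names are placeholders, and the statements assume the postgres_fdw extension is installed:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FederationSetup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/appdb", "admin", "secret");
             Statement st = conn.createStatement()) {

            // Enable the foreign data wrapper (PostgreSQL's SQL/MED implementation)
            st.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw");
            // Register the remote database as a foreign server
            st.execute("CREATE SERVER legacy_srv FOREIGN DATA WRAPPER postgres_fdw "
                     + "OPTIONS (host 'legacy-db', dbname 'legacydb')");
            st.execute("CREATE USER MAPPING FOR CURRENT_USER SERVER legacy_srv "
                     + "OPTIONS (user 'report', password 'secret')");
            // Remote tables now appear as local ones and can be joined with local data
            st.execute("CREATE SCHEMA IF NOT EXISTS legacy");
            st.execute("IMPORT FOREIGN SCHEMA public FROM SERVER legacy_srv INTO legacy");
        }
    }
}
```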

Starting with SQL Server 2019, you can query external data sources without moving or copying the data. The PolyBase engine enables a SQL Server instance to process Transact-SQL queries that access external data in SQL Server, Oracle, Teradata, and MongoDB.

GraphQL data bridges

Compared to traditional data virtualization, this is a new category of data gateways focused on fast web-based data access. The common thing among Hasura, Prisma, and SpaceUpTech is that they focus on GraphQL data access by offering a lightweight abstraction on top of a few data sources. This is a fast-growing category specialized for enabling rapid web-based development of data-driven applications rather than BI/AI/ML use cases.

Open-source data gateways

Apache Drill is a schema-free SQL query engine for NoSQL databases and file systems. It offers JDBC and ODBC access to business users, analysts, and data scientists on top of data sources that don’t support such APIs. Again, having uniform SQL-based access to disparate data sources is the driver. While Drill is highly scalable, it relies on Hadoop- and Apache ZooKeeper-style infrastructure, which shows its age.

Teiid is a mature data federation engine sponsored by Red Hat. It uses the SQL/MED specification for defining the virtual data models and relies on the Kubernetes Operator model for the building, deployment, and management of its runtime. Once deployed, the runtime can scale like any other stateless cloud-native workload on Kubernetes and integrate with other cloud-native projects. For example, it can use Keycloak for single sign-on and data roles, Infinispan for distributed caching needs, Prometheus for metrics and monitoring, Jaeger for tracing, and even 3scale for API management. But ultimately, Teiid runs as a single Spring Boot application acting as a data proxy, integrating with other best-of-breed services on OpenShift rather than trying to reinvent everything from scratch.

Architectural overview of Teiid data gateway

On the client side, Teiid offers standard SQL over JDBC/ODBC and OData APIs. Business users, analysts, and data scientists can use standard BI/analytics tools such as Tableau, MicroStrategy, and Spotfire to interact with Teiid. Developers can leverage the REST API or JDBC for custom-built microservices and serverless workloads. In either case, for data consumers Teiid appears as a standard PostgreSQL database accessed over its JDBC or ODBC protocols, while offering additional abstractions and decoupling from the physical data sources.
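
Because Teiid presents itself over the PostgreSQL protocol, a consumer can use the stock PostgreSQL JDBC driver. In this sketch, the host, port (35432 is the port commonly used in Teiid examples for its pg transport), virtual database name, and view are assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TeiidClient {
    public static void main(String[] args) throws Exception {
        // Teiid's PostgreSQL-compatible transport; looks like any Postgres database to the client
        String url = "jdbc:postgresql://teiid-gateway:35432/customer_vdb";
        try (Connection conn = DriverManager.getConnection(url, "analyst", "secret");
             Statement st = conn.createStatement();
             // One query transparently federates whatever physical sources back the virtual view
             ResultSet rs = st.executeQuery("SELECT name, total FROM orders_summary LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString("name") + " -> " + rs.getDouble("total"));
            }
        }
    }
}
```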

PrestoDB is another popular open-source project, started by Facebook. It is a distributed SQL query engine targeting big data use cases through its coordinator-worker architecture. The coordinator is responsible for parsing statements, planning queries, managing workers, fetching results from the workers, and returning the final results to the client. The workers are responsible for executing tasks and processing data.

Some time ago, the founders split from PrestoDB and created a fork called Trino (formerly PrestoSQL). Today, PrestoDB is part of The Linux Foundation, and Trino is part of the Trino Software Foundation. Both distributions of Presto are among the most active and powerful open-source data gateway projects in this space. To learn more about this technology, here is a good book I found.

Cloud-hosted data gateways services

With a move to the cloud infrastructure, the need for data gateways doesn’t go away but increases instead. Here are a few cloud-based data gateway services:

AWS Athena is an ANSI SQL-based interactive query service for analyzing data, tightly integrated with Amazon S3. It is based on PrestoDB and supports additional data sources and federation capabilities, too. Another similar service from Amazon is AWS Redshift Spectrum. It is focused on the same functionality, i.e., querying S3 objects using SQL. The main difference is that Redshift Spectrum requires a Redshift cluster, whereas Athena is a serverless offering that doesn’t require any servers. BigQuery is a similar service from Google.

These tools require minimal to no setup, can access on-premises or cloud-hosted data, and can process huge datasets. But they couple you to a single cloud provider, as they cannot be deployed on multiple clouds or on-premises. They are ideal for interactive querying rather than acting as a hybrid data frontend for other services and tools to use.

Secure tunneling data-proxies

With cloud-hosted data gateways comes the need for accessing on-premises data. Data has gravity and might also be affected by regulatory requirements preventing it from moving to the cloud. It may also be a conscious decision to keep your most valuable asset (your data) free from cloud coupling. All of these cases require cloud access to on-premises data, and cloud providers make it easy to reach your data. Azure’s On-premises Data Gateway is such a proxy, allowing access to on-premises data stores from Azure Service Bus.

In the opposite scenario, accessing cloud-hosted data stores from on-premises clients can be challenging, too. Google’s Cloud SQL Proxy provides secure access to Cloud SQL instances without having to whitelist IP addresses or configure SSL.

The Red Hat-sponsored open source project Skupper takes a more generic approach to these challenges. Skupper solves Kubernetes multi-cluster communication challenges through a layer-7 virtual network that offers advanced routing and secure connectivity capabilities. Rather than being embedded into the business service runtime, Skupper runs as a standalone instance per Kubernetes namespace and acts as a shared sidecar capable of secure tunneling for data access or other general service-to-service communication. It is a generic secure-connectivity proxy applicable to many use cases in the hybrid cloud world.

Connection pools for serverless workloads

Serverless takes software decomposition a step further than microservices. Rather than splitting services by bounded context, serverless is based on the function model, where every operation is short-lived and performs a single task. These granular software constructs are extremely scalable and flexible but come at a cost that previously wasn’t present. It turns out that the rapid scaling of functions is a challenge for connection-oriented data sources such as relational databases and message brokers. As a result, cloud providers offer transparent data proxies as a service to manage connection pools effectively. Amazon RDS Proxy is such a service; it sits between your application and your relational database to efficiently manage connections and improve scalability. A sketch of how a function typically uses such a proxy follows.
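
As a sketch, the only change visible to the application is usually the endpoint: the function connects to the proxy instead of directly to the database, and the proxy multiplexes many short-lived clients onto a small pool of real connections. The endpoint, credentials, and table below are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderLookupFunction {
    // Placeholder proxy endpoint; the application code is otherwise unchanged
    private static final String PROXY_URL =
        "jdbc:mysql://my-app-proxy.proxy-abc123.us-east-1.rds.amazonaws.com:3306/appdb";

    // Handler body of a short-lived function invocation
    public String handle(String orderId) throws SQLException {
        try (Connection conn = DriverManager.getConnection(PROXY_URL, "app", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT status FROM orders WHERE id = ?")) {
            ps.setString(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("status") : "NOT_FOUND";
            }
        }
        // Opening and closing a connection per invocation is cheap here because
        // the proxy, not the database, absorbs the connection churn
    }
}
```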

Conclusion

Modern cloud-native architectures combined with microservices principles enable the creation of highly scalable and independent applications. The large choice of data storage engines, cloud-hosted services, protocols, and data formats gives us the ultimate flexibility for delivering software at a fast pace. But all of that comes at a cost that becomes increasingly visible with the need for uniform, real-time data access from emerging user groups with different needs. Keeping microservices data only for the microservice itself creates challenges that have no good technological and architectural answers yet. Data gateways, combined with cloud-native technologies, offer features similar to API gateways but for the data layer, and they can help address these new challenges. Data gateways vary in specialization, but they tend to consolidate around providing uniform SQL-based access, enhanced security with data roles, caching, and abstraction over physical data stores.

Data has gravity, requires granular access control, is hard to scale, and is difficult to move on, off, and between cloud-native infrastructures. Having a data gateway component as part of the cloud-native tooling arsenal, one that is hybrid, works on multiple cloud providers, and supports different use cases, is becoming a necessity.

This article was originally published on InfoQ.

The After Open Source Era Has Started

Open source is the current norm for developer collaboration and customer adoption in software. It is the foundation that enabled unicorns and cloud providers to build their services from the ground up. But that wasn’t always the case with open source, and it is changing and evolving again.

Open Source Eras and relative adoption trend lines

In this post, I will look at open source evolution broadly, try to analyze some of the triggers and enablers of the change, and consider where it might be heading next. Let’s start with the main open software development eras, summarizing the main trends, and then focus on the big picture with an attempt to predict the future.

Free Software (1980)

The term “free software” is attributed to Richard Stallman, who used it around the 1980s for the free-software movement. During these early days of computing, Richard started the GNU project in an effort to cultivate collaboration among the early hacker community and create a freedom-respecting operating system. He campaigned for software to be distributed in a manner such that its users receive the freedoms to use, study, distribute, and modify that software. This era set the origins of open source and, more importantly, of the free software licenses (such as the GPL) that flourished later.

At the time, the main creators of software in the open were individual hackers, and in their view of the world, software had to be free as in speech and remain so. Free software grew because personal computers became more widely available to these hackers, and they used CDs, floppy disks, and the early internet to distribute software and spread their ideology.

In this pre-internet era, manual distribution of software, supporting documentation, consulting services (installation, development), and selling exceptions were some of the popular monetization methods.

Open Source Software (2000)

The term "open source" was adopted by a group of people from the free-software movement around 2000. The motivation for the new term was to shed the ideological and confrontational connotations of the term "free software" and make it more appealing to the business world. The supporters of the open source movement stress the subtle difference from free software, whose copyleft licenses require any derivative software to be distributed under the same free terms. The new term set the beginning of a new movement and the forming of the Open Source Initiative to educate and advance open source culture. The open source movement allowed smaller companies to play a more significant role in the software economy by giving them access to the software needed to compete in the global market. Before that, it was the larger corporations, the producers of the networks and hardware, who had the power.

Open source sparked from the early hacker community but grew rapidly into open source businesses, enabled by software foundations, the internet, and the wider adoption of open source by companies of all sizes. The primary monetization mechanisms for open source software are support and the open core model, where additional accompanying value is created around the core open source project. While the open core (enabled by permissive licenses such as MIT and Apache) allows everybody to benefit from it, it is also its Achilles' heel, as we will see next.

Shared Source Software (2020)

Open source licenses give more freedom to the users, but they don’t give many advantages to the producers of the software. Many small projects with a handful of maintainers create huge economic value, which ends up captured by other companies with better operational capabilities to monetize it, leaving the maintainers of these projects below the poverty line. Other companies hire open source maintainers as full-time employees and bet their company's existence and brand on the success of their open source project. Yet they get disrupted and threatened by even larger hyperscale SaaS providers who have the scale to capture the economic value from the same projects more efficiently and faster.

This new economic reality started forcing individual maintainers and small companies to move their software away from business-friendly open source licenses to other free-software-inspired derivative licenses and to pursue dual-licensing models. This new family of licenses is not proprietary, but it doesn’t fit the open source definition either, as the licenses protect the trademark owner from competition by discriminating against certain ways of distributing software, such as SaaS. This transition of new and existing open source projects to non-open source licenses marks the start of a new era. Keeping the source partially open is now primarily for marketing and user adoption purposes rather than for collaborative development and keeping software useful for everybody. The shared source era is triggered by the existential threat of not being able to offer software in the way consumers demand it (as SaaS) while efficiently capturing economic value as its creator, whether that creator is an individual contributor or a large company with an open source business model.

Open source software eras and main characteristics

Protected by these new licenses, the modern-day independent hackers are enabled by powerful online services that let them offer good-quality software through globally available automation: git-based tooling, build tools, software scanning, distribution services, and so on. These hackers can build a critical mass of supporters through social media and capture economic value through services such as GitHub Sponsors, Patreon, Tidelift, and many others. The other group, the disrupted open source companies, is transitioning to SaaS-based distribution of software as vertical cloud services on top of hybrid cloud infrastructure to compete with the cloud providers. This allows the creators of the software to offer their service on multiple clouds and, at the same time, align with the way users prefer to consume software, which is as a service.

What Will Software After Open Source Look Like?

The start of a new trend doesn’t indicate the end of the existing eras, but a new addition to the mix. Free and open source software will continue growing at a huge pace. At the same time, I believe we will see an acceleration of the trend towards the so-called shared source and source available licenses too. This will double down on the dual-licensing of smaller library projects by individual developers and the SaaS-based distribution of bigger projects. The open core and open source models will remain here, but the open core of the projects will get smaller and smaller, practically useless for the competitors. We will see projects starting as open source during bootstrapping and initial adoption phases, and then transition to source available licenses when threatened by more operationally mature competitors. Unfortunately, this initial phase of uncertainty and adaptation in the shared source era will limit collaboration among competitors and demonstrate the importance of open governance and open funding through neutral software foundations or decentralized technologies.

Then we will see cycles repeating and independent hackers flourish again, innovating as in the free software era. But this time they will be better equipped, with better infrastructure to support their livelihood as independent small businesses of one. They will start projects in the open to scratch their own itch, but quickly turn them into businesses or let them die. They will be less ideological and more practical. These independent hackers will not need to be part of the traditional horizontal software companies that bundle engineering, marketing, sales, support, education, and so on to be successful in the software business. Instead, they will be able to consume unbundled vertical online services and deliver enterprise-grade software. We will see a rise in the tools and platforms that offer reliable project governance without joining a foundation or consortium. Independent software builders can use decentralized infrastructure, tokenize their projects, and customize the governance through on-chain community voting. The economic and governance aspects of the projects will be merged with the source code and licenses into a holistic entity enabled by blockchain technology, opening opportunities for individual hackers to create million-dollar companies.

The infrastructure for independent techies will not be only for the software builders but for the whole ecosystem. Creating software is not enough; it has to go through the full pipeline of budgeting, building, marketing, hosting, sales, and support in order to grow and remain sustainable. Speculators will put money into project tokens to help bootstrap projects and gain returns. Developers will build. Indies will create niche services complementing larger projects. Subject matter experts will provide consulting services and online training, and bounty hackers will hunt for ad-hoc work. Sometimes all of it will be driven by a single person, and sometimes a whole decentralized ecosystem will form around a project without the dominance of a central business entity. This will take a generation of software builders...

Conclusion

At the beginning of the open source era, Eric S. Raymond described a decentralized software development process in The Cathedral and the Bazaar. This era proved that the bazaar is the superior software development model. But at the same time, it also showed us the limitations and narrow-mindedness of this model when it is not accompanied by a holistic governance and monetization view. The next era will improve on the same decentralized development principles by incorporating decentralized monetization and governance too. This will take us full circle to Decentralized and Sustainable Open Software nirvana.

If you like my explorations of open source, blockchain, and monetization, sign up for my newsletter or follow me on Twitter.

Open Source Monetization Ecosystem Review

Open source is a distributed innovation model that lacks distributed funding. It allows individuals with a common passion to collaborate and produce value, but not to capture it. It is a production factory without a sales counter. That is why many open source contributors are not getting a fair return. That is why many companies capture value from open source without paying back. That is why many independent open source builders use alternative means to fund themselves. That is why open source is not a business model: it is a production model, monetization not included. But there is hope; there is change.

Open source monetization journey for individuals

Open source is an innovation model, and it is going to innovate its monetization too. There are new ways for fans to support the creative work of open source builders. There are ways to create online courses and monetize knowledge. There are new ways to create digital goods with accompanying services and sell them online for a fiver. Ways to start newsletters and make money from your audience. Ways to measure an open source contributor's merit, incentivize it, and trade it. Decentralized protocols for staking tokens and supporting open source through interest rather than donations.

99 Ways to Make Money with Open Source as an Individual



There is an open source monetization revolution happening right now, and I'll explore the whole spectrum of open source monetization projects at Open Core Summit Digital. Join me on December 16th-18th, where I will talk about "99 Ways to Make Money with Open Source as an Individual".

Must Read Free Kubernetes Books

There is a rise in offerings of free educational content, free software, and free cloud resources, all with the single goal of capturing the new kingmakers' attention. In a similar spirit, here I want to quickly share my favorite Kubernetes-related books offered free of charge. I've read them and found them all very useful at different stages of my learning. The list contains books that are sponsored and offered free of charge, though in most cases that is in exchange for your contact details. If you prefer not to give your details, you can go to Amazon, buy the data-capture-free book instead, and support the authors (such as me). I think it is a privilege to work in this industry, and I'm thankful for these authors and companies offering the choice of free-of-charge books. Enjoy the list, and don’t come back here saying “This is not free… why are you saying it is free… blah-blah.” Here are my best free Kubernetes picks. Happy learning!

 

Must Read Free Kubernetes Books

Here are also SRE Books you can read online for free:

SRE Books by Google

There are also other honorable mentions, but these are shorter editions (fewer than 100 pages), and for me they qualify as books written with marketing in mind. Yet I found them useful and a great value for money ;)

Hope you enjoy the list! Share it, and comment with your free picks!
