
Camel Rebirth with Subsecond Experiences

This post was originally published here. Check out my new Kubernetes Patterns book and follow me @bibryam for future blog posts.

A look at the past decade

The integration space is in constant change. Many open source projects and closed source technologies have not passed the test of time and have disappeared from middleware stacks for good. After a decade, Apache Camel is still here and becoming even stronger for the next decade of integration.

Gang of Four for integration

Apache Camel started life as an implementation of the Enterprise Integration Patterns (EIP) book. Today, these patterns are the equivalent of the object-oriented Gang of Four design patterns, but for the messaging and integration domain. They are agnostic of programming language, platform, and architecture, and they provide a universal language, notation, and description of the forces around fundamental messaging primitives.

But the Camel community did not stop with these patterns; it kept evolving and adding newer patterns from the SOA, Microservices, Cloud Native, and Serverless paradigms. As a result, Camel turned into a generic, pattern-based integration framework suitable for multiple architectures.

Universal library consumption model

While the patterns gave the initial spark to Camel, its endpoints quickly became popular and turned into a universal protocol for using Java-based integration libraries as connectors. Today, there are hundreds of Java libraries that can be used as Camel connectors through the Camel endpoint notation. It takes a while to realize that Camel can also be used without the EIPs and the routing engine: it can act as a connector framework where all libraries are consumed as universal URIs, without the need to understand the library-specific factories and configurations that vary widely across Java libraries.
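
For example, here is a minimal sketch of the endpoint notation in the Java DSL, assuming the camel-core and camel-kafka dependencies are on the classpath (the directory, topic, and broker address are illustrative):

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class FileToKafka {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // each endpoint is a plain URI; Camel hides the
                    // library-specific factories and configuration behind it
                    from("file:orders?move=done")                   // poll a directory
                        .to("kafka:orders?brokers=localhost:9092"); // publish to a Kafka topic
                }
            });
            context.start();
            Thread.sleep(10000); // let the route run briefly, then shut down
            context.stop();
        }
    }

The same two-line route body would look structurally identical for any other pair of the hundreds of available components; only the URIs change.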

The right level of abstraction

When you talk to developers who have not used Camel in anger before, they will tell you that it is possible to do integration without Camel. And they are right about the 80% of easy use cases, but not about the remaining 20% that can turn a project into a multi-year frustrating experience. What they do not realize yet is that without Camel there are multiple manual ways of doing the same thing, but none validated by the experience of hundreds of open source developers. And if you fast-forward a few years, with tens of different systems to integrate, tens of developers coming and going, and hundreds of microservices, an integration project can quickly turn into a bespoke home-grown framework that nobody wants to work on. Doing integration is easy, but doing good integration that will evolve and grow for many years, by many teams, is the hardest part. Camel addresses this challenge with universal patterns and connectors, combined with integration-focused DSLs, that have passed the test of time. Next time you think you don't need Camel, you are either thinking of short-term gains or not yet realizing how complex integration can become.

Embracing change

It takes only a couple of painful experiences in large integration projects to start appreciating Camel. But Camel is not great only because it was started by and built on the work of great minds; it is great because it evolves thanks to the world's knowledge, shared through the open source model and its network effects. Camel started as the routing layer in ESBs during the SOA period, with a lot of focus on XML, WS-*, JBI, OSGi, etc., but it quickly adapted to REST, Swagger, circuit breakers, Sagas, and Spring Boot in the Microservices era. And the world has not stopped there: with Docker and Kubernetes, and now the Serverless architecture, Camel keeps embracing change. That is because Camel is written for integrating changing environments, and Camel itself grows and shines on change. Camel is a change-enabling library for integration.

Behind-the-scenes engine

One of Camel's secret sauces is that it is a non-intrusive, unopinionated, small (5MB and getting smaller) integration library with no affinity for where and how you consume it. Notice that this is the opposite of an ESB, with which Camel is commonly confused because of its extensive capabilities. Over the years, Camel has been used as the internal engine powering projects such as:
  • Apache ServiceMix ESB
  • Apache ActiveMQ
  • Talend ESB
  • JBoss Switchyard
  • JBoss Fuse Service Works
  • Red Hat Fuse
  • Fuse Online/Syndesis 
  • And many other frameworks mentioned here.
You can use Camel standalone, embed it in Apache Tomcat, or run it with Spring Boot starters, JBoss WildFly, Apache Karaf, Vert.x, Quarkus, you name it. Camel doesn't care, and it will bring superpowers to your project every time.
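
As one illustration of this, here is a minimal sketch of a Camel route embedded in a Spring Boot application, assuming the camel-spring-boot-starter dependency is on the classpath (the route itself is illustrative):

    import org.apache.camel.builder.RouteBuilder;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.stereotype.Component;

    @SpringBootApplication
    public class IntegrationApplication {
        public static void main(String[] args) {
            SpringApplication.run(IntegrationApplication.class, args);
        }

        @Component
        static class GreetingRoute extends RouteBuilder {
            @Override
            public void configure() {
                // the starter auto-discovers RouteBuilder beans and starts
                // the CamelContext together with the Spring application
                from("timer:greet?period=5000")
                    .log("Camel running inside Spring Boot");
            }
        }
    }

The same RouteBuilder class could be dropped unchanged into a standalone main method, a Karaf bundle, or a WildFly deployment; only the bootstrap around it differs.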

Looking to the future

Nobody can tell what the ideal integration stack will look like in a decade, and neither can I. But I will tell you about two novelties coming into Apache Camel now (and to Red Hat Fuse later), and why they will have a noticeable positive effect for developers and the business. I call these changes subsecond deployment and subsecond startup of Camel applications.

Subsecond deployments to Kubernetes

There was a time when cloud-native meant different technologies to different people. Today, after a few years of natural selection and consolidation in the industry, cloud-native means applications created specifically for Kubernetes and its ecosystem of projects around the CNCF. Even with this definition, there are many shades of cloud-native, from running a monolithic non-scalable application in a container, to triggering a function that fully embraces cloud-native development and management practices. The Camel community has realized that Kubernetes is the next-generation application runtime, and it is steadily working on making Camel a Kubernetes-native integration engine. The same way Camel is a first-class citizen in OSGi containers, Java EE application servers, and other fat-JAR runtimes, Camel is becoming a first-class citizen on Kubernetes, integrating deeply with and benefiting from the resiliency and scalability offered by the platform.

Here are a few of the many enhancement efforts going on in this direction:
  • Deeper Kubernetes integration - a Kubernetes API connector, a full health-check API implementation for Camel subsystems, service discovery through the new ServiceCall EIP (see the sketch after this list), and configuration management using ConfigMaps. Then a set of application patterns with special handling on Kubernetes, such as clustered singleton routes, scalable XA transactions (because sometimes, you have to), a Saga pattern implementation, etc.
  • Cloud-native integrations - support for other cloud-native projects, such as exposing Camel metrics to Prometheus, tracing Camel routes through Jaeger, JSON-formatted logging for log aggregation, etc.
  • Immutable runtimes - whether you use the minimalist immutable Karaf packaging or Spring Boot, Camel is a first-class citizen ready to be put in a container image. There are also Spring Boot starter implementations for all Camel connectors, and integration with routes, properties, converters, and whatnot.
  • Camel 3 - a fact and actively progressing. A big theme for Camel 3 is to make it more modular, smaller, faster to start, reactive, non-blocking, and triple awesome. This is the groundwork needed to restructure Camel for future cloud workloads.
  • Knative integration - Knative is an effort started by Google to bring some order and standardization to the serverless world dominated by Amazon Lambda. Camel is among the projects that have integrated with the Knative primitives from the early days, and it enhances the Knative ecosystem with hundreds of connectors acting as generic event sources.
  • And here is a real game-changer initiative: Camel-K (a.k.a. deep Kubernetes integration for Camel). We have seen that Camel is typically embedded into the latest modern runtime, where it acts as the developer-friendly integration engine behind the scenes. The same way Camel used to benefit from Java EE services for hot deployment, configuration management, transaction management, and so on, today Camel-K allows the Camel runtime to benefit from Kubernetes features for high availability, resiliency, self-healing, auto-scaling, and distributed application management in general. Camel-K achieves this through a CLI and an Operator, where the latter understands a Camel application, its build-time dependencies, and its runtime needs, and makes intelligent choices on the underlying Kubernetes platform and its additional capabilities (from Knative, Istio, OpenShift, and others in the future). It can automate everything on the cluster, such as picking the best-suited container image and runtime management model, and updating them when needed. The CLI automates the tasks on the developer machine, such as observing code changes, streaming them to the Kubernetes cluster, and printing the logs from the running Pods.
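
Here is the ServiceCall sketch referenced above: a minimal, illustrative route where service lookup is delegated to the platform, assuming the route runs in a Kubernetes cluster that has a service named payment-service and the Camel Kubernetes service-discovery support on the classpath:

    import org.apache.camel.builder.RouteBuilder;

    public class ServiceCallRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("timer:order?period=10000")
                .setBody().constant("{\"amount\": 10}")
                // the ServiceCall EIP resolves the "payment-service" name
                // through the platform's service discovery and load-balances
                // calls across the discovered instances
                .serviceCall("payment-service")
                .log("Response: ${body}");
        }
    }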
Camel route auto-deployment to Kubernetes with Camel-K
The Camel-K operator understands two domains: Kubernetes and Camel. By combining knowledge of both, it can automate tasks that usually require a human operator.
The really powerful part is that, with Camel-K, a Camel route can go from source code to a running route on Kubernetes in less than a second!
Time to deploy and run a Camel integration (in seconds)
 
Forget about making a coffee, or even taking a sip, while building and deploying a Camel route with Camel-K. As soon as you make a change to your source code and open a browser, the Camel route will be running in Kubernetes. This has a noticeable impact on the way developers write Camel code, compile, drink coffee, deploy, and test. Apart from changing development practices and habits, this toolset will significantly shorten development cycles, which will be noticed by the business stakeholders too. For a live demonstration, check out the following awesome video from the Fuse engineers working on the Camel-K project.
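
To make the workflow concrete, here is a minimal sketch of a Camel-K integration file, assuming the Camel-K operator is already installed in the cluster (the file name and route are illustrative):

    // Sample.java - a self-contained Camel-K integration file; no pom.xml,
    // no Dockerfile, and no Kubernetes manifests are needed
    import org.apache.camel.builder.RouteBuilder;

    public class Sample extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            from("timer:tick?period=3000")
                .setBody().constant("Hello from Camel-K")
                .to("log:info");
        }
    }

Running kamel run Sample.java --dev hands the file to the operator, which builds and deploys it; in dev mode the CLI also watches the file, redeploys on every save, and streams the Pod logs back to your terminal.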

Subsecond startups of Camel applications

A typical enterprise integration landscape is composed of stateless services, stateful services, clustered applications, batch jobs, file transfers, messaging, real-time integrations, and maybe even blockchain-based business processes. To that mix, today we also have to add serverless workloads, which are best suited for event-driven use cases. Historically, the heavy and slow Java runtime had significant drawbacks compared to Go, JavaScript, and other light runtimes in the serverless space. That is one of the main motivations behind Oracle's GraalVM/Substrate VM. Substrate VM is a framework that enables ahead-of-time (AOT) compilation of Java applications into native executables that are light and fast. A recent effort by Red Hat then led to the creation of the Quarkus project, which further improves the resource consumption, startup, and response times of Java applications mind-blowingly (a term not used lightly here).
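
As a minimal sketch of what AOT compilation means in practice, assuming a GraalVM installation with the native-image tool available (class and file names are illustrative):

    // Hello.java - a trivial application to compile ahead of time
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello, native world");
        }
    }

Compiling it with javac Hello.java and then native-image Hello produces a standalone executable that starts in milliseconds and needs no JVM at runtime; Quarkus builds on the same mechanism while also trimming the frameworks themselves at build time.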

Supersonic Subatomic Java with Quarkus
 
As you can see from the metrics above, Quarkus combined with Substrate VM is not a gradual evolution. It is a mutation, a revolutionary jump that suddenly changes the perspective on Java's footprint and speed for cloud-native workloads. It makes Java friendly to the serverless architecture. Considering the huge Java ecosystem of developers and libraries, it even turns Java into the best-suited language for serverless applications. And Camel, combined with Quarkus, is the best-placed integration library in this space.

Summary

With the explosion of the Microservices architecture, the number of services has increased tenfold, which gave birth to Kubernetes-enabled architectures. These architectures are highly dynamic in nature, and they are most powerful with light and fast runtimes that enable instant scale-up and higher deployment density.

Camel is the framework that fills the space between disparate systems and services. It offers data consistency guarantees, reliable communication, failover, failure detection and recovery, and so on, in a way that keeps developers productive. Now, imagine the same powerful Apache Camel-based integration in 2020 that deploys to Kubernetes in 20ms, starts up in 20ms, requires 20MB of memory, and consumes 20MB of disk... regardless of whether it runs as a stateless application in a container or as a function on Knative. That is 100x faster deployments to Kubernetes, 100x faster startup time, and 10x less resource consumption, allowing real-time scale-up, scale-down, and scale to zero. That is a change that developers will notice during development, users will notice when using the system, and the business will notice in the infrastructure cost and overall delivery velocity. That is the real cloud-native era we have been waiting for.

Getting started with blockchain for Java developers

Follow me on Twitter for other posts in this space. This post was originally published on Opensource.com under CC BY-SA 4.0. If you prefer, read the same post on Hacker Noon.

Top technology prognosticators have listed blockchain among the top 10 emerging technologies with the potential to revolutionize our world in the next decade, which makes it well worth investing your time now to learn. If you are a developer with a Java background who wants to get up to speed on blockchain technology, this article will give you the basic information you need to get started.
Blockchain is a huge space, and at first it can be overwhelming to navigate. Blockchain is different from other software technologies in that it has a parallel non-technical universe focused on speculation, scams, price volatility, trading, ICOs, cryptocurrencies, Bitcoin maximalism, game theory, human greed, etc. Here we will ignore that side of blockchain completely and look only at the technical aspects.

The theoretical minimum for blockchain

Regardless of the programming language and implementation details, there is a theoretical minimum about blockchain that you should be familiar with. Without this understanding, it is impossible to grasp the foundations and build on them. Based on my experience, the minimum two technologies that must be understood are Bitcoin and Ethereum. Both projects introduced something new to this space, and both currently have the highest market caps and the largest developer communities. Most other blockchain projects, whether public or private, permissionless or permissioned, are forks of Bitcoin or Ethereum, or build on them and address their shortcomings in some way by making certain trade-offs. Understanding these two projects is like taking networking, database theory, messaging, data structures, and two programming language classes at university. Understanding how these two blockchain technologies work will open your mind to the blockchain universe.
Tech books to start with blockchain
The two books I recommend for this purpose happen to be from the same author - Andreas M. Antonopoulos:
  • Mastering Bitcoin is the most in-depth, technical, yet still understandable and easy-to-read book I could find about Bitcoin. The tens of other books I checked on this topic were mostly philosophical and non-technical.
  • On the Ethereum side, there are many more technical books, but I liked the level of detail in Mastering Ethereum the most.
  • Building Ethereum Dapps is another book I found very thorough; it covers Ethereum development very well.

Most popular Java based blockchain projects

If you are coming from a technical background, it makes sense to build on that knowledge and see what blockchain brings to the table. In the end, blockchain is not a fully new technology, but a new combination of existing technologies with human behavior, fueled by network effects.

It is worth stating that popular technologies such as Java, .NET, and relational databases are not common in the blockchain space. This space is primarily dominated by C, Go, and Rust on the server side, and JavaScript on the client side. But if you know Java, there are a few projects and components written in Java that can be used as a leveraged entry point into the blockchain space.
Assuming you have read the above two books and want to get your hands dirty, here are a few open source blockchain projects written in Java:
Popular Java-based blockchain projects
  • Corda - this is probably the most natural starting point for a Java developer. Corda is a JVM-based project that builds on top of popular, widely used Java projects such as Apache Artemis, Hibernate, Apache Shiro, Jackson, and relational databases. It is inspired by Bitcoin but has elements of business processes, messaging, and other familiar concepts. Check out my first impressions of it as a Java developer here.
  • Pantheon - a full implementation of an Ethereum node in Java. It was specifically created to attract developers from the Java ecosystem into the blockchain world. Here is an intro and a getting-started video by its creators.
  • BitcoinJ - the most popular Java implementation of the Bitcoin protocol. If you prefer to start with Bitcoin directly, this is the Java project to explore.
  • Web3J - while Corda and Pantheon are examples of full blockchain nodes implemented in Java, Web3J is a client library written in Java. It is a very well-documented and active project that makes talking to Ethereum-compatible nodes straightforward (see the sketch after this list). I created an Apache Camel connector for it and wrote about it here.
  • Hyperledger Fabric Java SDK - one of the most popular enterprise blockchain projects is Hyperledger Fabric, and it has a full-featured Java SDK to play with.
  • FundRequest - I also want to point you to a full end-user application written in Java. While the above projects are examples of clients or nodes, FundRequest is an open source funding platform implemented on top of the Ethereum network and fully written in Java. It gives a good idea of how to implement a complete blockchain project interacting with the Ethereum network.
  • Eventum - a Java project that can help you monitor the Ethereum network and store events in Kafka. It addresses a few of the common challenges of integrating with blockchain networks, which are decentralized.
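
Here is the Web3J sketch referenced above: a minimal program that asks an Ethereum node for its latest block number, assuming the web3j core dependency is on the classpath (the node URL is illustrative; any Ethereum JSON-RPC endpoint works):

    import org.web3j.protocol.Web3j;
    import org.web3j.protocol.core.methods.response.EthBlockNumber;
    import org.web3j.protocol.http.HttpService;

    public class LatestBlock {
        public static void main(String[] args) throws Exception {
            // connect to an Ethereum JSON-RPC endpoint, e.g. a local node
            Web3j web3 = Web3j.build(new HttpService("http://localhost:8545"));
            // eth_blockNumber is one of the standard JSON-RPC calls
            EthBlockNumber block = web3.ethBlockNumber().send();
            System.out.println("Latest block: " + block.getBlockNumber());
        }
    }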
If you are still not sure where to start, I suggest you read Mastering Bitcoin; that will give you a solid foundation. If you like touching technology before reading, go to GitHub and play with one of the projects listed above. The rest will follow. The future is open and decentralized.

The next integration evolution - blockchain

Below is the conclusion of an article I wrote for TechCrunch. Check out the full article here.

Enterprise integration has multiple nuances. Integration challenges within an organization, where all systems are controlled by one entity and participants have some degree of trust in each other, are mostly addressed by modern ESBs, BPM engines, and Microservices architectures. But when it comes to multi-party B2B integration, there are additional challenges: these systems are controlled by multiple organizations, have no visibility into each other's business processes, and do not trust each other. In these scenarios, we see organizations experimenting with a new breed of blockchain-based technology that relies not only on sharing protocols and contracts, but on sharing the end-to-end business processes and state.
Integration evolution stages

And this trend is aligned with the general direction in which integration has been evolving over the years: from sharing the very minimum of protocols, to sharing and exposing more and more in the form of contracts, APIs, and now business processes. This shared integration infrastructure enables new transparent integration models, where previously private business processes are now jointly owned, agreed, built, maintained, and standardized using the open source collaboration model. This can motivate organizations to share business processes and form networks, to benefit further from joint innovation, standardization, and deeper integration in general.

Open Source and Deforestation

Follow me on Twitter for other posts in this space. If you prefer, read the same post on Medium.

Open source is like a forest

A forest is a complex ecosystem of plants, animals, microorganisms, and non-living material, all delicately balanced by nature. It requires the right geography, the right soil, the right amounts of rain and sun, and decades to build a forest.

So is open source. An open source project is a delicate ecosystem of contributors, reviewers, users, and supporting organizations, all balanced by the feeling of a community. It requires the right idea at the right time, the right group of developers, the right technology, an enormous amount of dedication and passion, and years to build a project.

Forests are home to many species; they are a source of oxygen, clean water, and clean air; they prevent floods and block winds; and they are a source of wood when used in a sustainable manner. Forests offer endless benefits to many when consumed responsibly, without being destroyed completely.

So is open source. Open source is the place where newbies learn to collaborate, communicate, and code. It is the place where the experienced innovate, standardize, and distribute cutting-edge software. It is the place where remote developers scratch their itch, get paid, and the result benefits everybody. The open source model provides the foundations of the digital infrastructure of modern human life.

Deforestation

In the late 1960s, the deforestation of the Amazon (not the company, but the rainforest) started at an enormous rate. Trees were cleared and lands were transformed. The delicately balanced rainforests were destroyed irrecoverably, leading to varying degrees of soil loss, erosion, landslides, climate change, and even changed weather patterns. And some companies captured enormous value from destroying the commons of nature, cutting the trees for wood and fuel in an unsustainable manner.

Tragedy of the commons (image from Wikipedia)

Today, open source is the latest battleground, one that could have consequences similar to deforestation. It is a battleground of business models, of small against large, of consultancy and tool producers against large cloud service providers.

On one hand, there are the small companies employing open source developers to build and maintain open source projects, becoming the de facto consultancy and tooling providers around that technology. The open source model helps attract customers for these small businesses, and they contribute their work back in return. It is a win for free riders, paying customers, and maintainers alike, and the open source projects keep getting wider adoption.

But enterprise customers don't want consultancy or tooling; they want to use the technology in a fast, scalable, reliable, and secure way, and to focus their resources on their business domain instead. And this is what cloud services offer. If that premise is true, any widespread open source technology will eventually be wrapped as a cloud service, and companies will prefer to use it there rather than run it in-house with the help of consultancy or proprietary automation tools.

That leaves the small companies benefiting from and contributing to open source out of business. As Thomas Dinsmore said on Twitter: "It's impossible to argue that software should be open AND the originators have the sole right to monetize. If you believe in the latter, the solution is commercial software. With a license key." Any company benefiting from the open source model is also accepting the risk of eventually being eaten by a large cloud service provider. These are the rules of the open source game, and they are fair.

That makes the future of these open source projects unclear. When a technology is offered as a cloud service, the smaller companies start protecting their investments by introducing additional licenses and moving away from being truly open source. We have seen that with Redis, MongoDB, and Kafka, and others will follow eventually. That also means the small companies will have less incentive to develop the open source project beyond the open-core elements, and will instead focus more effort on their non-open-source, value-adding competitive edges.

As for the cloud providers, they are not obliged to sustain open source by license, nor forced to by their business model. If a project stagnates and loses contributors and users, a cloud provider can, quicker than anybody, jump to the next popular project and offer that as a service. That is likely to shift the contributor dynamics from small and mid-range companies to individuals (as early indications around Redis suggest) or to cloud providers newly active in the open source arena. This is a new reality, and we are yet to see how open source adapts to it.

A sustainable future

The mightiest corporations of our time capture enormous value by using the commons of the open source community, wrapping the projects into services and offering them for the cost of hardware usage. While benefiting from open source unconditionally and without any expectations is perfectly fine according to the license agreements and the rules of capitalism, benefiting without contributing back a fair share, or without helping with sustainability, has the effect of deforestation on the ecosystem. In the absence of a sustainable model, the delicate balance of contributors and users can easily be broken, leading to confusion, fear, and license changes; to multi-license projects; to discouraged contributors; to cautious users; and to an irreversible change of the open source model as we all know it.

The good news is that the software industry is becoming increasingly educated about how open source works and what it takes to produce and sustain it. From the recent reactions on social media, we can see that there is no legal, but a moral, expectation for the companies that benefit most from open source to play their part in sustaining it, by being exemplary with their actions rather than only exploiting the commons that the other contributors are building and the whole society relies upon.

Today, every household has something made of wood. We can keep it that way by sustaining our forests. Every business depends on something made of open source. We can keep it that way by sustaining our open source ecosystem.
