Create Resilient Camel applications with Hystrix DSL

(This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.)
Apache Camel is a mature integration library (over 9 years old now) that implements all the patterns from the Enterprise Integration Patterns book. But Camel is not only an EIP implementation library; it is a modern framework that constantly evolves, adds new patterns, and adapts to changes in the industry. Apart from the tens of connectors added in each release, Camel also goes hand-in-hand with the new features provided by new versions of the Java language itself and other Java frameworks. With time, architectural styles such as SOA and ESB lose attraction, while new ones such as REST and Microservices become popular. To enable developers to do integrations using these new trends, Camel responds by adding new DSLs such as the REST DSL, new patterns such as the Circuit Breaker, and components such as Spring Boot. And we are nowhere near done. With technologies such as Docker containers and Kubernetes, the IT industry is moving forward even faster now, and Camel keeps evolving to ease developers' work as it always has. To get an idea of the kind of tools you would need to develop and run applications on Docker and Kubernetes, check out the Fabric8 project, and specifically tools such as the Docker Maven plugin, the Kubernetes CDI extension, the Kubernetes Java client, Arquillian tests for Kubernetes, etc. Exciting times ahead with lots of cool technologies, so let's have a look at one of them: the Camel Hystrix-based circuit breaker.

Two Circuit Breakers in Camel, which one to choose?

Two years ago, when I first read Release It! by Michael Nygard, I couldn't stop myself from implementing the Circuit Breaker pattern in Camel. Usually I drive my contributions by real customer needs, but the Circuit Breaker pattern is so elegant that I had to do it. To implement it in a non-intrusive manner, I added it as a Camel Load Balancer strategy. Here is how simple it is:
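A route using it might look like the following sketch (the endpoint URIs and MyCustomException are placeholder names):

```java
// Circuit breaker configured as a load balancer strategy (a sketch).
// After 2 MyCustomExceptions the circuit opens; 1000 ms later it
// moves to half-open and probes the target with the next request.
from("direct:start")
    .loadBalance()
        .circuitBreaker(2, 1000L, MyCustomException.class)
    .to("mock:result");
```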
The DSL above is self-describing: if the number of MyCustomExceptions thrown by the mock:result endpoint reaches the threshold number, the CircuitBreaker goes to the open state and starts rejecting all requests. After the halfOpenAfter period of 1000 ms it moves to the half-open state, and the result of the first request in this state defines its next state as closed or open. It is the simplest possible implementation of the CircuitBreaker you can imagine, but still useful.

Since then, the Microservices architecture has become more popular, and so has the Circuit Breaker pattern and its Java implementation Hystrix. At some point Raúl Kripalani started the Hystrix implementation in Camel and put all the groundwork in place, but with time it lost momentum. Then, seeing the same request again and again from different customers, I took over and pushed a Hystrix component into Camel. Judging from the community feedback, it still didn't feel as elegant as it could be. Then Claus stepped in and made Hystrix part of the Camel DSL by turning it into an EIP (rather than a component). So what does it look like to create a Hystrix-based Circuit Breaker in Camel now?
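A sketch of such a route with the Hystrix EIP (the endpoint URIs and option values here are illustrative):

```java
// Hystrix-based circuit breaker as a Camel EIP (a sketch)
from("direct:start")
    .hystrix()
        .hystrixConfiguration()
            .executionTimeoutInMilliseconds(2000)
            .circuitBreakerRequestVolumeThreshold(10)
            .circuitBreakerSleepWindowInMilliseconds(5000)
        .end()
        // the protected call
        .to("http:backend-service/orders")
    .onFallback()
        // executed when the call fails or the circuit is open
        .transform().constant("Fallback response")
    .end()
    .to("mock:result");
```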

In the example above you can see only a few of the available options for a Circuit Breaker; to see all of them, check out the official documentation and try out the example application Claus put in place.

Based on this example you may think that Hystrix is part of Camel core, but it is not. Camel core remains light and without dependencies on third party libraries. If you want to use the Hystrix-based Circuit Breaker, you need to add the camel-hystrix dependency to your project, as with any other non-core component, and make it available at runtime.

Fail Fast, Fallback, Bulkhead, Timeout and more

The Hystrix library implements more than the Circuit Breaker pattern. It also does bulkheading, request caching, timeouts, request collapsing, etc. The implementation in Camel does not support request collapsing and caching, as these are already covered by other patterns and components available in Camel: request collapsing can be done using the Aggregator EIP, and caching using cache components such as Redis, Infinispan, Hazelcast, etc. The Hystrix DSL in Camel offers around 30 configuration options supported by Hystrix, and also exposes metrics over JMX and/or REST for the Hystrix Dashboard.

As a final note, don't forget that to create a truly resilient application, you need more than Hystrix. Hystrix will do bulkheading at the thread pool level, but that is not enough if you don't apply the same principle at the process, host and physical machine levels. To create a resilient distributed system, you will also need Retry, Throttling, Timeout... and other good practices, some of which I have described in the Camel Design Patterns book.

To get some hands-on experience with the new pattern, check the example and then start defending your Camel-based Microservices with Hystrix.

Scalable Microservices through Messaging

(This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.)

Microservices are everywhere nowadays, and so is the idea of using service choreography (instead of service orchestration) for microservices interactions. In this article I describe how to set up service choreography using ActiveMQ virtual topics, which also enables scalable event based service interactions.

Service Interaction Styles

There are two main types of service interaction: synchronous and asynchronous.
With synchronous interactions, the service consumer makes a request and blocks until the operation completes and a response is received. This type of interaction is usually associated with the request/response style, and the HTTP protocol is a great example of it. (Of course, it is also possible to do request/response with asynchronous requests or event messaging by registering a callback for the result, but that is a less common use case.)
With an asynchronous interaction style, the service consumer makes a request but doesn’t wait for the operation to complete. As soon as the request is acknowledged as received, the service consumer moves on. This type of interaction allows a publish/subscribe style of service communication: instead of a service consumer invoking an operation on another service, a producer raises an event and expects interested consumers to react.
Apart from these technical considerations, there is also another aspect to consider with service interactions: coupling and responsibility.
If service A has to interact with service B, is it the responsibility of service A to invoke service B (orchestration) or is it the responsibility of service B to subscribe for the right events (choreography)?
With service orchestration, there is a central entity (service A itself in our case) which has the knowledge of the other services to be called. With the choreography approach, this responsibility is delegated to the individual services, which are responsible for subscribing to the “interesting” events.
To read more about this topic, check out Chapter 4 of the excellent Building Microservices book. For the rest of this article, we will focus on doing service choreography using messaging.

Service Orchestration Through Messaging

Service orchestration in messaging is achieved through queues. A queue implements load balancing semantics using the competing consumers pattern and makes sure that each message has only one consumer.
Let’s say there is a “Customer Service” that has to interact with “Email Service”. The easiest way to implement this is to use a queue and let “Customer Service” send a message to “Email Queue”. If the “Customer Service” has to interact with “Loyalty Point Service”, again, “Customer Service” has to send another message — this time to “Loyalty Point Queue”. With this approach, it is the responsibility of “Customer Service” to know about “Loyalty Point Service” and “Email Service”, and subsequently send the right messages to the corresponding queues. In short, the whole interaction is orchestrated by “Customer Service”.
One advantage of using queues is that they allow easy scaling of consumers. We could start multiple instances of “Loyalty Point Service” and “Email Service”, and the queues would do the load balancing among the consumers.
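Sketched as Camel routes, this orchestration might look like the following (the queue names and bean endpoints are illustrative):

```java
// "Customer Service" orchestrates: it knows each downstream queue
from("direct:customerCreated")
    .to("activemq:queue:EmailQueue")
    .to("activemq:queue:LoyaltyPointQueue");

// Each "Email Service" instance competes for messages on the queue,
// so starting more instances scales consumption automatically
from("activemq:queue:EmailQueue")
    .to("bean:emailService");
```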


Service Choreography Through Messaging

With the service choreography approach, “Customer Service” doesn’t have any knowledge of “Loyalty Point Service” or “Email Service”. “Customer Service” simply emits an event to “Customer Topic”, and it is the responsibility of “Loyalty Point Service” and “Email Service” to know about the Customer event contract and subscribe to the right topic — the publish/subscribe semantics of the topic will ensure that every event is distributed to both subscribers.

Scaling Service Choreography

Since topics implement publish/subscribe semantics rather than competing consumers, scaling the consumers becomes harder. If we (horizontally) scale “Loyalty Point Service” and start two instances, both instances of the service will receive the same event and there won’t be any benefit in scaling (unless the services are idempotent).

ActiveMQ Virtual Topics to the Rescue

So what we need is some kind of mixture between topic and queue semantics. We want the “Customer Service” to publish an event using publish/subscribe semantics so that all services receive the event, but we also want competing consumers, so that individual service instances can load balance events and scale.
There are a number of ways we could achieve that with Camel and ActiveMQ:
  • The very obvious one that comes to (my) mind is to have a simple Camel route that consumes events from “Customer Topic” and sends them to both “Loyalty Point Queue” and “Email Queue”. This is easy to implement, but every time a new service is interested in the “Customer Service” events, we have to update the Camel route. Also, if we run the Camel route in a separate process from the broker, there will be unnecessary networking overhead only to move messages from a topic to a set of queues in the same message broker.
  • An improvement to the above approach would be to have the Camel routes running in the ActiveMQ broker process using the ActiveMQ Camel plugin. In that case, you still have to update the Camel route every time there is a change to the subscribers, but the routing happens in the broker process itself, so there is no networking overhead.
  • An even better solution would be to have the queues subscribed to the topic without any coding, using a declarative approach with ActiveMQ virtual topics (hence the whole reason for writing this article).
Virtual topics are a declarative way of subscribing queues to a topic, by simply following a naming convention — all you have to do is define or use the default naming convention for your topic and queues.
For example, if we create a topic with a name matching the VirtualTopic.> expression, such as VirtualTopic.CustomerTopic, and have the “Loyalty Point Service” consume from the Consumer.LoyaltyPoint.VirtualTopic.CustomerTopic queue, the message broker will forward every event from the VirtualTopic.CustomerTopic topic to the Consumer.LoyaltyPoint.VirtualTopic.CustomerTopic queue.
Then we could scale Loyalty Point Service by starting multiple service instances, all of which consume from Consumer.LoyaltyPoint.VirtualTopic.CustomerTopic queue.
Similarly, later we can create a queue for the Email Service by following the same naming convention: Consumer.Email.VirtualTopic.CustomerTopic. This feature allows us to simply name our topics and queues in a specific way and have them subscribed without any coding.
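As a sketch, the whole choreography then comes down to the naming convention (the endpoint URIs are illustrative):

```java
// "Customer Service" publishes events, unaware of any subscribers
from("direct:customerEvent")
    .to("activemq:topic:VirtualTopic.CustomerTopic");

// "Loyalty Point Service" instances compete on their queue, which the
// broker fills from the virtual topic by naming convention alone
from("activemq:queue:Consumer.LoyaltyPoint.VirtualTopic.CustomerTopic")
    .to("bean:loyaltyPointService");

// "Email Service" subscribes the same way with its own queue
from("activemq:queue:Consumer.Email.VirtualTopic.CustomerTopic")
    .to("bean:emailService");
```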
Camel Design Patterns

Final thoughts

    This is only one of the many patterns I have described in my recently published Camel Design Patterns book. Camel is quite often used with ActiveMQ, and as such, you can also find some ActiveMQ patterns in the book.
    Another way to scale microservices using choreography is through event sourcing. You can find a nice blog post describing it here.

    Performance Tuning Ideas for Apache Camel

    Every now and then, I get questions about optimising Camel applications, with the argument that Camel is slow. Camel is just the glue connecting disparate systems; the routing engine is all in-memory, and it doesn’t require any persistent state. So in 99% of cases, performance issues are due to bottlenecks in other systems, or to an application design made without performance considerations. If that is the case, there isn’t much you can achieve by tuning Camel further, and you have to go back to the drawing board.

    But sometimes it might be worth squeezing a few more milliseconds from your Camel routes. Tuning every application is very specific and dependent on the technology and the use case. Here are some ideas for tuning Camel-based systems, which may apply for you (or not).

    Endpoint Tuning

    Endpoints in Camel are the integration points with other systems, and the way they are configured will have a huge impact on the performance of the system. Understanding how different endpoints work and tuning them should be one of the first places to start. Here are a few examples:
Messaging - if your Camel application uses messaging, the overall performance will be heavily dependent on the performance of the messaging system. There are too many factors to consider here, but the main ones are:
    • Message broker - the network and disk speed, combined with the broker topology, will shape the broker performance. To give you an idea, with ActiveMQ a relational database based persistent store will perform at around 50% of a file based store, and using a network of brokers to scale horizontally will cost another 30% of performance. It is amazing how one configuration change in ActiveMQ can have a huge impact on the messaging system and then on the Camel application. There is a must-read ActiveMQ tuning guide by Red Hat with lots of details to consider and evaluate. There is also a real life example from Christian Posta showing how to speed up the broker 25x in certain cases, and a recent article by Simon Green showing how to approach an ActiveMQ tuning adventure step by step.
    • Message client - if performance is a priority, there are also some hacks you can do on the ActiveMQ client side, such as: increasing the TCP socketBufferSize and ioBufferSize, tuning the OpenWire protocol parameters, using message compression, batching acknowledgements with optimizeAcknowledge, asynchronous sends with useAsyncSend, adjusting the prefetch limit, etc. There are some nice slides from Christian again here and an old but still very relevant video from Rob Davies about tuning ActiveMQ. All of these resources should give you enough ideas to experiment with and improve the performance from the messaging point of view.
Database writes - use batching whenever possible. You can use an aggregator to collect a number of entries before performing a batch operation to interact with the database (for example with the SQL component).
    • Working with templates - if you have to use a template component as part of the routing, try out the existing templating engines (FreeMarker, Velocity, SpringTemplates, Mustache, Chunk) with a small test and measure which one performs better. There is a great presentation titled Performance optimisation for Camel by Christian Mueller with the source code supporting the findings (UPDATE: after this blog post was published, Christian created new slides with the latest version of Camel 2.16.2 and Java 7/8, check out those too). From those measurements we can see that FreeMarker generally performs better than Velocity and SpringTemplates.
    • Using Web Services - whenever you have to use a web endpoint, the web container itself has to be tuned separately. From the Camel endpoint point of view, you can further optimise a little bit by skipping the unmarshalling if you don't need Java objects, and by using asynchronous processing.
    • concurrentConsumers - there are a number of components (Seda, VM, JMS, RabbitMQ, Disruptor, AWS-SQS, etc.) that support parallel consumption. Before using an endpoint, check the component documentation for thread pool or batch processing capabilities. To give you an idea, see how Amazon SQS processing can be improved through these options.
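The database-write batching mentioned above can be sketched as follows (the table, the completion values and the endpoint names are illustrative; GroupedBodyAggregationStrategy assumes a recent Camel version, and on older versions a small custom AggregationStrategy that collects bodies into a list does the same):

```java
// Batch individual rows before hitting the database (a sketch)
from("direct:orders")
    // complete a batch at 100 entries, or after 500 ms of inactivity
    .aggregate(constant(true), new GroupedBodyAggregationStrategy())
        .completionSize(100)
        .completionTimeout(500)
    // batch=true lets the SQL component execute one JDBC batch
    .to("sql:insert into orders (id, total) values (:#id, :#total)?batch=true");
```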

    Data Type Choice

    The type and the format of the data that is passing through Camel routes will also have performance implications. To demonstrate that, let's see a few examples.

    • Content based router, splitter and filter are examples of EIPs that perform some work based on the message content, and the type of the message affects the processing speed of these elements. Below is a chart from Christian Mueller's presentation, visualising how the Content Based Router performs with different kinds of messages:
      Content Based Routing for different data types
    For example, if you have a large XML document in the Exchange and you perform content based routing, filtering, etc. based on it, that will affect the speed of the route. Instead, you can extract some key information from the document and populate the Exchange headers for faster access and routing later.
    • Marshalling/Unmarshalling - similarly to the templating engines, different data format converters perform differently. To see some metrics, check Christian's presentation again, but also keep in mind that the performance of the supported data formats may vary between different versions and platforms, so measure it for your use case.
    • Streaming - Camel streaming and stream caching are underrated features that can be useful for dealing with large messages.
    • Claim check EIP - if the application logic allows it, consider using the Claim Check pattern to improve performance and reduce resource consumption.


    Multithreading

    Camel offers multithreading support in a number of places. Using it can improve the application performance too.

    • Parallel processing EIPs - the following Camel EIP implementations support parallel processing: multicast, recipient list, splitter, delayer, wiretap, throttler and error handler. If you are going to enable parallel processing for those, it would be even better to also provide a custom thread pool specifically tuned for your use case, rather than relying on Camel's default thread pool profile.
    • Threads DSL construct - some Camel endpoints (such as the File consumer) are single threaded by design and cannot be parallelized at the endpoint level. In the case of the File consumer, a single thread picks up a file at a time and processes it through the route until it reaches the end, and only then picks up the next file. This is where the Camel Threads construct can be useful. As visualised below, the File consumer thread can pick up a file and pass it to a thread from the Threads construct for further processing. Then the File consumer can pick up another file without waiting for the previous Exchange to complete processing fully.
      Parallel File Consuming
    • Seda component - Seda is another way to achieve parallelism in Camel. The Seda component has an in-memory queue to accumulate incoming messages from the producer, and the concurrentConsumers option to process those incoming requests in parallel with multiple threads.
    • Asynchronous Redelivery/Retry - if you are using an error handler with a redelivery policy as part of the routing process, you can configure it to be asynchronous and do the redeliveries in a separate thread. That will use a separate thread pool for the redeliveries and will not block the main request processing thread while waiting. If you need long delayed redeliveries, it might be a better approach to use ActiveMQ broker redelivery (which is different from consumer redelivery, BTW), where the redeliveries will be persisted on the message broker rather than kept in the Camel application memory. Another benefit of this mechanism is that the redeliveries will survive an application restart and also play nicely when the application is clustered. I have described different retry patterns in the Camel Design Patterns book.
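The Threads construct and the Seda option from the bullets above can be sketched like this (the pool sizes, URIs and bean names are illustrative):

```java
// Threads DSL: the single file-consumer thread hands each Exchange
// to a worker pool (10 core / 20 max threads) and immediately
// goes back to pick up the next file
from("file:target/inbox")
    .threads(10, 20)
    .to("bean:orderProcessor");

// Seda: the in-memory queue decouples the producer, and ten
// threads consume from it concurrently
from("seda:orders?concurrentConsumers=10")
    .to("bean:orderService");
```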

    Other Optimisations

    There are a few other tricks you can do to micro-tune Camel further.
    • Logging configurations - hopefully you don't have to log every message and its content in the production environment. But if you have to, consider using an asynchronous logger. On a high throughput system, one option would be to log statistics and aggregated metrics through the Camel Throughput logger, which allows logging aggregated statistics at fixed intervals or based on the number of processed messages, rather than on a per-message basis. Another option would be to use the not so popular Camel Sampler EIP and log only sample messages every now and then.
    • Disable JMX - by default, Camel JMX instrumentation is enabled, which creates a lot of MBeans. This allows monitoring and management of the Camel runtime, but also has some performance cost and requires more resources. I still remember the time when I had to fully turn off JMX in Camel in order to run it with a 512MB heap on a free AWS account. As a minimum, consider whether you need JMX enabled at all, and if so, whether to use the RoutesOnly, Default or Extended JMX profiles.
    • Message History - Camel implements the Message History EIP and runs it by default. While in a development environment it might be useful to see every endpoint a message has been to, in the production environment you might consider disabling this feature.
    • Original message - every Camel route will make a copy of the original incoming message before any modifications to it. This pristine copy of the message is kept in case it needs to be redelivered during error handling or with the onCompletion construct. If you are not using these features, you can disable creating and storing the original state of every incoming message.
    • Other customisations - almost every feature of CamelContext can be customized. For example, you can use lazyLoadTypeConverters for a faster application startup, configure the shutdownStrategy for a quicker shutdown when there are inflight messages, or use a custom UuidGenerator that performs faster, etc.
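Several of the options above are plain CamelContext setters. A sketch (method names as in Camel 2.x; the values are illustrative):

```java
DefaultCamelContext context = new DefaultCamelContext();
// skip eager type-converter loading for a faster startup
context.setLazyLoadTypeConverters(true);
// don't keep a pristine copy of every incoming message
context.setAllowUseOriginalMessage(false);
// give inflight messages at most 10 seconds on shutdown
context.getShutdownStrategy().setTimeout(10);
// cheaper UUID generation than the default generator
context.setUuidGenerator(new SimpleUuidGenerator());
// turn off JMX instrumentation entirely
context.disableJMX();
```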

    Application Design

    All of the previous tunings are micro optimizations compared to the application design and architecture. If your application is not designed for horizontal scalability and performance, sooner or later the small tuning hacks will hit their limit. The chances are that what you are doing has been done before, so instead of reinventing the wheel or coming up with some clever designs, learn from the experience of others and use well known patterns, principles and practices. Use principles from SOA and Microservices architectures, resiliency principles, messaging best practices, etc. Some of those patterns, such as Parallel Pipelines, CQRS, Load Leveling and Circuit Breaker, are covered in the Camel Design Patterns book and do help to improve the overall application design.


    JVM

    There are many articles about tuning the JVM. Here I only want to mention the JVM configuration generation application by Red Hat, which can generate JVM configurations for you based on the latest industry best practices. You can use it as long as you have a Red Hat account (which is free for developers anyway). Using the latest JVM and the latest version of Camel (with its updated dependencies) is another way to improve application performance for free.


    OS

    You can squeeze the application only so much. In order to do proper high load processing, tuning the host system is a must too. To get an idea of the various OS level options, have a look at the checklist from the Jetty project.

    In Conclusion

    This article is here just to give you some ideas and show you the extent of the possible areas to consider when you have to improve the performance of a Camel application. Instead of looking for a magical recipe or going through a checklist, make small incremental changes supported by measurements until you get to the desired state. And rather than focusing on micro optimisations and hacks, have a holistic view of the system, get the design right, and start tuning from the host system, to the JVM, CamelContext, routing elements, endpoints and the data itself.

    Using well known patterns, principles and practices with a focus on simple and scalable design is always a good start. Good luck.

    Idempotent Consumer EIP Icon

    While writing about the Idempotent Filter Pattern in my recent Camel Design Patterns book, I wanted to visualise the Idempotent Consumer EIP on the Camel routes but couldn't find an icon for it. Inspired by the existing Normalizer and Resequencer icons, I've created an Idempotent Consumer icon.

    If you look at the Normalizer pattern, it uses different shapes to emphasise that incoming messages are of different data formats. And the Resequencer has the same number of incoming and outgoing messages, but reordered.
    Similarly, for the Idempotent Consumer I've used a circle and a square to represent different incoming messages (different not in data format, but in business data). And to emphasise that it is a stateful pattern that can remember past messages, I've put the duplicate square messages apart from one another. From the icon we can then see that the pattern removes the duplicate square messages and allows only unique messages to pass on.

    A couple of days after that, I read A Decade of Enterprise Integration Patterns: A Conversation with the Authors, where Gregor said this:

    Olaf: And what would you do differently now?
    Gregor: I would make an icon for the Idempotent Receiver pattern, which describes a receiver that can process the same message multiple times without any harm. Somehow we seem to have missed that one. 

    Not sure whether the icon will one day become part of the existing EIP icon set or not, but I've found it visually helpful to depict the idempotent behaviour. You can download a .png, Visio and OmniGraffle stencil from here.

    Camel Design Patterns eBook is Out

    I've been involved with Apache Camel for many years now, and apart from the occasional contributions and blogging, I've used it in tens of projects over the years. That includes projects for large broadcasting companies, newspapers, mobile operators, oil companies, airlines, digital agencies, government organisations, you name it. One common theme across all these projects is that the development team loves Camel. Camel has always been a flexible and productive tool that gives developers the edge over changing requirements and short deadlines.

    Having seen many successful Camel projects, I try to share my experiences through blogging, but this time I decided to invest more time and create an ebook called Camel Design Patterns. It is not another Camel book documenting the framework itself and the individual Enterprise Integration Patterns, but rather a collection of SOA, Microservices, Messaging, Cloud and Resiliency patterns that I've used in Camel based solutions day by day. Its format is similar to a series of essays or blog posts, with high level examples showing different techniques and Camel tips for designing and architecting modern Camel applications.

    Table of Contents 

    • I Foundational Patterns
      • Edge Component Pattern
      • VETRO Pattern
      • CQRS Pattern
      • Canonical Data Model Pattern
      • Reusable Route Pattern (new)
      • Idempotent Filter Pattern
      • External Configuration Pattern
    • II Error Handling Patterns
    • III Deployment Patterns
      • Service Instance Pattern
      • Singleton Service Pattern
      • Parallel Pipeline Pattern
      • Load Leveling Pattern
      • Bulkhead Pattern
      • Service Consolidation Pattern
    It is a live book, and depending on interest and feedback, I plan to add more chapters and use cases in the future. The book costs around a Venti Espresso Frappuccino and for now it is available only on Leanpub. To get a feel for the content, have a look at the sample Data Integrity Pattern chapter.

    I hope you find this ebook useful, and I look forward to receiving your feedback.