
Master/Slave Failover for Camel Routes

One way to implement a master/slave failover pattern is to have a cluster of instances of an application where one instance (the master) is currently active and the other instances (the slaves) are on standby, ready to take over whenever the master fails. Some projects provide this kind of master/slave support out of the box:
Creating a failover deployment for Apache Karaf is straightforward: we start two or more Karaf instances and point them to the same lock (file system or database). The first instance to start gets the lock and becomes the master, while the other instances wait to get the lock before starting their bundles. In addition, Karaf offers hot standby functionality where some bundles are started even in the slave instances, while other bundles wait until the instance gets the lock.

Apache ActiveMQ offers a couple of ways to create master/slave configurations, but the simplest is to start two or more instances of ActiveMQ pointing to the same datasource (file or database): the first broker gets the lock and becomes the master, and the other brokers become slaves, waiting for the lock. Simple.

What about Camel? How can we have multiple routes (in one or separate containers) where one is the master (in running state) and the others are waiting to take over as soon as the master route stops, ensuring high availability at the route level? There are a couple of components providing such a capability, and all of them rely on some kind of centralized external system used as a lock.

1. The Camel Quartz component has clustering support.
- If you use quartz consumers, in clustered mode only one of the routes is triggered at a time.
- Or, if a quartz-based CronScheduledRoutePolicy is used, in clustered mode only one of the routes will be started/stopped.
Both options rely on quartz being configured with a datasource that is shared among all the routes in the cluster. This usage is not exactly master/slave, but it has the same effect in the end (a small sketch follows the list below).

2. The Camel Zookeeper component offers a RoutePolicy that can start/stop routes in master/slave fashion. The first route that gets the lock is started, while the remaining routes wait to get the lock. One advantage of this component is that it can be configured to allow more than one master running at a time (also sketched below).

3. The Camel JGroups component also has master/slave capability using JGroupsFilters (see the last sketch after this list).

4. The JBoss Fuse Master component is probably the easiest way to have a master/slave setup in a Fuse environment. Internally it relies on Zookeeper znodes, similarly to the Zookeeper component above.

5. This is not implemented yet, but in theory it is possible to implement a RoutePolicy using ActiveMQ's exclusive consumers feature, which provides a distributed lock. Do let me know if you implement this ;)

6. Use a database as a lock. Christian Schneider has demonstrated how to have "Standby failover for Apache Camel routes" using a database here.
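
To make option 1 more concrete, here is a minimal sketch of a quartz-based CronScheduledRoutePolicy in the Java DSL. The endpoints and the cron expression are made up for the example, and for the clustered behaviour described above quartz itself has to be configured with a shared (JDBC) job store; treat this as an outline rather than a drop-in configuration.

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.routepolicy.quartz.CronScheduledRoutePolicy;

    public class ClusteredCronRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Start the route at the top of every minute. With quartz backed by a
            // shared JDBC job store, only one node in the cluster fires the trigger.
            CronScheduledRoutePolicy policy = new CronScheduledRoutePolicy();
            policy.setRouteStartTime("0 * * * * ?");

            from("file:inbox")              // placeholder source endpoint
                .routePolicy(policy)
                .noAutoStartup()            // let the policy start the route
                .to("log:active-node");
        }
    }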
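
Option 2 looks roughly like the following. The ZooKeeper connection string and the znode path are placeholders; the second constructor argument is the number of instances allowed to run the route at the same time, which is how more than one master can be enabled.

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.component.zookeeper.policy.ZooKeeperRoutePolicy;

    public class ZooKeeperMasterRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // All instances compete for the same znode; only one (the master)
            // is allowed to keep this route running, the rest wait for the lock.
            ZooKeeperRoutePolicy policy =
                new ZooKeeperRoutePolicy("zookeeper:localhost:2181/camel/routepolicy", 1);

            from("timer:work?period=5000")  // placeholder source endpoint
                .routePolicy(policy)
                .log("I hold the lock, I am the master");
        }
    }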
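
And a rough sketch of option 3, following the master election pattern from the camel-jgroups documentation: a route listening to JGroups view messages drops everything except the coordinator's view and then starts the actual business route via the control bus. The cluster name and endpoints are again placeholders.

    import static org.apache.camel.component.jgroups.JGroupsFilters.dropNonCoordinatorViews;

    import org.apache.camel.builder.RouteBuilder;

    public class JGroupsMasterElection extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Only the node elected as JGroups coordinator passes the filter
            // and starts the business route below.
            from("jgroups:myCluster?enableViewMessages=true")
                .filter(dropNonCoordinatorViews())
                .to("controlbus:route?routeId=business&action=start");

            // The business route waits to be started by the election route above.
            from("timer:work?period=5000").routeId("business").noAutoStartup()
                .log("I am the master node");
        }
    }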

A Camel Demo for Amazon's Simple Workflow Service

In a previous post I explained why the AWS SWF service is good and announced the new Camel SWF component. Now the component documentation is ready, and here is a simple but fully working demo. It consists of three independent standalone Camel routes:
A workflow producer allows us to interact with a workflow. It can start a new workflow execution, query its state, send signals to a running workflow, or terminate and cancel it. In our demo, the WorkflowProducer starts a route that schedules 10 workflow executions, where each execution receives a number as an argument.
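A workflow producer route along those lines might look like the sketch below. The domain, event name, version and credentials are placeholders, and the option names used here (domainName, eventName, version, accessKey, secretKey) are assumptions for the sketch, so check them against the component documentation rather than treating this as the exact demo code.

    import java.util.concurrent.atomic.AtomicInteger;

    import org.apache.camel.builder.RouteBuilder;

    public class WorkflowProducerRoute extends RouteBuilder {
        private final AtomicInteger counter = new AtomicInteger();

        @Override
        public void configure() throws Exception {
            // Fire 10 times and start a new workflow execution for each number.
            from("timer:start?repeatCount=10")
                .process(exchange -> exchange.getIn().setBody(counter.incrementAndGet()))
                .to("swf://workflow?domainName=demo&eventName=processNumber&version=1.0"
                    + "&accessKey=YOUR_KEY&secretKey=YOUR_SECRET");
        }
    }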
Once a workflow execution is scheduled, we need a process that decides what its next steps are. In Camel this is done using a workflow consumer. A workflow consumer represents the workflow logic: when started, it polls for workflow decision tasks and processes them. In addition to processing decision tasks, a workflow consumer route also receives signals (sent from a workflow producer) and state queries. The primary purpose of a workflow consumer is to schedule activity tasks for execution using activity producers. In fact, activity tasks can be scheduled only from a thread started by a workflow consumer.
The logic in our demo decider is simple: if the incoming argument is greater than 5, we schedule a task for execution; otherwise the workflow completes, as there are no other tasks to be executed. Notice that it also has branches for handling signal and state query events.
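A decider route in that spirit might look roughly like this. The signal and state-query branches are only hinted at in a comment, and both the endpoint options and the shape of the incoming body are assumptions for the sketch, not the exact demo code.

    import org.apache.camel.builder.RouteBuilder;

    public class WorkflowConsumerRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Poll for decision tasks. A full decider also branches on the incoming
            // action header to handle signals and state queries; only the
            // "schedule an activity or complete" path is sketched here.
            from("swf://workflow?domainName=demo&eventName=processNumber&version=1.0"
                 + "&accessKey=YOUR_KEY&secretKey=YOUR_SECRET")
                .choice()
                    .when(exchange -> {
                        // Assumption: the number passed when the execution was started
                        // is available as the message body.
                        Integer number = exchange.getIn().getBody(Integer.class);
                        return number != null && number > 5;
                    })
                        // Schedule an activity task for execution (activity producer).
                        .to("swf://activity?domainName=demo&eventName=incrementNumber"
                            + "&version=1.0&accessKey=YOUR_KEY&secretKey=YOUR_SECRET")
                    .otherwise()
                        .log("Nothing to schedule, the workflow execution completes");
        }
    }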

The final piece of our distributed workflow application (distributed since it consists of three independent applications) is the ActivityConsumer, which actually performs some calculations. It has the simplest possible implementation: it increments the given argument and returns it.
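The activity consumer itself can then be as small as the following sketch, again with placeholder options and credentials, and assuming the activity input arrives as the message body.

    import org.apache.camel.builder.RouteBuilder;

    public class ActivityConsumerRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Poll for activity tasks, increment the received number and
            // return it as the activity result.
            from("swf://activity?domainName=demo&eventName=incrementNumber&version=1.0"
                 + "&accessKey=YOUR_KEY&secretKey=YOUR_SECRET")
                .process(exchange -> {
                    Integer number = exchange.getIn().getBody(Integer.class);
                    exchange.getIn().setBody(number + 1);
                });
        }
    }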
All you need to do to run this demo is to create the appropriate workflow domain and add your key/secret to the route.
