Blogroll

Cloud Native Container Design Principles

Creating a containerized application that behaves like a good cloud native citizen and can be automated effectively by a platform such as Kubernetes requires some discipline. Read on to see what that discipline entails. (Alternatively, read the same post on Medium.)

Software design principles

Principles exist in many areas of life, and they generally represent a fundamental truth or belief from which others are derived. In software, principles are rather abstract guidelines, which are supposed to be followed while designing software. There are fundamental principles for writing quality software such as KISS (Keep it simple, stupid), DRY (Don’t repeat yourself), YAGNI (You aren’t gonna need it), SoC (Separation of concerns), etc. Even if these principles do not specify concrete rules, they represent a language and common wisdom that many developers understand and refer to regularly.

There are also the SOLID principles introduced by Robert C. Martin, which represent guidelines for writing better object-oriented software. They form a framework of complementary principles that are generic and open to interpretation but still give enough direction for creating better object-oriented designs. The SOLID principles use object-oriented primitives and concepts such as classes, interfaces, and inheritance for reasoning about object-oriented designs. In a similar way, there are also principles for designing cloud native applications, in which the main primitive is the container image rather than a class. Following these principles will ensure that the resulting containers behave like good cloud native citizens, allowing them to be scheduled, scaled, and monitored in an automated fashion.

SOLID principles for cloud native applications

Cloud native applications anticipate failure; they run and scale reliably even when their infrastructure experiences outages. To offer such capabilities, cloud native platforms like Kubernetes impose a set of contracts on applications. These contracts ensure that the applications they run conform to certain constraints and allow the platform to automate application management. Nowadays, it is possible to put almost any application in a container and run it. But creating a containerized application that can be automated and orchestrated effectively by a cloud native platform such as Kubernetes requires additional effort. The principles for creating containerized applications listed here use the container as the basic primitive and container orchestration platforms as the target container runtime environment.

Principles of container-based application design
Below is a short summary of what each principle dictates. For more detail, download the freely available white paper from here (no signup required).

Build time:

  • Single Concern: Each container addresses a single concern and does it well.
  • Self-Containment: A container relies only on the presence of the Linux kernel. Additional libraries are added when the container is built.
  • Image Immutability: Containerized applications are meant to be immutable, and once built are not expected to change between different environments.

Runtime:

  • High Observability: Every container must implement all necessary APIs to help the platform observe and manage the application in the best way possible.
  • Lifecycle Conformance: A container must have a way to read events coming from the platform and conform by reacting to those events (this and the observability principle are sketched in code after this list).
  • Process Disposability: Containerized applications must be as ephemeral as possible and ready to be replaced by another container instance at any point in time.
  • Runtime Confinement: Every container must declare its resource requirements and restrict resource use to the requirements indicated.
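To make the two sketched runtime principles concrete, here is a minimal sketch using only the plain JDK (no frameworks; the port and path are illustrative assumptions, not from the white paper) of a containerized process that exposes a health endpoint for observability and reacts to the platform's stop signal:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class GoodCitizenApp {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // High Observability: a liveness endpoint the platform can probe
        server.createContext("/health", exchange -> {
            byte[] ok = "OK".getBytes();
            exchange.sendResponseHeaders(200, ok.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(ok);
            }
        });

        // Lifecycle Conformance: SIGTERM triggers JVM shutdown hooks,
        // so the process can stop cleanly and release resources
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> server.stop(0)));

        server.start();
    }
}
```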
The build time principles ensure that containers have the right granularity, consistency, and structure in place. The runtime principles dictate which capabilities containerized applications must implement in order to function as good cloud native citizens. Adhering to these principles, we are more likely to create containerized applications that are better suited for automation in cloud native platforms such as Kubernetes. Check out the white paper for more details.

The Cathedral and the Bazaar: Moving from Barter to a Currency System

This post was originally published as "How blockchain can complement open source" on Opensource.com under CC BY-SA 4.0. If you prefer, you can also read the same post on Medium.

Open Won Over Closed

The Cathedral and The Bazaar is the classic open source story, written 20 years ago by Eric Steven Raymond. In it, Eric describes a new, revolutionary software development model where complex software projects are built without (or with very little) central management. This new model is open source. Eric's story compares two models:
  • The classic model (represented by the cathedral) where software is crafted by a small group of individuals in a closed and controlled environment through slow and stable releases.
  • And the new model (represented by the bazaar) where software is crafted in an open environment where individuals can participate freely, but still produce a stable and coherent system.
Some of the reasons open source is so successful can be traced back to the founding principles described by Eric. Releasing early and releasing often, and accepting the fact that many heads are inevitably better than one, allows open source projects to tap into the world's pool of talent (and not many companies can match that using the closed source model).

Two decades after Eric's reflective analysis of the hacker community, we see open source becoming dominant. It is no longer a model only for scratching a developer's personal itch, but instead the place where innovation happens. It is the model that even the world's largest software companies are transitioning to in order to continue dominating.

A Barter System

If we look closely at how the open source model works in practice, we realize that it is a closed system, exclusive only to open source developers and techies. The only way to influence the direction of a project is by joining the open source community, understanding the written and the unwritten rules, learning how to contribute and the coding standards, etc., and doing it yourself. This is how the bazaar works, and it is where the barter system analogy comes from. A barter system is a method of exchanging services and goods for other services and goods in return. In the bazaar, where the software is built, that means that in order to take something, you also have to be a producer yourself and give something back in return, by exchanging your time and knowledge for getting something done. A bazaar is a place where open source developers interact with other open source developers and produce open source software, the open source way.

The barter system is a great step forward and an evolution from the state of self-sufficiency where everybody has to be a jack of all trades. The bazaar (the open source model) using the barter system allows people with common interests and different skills to gather, collaborate, and create something that no individual can create on their own. The barter system is simple and lacks the complex problems of modern monetary systems, but it also has some limitations, to name a few:
  • Lack of divisibility - in the absence of a common medium of exchange, a large indivisible commodity/value cannot be exchanged for a smaller commodity/value. For example, even if you want to make only a small change in an open source project, you may sometimes still face a high entry barrier.
  • Storing value - if a project is important to your company, you may want to make a large investment/commitment in it. But since it is a barter system among open source developers, the only way to have a strong say is by employing many open source committers, and that is not always possible.
  • Transferring value - if you have invested in a project (trained employees, hired open source developers) and want to shift focus to another project, it is not possible to transfer expertise, reputation, and influence quickly.
  • Temporal decoupling - the barter system does not provide a good mechanism for deferred or advance commitments. In the open source world, that means a user cannot express their commitment or interest in a project in a measurable way in advance, or continuously for future periods.
We will see below what the back door to the bazaar is and how to address these limitations.

A Currency System

People hang around the bazaar for different reasons: some are there to learn, some are there to scratch a developer's personal itch, and some work for large software firms. And since the only way to have a say in the bazaar is by becoming part of the open source community and joining the barter system, many large software companies, in order to gain credibility in the open source world, employ these developers and pay them in monetary value. The latter represents the use of a currency system to influence the bazaar. Open source is no longer only for scratching the personal developer itch. It also accounts for a significant part of the overall software production worldwide, and there are many who want to have an influence.

Open source sets the guiding principles through which developers interact and build a coherent system in a distributed way. It dictates how a project is governed, how software is built, and how the output is distributed to users. It is an open consensus model for decentralized entities building quality software together. But the open source model does not cover how open source is subsidized. Whether it is sponsored, directly or indirectly, through intrinsic or extrinsic motivators is irrelevant to the bazaar.
Centralized and decentralized ecosystems supporting open source

Currently, there is no equivalent of the decentralized open source development model for subsidization purposes. The majority of open source subsidization is centralized and typically monopolized by one company, which dominates a project by employing the majority of that project's open source developers. And to be honest, this is currently the best case scenario, as it guarantees that the developers will be employed and the project will continue flourishing. A company working on its paid software or services (whether that is SaaS or something else) subsidizes open source development indirectly.

There are also exceptions to the project monopoly scenario: for example, some of the Cloud Native Computing Foundation projects are developed by a large number of competing companies. Also, the Apache Software Foundation aims for its projects not to be dominated by a single vendor by encouraging diverse contributors, but in reality most of the popular projects are still single vendor projects.

What we are still missing is an open and decentralized model that works like the bazaar, without central coordination and ownership, where consumers (open source users) and producers (open source developers) interact with each other, driven by market forces and open source value. In order to complement open source, such a model also has to be open and decentralized, and this is why blockchain technology would fit here best. This would create a complementary ecosystem that channels subsidies from users to developers, but without a centralized and monopolized entity (such as an open source company). There are already successful open source cryptocurrency projects, such as Decred, Dash, Monero, and Zcash, that use a similar decentralized funding model where a portion of the mining or donation subsidy goes to their own development.

Most of the existing platforms (blockchain or non-blockchain) that aim to subsidize open source development primarily target bug bounties and small, piecemeal tasks. There are also a few focused on funding new open source projects. But there are not many that aim to provide mechanisms for sustaining the continued development of open source projects. Basically, we lack a system that would emulate the behavior of an open source service provider company, or of an open core or open source based SaaS product company: ensuring developers get continued and predictable incentives, and guiding project development based on the priorities of the incentivizers, i.e. the users. Such a model would address the limitations of the barter system listed above:
  • Divisibility - if you want something small fixed, you can pay a small amount without paying the full premium of becoming an open source developer for a project.
  • Storing value - you can invest a large amount in a project to ensure its continued development and make sure your voice is heard.
  • Transferring value - at any point, you can stop investing in a project and move funds into other projects.
  • Temporal decoupling - regular recurring payments and subscriptions become possible.
There would also be other benefits arising purely from the fact that such a blockchain based system is transparent and decentralized: quantifying a project's value and usefulness based on its users' commitment, decentralized roadmap governance, decentralized decision making, etc. While there will still be users who prefer the more centrally managed software, there will be others who prefer the more transparent and decentralized way of influencing projects. There is enough room for all parties.

Conclusion

On the one hand, we see large companies hiring open source developers and acquiring open source startups and even foundational platforms (such as Microsoft buying GitHub). Many, if not most, long-running successful open source projects are centralized around a single vendor. The significance of open source, and its centralization, is a fact.

On the other hand, the challenges around sustaining open source software are becoming more apparent, and many are investigating this space and its foundational issues more deeply. There are a few projects with high visibility and a large number of contributors, but there are also many other important projects without enough contributors and maintainers.

There are many efforts trying to address the challenges of open source through blockchain. These projects should improve transparency, decentralization, and subsidization, and establish a direct link between open source users and developers. This space is still very young but progressing fast, and with time, the bazaar is going to have a cryptocurrency system.

Given enough time, and adequate technology, decentralization is happening at many levels:
  • The Internet is a decentralized medium that unlocked the world's potential for sharing and acquiring knowledge.
  • Open source is a decentralized collaboration model that unlocked the world's potential for innovation.
  • And similarly, blockchain can complement open source and become the decentralized open source subsidization model.
Follow me on Twitter for other posts in this space.

The Rise of Non-Microservices Architectures

(This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.)

This is a short summary of my recent experiences with customers implementing architectures similar to microservices but with different characteristics in the post-microservices world.

The microservices architectural style has been around for close to five years now, and much has been said and written about it. Today, I see teams deciding not to strictly follow certain principles of the "pure" microservices architecture and breaking some of the "rules". Teams are now more informed about the pros and cons of microservices; they make context-driven decisions, respecting team experience and organizational boundaries, and accepting that not every company is Netflix. Below are some examples I have seen in my recent microservices gigs.

No premium in advance

Teams (composed of devs, ops, testers, business analysts, architects, etc.) are becoming more and more aware of the premium they have to pay for the privilege of going with a pure microservices based architecture. A typical Java based microservice running on Kubernetes (the most popular microservices platform) will require: a git repository, a maven module, a collection of tests (unit, integration, acceptance), APIs, maven artifacts, container images, configurations, secure configurations, build pipelines, design, documentation, etc. At runtime, it will require CPU, memory, disk, networking, metrics aggregation, log aggregation, a database, endpoints, a service mesh proxy sidecar, etc. Also a collection of Kubernetes objects: container, volume, config map, secret, pod, service, replica set, deployment, etc. Navigating and managing tens or hundreds of these artifacts puts a serious burden on everybody in a team. No surprise that ThoughtWorks recently announced they do not intend to move the microservices architecture into the adopt ring of their technology radar in the foreseeable future.

Raison d'ĂȘtre

Considering there is a price per service (not counting all the hidden premiums), rather than following the original advice of "start small, start with a few hundred lines of code", systems now start as one service and one mono repository, as long as it belongs to one team. Then, for every service, there must be a clearly identified reason, with benefits justifying its existence as an independent microservice. That is a mandatory existence check before carving out a service as a standalone one. Speaking of reasons, below are a few valid ones.

Breaking the bounded context

While the most talked-about method for decomposing into microservices is decomposition by bounded context, in practice there are many more reasons for creating microservices: decomposition by maturity, decomposition by data-access pattern (read vs write), decomposition by data source (rather than partitioning a data source per microservice, create a microservice per data source), aggregation for derived functionality (create an orchestrating service for a few other services), aggregation for client convenience (such as the backend for frontend pattern), aggregation to aid system performance, etc.

Shared data sources

One of the fundamental principles of microservices is that every service has a separate data store. While in theory this principle makes perfect sense, in practice, for brownfield projects, it is the hardest part of microservices and not always worth the effort. That is especially true for integration projects, where the data source is typically owned by a different team or company and cannot be partitioned to start with. It is still possible to benefit from having independent services share the same data store, by acknowledging the future constraints caused by the data source level coupling.

Less inflated expectations

The good news is that teams are now making more informed decisions rather than blindly trusting microservices conference slides. In terms of Gartner's hype cycle, that means that after a couple of years of "Inflated Expectations", the microservices architecture is heading (down) towards the "Trough of Disillusionment" stage, where expectations are more aligned with the real benefits.
Hype cycle
From here on, the future is full of enlightenment and productivity. Unless another cycle (such as serverless) starts before we reap the benefits of this one.

Mutated microservices

Microservices favor event-driven interactions and choreography over orchestration to decrease service coupling. But at the same time, we have seen projects like Cadence by Uber and Conductor by Netflix created specifically to orchestrate distributed long-running services as an alternative to the choreography approach.

Bernd Ruecker has done a very good review of using events, orchestration, and workflow engines in distributed systems, analyzing their real versus perceived benefits.

In a different post, titled Microservices in a Post-Kubernetes Era, I also described the changes in the microservices architectural style driven purely by Kubernetes and the cloud native primitives.

Others have also written about non-microservices architectures, such as self-contained systems, miniservices, and "goodbye microservices", that are better alternatives to pure microservices in certain contexts. These are all good reasons for breaking away from pure microservices principles whenever the context requires it.

Conclusion

The best architecture is the context-driven architecture, where you take a well-understood architecture and adapt it to your needs. You question every principle and every rule, and you are not afraid of breaking away from some of the prescriptive elements as long as you understand and accept the consequences. A good analogy is "The map is not the territory". If the architecture is the map, the context is the territory. Let me know which microservices rules you are breaking, and whether that works even better for you. Be brave.

From Agile to Serverless

If you prefer, read the same post on Medium

Looking back. And forth.

The microservices architecture was born as a technological answer to the iterative Agile development methodology. In the early days of microservices, many companies were doing a form of Agile (XP, Scrum, Kanban, or a broken mixture of these) during development, but the existing software architectures didn't allow an incremental way of designing and deploying. As a result, features were developed in fortnightly iterations but deployed every six to twelve months. Microservices came as a panacea at the right time, promising to address all these challenges. Architects and developers strangled the monoliths into tens of services, which enabled them to touch and change different parts of the system without breaking the rest (in theory).

Microservices on their own shed light on the existing challenges of distributed systems and created new ones as well. Creating tens of new services didn't mean they were ready to deploy into production and use. The process of releasing them and handing them over to Ops teams had to be improved. While some fell for the extreme of "You Build It, You Run It", others joined the DevOps movement. DevOps meant better CI/CD pipelines, better interaction between Devs and Ops, and everything that entails. But a practice without the enabling tools wasn't a leap, and burning a large VM per service didn't last very long. That led to containers with the Docker format, which took over the IT industry overnight. Containers came as a technical solution to the pain of the microservices architecture and the DevOps practice. With containers, applications could be packaged and run in a format that both Devs and Ops could understand and use. Even in the very early days, it was clear that managing tens or hundreds of containers would require automation, and Kubernetes came from the heavens and swept away all the competition in one swing.

Now, we do Agile development and use microservices architectures. We have one-click build and deployment pipelines with a unified application format. We practice DevOps, manage hundreds of microservices at scale, and do blue-green and canary releases with Kubernetes. This was supposed to be the era of long-lasting technological peace where we could focus on business problems and solve them. But it was not.

From Agile to Serverless and Beyond

With Kubernetes, Devs can develop and Ops can deploy in a self-service manner. But there are still only one or two company-wide Kubernetes clusters, and they occasionally run out of resources. The cluster is now the center of the universe. And it is even more critical than the monolith was before, as it runs the build server, the git server, the maven repository, the website, MongoDB, and a few other things. So a promise for a serverless platform came from the clouds. It came even before containers reached production. With serverless, every service would have its own cluster, a cluster where resources are limited only by the limit of your credit card. It is a platform to which you couple tightly, betting the existence of your company for the price of going faster. So it started all over again.

How Blockchain will Influence Open Source

This post was originally published on Opensource.com under CC BY-SA 4.0. If you prefer, read the same post on Medium.

Interactions between users and developers enabled by blockchain technology can create self-sustaining, decentralized open source.

What Satoshi Nakamoto started as Bitcoin a decade ago has found a lot of followers and turned into a movement for decentralization. For some, blockchain technology is a religion that will have the same impact on humanity as the Internet had. For others, it is just another hype and a technology suitable only for Ponzi schemes. While blockchain is still evolving and trying to find its place, one thing is for sure: it is a disruptive technology that will fundamentally transform certain industries. And my bet is, open source will be one of them.

The Open Source Model

Open source is a collaborative software development and distribution model that allows people with common interests to gather and produce something that no individual can create on their own. It allows the creation of value that is bigger than the sum of its parts. It is enabled by distributed collaboration tools (IRC, email, git, wiki, issue trackers, etc), distributed and protected by an open source licensing model and often governed by software foundations (such as Apache Software Foundation (ASF), Cloud Native Computing Foundation (CNCF), etc.).

One interesting aspect of the open source model is the lack of financial incentives at its core. There are some who believe open source work should stay detached from money and remain a free and voluntary activity driven by intrinsic motivators only (such as "common purpose" and "for the greater good"). And there are others who believe open source work should be rewarded directly or indirectly through extrinsic motivators (such as financial incentives). While the idea of open source projects prospering only through voluntary contributions is a very romantic one, in reality the majority of open source contributions are done through paid development. Yes, we have a lot of voluntary contributions, but those are on a temporary basis from contributors who come and go, or for exceptionally popular projects while they are at their peak. Creating and sustaining open source projects that are useful for enterprises requires developing, documenting, testing, and bug fixing for prolonged periods, even when the software is no longer shiny and exciting. It is a boring activity that can be best motivated through financial incentives.

Commercial Open Source

Software foundations such as the ASF accept and survive on donations and other income streams such as sponsorships, conference fees, etc. But those funds are primarily used to run the foundations: to ensure there is legal protection for the projects, and to ensure there are enough servers to run builds, issue trackers, mailing lists, etc.
In a similar manner, the CNCF has member fees (and other similar income streams) used to run the foundation and provide resources for the projects. Nowadays, most software is not built on laptops; it is built and tested on hundreds of machines in the cloud, and that requires money. Creating marketing campaigns, brand designs, distributing stickers, etc. all takes money, and some foundations can assist with that as well. At their core, foundations implement the right processes for interacting with users and developers, and the control mechanisms for distributing the available financial resources to the open source projects for the common good.

If users of the open source projects can donate money and the foundations can distribute it in a fair way, what is missing then? What is missing is a direct, transparent, trusted, decentralized, automated, bidirectional link for the transfer of value between open source producers and open source consumers. Currently, the link is either:

  • Unidirectional: a developer (BTW, I say a developer, but think of it as any role involved in the production, maintenance, and distribution of software) can use their brain juice and devote time to make a contribution and share that value with all open source users. But there is no reverse link.
  • Or indirect: if there is a bug that affects a specific user/company, the options are:
      • To have in-house developers fix the bug and submit a pull request. That is ideal, but it is not always possible to hire in-house developers knowledgeable about the hundreds of open source projects used daily.
      • To hire a freelancer specializing in that specific open source project and pay for the services. Ideally, the freelancer is also a committer on the open source project and can change the project code directly and quickly. Otherwise, the fix might never make it into the project.
      • Or to approach a company providing services around the open source project. Such companies typically employ open source committers to influence and gain credibility in the community, and offer products, expertise, professional services, etc.
And the last one has been a successful model for sustaining many open source projects. Whether it is through services (training, consulting, workshops), support, packaging, open core, or SaaS, there are companies employing hundreds of staff to work on open source full time. There is a long list of companies that have managed to build a successful open source business model over the years, and the list is growing steadily.

The companies backing open source projects have a very important role to play in the ecosystem. They are the catalyst between the open source projects and the users. The ones that add real value do not only package software nicely; they do much more. They can identify user needs and technology trends, and create a full stack and even an ecosystem of open source projects to address those needs. They can take a boring project and support it for years. If there is a missing piece in the stack, they can start an open source project from scratch and build a community around it. They can acquire a closed source software company and open source its projects. Here I got a little bit carried away, but yes, I'm talking about my employer, Red Hat, and what we do among other things.

To summarise, with the commercial open source model, projects are officially or unofficially managed and controlled by very few individuals or companies that monetise them and also give back to the ecosystem by ensuring the projects are successful. It is a win-win-win for open source developers, managing companies, and end users. The alternative is inactive projects and expensive closed source software.

Self-sustaining, Decentralized Open Source

In order for a project to become part of a reputable foundation, it has to conform to certain criteria. For example, the ASF and the CNCF have incubation and graduation processes, respectively, where apart from all the technical and formal requirements, a project must have a healthy number of active committers and users. And that is the essence of forming a sustainable open source project. Having source code on GitHub is not the same as having an active open source project. The latter requires committers who write the code and users who use it, with both groups continuously reinforcing each other by exchanging value and forming an ecosystem where everybody benefits. Some project ecosystems might be tiny and short lived, and some may consist of multiple projects and competing service providers, with very complex interactions lasting for many years. But as long as there is an exchange of value and everybody benefits from it, the project is developed, maintained, and sustained.

If you look at the ASF Attic, you will find projects that have reached their end of life. Usually, that is the natural end of a project when it is no longer technologically fit for purpose. Similarly, in the ASF Incubator you can find tens of projects that never graduated but got retired instead. Typically, these are projects that were not able to build a large enough community because they are very specialized or there are better alternatives available. But there are also cases where projects with high potential and superior technology cannot sustain themselves because they cannot form or maintain a functioning ecosystem for the exchange of value. The open source model and the foundations do not provide a framework and mechanisms for assisting developers in getting paid for their work, or users in getting their requests heard. There isn't a common value commitment framework for either party. As a result, some projects can sustain themselves only in the context of commercial open source, where a company acts as an intermediary and value adder between developers and users. That imposes another constraint: the necessity of a service provider company for sustaining some open source projects. Ideally, users should be able to express their interest in a project, and developers should be able to show their commitment to the project, in a transparent and measurable way, forming a community with a common interest and intent for the exchange of value.

Now, let's imagine there is a model, mechanisms, and tools that enable direct interaction between open source users and developers. Not only code contributions through pull requests, questions over the mailing lists, GitHub stars, and stickers on laptops, but also other ways that allow users to influence project destinies through richer, self-controlled, and transparent means. That could include incentives for actions such as the following:

  • Funding open source projects directly (rather than through software foundations)
  • Influencing the direction of projects through voting (by token holders)
  • Feature requests driven by user needs
  • On-time pull request merges
  • Bounties for bug hunts
  • Better test coverage incentives
  • Up-to-date documentation rewards
  • Long-term support guarantees
  • Timely security fixes
  • Expert assistance, support and services
  • Budget for evangelism and promotion of the projects
  • Budget for regular boring activities
  • Fast email and chat assistance
  • Full visibility of the overall project funding, etc.
If you haven't guessed already, I'm talking about using blockchain and smart contracts to allow such interactions between users and developers: smart contracts that put power in the hands of token holders to influence projects.

The usage of blockchain in the open source ecosystem.
The existing channels in the open source ecosystem provide ways for users to influence projects through financial commitments to service providers or through other limited means via the foundations. But adding blockchain based technology to the open source ecosystem could open new channels for interaction between users and developers. I'm not saying this will replace the commercial open source model. Most companies working with open source do much more and cannot be replaced by smart contracts. But smart contracts can spark a new way of bootstrapping new open source projects. A new way to give a second life to commodity projects that are a burden to maintain. A new way to motivate developers to apply boring pull requests, write documentation, get tests to pass, etc. A direct value exchange channel between users and open source developers. New channels for open source projects to grow and be self-sustaining in the long term, even when backing by a company is not feasible. A new complementary model for self-sustaining open source projects. A win-win.

Tokenizing Open Source

There are already a number of initiatives aiming to tokenize open source. Some are focused only on the open source model, and some are generic but apply to open source development as well. Here is the list I have built so far:
  • Gitcoin - grow open source, one of the most promising ones in this area.
  • Oscoin - cryptocurrency for open source
  • Open Collective - platform for supporting open source projects.
  • Fund Yourself
  • Kauri - support for open source project documentation.
  • Liberapay - recurrent donations platform.
  • FundRequest - a decentralized marketplace for open source collaboration.
  • CanYa - which recently acquired Bountysource, the world's largest open source P2P bounty platform.
  • OpenGift - a new model for open source monetization.
  • Hacken - a white hat token for hackers.
  • Coinlancer - decentralized job market.
  • CodeFund - open source ad platform.
  • IssueHunt - funding platform for open source maintainers and contributors.
  • District0x 1Hive - crowdfunding and curation platform.
  • District0x Fixit - GitHub bug bounties.
The list above is varied and growing rapidly. Some of these projects will disappear, some will pivot, but a few will emerge as the SourceForge, the ASF, the GitHub of the future. Not necessarily replacing those platforms, but complementing them with token models and creating a richer open source ecosystem. So every project can pick its distribution model (license), governing model (foundation), and incentive model (token). In all cases, this will pump fresh blood into the open source world.

The Future is Open and Decentralised

  • Software is eating the world.
  • Every company is a software company.
  • Open source is where innovation happens.
Given this, it is clear that open source is too big to fail, too important to be controlled by a few or left to its own destiny. Open source is a shared-resource system that has value to all, and more importantly, it must be managed as such. It is only a matter of time until every company on earth will want to have a stake and a say in the open source world. Unfortunately, we don't have the tools and the habits to do it yet. Such tools would allow anybody to show their appreciation or ignorance of software projects. They would create a direct and faster feedback loop between producers and consumers, between developers and users. They would foster innovation, innovation driven by user needs and expressed through token metrics.

Enterprise Integration for Ethereum

If you prefer, read the same post on Medium.
Apache Camel, the most popular open source Java integration library, now supports Ethereum's JSON-RPC API.

The Ethereum Ecosystem

Ethereum is an open source, public blockchain platform for running smart contracts. It provides a decentralized, Turing-complete virtual machine that can execute scripts, and a cryptocurrency used to compensate participating mining nodes for computations performed or to mitigate spam. Today, Ethereum is one of the most established and mature blockchain platforms, with interest from small and large companies, nonprofit organizations, and governments. There is a lot that can be said about the Ethereum ecosystem and the pace at which it moves. But the facts speak for themselves: Ethereum has the momentum and all the indications of a technology with potential:
  • Ethereum has an order of magnitude more active developers than any other blockchain platform, and as dictated by Metcalfe's law, this gap widens day by day. The Ethereum coding school CryptoZombies has over 200K users, and the Truffle development framework has over half a million downloads.
  • The cloud platforms Amazon Web Services and Microsoft Azure offer services for one-click Ethereum infrastructure deployment and management.
  • The Ethereum technology has the interest of enterprise software companies. Customized Ethereum-based applications are being developed and experimented with by financial institutions such as JPMorgan Chase, Deloitte, R3, Innovate UK, Barclays, UBS, Credit Suisse, and many others. One of the best known in this area is Quorum, the permissioned Ethereum blockchain developed by JPMorgan Chase.
  • In 2017, the Enterprise Ethereum Alliance (EEA) was set up by various blockchain start-ups, Fortune 500 companies, research groups, and others with the aim of helping the adoption of Ethereum based technology. It provides standards and resources for businesses to learn about Ethereum and leverage this groundbreaking technology to address specific industry use cases.
Ethereum has passed the moment when it was a hipster technology or a scientific experiment; now it is a fundamental open source decentralization technology that enterprise companies are looking into. Talking about open source and the enterprise, I thought I'd make my own tiny contribution to the Ethereum ecosystem and help its adoption. Let's see what it is.

Open Source Enterprise Integration

Ethereum is distributed and decentralized, but it is mostly a closed system with its embedded ledger, currency, and executing nodes. In order to be useful for the enterprise, Ethereum has to be well integrated with existing legacy and new systems. Luckily, Ethereum offers a robust and lightweight JSON-RPC API with good support for the JavaScript language. But in enterprise companies, JavaScript is not the primary choice for integration; it is rather Java, followed by .NET. Java is not necessarily lightweight or fast evolving, but it has a huge developer community and a mature library ecosystem, making it the top choice for the majority of enterprise companies. The main factor contributing to the productivity of the Java language is the reuse of existing libraries and avoiding reinventing the wheel. One of the most popular libraries enabling such reuse for integration is Apache Camel. Luckily, Camel happens to be my passion and a project I have been contributing to for many years, so connecting the two was the most natural thing for me to do.
Building blocks of Apache Camel
For those who are coming from a blockchain background and are not familiar with Camel, here is a very brief intro. Apache Camel is a lightweight open source integration library that is composed conceptually of three parts:
  • Implementations of the widely used Enterprise Integration Patterns (EIPs). (Note that these are not Ethereum Improvement Proposals, which share the same acronym.) EIPs provide a common notation, language, and definition of the concepts in the enterprise integration space (think of publish-subscribe, dead letter channel, content-based router, filter, splitter, aggregator, throttler, retry, circuit breaker, etc.). Some of these patterns have been around for over a decade, and some are new, but they are well known by anyone doing messaging and distributed system integration for a living.
  • The second major part of Apache Camel is its huge connectors library. Basically, as long as there is a Java library for a protocol, system endpoint, or SaaS API, most likely there is a Camel connector for it (think of HTTP, JMS, SOAP, REST, AWS SQS, DropBox, Twitter, and now Ethereum). Connectors abstract away the complexity of configuring the different libraries and provide a unified, URI based approach for connecting to all kinds of systems.
  • And the last piece of Apache Camel is the Domain Specific Language (DSL) that wires connectors and EIPs together in a higher level, integration focused language. The DSL, combined with the connectors and patterns, makes developers highly productive in connecting systems and creates solutions that are industry standard and easier to maintain for long periods. All of these are characteristics that are important for enterprise companies looking to create modern solutions based on mature technology.
Companies are more integrated than ever, and so are the systems within them. And if you are a Java shop, most likely there is already some Apache Camel based integration in use somewhere in the organization. Now you can use Camel and all the capabilities it provides to talk to Ethereum too. The sketch below shows the three building blocks working together.
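A minimal sketch using the Java DSL (the endpoint URIs and queue names are illustrative assumptions, and the JMS component would need to be configured against a real broker): a file connector feeds a content-based router EIP, which publishes to JMS connectors.

```java
import org.apache.camel.builder.RouteBuilder;

public class OrderRoutingRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:orders/inbox")                         // connector: file endpoint
            .choice()                                     // EIP: content-based router
                .when(simple("${body} contains 'gold'"))  // inspect the payload
                    .to("jms:queue:gold-orders")          // connector: JMS endpoint
                .otherwise()
                    .to("jms:queue:standard-orders");
    }
}
```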

Apache Camel Connector for Ethereum

The natural intersection of the two technologies is a Camel connector for Ethereum. Such a connector would allow integrating Ethereum with any other system, interaction style, and protocol. For that purpose, I evaluated the existing Java libraries for Ethereum and came to the conclusion that web3j is the right fit for this use case. Web3j is an actively developed, feature rich Java library for interacting with Ethereum compatible nodes over JSON-RPC. Camel-web3j (the technical name for the Camel Ethereum connector) is a thin wrapper that gives an easy way to use the capabilities offered by web3j from the Apache Camel DSL. Currently, the connector offers operations for querying the Ethereum network and subscribing to its events, mirroring the web3j API.
The full power of this integration comes not from the connector features themselves, but when the connector is used together with the other connectors, the patterns, and all the other Camel capabilities to provide a complete integration framework around Ethereum.

Ethereum compatible JSON-RPC APIs
Next, I'm going to focus on adding support for Parity's Personal and Geth's Personal client APIs, Ethereum wallet support, and more. The aim is to keep the component up to date with the web3j capabilities that are useful in system-to-system integration scenarios with Apache Camel. The connector is included in Apache Camel 2.22 and ready for early adopters to give it a try and provide feedback. To get started, have a look at the unit tests to discover how each operation is configured, and at the integration tests to see how to connect to Ganache or the Ethereum mainnet. Enjoy.

Use Cases for Apache Camel

Below is the Enterprise Ethereum Architecture Stack (EEAS), which represents a conceptual framework of the common layers and components of an Enterprise Ethereum (EE) application, according to the client specification v1.0.

Enterprise Ethereum Architecture Stack
If you wonder where exactly Camel fits here: camel-web3j is part of the tooling layer, as an integration library with a focus on system-to-system integration. It uses the public Ethereum JSON-RPC API, which any Enterprise Ethereum compatible implementation must support and keep backward compatible.
Then, Camel would primarily be used to interact with services that are external to Ethereum but trusted by the smart contracts (so-called oracles). In a similar manner, Camel can be used to interact with enterprise management systems to send alerts and metrics, report faults, change configurations, etc.

The main use cases I can think of for this connector are:
  • Listening for new blocks and events happening in the Ethereum network; filtering, transforming, and enriching them; and publishing them into other systems. For example, listening for new blocks, retrieving their transactions, filtering out uninteresting ones, enriching others, and processing the rest. That can be done using the Ethereum node's filter capabilities, or purely with Camel, using polling consumers to query a node periodically and idempotent filters to avoid processing the same block twice.
  • The other use case is listening for events and commands coming from an enterprise system (maybe a step in a business process) and then telling the Ethereum network about them. For example, a KYC check is approved or a payment is received in one system, which causes Camel to talk to a second system, retrieve the user's ERC20 address, and perform an Ethereum transaction.
Real world uses of Camel would involve a more complex mixture of the above scenarios, ensuring high availability, resilience, replay, auditing, etc., all things Camel is really good at. A rough sketch of the first use case follows.
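This is a sketch only, with assumptions: the camel-web3j endpoint options shown (the operation name, the node URL) follow the component's URI scheme from memory rather than the post, and the JMS endpoint stands in for any downstream system.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.MemoryIdempotentRepository;

public class NewBlockRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll the node every 15 seconds for the latest block
        from("timer:new-blocks?period=15000")
            .to("web3j://http://localhost:8545?operation=ETH_GET_BLOCK_BY_NUMBER")
            // EIP: idempotent consumer keyed on the block hash, so previously
            // processed blocks are filtered out
            .idempotentConsumer(
                simple("${body.hash}"),
                MemoryIdempotentRepository.memoryIdempotentRepository(200))
            // Enrich/transform as needed, then publish into other systems
            .to("jms:queue:ethereum-blocks");
    }
}
```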

Ethereum Oracle Implemented in Apache Camel

    "Talk is cheap. Show me the code." - Linus Torvalds

On many occasions, smart contracts need information from the real world to operate. An oracle is, simply put, a smart contract that is able to interact with the outside world. To demonstrate the usage of camel-web3j, I created a Camel route that represents an oracle. The route listens for CallbackGetBTCCap events on a specific topic, and when such an event is received, the Camel route generates a random value and passes it to the same contract by calling the setBTCCap method. That is basically a "Hello world!" the Ethereum way.
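A condensed sketch of the idea (not the actual route from the repository; the endpoint options, the contract address, and the way the outgoing contract call is encoded are all illustrative assumptions):

```java
import java.util.Random;
import org.apache.camel.builder.RouteBuilder;

public class CamelOracleRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Subscribe to logs (CallbackGetBTCCap events) emitted by the contract;
        // the address and option names are placeholders
        from("web3j://http://localhost:8545?operation=ETH_LOG_OBSERVABLE"
                + "&addresses=0x0000000000000000000000000000000000000000")
            // Produce the value the contract asked for (random, for the demo)
            .process(exchange ->
                exchange.getIn().setBody(new Random().nextInt(1_000_000)))
            // Send a transaction back to the same contract; the real route
            // must ABI-encode the setBTCCap(uint) call into the payload
            .to("web3j://http://localhost:8545?operation=TRANSACTION"
                + "&toAddress=0x0000000000000000000000000000000000000000");
    }
}
```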

To trigger the event, you can call the updateBTCCap method on the smart contract using the following unit test:
mvn test -Dtest=CamelOracleRouteTest#updateBTCCap
To check the current price in the contract, you can call the getBTCCap method on the smart contract using the following unit test:
mvn test -Dtest=CamelOracleRouteTest#getBTCCap
Check the full instructions, the smart contract, and the Camel routes on GitHub and try it for yourself. If you use the component and have questions or feedback, or if you like this and are interested in implementing a Camel connector for other blockchain projects, reach out. Take care. ETH tips jar: 0x6fc1bF6A69B92C444772aCE4CB040705Afd255bD

Cryptocurrencies with Bus Factor of One

The Bus Factor

As defined by Wikipedia, the bus factor is a measurement of the risk resulting from capabilities not being shared among members of an endeavour, "in case they get hit by a bus". When looking at cryptocurrencies and trying to predict which ones will grow in value and survive the test of time, one risk factor to keep in mind is the bus factor. Let's look at some of the most popular cryptocurrencies and their bus factors.

Bitcoin Bus Factor

Let's imagine that the person on the Bus Factor Wikipedia page is the real Satoshi Nakamoto and he is about to be hit by a bus. If such a terrible event happened, since nobody knows who Satoshi is, and more importantly, since he is no longer actively involved with the Bitcoin project, it would not affect Bitcoin in the slightest.

Bus Factor by Wikipedia

Unless Satoshi has shared the private keys for his 980,000 Bitcoins with his grandchildren, the coins would remain locked forever, but Bitcoin would still thrive until the end of time. Since we cannot name one person as the face of Bitcoin, its bus factor is determined by the core team members and the supporting community as a whole. The risk is distributed and on a much smaller scale.

Ethereum Bus Factor

Now let's imagine the Ethereum founder Vitalik Buterin is declared dead. That actually happened in the past, when Ethereum instantly lost $4 billion in market value. It forced Vitalik to post a proof of liveness to calm the markets down.
Vitalik's self proof of liveness
Vitalik, being the creator and still very actively involved in defining the project vision, gives Ethereum a bus factor of a solid 1. Meaning, it takes only one bus accident to significantly impact the project.

That is not to suggest that Bitcoin is infinitely decentralized and Ethereum is centralized. Actually, Cornell professor Emin GĂŒn Sirer showed that Ethereum is more distributed and decentralized than Bitcoin from a node distribution point of view. Ethereum nodes are better distributed and spread compared to Bitcoin nodes, where the majority are managed by a few large miners.
But from a visionary, leadership, and influence point of view, by hiding their real identity, Satoshi potentially removed the most centralized point in his decentralized system. Ethereum is the second most popular currency, with a huge community that can support it for years ahead, but Vitalik still remains the most centralized point in the Ethereum system.

Bus Factor in Action

The bus factor is not dictated only by life and death situations. A similar risk hovered over Ethereum when Vitalik tweeted his views about child porn. The bus factor can express itself in many different and unpredictable situations and ways. A recent example that comes to mind is when the founder of ZClassic (ZCL), Rhett Creighton, decided to fork and abandon the project and start the Bitcoin Private project. The price chart below is self-explanatory about the result of such an action.
ZClassic drops 97%

An example on the opposite side is the creator of Litecoin, Charlie Lee, who took a different route and publicly announced that he had sold all of his Litecoin to remain impartial to the project. He is now basically an evangelist for Litecoin who tries to make the project survive without him.

Personality Cult Coins

There are other examples where a coin has a market cap close to the budget of a small country, and that is primarily driven by a project founder or advisor. The common pattern to be aware of is a one-to-one association between a project and a single person, which is an indication of a bus factor of 1.
A recent tweet by Kevin Pham pointed out coins that are driven by a single person. Expanding that list, I came up with the following one:
Notice that the people listed here have a much bigger influence on their projects than a developer would have. Most of them are the visionary, the face, the blood and the flesh of their coins. That is a much bigger risk than the bus factor typically measures, which is a project's dependency on a software developer or sysadmin role.

Surviving Bus Accidents

I've been working with open source for over a decade now, and there are good lessons to learn from the Apache Software Foundation (ASF) and other open source foundations about creating long-living project ecosystems. The main criterion at the ASF for a project to graduate from incubation to a top-level project is that it has built a self-sustaining community: a diverse community (from multiple organizations) of committers, contributors, and users. There is a similar criterion at the Cloud Native Computing Foundation (CNCF), where the focus is on supporting organizations rather than individuals, as at the ASF. Ideally, you want senior developers and visionaries from multiple organizations and independent contributors all together.

In the crypto world, one of the primary criteria for measuring community is the number of Telegram/Twitter/Reddit followers. While that is an indication of user interest, it can be easily manipulated, and it is actually a common and mandatory practice nowadays for projects to pump these statistics through bounty programs and airdrops. On the other hand, growing the developer community and the project visionaries is much harder and takes longer, but it is a more accurate indicator of long-term success. Make your picks wisely.

Upvote this article on Steemit or tip the ETH jar: 0x8b8945972392ed8E6983B6EE66e9777eE5Bd81aD

Spreading Freedom with Mainframe

If you prefer, read the same post on Medium.

The value of freedom

Freedom is one of those intrinsic human values that are not appreciated until lost. But how do you measure freedom, and how important is it for you? Let me tell you my story.

I was born in an ex-communist country in Eastern Europe, where I learned the value of freedom at a very early age. At that time, I wasn't allowed to practice my religion and was even forced to change my name. I wasn't allowed to use my mother language. Not allowed to listen and speak, not allowed to watch, not allowed to read and write in it. Luckily those times are far behind now, but they taught me at an early age to notice freedom, the absence of freedom, and its importance.
Now I live in London, UK, a city with the highest level of individual freedom in the world. Its society operates on the principles of individual liberty and tolerance of those with different faiths and beliefs. In my professional life, I have always participated in open source and free communities and naturally ended up working at Red Hat, which is built on the open source model. There I help customers replace proprietary software with open source and open standards. I'm not saying I live in a perfectly free world now, but having experienced the absence of freedom at a young age, I have noticed that freedom attracts me more than anything.

I have also witnessed first hand how "democratic" governments suppress the free exchange of ideas and ban YouTube, Wikipedia, and Twitter; how first-world countries use technology for mass censorship or deliberate deception. For different societies, freedom matters at different levels, but it matters to everybody, from life and death matters to the simple expression of opinions. And as we have seen many times in history, freedom can be taken away anywhere in the world, and it takes courage, vision, and hard work to preserve and spread it further.

Freedom enabling technology

In the technology world, one of the primary manifestations of what the free exchange of ideas and free collaboration can create is the rise of open source. The open source model allows individuals with a common goal to come together (virtually), collaborate, and create something that is bigger than what any individual, and even any commercial software company, can create. But such collaborative models are only possible when there is technology that enables effective and free human interaction, where ideas are evaluated, work is valued based on merit, and conflicts are resolved through discussion.

To create the next generation of collaborations and achieve the next level of freedom, we need the next generation of freedom-enabling technology: a technology that scales globally, a platform resistant to censorship and guaranteeing individual privacy. Only then can we be assured that individuals can contribute to their fullest ability. Only then can we unlock the world's potential!

    A mission greater than the sum of its parts

We have seen how the Internet gives access to the world’s knowledge even in the poorest parts of the world. We have seen how social apps can connect people and lead to revolutions, as during the Arab Spring. We have seen how Bitcoin can revolutionize borderless value exchange. Mainframe's mission is at a similar scale. A decentralized protocol for censorship-resistant messaging will serve as a foundation for creative minds to build applications that spread freedom on a scale the world has not seen before. We cannot imagine everything such a platform will bring us, but we can be sure it will spread more freedom.

The first in need of such a platform is the crypto world itself. With crypto-related censorship coming from Facebook, Google, and soon Twitter, we need a censorship-free messaging platform for crypto's very existence. The decentralization revolution cannot be stopped, but with enough censorship it can be delayed and its momentum taken away. Without the free exchange of ideas, we cannot invent, build, grow and spread more freedom fast enough. We need Mainframe first for ourselves, before any other industry, right now. The Mainframe mission is bigger than the success of Mainframe alone; it is existential for the whole crypto world and for anybody in need of freedom.

    Free Employee Scheduling Application with OptaPlanner and Apache Isis

    Here, I want to briefly talk about two open source projects I love playing with.

OptaPlanner is a constraint solver. It optimises business resource planning use cases such as vehicle routing, employee rostering, cloud optimization, task assignment, job scheduling, bin packing and many more that companies face daily. It can help squeeze the last bit of optimisation out of your planning challenges, improving service quality and reducing costs. It is a lightweight, embeddable planning engine consisting of a single JAR file.
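To give a flavour of the programming model, below is a minimal sketch of invoking the solver. The EmployeeRoster solution class, the loadProblem() helper and the solverConfig.xml resource are hypothetical placeholders for your own planning domain and solver configuration:

import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;

public class RosterSolverExample {

    public static void main(String[] args) {
        // Build a solver from an XML configuration file on the classpath
        SolverFactory<EmployeeRoster> factory =
                SolverFactory.createFromXmlResource("solverConfig.xml");
        Solver<EmployeeRoster> solver = factory.buildSolver();

        // EmployeeRoster is a hypothetical @PlanningSolution class holding
        // the employees, the shifts and the constraint score
        EmployeeRoster unsolved = loadProblem();

        // Run the configured optimisation algorithms until the configured
        // termination limit is reached, then return the best solution found
        EmployeeRoster solved = solver.solve(unsolved);
        System.out.println("Best score: " + solved.getScore());
    }

    private static EmployeeRoster loadProblem() {
        // Load the unsolved roster from a database, a file, etc.
        throw new UnsupportedOperationException("supply your own data");
    }
}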


Apache Isis, on the other hand, is a framework for rapidly developing domain-driven apps in Java. You write your business logic in entities, domain services or view models, and the framework dynamically generates a representation of that domain model as a webapp or a rich hypermedia REST API. It is a full-stack framework: once you write your domain layer, you get the persistence and presentation layers for free. It lets you develop and iterate over the domain model very quickly.
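As an illustration of that programming model, a hypothetical Isis domain object could look like the sketch below. The Employee class and its members are made up, but the annotations come from the standard Isis applib:

import javax.jdo.annotations.PersistenceCapable;
import org.apache.isis.applib.annotation.Action;
import org.apache.isis.applib.annotation.DomainObject;
import org.apache.isis.applib.annotation.Property;

// From this annotated class alone, Isis generates a CRUD web UI and a
// REST API; no controller or view code needs to be written by hand.
@PersistenceCapable
@DomainObject
public class Employee {

    private String name;

    @Property
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    // Exposed as an action button in the generated UI
    @Action
    public Employee changeName(final String newName) {
        setName(newName);
        return this;
    }
}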

Both of these projects allow developers to create user-focused applications and solve real-world problems. And both of them are centred around the domain-driven design approach and fit nicely into the Java development model using annotations. Last year, around this time, I decided to combine them in a pet project called RotaBuilder. I migrated the existing Java Swing-based employee rostering application from the OptaPlanner examples to Apache Isis and turned it into a containerized web application. Below is a screenshot of an employee view.

    An employee view created in Apache Isis and OptaPlanner
The application is on GitHub and offers the following features:
    • Manage employees
    • Manage skills
    • Manage days on/off requests
    • Manage shift on/off requests
    • Manage contracts
    • Manage shifts
    • Create automatic employee-shift assignments
It is far from complete, but I probably will not have time to work on it in the near future. I decided to push it to public GitHub in case anybody wants to learn these projects by developing an employee rostering application. For me, the best way of learning something has always been doing it.
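For the curious, the core of the automatic assignment feature in such an application is an OptaPlanner planning entity: the solver repeatedly changes the employee of each shift assignment while scoring the constraints (skills, contracts, on/off requests). Below is a simplified, hypothetical sketch, where Shift and Employee are placeholder domain classes:

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

// The solver is allowed to change the planning variable(s) of this class
@PlanningEntity
public class ShiftAssignment {

    private Shift shift; // a fixed problem fact: the shift to be covered

    private Employee employee;

    // The employee is the variable OptaPlanner assigns; the candidate
    // values come from a value range provider on the solution class
    @PlanningVariable(valueRangeProviderRefs = {"employeeRange"})
    public Employee getEmployee() { return employee; }
    public void setEmployee(Employee employee) { this.employee = employee; }

    public Shift getShift() { return shift; }
    public void setShift(Shift shift) { this.shift = shift; }
}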

    Local Build and Run with Maven

It is straightforward to build and run locally with Maven:
    mvn clean install -DskipTests=true
    cd webapp
    mvn jetty:run
     
Then go to http://localhost:8080/

    Run with Docker

It is also available as a Docker image:
    docker run -p 8080:8080 bibryam/rotabuilder

    Live Demo on Red Hat OpenShift

See the live demo (if it is still up) running at http://rotabuilder.com

    Which Camel DSL to Choose and Why?

    (This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.)

Apache Camel is a powerful integration library that provides mainly three things: lots of integration connectors, implementations of multiple integration patterns, and a higher-level Domain Specific Language (DSL) abstraction to glue it all together nicely. While the connector and pattern choices are use case and feature driven and easy to make, choosing which Camel DSL to use might be a little harder to reason about. I hope this article will be able to guide you in your first Camel journey.
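To make the comparison concrete, here is roughly what a simple route looks like in the Java DSL. The endpoint URIs are illustrative placeholders, and the same route could equally be expressed in the XML DSL inside a Spring or Blueprint file:

import org.apache.camel.builder.RouteBuilder;

// A route in Camel's Java DSL: consume order files, keep only the
// priority orders, and forward them to a JMS queue.
public class PriorityOrderRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("file:orders/inbox")
            .filter(xpath("/order[@type = 'priority']"))
            .to("jms:queue:priorityOrders");
    }
}

Whichever DSL you pick, the route remains the same Pipes and Filters flow; only the notation changes.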

I work for Red Hat Consulting as an integration architect, and one of my primary goals is to help customers get the design and the architecture of their future systems as right as possible, and consequently get the best value out of Apache Camel. One of the common questions I get at the start of every new Camel-based project is: “Which Camel DSL should we use? What are the pros and cons of each?”

I have one piece of good news and one more piece of good news for you. First, by choosing Apache Camel you have already made the right choice, and Camel will turn out to be a very useful toolkit in your arsenal of libraries for lots of future use cases and projects to come. And second, the DSL is just a technicality; it will not impact the success of your project, and you can always change your mind later and even mix and match.

If you are part of a large company with multiple independent two-pizza-sized teams that use the Java language here and there, the chances are that some teams are already using Camel. Even in small companies, teams use Camel without being aware of each other, as it is useful for all kinds of tasks and a small enough library to add to your pom.xml and use without permission from the technical design board. If that is the case, just talk to your colleagues and learn first hand about their experience with their DSL of choice.

If you need a more comprehensive comparison and a reason to choose one of the DSLs, below is a brain dump from multiple engineers developing Apache Camel and consultants using Camel at customer projects across the globe. Pick the arguments that are valid in your context and make your choice.

Comparing Apache Camel's XML and Java DSLs
If this table doesn’t give you the straight answer you were looking for, the answer is probably: it doesn’t matter. Camel has multiple DSLs, and there are good reasons for both the Java and XML based DSLs to be equally popular. The more important takeaway is for developers to get used to thinking in terms of Pipes and Filters and to learn the Enterprise Integration Patterns and their notations. Then using one of the Camel DSLs to express those patterns is a technicality without a technical consequence. Usually it is a choice based on team preference and culture, such as “We are a hard-core Java shop and we hate XML” or “Can we do it all through drag and drop?”.

All that said, the only advice I can offer is to strive for consistency. Avoid using different DSLs in the same service, and preferably avoid mixing them even across different services in the same project. And if you can convince everybody in the company to use the same DSL, even better.

Check out my Camel Design Patterns book for more Camel-related topics and follow me @bibryam for future blog posts.

    About Me