A while back I did several posts on Appian, specifically one on setting it up on Docker. The objective was twofold: first, to solve an important challenge in the development CI pipeline, and second, to evaluate whether the software was deployable on a PaaS (answer: yes it is, but not via the normal installation configurations).
In this post, I will explain the challenges in the development CI pipeline, and how Appian on Docker addressed a key concern.
In typical development projects, it is relatively easy to create a CI pipeline based on the source code repository (e.g. GitHub). Simply hook up a third-party CI tool (like TeamCity from JetBrains) to the code repository, configure the actions the master node takes when a new Pull Request is created, and add slave/agent nodes to the cluster to execute them.
However, this model does not work so well for Appian. From my experience, Appian is designed to be a single, shared platform for business users to quickly design and pivot their business processes, tack on simple and efficient interfaces created via a drag-and-drop tool, and publish them with just a click of a button. This is in line with its low-code platform approach, and is definitely great for a variety of business use cases.
For example, if a small enterprise needed a simple transport and entertainment claims system to track, approve, and audit its employees' claims, it could simply use Appian instead of engaging a software development firm. Appian's SaaS cloud offering lowers the barrier further, as companies do not need the technical know-how to install the software or manage the hardware infrastructure.
Because of Appian's single, shared platform approach, a typical CI pipeline would not work for a large/complex Appian project. Having multiple developers work on the same development platform (as opposed to their own local environments) often impedes the work of others. It was not uncommon for changes to a shared grid component to break another module's larger interface, or for new features to modify common expression rules in ways that made older, less frequently used functionality behave abnormally.
Having multiple developers working on a single "development" environment also means that the final "production" application should not sit on the same platform. But how would one create a CI pipeline across multiple Appian instances?
We can use Appian's manual export/import feature through its web interface to move applications between environments. In addition, the nifty "inspect-only" option before the actual import ensures that only complete application packages are imported.
However, for applications that depend on multiple packages and/or a tightly coupled admin console, neither the web interface nor the CLI-based Automated Deployment Manager can import these packages as a single atomic transaction. Errors or missing dependencies in any one of them can leave the next environment in an unstable/unusable state.
Appian on Docker solves this challenge by spinning up a disposable integration platform on Docker, comprising a clean Appian installation with the necessary databases and SMTP server. On this clean platform, it attempts to import (and integrate) the necessary packages, and forwards any errors to a chosen reporting platform/system. Because the environment is always "new" (i.e. not affected by previous imports or deprecated resources), it is able to catch more errors and missing dependencies.
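The shape of one disposable run can be sketched roughly as follows. Note that the compose project layout, the `appian` service name, and the `/opt/ci/import-packages.sh` script are illustrative assumptions, not our actual setup; the snippet defaults to a dry run that only prints the commands it would execute.

```shell
#!/usr/bin/env bash
set -euo pipefail

# DRYRUN=1 (default) only prints the commands; set DRYRUN=0 to really run them.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

PROJECT="appian-ci-$$"   # unique project name so every run gets a fresh stack

# 1. Spin up a clean Appian installation with its databases and SMTP server.
run docker compose -p "$PROJECT" up -d

# 2. Import and integrate the application packages, collecting any errors
#    (import-packages.sh is a hypothetical placeholder for the import step).
run docker compose -p "$PROJECT" exec appian /opt/ci/import-packages.sh

# 3. Tear everything down so the next run starts from a clean slate.
run docker compose -p "$PROJECT" down -v
```

The `down -v` at the end is what makes the environment disposable: volumes are removed along with the containers, so no state leaks between runs.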
We hooked up Appian on Docker with our Appian application code repository using a cron’ed bash script. Appian on Docker was able to automatically detect new Pull Requests to the Appian project repository, and spin up a new disposable environment on Docker to do the integration testing of new features.
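A minimal sketch of that polling logic, assuming a GitHub-style remote that exposes pull requests as `refs/pull/<n>/head`. The repository URL, state directory, and crontab entry are illustrative, not the actual script we ran:

```shell
#!/usr/bin/env bash
# Poll the remote for new/updated Pull Request heads and trigger a run
# whenever a PR ref's SHA changes since the last poll.
REPO_URL="https://github.com/example/appian-app.git"   # hypothetical
STATE_DIR="${STATE_DIR:-/tmp/appian-ci-state}"

# Return 0 (and record the SHA) if this ref's SHA changed since last seen.
needs_run() {
  local ref="$1" sha="$2"
  local f="$STATE_DIR/${ref//\//_}"
  [ -f "$f" ] && [ "$(cat "$f")" = "$sha" ] && return 1
  mkdir -p "$STATE_DIR"
  printf '%s\n' "$sha" > "$f"
}

poll() {
  # git ls-remote prints "sha<TAB>ref" pairs for all matching PR heads.
  git ls-remote "$REPO_URL" 'refs/pull/*/head' |
  while read -r sha ref; do
    if needs_run "$ref" "$sha"; then
      echo "new/updated PR at $ref ($sha): trigger integration run"
    fi
  done
}

# Example crontab entry (poll every 5 minutes):
# */5 * * * * /usr/local/bin/appian-ci-poll.sh
```

In the real pipeline, the "trigger" line would hand off to the script that spins up the disposable Appian on Docker environment.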
To prevent Appian on Docker from tripping over itself by spinning up too many instances, a semaphore lock was used to ensure that only one instance was being executed at any point in time (there exists possibilities to optimize and run the tests in parallel).
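The semaphore itself can be as simple as `flock(1)` on a lock file. A sketch, assuming a Linux host; the lock file path and the body of `run_integration` are illustrative placeholders:

```shell
#!/usr/bin/env bash
LOCKFILE="${LOCKFILE:-/tmp/appian-ci.lock}"

run_integration() {
  # placeholder: spin up Appian on Docker, import packages, report errors
  echo "integration run started"
}

(
  # Non-blocking acquire on fd 9: bail out if another run holds the lock.
  flock -n 9 || { echo "another run in progress; skipping" >&2; exit 1; }
  run_integration
) 9>"$LOCKFILE"
```

The lock is tied to the subshell's file descriptor, so it is released automatically when the run finishes (or crashes), which avoids the stale-lock problem of hand-rolled PID files.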
The two key bottlenecks of this approach are: 1) the sequential starting of the Appian engines and their components (i.e. JBoss and Elasticsearch) that depend on each other, and 2) the inspect-and-import process of the Appian engine.
At the time of implementation, we considered increasing the hardware resources of the Docker instance, but decided to stick with the current settings due to project cost constraints.
Running a typical software development CI pipeline with multiple Appian environments proved to be difficult with applications that relied on multiple packages and/or a tightly coupled admin console. This is because it is not possible to import multiple packages as a single atomic transaction. Having just one package with syntax errors and/or missing dependencies often leaves the downstream environments in an unstable/unusable state.
Appian on Docker solves this by spinning up a clean and disposable integration platform on Docker to allow application packages with new features to be imported, integrated, and tested (e.g. regression) for errors. Any import/functionality errors encountered can be forwarded to a chosen error reporting system/platform. The main bottleneck was the time taken to spin up new Appian on Docker environments, which can be mitigated through hardware upgrades (e.g. more RAM) and/or by overlapping the startup processes (much like CPU instruction pipelining).
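The overlap idea is just ordinary process-level parallelism: components with no mutual dependency start in the background, and the script only waits where a dependency actually exists. A toy sketch (the "database" component and the timings are illustrative, not our real startup scripts):

```shell
#!/usr/bin/env bash
# Overlapped startup: independent components boot in parallel;
# dependent ones wait only on what they actually need.
start() { echo "starting $1"; sleep "$2"; echo "$1 up"; }

start elasticsearch 0.2 &   # search: no dependency on the app server
ES_PID=$!
start database 0.1 &        # hypothetical backing database
DB_PID=$!

wait "$DB_PID"              # the app server (JBoss) needs the database first
start jboss 0.1

wait "$ES_PID"              # finally make sure search is up too
echo "all components up"
```

With a fully sequential startup the total time is the sum of all component boot times; with overlapping it approaches the longest dependency chain instead.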
Appian on Docker behaves much like a third-party CI tool, acting as a sanity check before Pull Requests are approved and merged into the master branch.
If you are interested in finding out more, just leave a comment here or contact me via the contact/LinkedIn page.