The Problem

Auto-scaling functions-as-a-service platforms such as OpenFaaS; architectural patterns such as microservices; programmable infrastructure such as SDN - all these allow us to scale out and orchestrate networks, services, functions and many other aspects of modern, hybrid systems.

Yet one persistent problem remains.

Every solution still needs to be logically "plugged together" somehow. The components and their orchestration aren't the problem, nor their scaling, nor even the physical wires and networks that link them together.

It's the logical wires under the hood. The flow, the purpose - the whole point.

So we produce yet another black-box program or microservice, and our overall maintenance burden goes up again.

This burgeoning black-box complexity surfaces as cost and conduct risk. It damages business.


The Solution

OpenSPARKL is different. It's the Swiss Army Knife of middleware, allowing you to create, manage and scale under-the-hood middleware functions that precisely fit the use case you're facing today.

Use it to produce a push-model filesync function today, and a pull-model filesync function tomorrow. An API gateway here, a continuous integration solution there.

All using the exact same configuration method. No flowcharts. Pure configuration. Pure state.

It's in the Mix

The central concept in OpenSPARKL configuration is the mix.

A mix describes the components in your solution and the events that move between them. Together, these form a particular middleware function, such as the ones mentioned above.

You author OpenSPARKL configuration using just a text editor, or you can use the provided design tool.

The Difference

When you author a mix, you don't write a flowchart. You write an extremely concise description of the systems and services, data fields and types that combine to form your new function.
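To make this concrete, here is a purely illustrative sketch of what such a description might look like for the push-model filesync function mentioned earlier. All element and attribute names below (mix, field, service, notify, consume) are assumptions invented for illustration, not OpenSPARKL's actual schema:

```xml
<!-- Hypothetical sketch of a push-model filesync mix.
     Element and attribute names are invented for illustration;
     consult the OpenSPARKL documentation for the real schema. -->
<mix name="FileSync">
  <!-- Data fields and their types -->
  <field name="path"    type="string"/>
  <field name="content" type="binary"/>

  <!-- The systems and services that take part -->
  <service name="Watcher" provision="local"/>
  <service name="Replica" provision="rest"/>

  <!-- The events that move between them -->
  <notify  name="FileChanged" service="Watcher" fields="path"/>
  <consume name="PushFile"    service="Replica" fields="path content"/>
</mix>
```

The point is the shape: nothing here draws control flow. The configuration names the participants, the data fields and the events, consistent with "no flowcharts, pure configuration" as described above.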

A mix may include tiny nano-services that you create to oil the wheels. It may include scale-to-millions OpenFaaS functions, for caching, search or processing.

Doing middleware this way has side effects that might seem trivial at first.

First, it is a completely natural fit for today's microservice and functions-as-a-service provisioning infrastructure. You can see exactly which systems are working together, and leverage the scaling and cloud infrastructure capabilities they provide.

Second, it leads to automatic distributed and parallelised execution of middleware functions across highly scalable low-level solutions like OpenFaaS, AWS and Azure alongside traditional, external or legacy systems.

Third, it delivers interesting technical features such as

  • just-in-time, reasoned provisioning that scales to zero
  • normalised unit- and system-testing
  • the ability to listen to any individual service, field or event
  • clean, normalised audit trails

regardless of the components involved.

Next Step

This is a new project, and we're trying hard to hit the ground running.

Please move on to the Quick Start, and let us know whether it's working for you and what use cases you think we should be covering.