
Composable architecture - How is this any different from microservices?

What's all the buzz about Composable Architecture? Isn't it just microservices?

There's a lot of buzz in the tech industry about Composable Architecture. At DEPT®, we're seeing this term surface especially in the content management system (CMS), e-commerce, and marketing technology (Martech) spaces.

So what is it? And what's all the fuss about? After all, it's a given that in software engineering, we try to make our architectures composable and design software modules that are easy to interchange and move around.

Isn't that what microservices are?

Composable Architecture (capital C, capital A) is a new-ish term that encompasses many of these principles but generally applies to specific types of off-the-shelf software. You can also apply the principles to content design and even user experience design.

Let's dig into how this new term compares with what we usually do as software engineers, designers, and architects. Then, once you understand the concepts, we'll see how they apply to e-commerce and CMSs.

What is Composable Architecture? How is it different from Microservices?

Composable Architecture is a method of designing your software modularly, breaking down complex components of your architecture into small, reusable pieces--something software engineers always strive for.

But specifically, the term is usually applied to off-the-shelf software solutions to indicate they're built with modern standards and can easily fit into your overall system architecture.  

Why is this a big deal? If you're asking this question, consider yourself lucky. Older vendor tools are often walled gardens, so getting data in, moving data around, or cooperating with other systems takes a lot of work.

To make it short and sweet: Composable Architecture applied to these areas indicates to potential customers that an off-the-shelf system is engineer-friendly. YAY! 🥳

When a vendor solution subscribes to Composable Architecture principles, it usually means the solution can be composed, extended, and recomposed without long development cycles. In addition, you'll be able to create separate APIs, applications, functions, and micro UIs that separate responsibilities and make managing them, and the dependencies between them, easier.

This approach introduces additional levels of monitoring and lets you move business logic into reusable processes that all other applications can share. This is the basic concept of the MACH (Microservices, API-first, Cloud-native, Headless) architecture approach. (We at DEPT® are big fans of MACH; DEPT® is part of the MACH Alliance.)

The term and its principles are also applied in less technical areas to bring good engineering concepts into those disciplines. Good examples are content modeling for CMSs and how user interface concepts are broken down.

Let's walk through a theoretical architecture that applies Composable Architecture principles to ground the discussion.

Example architecture walkthrough - e-commerce

Let's say you're building an e-commerce system. It's composed of the following:

  1. Marketing content (to show you the goods)
  2. e-commerce (to sell you the goods)
  3. Logistics (to ship you the goods)

An older approach to this system might be to have a couple of frontends, say mobile and web, and a backend API to service requests from the front ends.

Following Composable Architecture principles, you first break down the system's responsibilities into smaller independent components. Separating the functional components into microservices allows better management of individual services than a monolithic API or application does. In addition, it provides independent release cycles and enhancements without a shared codebase or the risk of affecting other services. This design also supports the integration of third-party SaaS or on-premise products that may be part of your overall digital ecosystem.

For instance, in the case of e-commerce, you might have a set of services as defined below:
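As a rough, hypothetical sketch (the service names and boundaries below are illustrative, not a prescribed breakdown), the separated responsibilities could be expressed as independent service contracts:

```typescript
// Hypothetical service boundaries for the e-commerce example.
// Each service owns one responsibility and can be released independently.

interface ContentService {            // marketing content (show you the goods)
  getPage(slug: string): Promise<Page>;
}

interface CatalogService {            // e-commerce (sell you the goods)
  getProduct(sku: string): Promise<Product>;
}

interface CartService {
  addItem(cartId: string, sku: string, quantity: number): Promise<void>;
  checkout(cartId: string): Promise<Order>;
}

interface LogisticsService {          // logistics (ship you the goods)
  createShipment(order: Order): Promise<Shipment>;
}

// Minimal placeholder types so the sketch stands on its own.
type Page = { slug: string; body: string };
type Product = { sku: string; name: string; price: number };
type Order = { id: string; items: { sku: string; quantity: number }[] };
type Shipment = { orderId: string; trackingNumber: string };
```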

However, simply saying "MICROSERVICES!" is not a silver bullet.

Interdependent service calls are inevitable and can get complex quickly. To avoid this, you can compose your system with another level of abstraction using a message bus or message queue system. This is particularly useful in a system that is transactional, such as e-commerce, or has multiple integrations.

Adding this additional layer lets microservices be genuinely independent: they can raise events for transactions or errors, and other services can subscribe and react accordingly. It also builds in a mechanism for change. Even though the system is transactional, it can introduce new services that subscribe to existing events, or swap out core services for new products as your business needs grow.
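As a minimal sketch of that idea, Node's built-in EventEmitter stands in below for a real message bus (Kafka, SNS/SQS, Pub/Sub, and so on); the event names and payloads are made up for illustration:

```typescript
import { EventEmitter } from "node:events";

// In-memory stand-in for a message bus; a production system would use a
// managed broker so services can run, scale, and fail independently.
const bus = new EventEmitter();

type OrderPlaced = { orderId: string; items: { sku: string; quantity: number }[] };

// The order service only announces that something happened...
function placeOrder(order: OrderPlaced): void {
  // ...persist the order, take payment, etc., then publish the event.
  bus.emit("order.placed", order);
}

// ...and other services subscribe and react without the order service
// knowing they exist.
bus.on("order.placed", (order: OrderPlaced) => {
  console.log(`logistics: creating shipment for order ${order.orderId}`);
});

bus.on("order.placed", (order: OrderPlaced) => {
  console.log(`notifications: emailing receipt for order ${order.orderId}`);
});

placeOrder({ orderId: "1001", items: [{ sku: "SKU-42", quantity: 2 }] });
```

Because neither subscriber knows about the other, a new consumer (say, a loyalty-points service) can be added without touching the order service at all.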

A version of the e-commerce system with a message bus might look like this:

Going Further - Breaking down a CMS into Composable Architecture Principles

We have seen a trend in many modern CMSs that subscribe to Composable Architecture principles. These CMSs allow easy integration with third-party vendors for e-commerce or Digital Asset Management (DAM) systems. Most older CMSs required custom code strapped into their monolith or (god forbid 🙀) professional services engagements to do any significant customization. These newer systems expose ready-to-use extension points to customize the experience to the customer's needs.

MACH combined with asynchronous workloads is an excellent starting point for a CMS architecture design. Let's translate the concepts discussed in our e-commerce example into a CMS implementation.

If we look at some core roles of a CMS content delivery solution, the list of functions looks something like this:

  • Publishing content - Taking content that is editorially approved and moving it, in the desired form, to the content delivery portion of the architecture and its data store.
  • Delivery of content - A method for channel applications to consume the content. (Note that in marketing and CMS terms we generally refer to “channels” instead of different types of end clients, so we’ll keep referring to them that way for the rest of this example).
  • Caching content - A system to provide high-performance responses for content in high-traffic load environments.
  • Dynamic querying and search of content - The ability to surface content dynamically or provide site search capabilities.

If we translate these roles into a MACH architecture we get something like the following diagram:

Let's walk through it:  

The CMS publish will trigger an event that our indexing API will consume. The indexing API will pull the published content from the CMS content delivery data store (usually via an API) and store it in our search index.
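A minimal sketch of that indexing step might look like the following; the URLs, payload shape, and event type are hypothetical placeholders rather than any specific vendor's API:

```typescript
// React to a CMS publish event, pull the published item from the content
// delivery API, and write it into the search index.
type PublishEvent = { contentId: string };

async function onContentPublished(event: PublishEvent): Promise<void> {
  // 1. Fetch the published item from the CMS content delivery API.
  const response = await fetch(
    `https://cms.example.com/delivery/content/${event.contentId}`
  );
  const cmsItem = await response.json();

  // 2. Store it in our own search index, outside the CMS.
  await fetch(`https://search.example.com/index/content/${event.contentId}`, {
    method: "PUT",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(cmsItem),
  });
}
```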

There are multiple advantages to this approach:

First, we get to transform the content we retrieved into a data contract for our business and abstract the clients/channels from the CMS.

This is a key design feature when you think about composable architecture. If you have a CMS dependency directly in your channel applications (Web, Mobile App, Kiosk, etc.) and change the CMS, all your channel apps need to be updated, which is a huge undertaking. Creating the content management orchestration as defined here removes that obstacle. It also means you can change your CMS or even add multiple content hubs to your ecosystem, and none of the apps or the content API should notice a change.
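For illustration, that transform can be as simple as mapping the vendor's shape onto a contract we own; the CMS field names below are hypothetical:

```typescript
// What a particular CMS might return (vendor-specific shape).
type CmsArticle = {
  sys: { id: string; updatedAt: string };
  fields: { headline: string; bodyRichText: string; heroImageUrl?: string };
};

// The contract our channels (web, mobile app, kiosk, ...) actually consume.
type ContentItem = {
  id: string;
  title: string;
  body: string;
  image?: string;
  lastModified: string;
};

function toContentItem(article: CmsArticle): ContentItem {
  return {
    id: article.sys.id,
    title: article.fields.headline,
    body: article.fields.bodyRichText,
    image: article.fields.heroImageUrl,
    lastModified: article.sys.updatedAt,
  };
}
```

Swapping CMS vendors then means rewriting toContentItem, not every channel application.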

Second, we can implement secondary events from our indexing step and trigger cache-layer updates, which makes the content delivery system fully redundant to the CMS platform. If the CMS fails to operate, you have an independent content store to keep serving content. We then have a content API that distributes the content to channels through endpoints that allow querying and fetching content in different ways. Breaking content indexing, caching, and retrieval into separate services makes the system more manageable and keeps the logic for these workloads isolated and reusable.

Another feature of this design is that even though we have a distributed cache, you can provide additional cache layers at the application level. Because an event is triggered when indexing and publishing complete, other integrators can listen to the same events and handle cache expiration and other native features if they want to.

You can add another layer of composability to the system by exposing your content service via an orchestration layer rather than integrating it directly with your applications or partners.

That might look something like this:
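A minimal sketch of such an orchestration endpoint, using Express (the routes, integrator handling, and content API URL are illustrative assumptions):

```typescript
import express from "express";

const app = express();

// Channels and partners call the orchestration layer, never the content
// service directly.
app.get("/orchestration/:integrator/content/:id", async (req, res) => {
  const upstream = await fetch(
    `https://content-api.example.com/content/${req.params.id}`
  );
  const item = await upstream.json();

  // Per-integrator transforms, caching, or extra lookups can happen here
  // without the core content API knowing anything about the integrator.
  res.json({ integrator: req.params.integrator, ...item });
});

app.listen(3000);
```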

This provides a few significant benefits:

No direct application access to the content service - This allows you to change the origin of the service to newly updated providers as needed without worrying about direct integration issues. In an omnichannel business (meaning one with lots of different types of channels), this is incredibly important. You don't want to roll out changes to your mobile app, kiosks, website, and other IoT applications just because one provider changed. Centralizing this in your orchestration layer allows you to make changes in one place while keeping your channels unaffected.

Apply additional data transforms - Each integrator will have its own data contract and unique needs for the content data its system abides by. Integrating the content service directly into your applications and channels means you must change those channels whenever the core API changes. In addition, you have to handle backward compatibility and coordinate rollouts with the integrators as updates and breaking changes occur. This adds pressure to your development teams and lifecycles and can interrupt your general business flow. Having the orchestration layer apply data transformations to the source contracts means the integrators own those contracts (a small sketch of per-integrator contracts follows this list). Regardless of the source, you have complete control over the data going in and out of your system, and the abstraction layer can handle transforms whenever data interaction is needed.

Unique Content Querying Requirements - Not all of your integrators will want to consume and query the content in the same way. Adding an orchestration layer gives integrators a point of separation where they stay in control. It also prevents the core content delivery API from constantly being extended to support new ways of querying or new requirements like GraphQL.

Caching and Extension - Some third-party or internal integrators may need faster response times for certain content items. Or integrators may have special requirements. Using your orchestration layer to handle additional caching, data storage, and extending existing capabilities gives you fault tolerance and areas of decoupling you wouldn't have with direct integrations. As with the above points, you also move the responsibility for these edge cases out of your core design and services.

Cross-Service Communication - All cross-service communication can be proxied through the orchestration layer. This keeps interdependent services from becoming tightly coupled and gives you a single load-balanced endpoint for clustered services, centralized monitoring, and observability of inter-service calls. Even if the calls are pass-through, this is still advantageous.
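As a small sketch of the per-integrator contracts mentioned above (both contract shapes are hypothetical), the orchestration layer can map the same internal content item to whatever each integrator expects:

```typescript
// The internal contract the orchestration layer works with.
type ContentItem = { id: string; title: string; body: string; lastModified: string };

// A kiosk integrator only needs a headline and a short teaser.
type KioskCard = { id: string; headline: string; teaser: string };

// A partner feed expects different field names and an explicit updated date.
type PartnerFeedEntry = { externalId: string; title: string; html: string; updated: string };

const toKioskCard = (item: ContentItem): KioskCard => ({
  id: item.id,
  headline: item.title,
  teaser: item.body.slice(0, 140),
});

const toPartnerFeedEntry = (item: ContentItem): PartnerFeedEntry => ({
  externalId: item.id,
  title: item.title,
  html: item.body,
  updated: item.lastModified,
});
```

When one integrator's contract changes, only its mapping changes; the core content API and the other integrators are untouched.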

The above approaches provide layers of protection, scaling, and flexibility to compose and recompose a system without the headache. As a result, teams can manage system components independently and run completely separate work streams, which helps make your business more agile.

Wrapping up composable architecture

We used CMSs to illustrate many of our points. It's critical to remember that not all CMS solutions support the design patterns and approaches mentioned above; adopting some or all of them will improve your ecosystem.

Hopefully, this article helped clear up a vague term for you and walked you through some sound engineering principles. Stay composable!