The shape of the platform
Eight layers. One waterline. Eighty per cent of the machinery runs invisibly beneath the twenty per cent your team builds. The Pyraburg Principle — pyramid foundations for stability, iceberg visibility for simplicity.
Eight layers from infrastructure to interface
Hover or tap a layer to reveal its components.
Above the Waterline (20%)
What your team builds and what customers see: your business logic, the agents you control and the customer-facing surfaces — Glass UI for a full web + mobile frontend, or your own UI running headless against Prism API. The product your users actually touch.
Below the Waterline (80%)
What powers everything: compute, storage and networking (the Substrate layer); databases and the event-driven message bus; identity, SSO, RBAC and network segmentation; multi-protocol integration; platform agents; full observability. Everything from the hardware up to the waterline.
The waterline is not a wall — it's a design principle. Complexity exists for a reason, but users shouldn't have to see it.
How components connect
Above the waterline
Below the waterline
Multi-protocol provider network
One platform, eight protocols. Microcelium connects to everything through its provider abstraction layer.
AMQP
Enterprise messaging via RabbitMQ. Durable, routed, reliable.
MQTT
Lightweight IoT messaging. Real-time, low-bandwidth.
REST
HTTP-based APIs. Web service integration.
MCP
Tool-calling for AI agents. Integrates external systems into the agentic fabric.
SSH
Secure shell access. CLI automation.
SAMBA
File sharing. Network storage integration.
Syncthing
Decentralised file synchronisation.
Email
SMTP/IMAP processing and automation.
Container image hierarchy
All containers follow 12-Factor methodology. Cargo Servers are the micro-machines that run the containers — VMs whose storage, networking and firewall rules are themselves Infrastructure as Code on top of Substrate. micro-glue generates the Docker Compose stacks that land on them. Docker Compose for orchestration — a deliberate choice for operational simplicity over Kubernetes complexity.
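The render-then-land flow is small enough to sketch. This is an illustration only: the template variables, registry name and Compose fields below are placeholders, not micro-glue's actual templates.

```python
from string import Template

# Hypothetical Compose template; micro-glue's real template format is not
# shown here, this only illustrates rendering a stack per target environment.
COMPOSE_TEMPLATE = Template("""\
services:
  $service:
    image: $registry/$service:$tag
    restart: unless-stopped
    networks:
      - $service-net
networks:
  $service-net: {}
""")

def render_stack(service: str, registry: str, tag: str) -> str:
    """Render the target environment's Docker Compose file from the template."""
    return COMPOSE_TEMPLATE.substitute(service=service, registry=registry, tag=tag)

print(render_stack("ingestion-engine", "registry.example", "1.4.2"))
```

The rendered file is what lands on the Cargo Server; the per-service network echoes the "no shared failure plane" principle below.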
Actual technologies, not categories
Every row below is a choice we'd defend in a postmortem. Grouped into four decisions rather than a flat list.
| Category | Technology | Purpose |
|---|---|---|
| Build once, run anywhere — containers & orchestration | | |
| Base images | Minimal stemcell containers — the smallest well-tested, security-patched base we'll ship. | |
| Containerisation | Container runtime and orchestration. Compose for operational simplicity; Kubernetes is the escape hatch, not the default. | |
| Stack generation | Renders the target environment's Docker Compose file from templates; lands it on the Cargo Server. | |
| Ingress | Routing, load balancing, SSL termination (Let's Encrypt). One proxy per Cargo Server. | |
| Event-first data plane — messaging & storage | | |
| Message broker | AMQP event backbone, durable messaging. Every cross-component event lands here first. | |
| Relational database | Persistent relational storage — choose per deployment. | |
| Time-series | IoT/Air sensor data and time-series analytics. Same Postgres wire protocol, different storage model. | |
| Object storage | S3-compatible distributed object storage. Rust-based, self-healing across Cargo Servers. Swap for AWS S3 in cloud deployments without code changes. | |
| Block storage | Distributed block and file storage for persistent volumes across the Substrate layer. Replication, snapshots, scale-out. | |
| Object storage (deprecated) | Single-node S3-compatible storage used in earlier deployments. Superseded by Garage for distributed clusters; retained for backwards compatibility. | |
| Search | Full-text search. Standalone Docker service — components opt in when they need search. | |
| Observability you can query — the Medusa stack | | |
| Metrics | Metrics collection and alerting. Every container exposes /metrics; Prometheus scrapes on a schedule. | |
| Dashboards | Visualisation and monitoring dashboards. The single pane across metrics, logs and traces. | |
| Logging | Log aggregation and querying (Grafana Labs). The "query, not sprint" audit trail lives here. | |
| Tracing | Distributed tracing (Grafana Labs). Correlates spans with the MDXF envelope UUID. | |
| Identity & access — one source of truth | | |
| SSO / IdP / RBAC | Open-source SSO, identity provider and role-based access control. The access model is declared once in Organisation as Code. | |
Deploy anywhere Docker runs
Microcelium is containerised end-to-end. Run it on your own hardware for data sovereignty, in AWS or Azure for scale, or mix and match. The platform is the constant; the compute is your choice.
On-prem with Proxmox
KVM-backed virtualisation on your own hardware. Data sovereignty, predictable costs, no egress bills. The Substrate layer runs here by default.
AWS
EC2 hosts with Docker, managed RDS (MySQL or PostgreSQL), S3 when you'd rather not self-host object storage. Reference architecture provided for client exit.
Azure
VMs with Docker, Azure Database (MySQL or PostgreSQL), Blob Storage when required. Suits UK-regulated workloads needing Microsoft-aligned compliance posture.
Every deployment ships with a documented exit architecture. No lock-in — not to us, not to a cloud.
Design principles
Vendor Agnostic
Multi-cloud: AWS, Azure, on-premises. Documented exit architectures provided with every Managed Build engagement. No lock-in.
Microservices where the boundary pays for itself
Splitting services costs coordination — extra network hops, cross-service tracing, distributed transactions. A service exists when its domain is different enough to warrant the overhead. When it isn't, see Pragmatic Component Design below.
Pragmatic Component Design
A well-designed larger service beats three thin ones when a single codebase is clearer. The Ingestion Engine handles protocol abstraction, transform and routing in one service because splitting it would add coordination cost without clarity gain. Performance over dogma.
No shared failure plane
Each service has its own Docker network, its own Traefik route, its own CI/CD pipeline. A bug in one deployment can't take down another; a schema change in one service doesn't cascade into a sibling. Isolation is the safeguard.
Event-Driven (Pub/Sub)
Asynchronous messaging. Loose coupling. Event sourcing for auditability.
MDXF — one envelope for everything
MDXF — Microcelium Data eXchange Format — is a single standardised JSON envelope for every inter-service message on the bus: requests and responses, whether async, sync RPC or fire-and-forget. Language-agnostic by design; reference implementations in Python, Node.js and PHP.
Envelope fields
version, message_id (UUID v4), correlation_id, reply_to, timestamp (ISO 8601 UTC), ttl, source_service, type (request/response), identity, status, error, payload.
Signing
Optional HMAC-SHA256 over canonical JSON (keys sorted recursively, no whitespace). Shared secret via RELAY_HMAC_SECRET. Receivers can require signatures or accept unsigned traffic per service policy.
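The canonicalise-then-sign step fits in a few stdlib calls. A sketch: the hex encoding of the digest, and where the signature travels in the message, are assumptions not stated above.

```python
import hashlib
import hmac
import json
import os

def canonical_json(envelope: dict) -> bytes:
    # Keys sorted recursively, no whitespace: json.dumps with sort_keys=True
    # sorts nested dict keys too, and the separators drop all padding.
    return json.dumps(envelope, sort_keys=True, separators=(",", ":")).encode()

def sign(envelope: dict, secret: bytes) -> str:
    """HMAC-SHA256 over the canonical JSON (hex digest assumed)."""
    return hmac.new(secret, canonical_json(envelope), hashlib.sha256).hexdigest()

def verify(envelope: dict, signature: str, secret: bytes) -> bool:
    """Constant-time comparison; receivers may require this per policy."""
    return hmac.compare_digest(sign(envelope, secret), signature)

# Shared secret, as above, via the environment.
secret = os.environ.get("RELAY_HMAC_SECRET", "dev-secret").encode()
```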
Identity
When a message acts on behalf of an authenticated user, the raw JWT travels in the identity field. Consumers decode and verify independently against the issuer's public key.
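For illustration, here is how a consumer can inspect the claims in that JWT with the stdlib alone. This deliberately skips signature verification: a real consumer must verify against the issuer's public key (with a proper JWT library) before trusting anything in the token.

```python
import base64
import json

def jwt_claims_unverified(token: str) -> dict:
    """Decode the claims segment of the JWT carried in `identity`.
    Does NOT verify the signature; for inspection and logging only."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```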
Topology
Messages publish to the default AMQP exchange, routed by queue name. Each service declares one durable <service>.incoming queue; both requests and responses land there, distinguished by type. Prefetch is 1 per consumer.
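The queue-naming convention reduces routing to a couple of pure functions. A sketch only: the broker calls themselves (queue declaration, prefetch) are omitted, and the target_service parameter is our naming, not the spec's.

```python
def incoming_queue(service: str) -> str:
    """Each service owns one durable '<service>.incoming' queue."""
    return f"{service}.incoming"

def route(envelope: dict, target_service: str) -> str:
    """Pick the routing key (= queue name) on the default exchange.
    Responses go back to the requester's reply_to queue; requests go
    to the target service's incoming queue."""
    if envelope.get("type") == "response" and envelope.get("reply_to"):
        return envelope["reply_to"]
    return incoming_queue(target_service)
```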
Patterns
Async (default) — reply_to points at sender's queue. Sync RPC — exclusive temp queue, blocking call. Fire-and-forget — reply_to omitted. Same envelope across all three.
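Abbreviated envelopes make the point that the three patterns differ only in reply_to. These dicts show just the distinguishing fields; real envelopes carry everything listed earlier.

```python
def request_async(source: str, payload: dict) -> dict:
    """Async (default): reply_to points back at the sender's own queue."""
    return {"type": "request", "source_service": source,
            "reply_to": f"{source}.incoming", "payload": payload}

def request_sync_rpc(source: str, payload: dict, temp_queue: str) -> dict:
    """Sync RPC: reply_to is an exclusive temporary queue the caller blocks
    on (creating that queue is a broker operation, not shown here)."""
    return {"type": "request", "source_service": source,
            "reply_to": temp_queue, "payload": payload}

def request_fire_and_forget(source: str, payload: dict) -> dict:
    """Fire-and-forget: reply_to omitted, no response expected."""
    return {"type": "request", "source_service": source,
            "reply_to": None, "payload": payload}
```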
Standard errors
MESSAGE_EXPIRED, SIGNATURE_MISSING, SIGNATURE_INVALID, INVALID_PAYLOAD, PROCESSING_FAILED, UNSUPPORTED_VERSION. Services may add their own codes.
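A sketch of the error path. The code list comes from the spec above, but the exact layout of the status and error fields in a response envelope is our assumption.

```python
STANDARD_ERRORS = {
    "MESSAGE_EXPIRED", "SIGNATURE_MISSING", "SIGNATURE_INVALID",
    "INVALID_PAYLOAD", "PROCESSING_FAILED", "UNSUPPORTED_VERSION",
}

def error_response(request: dict, code: str, detail: str = "") -> dict:
    """Build a response envelope reporting a standard (or service-specific)
    error, correlated back to the failed request. Field shapes for
    status/error are assumed, not taken from the spec."""
    return {
        "type": "response",
        "correlation_id": request.get("correlation_id"),
        "status": "error",
        "error": {"code": code, "detail": detail},
        "payload": None,
    }
```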
No lock-in: MDXF is an open, MIT-licensed format — JSON over standard AMQP. Spec and reference implementations are public. If you move away from Microcelium your messages are plain JSON on a plain message broker — nothing proprietary at the transport layer.
Want a technical walkthrough?
Book an architectural briefing. We'll walk through the Pyraburg layers, show real component interactions, and discuss how this maps to your requirements.