Deployment Architecture

Infrastructure and deployment model for the Demiton platform.

The Demiton platform operates as distributed infrastructure composed of several cooperating services. Together these services execute workflows, interact with external systems, and provide AI capabilities.

Deployment environments are typically hosted on cloud infrastructure.


Core Components

A standard deployment includes several primary components.

API Layer

The API layer handles:

• authentication
• workflow triggers
• user interaction

The API layer is typically implemented using FastAPI.


Worker Runtime

The worker runtime executes Blueprint workflows.

Workers retrieve jobs from the queue and process workflow steps sequentially.

Responsibilities include:

• executing workflow steps
• persisting execution state
• updating step status

Workers are stateless between runs.
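The sequential, state-persisting loop described above can be sketched as follows. The step interface, status names, and persistence hook are assumptions for illustration.

```python
# Illustrative worker loop: execute steps in order, persisting status
# before and after each one so a run can be inspected mid-flight.
from typing import Callable

def run_blueprint(steps: list,
                  save_state: Callable[[str, str, dict], None],
                  context: dict) -> dict:
    """Execute (name, step) pairs sequentially, persisting status per step."""
    for name, step in steps:
        save_state(name, "running", context)
        try:
            context = step(context)  # each step transforms the shared context
        except Exception:
            save_state(name, "failed", context)
            raise
        save_state(name, "succeeded", context)
    return context
```

Because all state goes through the persistence hook, the worker itself stays stateless between runs, as noted above.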


Job Queue

The platform uses a queue system to distribute work.

Typical configuration:

• Redis job broker
• ARQ worker runtime

This allows workflows to execute asynchronously.


Database

The platform stores workflow state inside a relational database.

Typical database:

PostgreSQL

Stored records include:

• BlueprintRun
• StepRun
• execution metadata
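These records might be modeled as in the following sketch. Only the BlueprintRun and StepRun names come from the platform; the table and column definitions are assumptions.

```python
# Hypothetical SQLAlchemy models for the stored records. Column names and
# types are illustrative, not the platform's actual schema.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class BlueprintRun(Base):
    __tablename__ = "blueprint_run"
    id = Column(Integer, primary_key=True)
    blueprint_id = Column(String(64), nullable=False)
    status = Column(String(16), default="queued")

class StepRun(Base):
    __tablename__ = "step_run"
    id = Column(Integer, primary_key=True)
    run_id = Column(Integer, ForeignKey("blueprint_run.id"), nullable=False)
    name = Column(String(64), nullable=False)
    status = Column(String(16), default="pending")
```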


Vector Memory

Vector search enables document retrieval for AI interactions.

Typical implementation:

Azure AI Search

This service stores document embeddings and performs semantic search.
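The mechanics can be illustrated without the Azure client: embeddings are compared by cosine similarity and the closest documents are returned. This is a toy sketch of what the managed service does, not its API.

```python
import math

# Toy semantic search over an in-memory embedding index. In the real
# deployment this lookup is delegated to Azure AI Search.
def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def search(query_embedding: list, index: dict, top_k: int = 3) -> list:
    """Return the ids of the documents whose embeddings best match the query."""
    ranked = sorted(index, key=lambda doc: cosine(query_embedding, index[doc]),
                    reverse=True)
    return ranked[:top_k]
```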


Environment Separation

Deployments typically include two environments.

Sandbox

Used for testing workflows and integrations.

Production

Used for live operational workflows.

Adapters must respect environment configuration to prevent accidental production writes.
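One way an adapter might enforce this is a guard around every outbound write. The config shape and adapter interface here are assumptions.

```python
# Illustrative environment guard for adapters: writes to external systems
# are refused unless the deployment is explicitly in production.
def guarded_write(environment: str, write, record):
    """Call the adapter's write function only in the production environment."""
    if environment != "production":
        raise PermissionError(f"write blocked in {environment!r} environment")
    return write(record)
```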


Security Model

The deployment follows several security principles, including:

• encrypted communication between services
• secure credential storage
• identity-based access control

External system credentials must be stored securely and never hard-coded, written to logs, or exposed outside the runtime components that need them.
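A minimal pattern for this is to fail fast when a credential is missing rather than fall back to a default. The variable name is illustrative; in production a secrets manager would typically back the environment.

```python
import os

# Illustrative credential loading: read from the environment and refuse
# to start if the value is absent, so secrets are never hard-coded.
def load_credential(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing credential {name!r}; refusing to start")
    return value
```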


Scalability

The worker runtime supports horizontal scaling.

Additional worker instances can be deployed to increase throughput.

Because workers pull jobs from the shared Redis queue, load is distributed across them automatically.
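In an ARQ deployment, scaling out amounts to launching more worker processes against the same Redis instance. This is an ops fragment; the module path is a hypothetical example.

```shell
# Each invocation starts one more worker pulling from the same Redis queue.
# "demiton.worker.WorkerSettings" is a hypothetical settings path.
arq demiton.worker.WorkerSettings &
arq demiton.worker.WorkerSettings &
```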


Monitoring

Production deployments should include monitoring for:

• workflow failures
• adapter errors
• queue latency

Monitoring ensures operational issues can be detected quickly.
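A queue-latency check, for example, might flag jobs that have waited too long. The job record shape and threshold are assumptions for illustration.

```python
import time
from typing import Optional

# Illustrative queue-latency check: return queued jobs that have waited
# longer than the alert threshold, based on their enqueue timestamps.
def stale_jobs(jobs: list, max_wait_seconds: float,
               now: Optional[float] = None) -> list:
    now = time.time() if now is None else now
    return [j for j in jobs if now - j["enqueued_at"] > max_wait_seconds]
```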


Summary

A typical deployment includes:

• API service
• worker runtime
• Redis queue
• PostgreSQL database
• vector search service

Together these components provide deterministic workflow execution across connected enterprise systems.


---