OpenUniverse
There are no boundaries
Let's Create
Get Started Learn More…

What is Universe?

A Universe is a complete digital environment that connects data, information, and knowledge across diverse domains. It extends beyond IT infrastructure to include all data sources: physical assets, business processes, regulatory constraints, data lifecycles, and knowledge flows. A Universe can be understood through different perspectives:

  • Structural (composition, ownership, hierarchy).
  • Functional (usage, control, data flow).
  • Historical (lineage, evolution, journals and immutable traces of changes).
  • Regulatory (compliance obligations, contractual bindings, access constraints).
  • Protocols (standards, interoperability, communication rules).

It also encompasses events, transactions, state changes, and lifecycle transitions, thereby enabling reasoning about system evolution. In this sense, a Universe is a living ecosystem where raw data flows into information, and information evolves into knowledge, shaping how people, machines, and structures interact securely and intelligently.

As Code

Universe as Code

UaC (Universe as Code) is the forward-thinking extension of the well-known IaC (Infrastructure as Code) paradigm. While IaC focuses on describing and automating IT infrastructure such as servers, networks, and software stacks, UaC embraces a much broader and more complex scope. In UaC, the principle of declarative, code-driven definition extends not only to infrastructure but also to physical assets, business processes, regulatory constraints, data lifecycles, and knowledge flows.

Just as IaC transformed system administration into programmable workflows, UaC envisions an environment where entire digital-physical ecosystems can be described, evolved, and reasoned about through code. This approach anticipates future needs where interoperability, compliance, intelligence, and evolution of systems are orchestrated holistically, making UaC a natural evolution beyond IaC.

Connect

Why OpenUniverse?

OpenUniverse is an ultralight, security-first open source UaC (Universe as Code) platform that unifies your environment.

Traditional Infrastructure as Code tools such as Terraform are effective for provisioning and configuration but are not designed for real-time, event-driven orchestration across diverse systems. OpenUniverse fills this gap by enabling triggers, workflows, and responses to be defined as code and executed dynamically across cloud services, on-premise systems, IoT devices, and legacy applications.

The platform ensures that orchestration logic is versioned, signed, and timestamped for compliance and traceability, while maintaining security through cryptographic chaining of records. Because it is system-neutral, it integrates across heterogeneous technologies without vendor lock-in.

By combining real-time coordination with auditability and long-term verifiability, OpenUniverse extends the benefits of IaC into the operational domain, offering a structured and predictable approach to automation that aligns with regulatory and governance needs.



Decentralized Orchestration

OpenUniverse operates in a world without a single point of control. Unlike traditional systems that rely on centralized servers or orchestration engines, OpenUniverse treats every component — events, jobs, triggers, processors, and systems — as part of a decentralized, self-organizing network.

In OpenUniverse, workflows are not dictated by a central engine. Instead, each component communicates through cross-referenced event streams, allowing jobs and triggers to react dynamically to changes anywhere in the system. This model ensures:

  • Resilience: There is no single point of failure. Nodes can join or leave the system without disrupting processing.
  • Scalability: Workloads distribute naturally across available resources, allowing the system to expand organically.
  • Autonomy: Each component decides its actions based on local state and incoming events, while still contributing to global orchestration.

...There Is No Center




How it works

OpenUniverse operates in several stages to transform static document definitions into a dynamic, event-driven infrastructure.

Before any processing begins, OpenUniverse optionally performs a self-check to verify its own integrity:

  • The distribution JAR is validated against its expected SHA-256 checksum.
  • The JAR’s digital signature is verified to ensure it originates from a trusted source and has not been tampered with.
  • A detailed self-check report is generated and stored in the repository, providing a permanent audit trail of verification results.

Next, the repository working directory is scanned for documents. Each document is represented as a JSON object. A single JSON file may contain a single document or an array of documents. During this scan, OpenUniverse automatically skips any documents with unsupported specification versions or those explicitly marked as disabled.
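
The scan stage can be sketched in a few lines of Python. The `Header`, `SpecVer`, and `Disabled` field names follow the document model described later; the `SUPPORTED_SPEC_VERS` set and the function itself are illustrative assumptions, not the platform's actual API:

```python
import json
from pathlib import Path

SUPPORTED_SPEC_VERS = {"1.0"}  # hypothetical set of accepted spec versions

def load_documents(repo_dir):
    """Scan a working directory for JSON documents, normalizing
    single-document files and array files into one flat list."""
    docs = []
    for path in sorted(Path(repo_dir).rglob("*.json")):
        data = json.loads(path.read_text())
        # A file may hold one document or an array of documents.
        candidates = data if isinstance(data, list) else [data]
        for doc in candidates:
            header = doc.get("Header", {})
            # Skip unsupported spec versions and explicitly disabled documents.
            if header.get("SpecVer") not in SUPPORTED_SPEC_VERS:
                continue
            if header.get("Disabled", False):
                continue
            docs.append(doc)
    return docs
```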

In addition to JSON, documents and plugins may also be authored directly in Markdown files. This allows developers to provide human-readable documentation alongside executable definitions, blending source, commentary, and infrastructure logic in a single artifact.

If the repository is marked to enforce signed commits (via repository configuration), OpenUniverse performs commit signature verification before any constraint checks:
  • The system extracts the HEAD commit signature and verifies its cryptographic validity.
  • It evaluates the signing key:
    • Only keys marked as ultimately trusted (u) or fully trusted (f) in the GnuPG keyring are accepted.
    • Keys that are revoked, expired, disabled, marginal, or unknown are rejected.
  • If the signature verification fails or the key does not meet the strict trust criteria, processing halts to prevent execution of untrusted or tampered documents.
If the repository is not marked for signed commits, this step is skipped and documents are processed normally.
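
As a rough illustration, the trust evaluation could be built on `git verify-commit --raw`, which emits a GnuPG status transcript. The marker sets below are an assumption about how those status lines map to the trust levels described above, not OpenUniverse's actual implementation:

```python
import subprocess

ACCEPTED_TRUST = {"TRUST_ULTIMATE", "TRUST_FULLY"}       # keyring trust u / f
REJECT_MARKERS = {"REVKEYSIG", "EXPKEYSIG", "EXPSIG", "ERRSIG"}

def evaluate_status(raw_status):
    """Decide whether a GnuPG status transcript (as printed by
    `git verify-commit --raw`) meets the strict trust criteria."""
    markers = [line.split()[1] for line in raw_status.splitlines()
               if line.startswith("[GNUPG:] ") and len(line.split()) > 1]
    if "GOODSIG" not in markers:          # no valid signature at all
        return False
    if REJECT_MARKERS & set(markers):     # revoked / expired / error
        return False
    return bool(ACCEPTED_TRUST & set(markers))

def verify_head_commit(repo_dir):
    """Run the actual verification against the repository's HEAD commit.
    git prints the GnuPG status transcript on stderr."""
    proc = subprocess.run(
        ["git", "-C", repo_dir, "verify-commit", "--raw", "HEAD"],
        capture_output=True, text=True)
    return proc.returncode == 0 and evaluate_status(proc.stderr)
```

Marginal or unknown trust simply fails the final set intersection, so only ultimately or fully trusted keys pass.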

Next, each discovered document is passed through an optional chain of pre-run constraints, if declared. Constraints ensure that documents meet structural, logical, and environmental requirements before execution; processing does not start if any constraint reports a violation.
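
A minimal sketch of such a constraint chain, assuming each constraint follows the PluginObject shape (Command, Arguments, EnvironmentVariables, WorkingDirectory) and signals a violation with a non-zero exit code; the function name is hypothetical:

```python
import os
import subprocess

def run_pre_run_constraints(constraints):
    """Execute each constraint plugin in order; a non-zero exit code
    marks the document invalid and halts further processing."""
    for c in constraints:
        proc = subprocess.run(
            [c["Command"], *c.get("Arguments", [])],
            env={**os.environ, **c.get("EnvironmentVariables", {})},
            cwd=c.get("WorkingDirectory") or None)
        if proc.returncode != 0:
            return False  # constraint reported a violated requirement
    return True
```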

Once validation is complete, OpenUniverse enters the discovery phase. Here, the system analyzes declared search queries, resolves cross-references between documents, and maps their relationships.

After discovery, OpenUniverse loads the resolved document instances and begins execution. At this point, the runtime becomes active:
  • Triggers fire events based on conditions, calendars, or schedules
  • Processors consume event messages and execute the appropriate activities across systems defined in jobs
  • Export targets deliver data to external backends for storage or further processing
  • DMQ intercepts undeliverable or failed messages, routing them into a dedicated dead-message queue for later inspection, retries, or manual handling

Every record produced by OpenUniverse is secured and traceable through multiple layers of protection:

  • Identity & Ordering – each record carries a globally unique identifier (GUID) and a serial number within its node’s stream, ensuring uniqueness and ordered traceability.
  • Integrity & Authenticity – contents are hashed with SHA-256 and digitally signed to prevent tampering and prove origin.
  • Time Assurance – records receive real-time NTP timestamps, are additionally sealed by a trusted Certificate Authority (CA) for long-term non-repudiation, and timestamped by a Time Stamping Authority (TSA, RFC 3161) for independent verification.
  • Immutability – every record references the hash of its predecessor, forming a blockchain-style ledger that makes alterations immediately evident.
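
The identity, hashing, and chaining layers can be illustrated with a small hash-chained ledger. CA sealing and RFC 3161 TSA timestamps are out of scope for this sketch, and the field names are hypothetical:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def append_record(ledger, payload):
    """Append a record carrying a GUID, a per-stream serial number, a UTC
    timestamp, a SHA-256 content hash, and the predecessor's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {
        "guid": str(uuid.uuid4()),
        "serial": len(ledger),                       # ordered within the stream
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
        "prev_hash": prev_hash,                      # blockchain-style chaining
    }
    body = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(body).hexdigest()
    ledger.append(record)
    return record

def verify_chain(ledger):
    """Recompute every hash; any alteration breaks the chain."""
    for i, record in enumerate(ledger):
        expected_prev = ledger[i - 1]["hash"] if i else "0" * 64
        if record["prev_hash"] != expected_prev:
            return False
        body = {k: v for k, v in record.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["hash"] != digest:
            return False
    return True
```

Editing any historical payload invalidates that record's hash and, through the `prev_hash` links, makes the tampering immediately evident.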


Core components

The architecture is based on documents as the primary abstraction. Each component—events, jobs, triggers, systems, and processors—is described as a structured document, with the RootDocument serving as the central blueprint. This model makes the environment self-describing, auditable, and easy to evolve.

Execution is fully event-driven. Events from calendars, schedulers, or publishers activate jobs, which process them through defined processors and then act on target systems. Instead of static configuration, relationships between jobs, events, and systems are discovered dynamically at runtime, creating adaptive orchestration flows.

Documents:


AbstractDocument (Abstract)

Header (HeaderObject)
Properties (Map)
RootDocument
extends AbstractDocument

Header (HeaderObject)
Properties (Map)
├ Triggers (List<TriggerDocument>)
├ Jobs (List<JobDocument>)
├ SignSettings (SignSettingsObject)
├ HashAlgorithm (String)
├ UnprotectPlugin (PluginObject)
├ ExportTargets (List<ExportTargetObject>)
└ DMQPlugin (PluginObject)
EventDocument
extends AbstractDocument

Header (HeaderObject)
Properties (Map)
└ Processable (Boolean)
Triggers/CalendarDocument
extends AbstractDocument

Header (HeaderObject)
Properties (Map)
└ Calendars (List<CalendarObject>)
Triggers/SchedulerDocument
extends AbstractDocument

Header (HeaderObject)
Properties (Map)
└ Schedulers (List<SchedulerObject>)
Triggers/EventPublisherDocument
extends AbstractDocument

Header (HeaderObject)
Properties (Map)
└ EventPublisherPlugin (PluginObject)
JobDocument
extends AbstractDocument

Header (HeaderObject)
Properties (Map)
├ Systems (List<SystemDocument>)
└ EventProcessors (List<ProcessorDocument>)
SystemDocument
extends AbstractDocument

Header (HeaderObject)
Properties (Map)
└ AbstractSystemDefinition (Object)
EventProcessorDocument
extends AbstractDocument

Header (HeaderObject)
Properties (Map)
└ EventProcessorPlugin (PluginObject)


Objects:


HeaderObject

┌ SpecVer (String)
├ Type (String)
├ Name (String)
├ Description (String)
├ Disabled (Boolean)
├ Tags (List)
├ Attributes (Map)
└ PreRunConstraints (List<PluginObject>)
PluginObject

┌ Command (String)
├ Arguments (List)
├ EnvironmentVariables (Map)
├ WorkingDirectory (String)
├ ErrorLogFile (String)
└ InstancesCount (Integer)
CalendarObject

┌ EventType (String)
├ ScheduledFor (Time)
└ user-defined fields...
SchedulerObject

┌ EventType (String)
├ CronExpression (String)
└ user-defined fields...
SignSettingsObject

┌ KeyStoreType (String)
├ KeyStoreFile (String)
├ KeyStorePassword (String)
├ KeyAlias (String)
├ KeyPassword (String)
└ SignatureAlgorithm (String)
ExportTargetObject

┌ Id (String)
├ StoreAsArray (Boolean)
├ EnableCompression (Boolean)
├ CompressionLevel (Integer)
└ ExportTargetPlugin (PluginObject)


Plugins:


Event Publisher Plugin

Generates and publishes event messages

Required in:
  • Triggers/EventPublisherDocument
Event Processor Plugin

Consumes events, applies processing logic, and emits the result

Required in:
  • EventProcessorDocument
Pre-Run Validator Plugin

Validates OpenUniverse documents before execution starts

Optional in:
  • RootDocument
  • EventDocument
  • JobDocument
  • Triggers/EventPublisherDocument
  • Triggers/CalendarDocument
  • Triggers/SchedulerDocument
  • EventProcessorDocument
  • SystemDocument
Secret Extractor Plugin

Decrypts or extracts protected values for runtime usage.

Optional in:
  • RootDocument
Export Target Plugin

Delivers JSON messages to external backend(s)

Optional in:
  • RootDocument
Dead Message Queue (DMQ) Plugin

Safely stores undeliverable messages in the Dead Message Queue (DMQ) for later inspection or reprocessing

Optional in:
  • RootDocument



Key Features


Client-side by Design

Runs as a lightweight client, not a server — easy to integrate into existing environments without adding infrastructure overhead.

  • No exposed server endpoints
  • Data stays local — sensitive information is processed on the client
  • Aligns with strict data residency and regulatory requirements

Flexible Document Formats

Documents can be authored in JSON, YAML, or JavaScript.
When JSON is saved with a .js extension, JavaScript-style comments are allowed. In addition, the Markdown (MD) format is supported, which, along with text, can include JSON/YAML documents and plugins. At startup, OpenUniverse transforms all Markdown documents into JSON/YAML and extracts the embedded plugins.
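
A sketch of the extraction step, pulling fenced JSON/YAML blocks out of a Markdown file while leaving the surrounding commentary behind. The real transformation also extracts plugins; the function name is hypothetical:

```python
import re

TICKS = "`" * 3  # a literal triple-backtick fence marker
FENCE = re.compile(TICKS + r"(json|yaml)\n(.*?)" + TICKS, re.DOTALL)

def extract_embedded_documents(markdown_text):
    """Return (language, body) pairs for every fenced json/yaml
    block embedded in the Markdown source."""
    return [(lang, body.strip()) for lang, body in FENCE.findall(markdown_text)]
```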


Integrated Version Control

Version control is built directly into the platform, ensuring that every change to systems, jobs, events, and relationships is tracked and managed. This makes history, auditing, and rollback an integral part of how the platform operates, not an external add-on.

  • Native tracking of changes across all core components
  • Enables reproducibility and reliable rollbacks
  • Improves collaboration with transparent history
  • Supports compliance and audit requirements out of the box

Ontology-driven Component Discovery

Components are discovered dynamically based on their tags and attributes rather than hard-coded lists. Relationships describe how core components interact with each other. They are established through search queries over tags and attributes, allowing systems, jobs, events, and processors to be linked in a flexible and adaptive way.

  • Emphasizes interactions rather than static connections
  • Dynamic discovery based on component tags and attributes
  • Reduces configuration complexity by avoiding hard-coded links
  • Adapts naturally as environments evolve
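
A minimal stand-in for such a discovery query, matching documents on Header tags and attributes. The field names follow the HeaderObject layout above; the exact query semantics are an assumption:

```python
def discover(documents, tags=None, attributes=None):
    """Return documents whose Header carries all requested tags
    and all requested attribute values."""
    tags = set(tags or [])
    attributes = attributes or {}
    matches = []
    for doc in documents:
        header = doc.get("Header", {})
        if not tags <= set(header.get("Tags", [])):
            continue  # missing at least one required tag
        attrs = header.get("Attributes", {})
        if any(attrs.get(k) != v for k, v in attributes.items()):
            continue  # attribute value mismatch
        matches.append(doc)
    return matches
```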

Pre-Run Constraints

Each document can define pre-run constraints that validate conditions before a processing loop begins. These checks act as safeguards, ensuring that jobs and processes only start when prerequisites are met.

  • Prevents faulty or unsafe executions
  • Ensures environments and dependencies are ready
  • Provides a clear and declarative way to enforce policies

Continuous Native Processing

Employs continuous, parallel, native processes that stream data directly: input is read from stdin and output is written to stdout. This ensures seamless composition with other tools and pipelines, staying true to the UNIX philosophy.

  • Native, continuous data processing
  • Naturally parallel for efficiency
  • Works smoothly with existing shell tools and pipelines
  • Efficient, real-time data handling
  • Supports both synchronous and asynchronous operations
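
A processor written as a UNIX filter might look like this: one JSON event per input line, filtered and re-emitted on stdout so it composes with ordinary shell pipelines. The `processed` marker is purely illustrative:

```python
import json
import sys

def process_stream(lines, out):
    """Read JSON events line by line, drop non-processable ones,
    stamp the rest, and stream them to the output."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if not event.get("Processable", True):
            continue
        event["processed"] = True  # hypothetical marker for this sketch
        out.write(json.dumps(event) + "\n")
        out.flush()  # keep the stream continuous, not batched

if __name__ == "__main__":
    process_stream(sys.stdin, sys.stdout)
```

Because the plugin is just a process with stdin/stdout, it can be chained with `grep`, `jq`, or any other filter.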

Programming Language Agnostic

The platform is fully independent of programming languages. It can orchestrate jobs, processes, and events regardless of whether systems use Java, Python, JavaScript, C++, or any other language.

  • Works with any programming language
  • Integrates heterogeneous systems without language constraints
  • Enables mixed-language workflows
  • Future-proof for evolving development stacks

Universal “System” Abstraction

Everything is modeled as a system — from hardware such as IoT devices, robots and CNC machines to enterprise databases and NGFWs. This flexible concept makes it possible to orchestrate and observe heterogeneous environments with a single approach.

  • Unified model for diverse systems
  • Reduces complexity of integration
  • Scales from small devices to enterprise infrastructure

Event Triggers

Jobs and processes can be activated by events, such as calendar and scheduler timepoints, or triggered by external signals. Triggers define when and why execution starts.

  • Enables automation with or without manual intervention
  • Supports time-based (schedulers, calendars) and data-driven triggers
  • Reactive design for real-time responsiveness
  • Unifies multiple event sources under one model
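
A calendar-style trigger can be sketched as a periodic evaluation of CalendarObject-like entries (EventType plus an ISO 8601 ScheduledFor). The emitted event shape is an assumption:

```python
from datetime import datetime, timezone

def due_calendar_events(calendars, now):
    """Emit one event for each calendar entry whose scheduled
    timepoint has been reached."""
    events = []
    for cal in calendars:
        scheduled = datetime.fromisoformat(cal["ScheduledFor"])
        if scheduled <= now:
            events.append({"EventType": cal["EventType"],
                           "firedAt": now.isoformat()})
    return events
```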

Event Processors

Event processors transform, filter, route, and respond to events across affected systems.

  • Decouples event producers from event consumers
  • Flexible filtering and transformation of event data
  • Scales from simple notifications to complex event-driven pipelines
  • Bridges real-time signals with system orchestration

Job as a Core Unit

A job executes a set of processes across one or many systems. Jobs define what to run, where to run it, and which results to collect. They provide a consistent way to express distributed execution.

  • Core abstraction for orchestrating work
  • Distributed execution across heterogeneous systems
  • Parallelism for speed and efficiency
  • Simplifies complex workflows into a single unit
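
The fan-out can be sketched with a thread pool: run one action per system in parallel and collect the results keyed by system name. The `Name` field and the return shape are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(systems, action):
    """Execute `action` against every system in parallel and
    return a {system name: result} map."""
    with ThreadPoolExecutor(max_workers=len(systems) or 1) as pool:
        results = pool.map(lambda s: (s["Name"], action(s)), systems)
        return dict(results)
```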

Vendor-neutral Real-time Streaming

You’re not locked into a single backend tool. Export messages asynchronously to any backend in real-time:

  • Message brokers (Kafka, RabbitMQ, Redis Streams, etc.)
  • Time-series databases (Prometheus, VictoriaMetrics, Graphite, etc.)
  • Search engines (Solr, OpenSearch, Elasticsearch, etc.)
  • Relational databases (Oracle, Db2, PostgreSQL, MS SQL Server, MySQL, etc.)
  • Query and visualize your data using tools like Grafana, Perses, Datadog, Tableau, etc.

Abstract DMQ

Abstract DMQ ensures that no message is ever lost — automatically capturing and preserving undeliverable (“dead”) messages across the universe for recovery, analysis, and compliance. It can use any backend for storage and management of dead messages.

  • Backend-agnostic — supports any storage system (files, databases, object stores, queues)
  • Message safety net — guarantees visibility into all undeliverable events
  • Actionable management — inspect, triage, and reprocess messages after fixes
  • Compliance-ready — maintains audit trails for regulatory and operational needs
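
A file-backed DMQ illustrating the capture-and-replay cycle, with one JSON object per line. The function names are hypothetical; any of the backends listed above could stand in for the file:

```python
import json
from pathlib import Path

def to_dmq(dmq_file, message, reason):
    """Append an undeliverable message to the dead-message queue
    so it is never silently lost."""
    entry = {"reason": reason, "message": message}
    with open(dmq_file, "a") as f:
        f.write(json.dumps(entry) + "\n")

def replay(dmq_file, deliver):
    """Re-attempt delivery of every dead message; return the entries
    that still fail so they can be triaged manually."""
    path = Path(dmq_file)
    if not path.exists():
        return []
    still_dead = []
    for line in path.read_text().splitlines():
        entry = json.loads(line)
        if not deliver(entry["message"]):
            still_dead.append(entry)
    return still_dead
```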

Secured and Traceable Records

Every record produced by OpenUniverse is protected through multiple layers of security, ensuring authenticity, integrity, and long-term verifiability. These guarantees provide a strong foundation for regulatory compliance, legal auditability, and adherence to industry standards.

  • Identity & Ordering – globally unique identifiers (GUIDs) and sequential serial numbers within each node’s stream guarantee uniqueness, ordered traceability, and legally defensible audit trails.
  • Integrity & Authenticity – SHA-256 hashing combined with cryptographic signing prevents tampering, proves origin, and satisfies evidentiary requirements for legal proceedings.
  • Time Assurance – real-time NTP timestamps are reinforced by trusted Certificate Authority (CA) sealing and RFC 3161-compliant Time Stamping Authority (TSA) proofs, ensuring admissible, verifiable time records in line with compliance frameworks.
  • Immutability – blockchain-style chaining of records creates an immutable, tamper-evident ledger that supports governance, regulatory mandates, and long-term archival policies.

MIT License

The platform is released under the MIT License, one of the most widely adopted open-source licenses. It grants broad freedom to use, modify, and distribute the software with minimal restrictions.

  • Permissive and business-friendly
  • Encourages adoption and community contributions
  • Compatible with both open-source and commercial projects
  • Simple, clear terms with no hidden limitations


Connect

Constructor for Your Next Startup

Turn your MVP into a scalable, production-ready reality from day one. OpenUniverse lets you focus on building your core product — while it takes care of integration, orchestration, and automation. Go from prototype to global scale without rewriting your architecture. Embrace a platform that’s flexible, language-agnostic, and built to adapt as your vision grows.

...MVP in action



EIP on Steroids

Today’s IT landscape is complex, with diverse protocols, multi-cloud services, and growing use of asynchronous communication. Enterprise Integration Patterns (EIP) offer proven solutions and a shared vocabulary to design reliable, adaptable integrations that handle complexity and failures gracefully.


Connect

Access Anything

Works seamlessly across platforms, clouds, and systems. Connect to cloud services, on-prem systems, APIs, message queues, databases, or custom endpoints. The platform speaks your language and adapts to your ecosystem — no need to conform to a rigid stack.

...connect them all


Reactive

Reactive by Design

Engineered for real-time execution and scalable throughput. Event flows are processed as they happen. The platform supports asynchronous triggers, backpressure handling, and distributed execution to meet the demands of fast-moving systems.



Built for DevOps & Platform Teams

Empower your engineers to automate safely and consistently. With built-in support for version-controlled workflows, audit trails, policy integration, and modular architecture, teams can build reliable automation without sacrificing control or visibility.


Extensible

True Language-Agnostic

Bring your own tools, scripts, and runtimes. Use any language or executable interface to build event sources, processors, or responders. Whether it's a shell script, Python program, or compiled binary — if it runs, it integrates.


Extensible

Composable Infrastructure

Design your infrastructure like building with blocks — intuitive, modular, and endlessly flexible. Define every component as code, integrate any system at any scale, and use your preferred tools and languages. Build fast, evolve freely, and stay in control.


Extensible

AI-Free, Verifiable by Design

Every outcome is deterministic, transparent, and cryptographically verifiable. No AI-driven ambiguity—just open logic, signed data, and reproducible processes. Designed for those who demand proof over promises.

...security first


Extensible

Behind the Insights

1. A Scalable, event-driven architecture for designing interactions across heterogeneous devices in smart environments
Ovidiu-Andrei Schipor, Radu-Daniel Vatavu, Jean Vanderdonckt. Information and Software Technology, Vol. 109 (2019), 43–59, ISSN: 0950-5849

2. The Power of Event-Driven Architecture: Enabling Real-Time Systems and Scalable Solutions
Adisheshu Reddy Kommera. Turkish Journal of Computer and Mathematics Education, Vol. 11 (2020), 1740-1751, ISSN: 3048-4855

3. Exploring event-driven architecture in microservices: patterns, pitfalls and best practices
Chavan, Ashwin. International Journal of Science and Research Archive, Vol. 4 (2021), 229-249, ISSN: 2582-8185

...more refs in the docs


Extensible

Vendor-neutral Real-time Streaming

Export messages asynchronously to any backend in real-time:

  • Message brokers
  • Time-series databases
  • Search engines
  • Relational databases

Query and visualize your data using tools like Grafana, Perses, Datadog, Tableau, etc. This gives flexibility—you’re not locked into a single backend tool.

...fit to your case


Extensible

Plays well with…

Kafka, Prometheus, Grafana, Perses, Apache Solr, OpenTelemetry, Apache NiFi, Fluentd, OpenSearch

...and others


Extensible


"code-lang": "ANY"

...Language Agnostic



Extensible

MIT License

The MIT License provides freedom to use, modify, and integrate the software in any project — open source or commercial. With no copyleft, legal barriers, or hidden constraints, it enables building, scaling, and innovation on flexible terms.

...hack it forward