# Modern Relay Full LLM Context

Canonical site: https://modernrelay.com/

Company: Modern Relay

Default description: As AI moves from chat to action, agent-native teams need shared operational context, governed change, and infrastructure they control.

Primary category: Enterprise context graph.

Primary positioning: Modern Relay is a governed source of truth where AI agents, teams, and systems share context, coordinate work, and govern change on infrastructure the company owns.

## Keyword Clusters

- Source of truth for AI agents: source of truth for enterprise AI, system of record for AI agents, governed context graph.
- Context and ontology: enterprise context graph, knowledge graph for AI agents, schema-as-code for AI, typed versioned graph.
- Coordination: AI coordination infrastructure, agent coordination infrastructure, shared structured state, human-AI coordination.
- Governance: AI governance layer, agent governance, Git-like governance for knowledge, audit trails, provenance.
- Sovereignty: sovereign AI infrastructure, owned AI infrastructure, open storage for AI agents, infrastructure you own.

# Product Stack

## Agents & Coordination

Compounding automation instead of chaos.

Bring your own agents and models or use ours. Agents read shared context, write back governed proposals, and coordinate through the graph. One surface for humans and agents to work together.

## Policy & Permissions

Govern change like code.

Every mutation starts as a proposal. Branch, review, merge, and enforce policy before changes propagate. Every change is traceable, every decision auditable, every action reversible.

## Context Graph

One operational reality for agents and teams.

A typed, versioned graph for the entities, relationships, and rules your organization runs on. Schema-as-code with a real compiler that validates definitions before data enters the graph.
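To make the schema-as-code idea concrete, here is a deliberately minimal sketch of the kind of check such a compiler performs before data enters the graph. All names here (`EntityType`, `validate`) are hypothetical illustrations, not the actual Omnigraph API; a real schema compiler also enforces relationships, constraints, and versions.

```python
from dataclasses import dataclass

# Hypothetical sketch of schema-as-code validation: a typed entity
# definition acts as a gate that records must pass before insertion.

@dataclass
class EntityType:
    name: str
    fields: dict  # field name -> expected Python type

def validate(entity_type: EntityType, record: dict) -> list[str]:
    """Return a list of violations; an empty list means the record may enter the graph."""
    errors = []
    for fname, ftype in entity_type.fields.items():
        if fname not in record:
            errors.append(f"missing field: {fname}")
        elif not isinstance(record[fname], ftype):
            errors.append(
                f"{fname}: expected {ftype.__name__}, got {type(record[fname]).__name__}"
            )
    for fname in record:
        if fname not in entity_type.fields:
            errors.append(f"unknown field: {fname}")
    return errors

customer = EntityType("Customer", {"id": str, "arr": float, "tier": str})
ok = validate(customer, {"id": "c1", "arr": 120000.0, "tier": "enterprise"})  # []
bad = validate(customer, {"id": "c2", "arr": "high"})  # type error + missing field
```

The point of the gate is that validation failures are surfaced at definition time, before bad data can propagate to every agent reading the graph.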

## Storage

Zero lock-in. Your data stays yours.

Open-source storage built on Lance and Apache Arrow. Columnar, versioned, and readable by any tool. No proprietary encoding. No vendor-owned state. Your data stays on infrastructure you own.

# Use Cases

## Competitive Intelligence

A structured competitive landscape that stays current as signals change

Challenge: Competitive signals are scattered across news feeds, patent filings, earnings calls, and analyst reports. By the time someone assembles the picture, it's already out of date. Teams make strategic decisions on stale intelligence.

With Modern Relay: Agents monitor public filings, patents, pricing changes, and market signals, then write structured findings back to the graph. Strategy teams can query a current picture instead of waiting for a quarterly report.

Signals:

- Continuous: monitoring
- Broader: signal coverage

## Deal Intelligence

Deal history, positioning, and relationships as shared, queryable context

Challenge: Deal context lives across CRM fields, email threads, call transcripts, and people's heads. A non-standard deal touches 14 people across 4 teams. By the time CS takes over, half the context is already lost.

With Modern Relay: Agents enrich the deal graph from CRM, emails, calls, and prior engagements as information arrives. Sales, legal, and CS work from the same structured context, and past wins and losses become reusable intelligence for the next deal.

Signals:

- 14 → fewer: context handoffs
- Less: context lost

## Regulatory Compliance

Policies, regulations, and compliance posture in one governed graph

Challenge: Regulations change. Internal policies live in documents nobody reads. Mapping what's required to what's actually happening means weeks of manual review. Audit prep is a fire drill every time.

With Modern Relay: Agents monitor regulatory feeds and flag changes. Policies, controls, and obligations are structured in the graph with version history, and audit trails are captured as part of the workflow. Compliance posture becomes queryable instead of reconstructed.

Signals:

- Full: change history
- Continuous: regulatory monitoring

## Medical Insights

Literature, trial data, and real-world evidence synthesized into usable knowledge

Challenge: Medical affairs teams track thousands of publications, conference abstracts, trial results, and adverse event reports. Insights stay siloed by therapeutic area. Cross-referencing takes days of manual work.

With Modern Relay: Agents scan literature, trial registries, and safety databases, extract structured findings, and link them to molecules, indications, and mechanisms in the graph. Medical affairs teams review synthesized intelligence instead of starting from raw documents.

Signals:

- Days → faster: synthesis time
- Cross-domain: insight linking

# Manifesto

# The Knowledge-Coordination Manifesto

The coordination thesis: shared structured state, governed change, and sovereign infrastructure for the agent era.

Canonical URL: https://modernrelay.com/manifesto

Author: Ragnor Comerford

Date: April 2026

> “The knowledge of the circumstances of which we must make use never exists in concentrated or integrated form, but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess.”
>
> - Friedrich Hayek, “The Use of Knowledge in Society” (1945)

> “Everything that is not forbidden by the laws of nature is achievable, given the right knowledge.”
>
> - David Deutsch, “The Beginning of Infinity” (2011)

> “Not to have one meaning is to have no meaning.”
>
> - Aristotle, Metaphysics IV

## What Intelligence Alone Cannot Solve

Knowledge work has moved through four eras. Each one solved a bottleneck and exposed a deeper one underneath.

Files and folders solved storage. A person born into that era would recognize the bottleneck immediately: physical access. You couldn’t work on something if someone else had it open.

Cloud collaboration solved access. Google Docs, Notion, Airtable. Multiple people, same document, real-time cursors. The access bottleneck vanished overnight. But the data underneath remained unstructured, ungoverned, and unversioned. Someone changes a number in a shared spreadsheet and nobody knows when, why, or what it used to be.

AI assistants solved reasoning on demand. Powerful intelligence, available instantly. But fundamentally stateless. The AI doesn’t know what changed yesterday. It can’t act on its own. Every interaction starts from scratch. A very smart colleague who forgets everything between conversations.

We believe knowledge work is now entering its fourth era. Autonomous agents operating continuously against shared, structured state. Agents that remember. Agents that build on each other’s work. Agents that coordinate.

The pattern is clear if you look at the transitions. Developers hit this wall 20 years ago. Code lived in shared folders, people overwrote each other’s changes, nobody knew what was deployed. Then they got Git: repositories, branches, pull requests, code review. That infrastructure is what made modern software development possible. Knowledge work is at the same inflection point now.

We believe every prior era concentrated more power in the hands of whoever was using the tool. And every era left the same underlying problem untouched: organizational knowledge remained scattered, implicit, and ungoverned.

We believe the current bottleneck is coordination. Agents can reason, generate, analyze, synthesize. They can write code, draft memos, process data. They operate with increasing autonomy across every domain. What they cannot yet do is coordinate safely with each other, with humans, and with the systems they act on.

We believe this is the defining infrastructure problem of the next decade.

## The Central Planning Problem

The default response to the coordination problem has been centralized orchestration. One master agent. One giant context window. One workflow engine routing everything through a single chokepoint.

We believe centralized orchestration is central planning by another name. One actor holding all the context, making all the decisions, routing all the work. It breaks the same way central planning always breaks: the world is too complex for any single actor to hold in its head.

We believe the "just make the context window bigger" argument is the new "just hire more central planners." Attention has quadratic cost. Recall degrades with scale. Narrow agents focused on specific tasks will always outperform a bloated oracle trying to manage everything at once.

We believe SaaS sprawl is the infrastructure failure of our generation. Thirty tools that don’t talk to each other. Each one captures a fragment of your organizational knowledge. Each one charges you to access what is already yours. Each one locks you into a schema you didn’t design and can’t export. The more tools you add, the worse the fragmentation gets.

We believe vendor lock-in is always an architectural choice. Your knowledge, your decision history, your institutional memory should live on infrastructure you control.

We believe the market is building in the wrong layer. Point solutions proliferate. Every new tool captures a slice of organizational state and walls it off. The coordination problem gets worse with every vendor you add. What’s needed is a shared foundation underneath all of it.

We believe foundation models are moving up the stack, application-layer differentiation is compressing, and the layer that becomes durable sits between models and applications: context, coordination, and governance. The companies that own that layer own the compounding advantage.

## How Coordination Actually Works

Nature solved coordination long before software did.

Ant colonies coordinate millions of actors without a single orchestrator. No ant has a master plan. Each ant reads signals left in the environment by other ants, does its work, and leaves signals of its own. The colony self-organizes through stigmergy: indirect coordination mediated by shared environmental state.

Markets coordinate billions of actors through the price system. Prices compress vast information about supply and demand into signals any local actor can use to make correct decisions. No central planner required. As Hayek showed in 1945, the attempt to centralize what is inherently distributed always fails. The knowledge is on the edges, in the hands of the people closest to the problem. The center, abstracted away, knows nothing.

We believe agents need the same thing: shared structured state they can read from and write to. A coordination medium that carries signal, not noise.

We believe the schema is the price system for agents. It compresses domain knowledge into a structure every agent can read, trust, and act on. When the ontology is clear, agents make locally correct decisions. When it’s unclear, they produce chaos.

We believe in self-organization without orchestration. No orchestrator. No workflows. No prompt chains. Shared state, reactive agents, emergent coordination.

We believe the ontology defines the fitness landscape over which your AI agents optimize. Get the ontology right and everything running on it improves. Get it wrong and no amount of compute fixes the output.

We believe shared state plus locally correct updates plus iteration converges toward stable attractor states. In plain language: when agents coordinate through good structure, the system gets better over time. When they don’t, it degrades.

Every time we build an ontology with a customer, they get sharper. They feel like they’re getting better at communicating and sharing ideas. The ontology forces clarity on what the business actually does. Like a coordinate system where everything is mapped.

The NoSQL-era rebellion against schemas gave us "schema-free" databases. Agents operating without schemas create garbage instantly. Structure is no longer optional. It’s the precondition for coordination.

## Context. Coordination. Governance.

We believe every organization has a world model. Most of it is implicit: scattered across tools, locked in people’s heads, invisible to agents.

We build infrastructure that makes it explicit.

We believe agents need the same infrastructure developers got two decades ago. Developers got repositories, branches, pull requests, code review. That infrastructure is what made collaborative software development possible at scale. Agents need all of that, applied to knowledge.

**Context.** A typed, versioned graph where your business entities, relationships, and constraints are defined as precisely as code. The schema compiler validates definitions before data enters the graph. Your domain model is versioned, type-checked, and enforceable. Agents reason over a world model, not disconnected chunks.

**Coordination.** Agents read structured context, write governed proposals, and coordinate through shared state. The graph is the single surface where humans and agents make decisions together. Every mutation flows through proposals: branched, diffed, reviewed, merged. Nothing changes without a trail.

**Governance.** Git-like semantics for knowledge. Branch, merge, clone, push, pull, audit, approve. Govern change like code. Every change traceable. Every decision auditable. Every action reversible. Without governance, any system running hundreds of agents deteriorates within days. With it, you get a compounding institutional memory.

We believe knowledge should be treated like code. Versioned. Branched. Reviewed. Merged. Governed by the people and agents who understand the domain.

We believe in protocols, not platforms. Open storage formats. Zero proprietary encoding. Push and pull protocols you control.

## Sovereignty By Design

We believe your knowledge lives on infrastructure you control, governed by schemas you define, versioned by the people and agents you trust.

We believe sovereignty is a design principle. Every architectural choice we make starts from the assumption that your data stays on infrastructure you own.

The AI market keeps pushing companies toward fragmented, vendor-owned state. One tool for memory, another for orchestration, another for governance. Each with its own data silo, its own pricing model, its own lock-in mechanics. That is the wrong foundation for production AI.

If AI becomes part of your operating layer, the system that holds context, coordinates workflows, and governs agent behavior cannot live entirely inside someone else’s product. Companies increasingly know this. Repatriation of AI workloads is accelerating. The center of gravity is shifting toward owned, hybrid, and self-hosted infrastructure.

We believe the durable advantage of the next decade is ownership of ontology, context, governance, and coordination. Models are commoditizing. The layer that compounds is the one that holds your institutional knowledge, structured and governed, on the infrastructure you own.

Sovereignty by design. That is the guiding principle of the post-cloud enterprise.

## The Beginning of Infinity

We believe organizational knowledge should compound, not decay.

Every decision captured. Every relationship traced. Every change versioned. Knowledge that grows richer the longer agents and humans work on it.

We believe knowledge work is splitting permanently into two activities. **Defining** (human): what data matters, what the schema is, what agents should watch for, what the rules are. **Executing** (agent): gather, structure, update, enrich, flag, cross-reference. Continuously.

The same shift that happened in manufacturing is happening in knowledge work. Humans designed the process, machines ran it. The humans who define the ontology, set the policies, and govern the output will build organizations that compound faster than anyone thought possible.

We believe applications will become ephemeral. The durable system of record for an AI-native organization will be a governed context graph. Agents will read from it and write back to it. Applications will be thin UI generated on demand from shared state. Enterprise software becomes thinner while the context layer becomes thicker.

We believe the path to general autonomy runs through structured coordination. Self-adapting tasks. Stable attractor states. Systems that converge toward order. The architecture that works for one workflow generalizes to a hundred.

We believe in building infrastructure for explanatory knowledge and unbounded organizational intelligence. The beginning of infinity starts when your institutional knowledge stops dying with every employee departure and every tool migration.

We believe this is the most important infrastructure category forming in the enterprise. As models get better, more of the durable value shifts away from the model itself and toward the systems that hold context, coordinate agents, and govern change. As point solutions proliferate, the need for a shared coordination layer becomes more obvious, not less. The market has caught up to the thesis.

## We Build

We know what the world needs. A shared context foundation. A governed execution layer. Flexible deployment on infrastructure companies own.

We believe the companies that define the coordination layer will define the next era of enterprise infrastructure.

We are building Modern Relay to be that foundation. Open where it matters. Governed where it counts. Sovereign by default. We are open-sourcing the graph foundation, because the right architecture should be a standard anyone can build on.

We believe in building for the long arc. Open foundations. Sovereign infrastructure. Composable architecture. And the conviction that getting the ontology right matters more than getting to market first.

If you see the same future, build with us. If you’re running an enterprise and the agent era demands a better foundation, talk to us.

## Those Who Saw It First

- Aristotle: Who gave us the very first ontology: the act of defining what exists and how it relates.
- Friedrich Hayek: Who proved that coordination through distributed local knowledge outperforms central planning, always.
- Pierre-Paul Grassé: Who described stigmergy: coordination through shared environmental state, without direct communication.
- David Deutsch: Who showed that explanatory knowledge grows without bound, and that problems are soluble.
- Tim Berners-Lee: Who envisioned agents reasoning over structured data on the web. Twenty-five years early.
- Tom Gruber: Who defined ontology as "an explicit specification of a conceptualization." The sentence that launched a field.


# Memos

# Agents Are the New Database Operator

Why agentic systems need a new database category.

Canonical URL: https://modernrelay.com/memos/agents-are-the-new-database-operator

Author: Ragnor Comerford

Date: May 11, 2026

Tags: Category, Database, Agentic Systems

Summary: Database categories form around their dominant operator. Agents are a new operator profile — and they need a database that treats reading, writing, and coordination as one loop.

## The operator defines the category

Database categories form around their dominant operator.

OLTP databases were built around trusted application code mutating current state. Warehouses were built around analysts and batch jobs scanning historical state. Search engines were built around ranked retrieval. Logs were built around append and replay.

Agents introduce a different operator profile. An agent reads, reasons, waits, resumes, delegates, writes, and sometimes gets things wrong. It may run for minutes, hours, or days. It may work in parallel with other agents on the same customer, incident, account, codebase, contract, or research question.

That changes three parts of the database workload.

The read side becomes **context assembly**. The write side becomes **reviewable mutation**. The shared-state side becomes **coordination**.

## Context assembly

An agent context read is rarely a single lookup. It often starts from an entity or task, follows relationships, searches exact text, retrieves semantically similar examples, filters by type and permission, ranks evidence into a context window, and needs to know which version of the world it is reading from.

That workload has different failure modes from human search. A human can skim past a weak result. An agent may act on it. Missing a key policy is a recall failure. Including irrelevant evidence is a precision failure. Returning stale indexed data is a freshness failure. Hiding provenance is an audit failure. Returning data outside the actor's authority is a policy failure.

So retrieval quality becomes a database concern. The database has to compose graph traversal, full-text search, vector retrieval, scalar filters, permissions, provenance, and time. It also has to do this with interactive latency, because context reads sit inside agent loops. Heavy index maintenance can be asynchronous, but the freshness contract has to be explicit.
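The composition described above can be sketched in miniature. This is an illustrative toy, not a real retrieval engine: the data, scoring weights, and function names are all invented for the example. What it shows is the shape of the workload — policy filtering before ranking, a score that blends vector similarity, text match, and graph proximity, and a result tied to an explicit snapshot.

```python
import math

# Toy sketch of hybrid context assembly. All names, weights, and data
# are hypothetical; real systems rank with learned or tuned scorers.

DOCS = [
    {"id": "d1", "entity": "acme", "text": "renewal risk rising", "vec": [0.9, 0.1], "acl": {"agent-a"}},
    {"id": "d2", "entity": "acme", "text": "support ticket backlog", "vec": [0.2, 0.8], "acl": {"agent-a"}},
    {"id": "d3", "entity": "other", "text": "renewal signed", "vec": [0.8, 0.2], "acl": {"agent-b"}},
]
EDGES = {"acme": ["d1", "d2"]}  # entity -> linked evidence in the graph

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def assemble_context(entity, query_terms, query_vec, actor, snapshot="v42", k=2):
    linked = set(EDGES.get(entity, []))
    scored = []
    for doc in DOCS:
        if actor not in doc["acl"]:        # policy filter BEFORE ranking
            continue
        score = cosine(doc["vec"], query_vec)                      # vector retrieval
        score += 0.5 * sum(t in doc["text"] for t in query_terms)  # text match
        score += 0.5 * (doc["id"] in linked)                       # graph proximity
        scored.append((score, doc["id"]))
    top = [doc_id for _, doc_id in sorted(scored, reverse=True)[:k]]
    return {"snapshot": snapshot, "evidence": top}

ctx = assemble_context("acme", ["renewal"], [1.0, 0.0], actor="agent-a")
```

Note that `d3` never reaches the ranking stage: filtering by the actor's authority before scoring is what prevents unauthorized context from ever entering the window, and the `snapshot` field is what makes the read auditable later.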

## Reviewable mutation

The write side is harder.

A traditional application usually knows what it wants to write. An agent often derives a hypothesis, proposes a change, and needs review before that proposal becomes accepted state. Treating every agent output as a direct mutation of shared truth is the wrong default for many workflows.
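A minimal sketch of that default, under invented names (`Graph`, `propose`, `review` are illustrations, not a real API): agent writes land as proposals, and only an explicit review step promotes them to accepted state.

```python
# Hypothetical sketch of "proposal before mutation": agent output is
# staged as a reviewable object rather than written to shared truth.

class Graph:
    def __init__(self):
        self.accepted = {}     # key -> value: the shared truth
        self.proposals = []    # pending reviewable mutations

    def propose(self, actor, changes):
        proposal = {"actor": actor, "changes": changes, "status": "open"}
        self.proposals.append(proposal)
        return proposal        # nothing touches accepted state yet

    def review(self, proposal, approve):
        proposal["status"] = "merged" if approve else "rejected"
        if approve:
            self.accepted.update(proposal["changes"])

g = Graph()
p = g.propose("agent-7", {"acme.risk": "high"})
assert g.accepted == {}                  # the proposal is visible, not applied
g.review(p, approve=True)
assert g.accepted["acme.risk"] == "high" # accepted only after review
```

The inversion matters: a fallible writer gets a place to be wrong that is inspectable and rejectable, instead of a direct path into production state.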

## Coordination

Coordination is the third pressure.

Multi-agent systems need more than private memory and tool calls. Agents need to see what has already been accepted, what is being proposed, who is working on which part of the graph, which state changed, and where their proposed work conflicts with someone else's. Some of that is control flow. But much of it is state. The database should coordinate agents at the state layer: branch heads, commits, events, diffs, actor identity, and merge status.

## Old database questions, new answers

That brings the old database questions back into the center of the problem.

What commits atomically? What can run concurrently? What isolation level does a long-running task read from? What happens when two agents propose conflicting changes to the same account graph? How fresh are the vector and text indexes behind a context read? What state survives a crash? Can we reconstruct the exact state an agent saw when it acted? Can another agent subscribe to the change, inspect the diff, and continue from the same state?

These are database primitives: latency, concurrency, consistency, isolation, durability, recovery, cost, and observability. The agentic workload changes the answers.

A fallible writer needs proposed state. That points to branches. Parallel agents need isolation and conflict detection. That points to diffs and merges. Long-running agents need stable worlds to reason over. That points to snapshot reads and point-in-time queries. Multi-entity actions need atomicity across the graph, because half a graph mutation can be worse than a failed one. Coordinating agents need observable state transitions. That points to commits, change feeds, and evented graph state. Accountable systems need actor identity, provenance, and commit history.
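One of those questions — what happens when two agents propose conflicting changes to the same graph — reduces to diffing each branch against a common base. A minimal sketch, with invented helper names and key-level granularity (real semantic merge is much richer):

```python
# Hypothetical sketch of branch conflict detection: diff each proposal
# against the shared base, then intersect the changed keys.

def diff(base, branch):
    """Keys whose values differ from the base snapshot."""
    return {k: v for k, v in branch.items() if base.get(k) != v}

def conflicts(base, a, b):
    """Keys both branches changed, to different values."""
    da, db = diff(base, a), diff(base, b)
    return sorted(k for k in da if k in db and da[k] != db[k])

base = {"acme.owner": "kim", "acme.risk": "low"}
branch_a = {**base, "acme.risk": "high"}    # agent A's proposed reassessment
branch_b = {**base, "acme.risk": "medium"}  # agent B's, made in parallel

found = conflicts(base, branch_a, branch_b)  # ['acme.risk']
```

Key-level intersection is the row-merge baseline; the memo's later point about semantic merge is precisely that two changes can conflict in meaning even when they touch different keys.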

## A branchable context graph

The shape that falls out is a **context graph** — but a specific kind. The category of context graphs is real and is being argued for from several angles, mostly read-side: decision traces, process capture, semantic retrieval over enterprise reality. Our position is that the durable shape of the category is set by how it handles writes and coordination, not only how it serves reads. We call that shape a **branchable operational graph**: a context graph designed for the agent operator that needs to read, propose, coordinate, review, and merge changes to shared business state.

The graph matters because company context is relational: customers, people, tickets, documents, code, meetings, decisions, tasks, systems, policies, evidence, events, and proposed changes. The branch matters because agent work often starts as a proposal. The operational part matters because this state is used to coordinate action, not only to answer analytical questions after the fact.

Existing database categories cover important projections of this workload.

Vector databases solve semantic nearest-neighbor retrieval. Search engines solve ranked text retrieval. Graph databases solve relationship-local queries. OLTP databases solve trusted current-state mutation. Warehouses and lakehouses solve durable analytical history. Event logs solve append, replay, and fanout.

Agentic teams need pieces of those systems under one governance, consistency, and history model. Reads inform writes. Writes become future context. Commits become coordination events. Review needs diffs. Audit needs time.

## Why this can exist now

This category also became practical because the substrate changed.

Object storage is now cheap durable shared memory for organizations. Open table formats made data ownership and engine plurality normal. Lakehouse systems made snapshots, history, schema evolution, and compute/storage separation familiar. Lance adds a useful base for multi-modal columnar data: structured fields, vectors, text, blobs, versions, and object-store-native layout. DataFusion gives us serious query execution machinery in Rust without rebuilding the relational engine from scratch.

The workload explains why the category is needed. The substrate explains why it can exist now.

## Omnigraph

[Omnigraph](https://github.com/ModernRelay/omnigraph) is our implementation of this shape.

It is a lakehouse-native, versioned knowledge substrate for agentic teams: a typed operational graph where humans and agents assemble context, coordinate through shared state and events, and safely merge proposed changes into accepted operational truth.


## A typical workflow

A typical workflow looks like this.

An agent investigates a customer at risk. It starts from the customer node, walks the account graph, retrieves tickets, transcripts, product events, docs, prior decisions, and similar historical cases. Some evidence comes from exact filters. Some comes from full-text search. Some comes from vector retrieval. The useful result is a ranked, permission-safe, provenance-bearing context set, tied to a specific snapshot.

The agent then writes proposed changes on a branch: a new risk assessment, evidence links, suggested actions, affected owners, and follow-up tasks. That branch becomes visible state. Another agent can subscribe to the change event, inspect the proposed graph, add contract terms and recent usage patterns, or flag a conflict with an existing mitigation plan. A human reviews the diff, accepts part of it, rejects part of it, and merges the accepted state to main.

Later, the team can reconstruct what the agent saw, what it proposed, who reviewed it, what changed, which events fired, and which state became accepted. That is the difference between an agent transcript and an operational system of record.
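The workflow above can be compressed into a small end-to-end sketch: branch from main, write proposed changes, merge only the accepted part of the diff, and reconstruct earlier state by version. Everything here — `VersionedGraph`, `branch`, `merge`, `at` — is an illustrative assumption, not the Omnigraph API.

```python
import copy

# Hypothetical end-to-end sketch: branch, propose, partial review,
# merge, and point-in-time reconstruction over a versioned graph.

class VersionedGraph:
    def __init__(self):
        self.history = [{}]               # one snapshot per version of main
        self.branches = {}

    @property
    def main(self):
        return self.history[-1]

    def branch(self, name):
        self.branches[name] = copy.deepcopy(self.main)
        return self.branches[name]

    def merge(self, name, accept_keys):
        """Apply only the reviewed-and-accepted keys from a branch."""
        proposed = self.branches[name]
        new = copy.deepcopy(self.main)
        for k in accept_keys:
            new[k] = proposed[k]
        self.history.append(new)          # a new version of main
        return len(self.history) - 1

    def at(self, version):
        return self.history[version]      # reconstruct what an agent saw

g = VersionedGraph()
g.history.append({"acme.risk": "low"})    # current accepted state (version 1)

b = g.branch("risk-review")               # agent works on a branch
b["acme.risk"] = "high"                   # proposed reassessment
b["acme.action"] = "exec call"            # proposed follow-up task

v = g.merge("risk-review", ["acme.risk"]) # human accepts part of the diff
assert g.main == {"acme.risk": "high"}    # rejected key never lands
assert g.at(1) == {"acme.risk": "low"}    # the world the agent started from
```

The partial merge is the interesting step: review operates on the diff, not on the whole branch, and the version history is what lets the team later answer "what did the agent see when it acted?"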

## Hard problems and the category boundary

There are hard problems here.

Semantic merge is harder than row merge. Index freshness needs explicit contracts. Provenance has to be modeled into the data, not added as a comment field after the fact. Policy should push into query planning so unauthorized context never reaches the model. Branches can create review overload if the diff quality is poor. Context quality needs measurement: recall, precision, freshness, permission safety, and evidence coverage. Coordination needs clear boundaries between database state and workflow control. Hot entities and graph partitioning become real systems problems as agent count grows.

Those problems are the category boundary.

A database starts to belong in this category when it makes the following native: hybrid context retrieval, typed operational graph state, branch-local proposed writes, graph-level commits, state-layer coordination, diff and merge, point-in-time reconstruction, provenance, actor-aware governance, and durable object-store history.

## Reading, writing, and coordination as one loop

Agents need a database that treats reading, writing, and coordination as one loop. Context becomes action. Action becomes reviewed state. Reviewed state becomes coordination. Coordination produces future context.


# The Branchable Context Graph: A New Operational State Layer for Agentic Systems

Unifying context assembly, proposed mutation, coordination, and governed change.

Canonical URL: https://modernrelay.com/memos/branchable-operational-graph

Author: Ragnor Comerford

Date: May 11, 2026

Tags: Architecture, Omnigraph, Branchable Operational Graph

Summary: The dominant enterprise knowledge architecture will be replaced by a new pattern for agentic systems: a branchable operational graph — typed context, hybrid retrieval, branch-local proposed writes, and governance as a first-class system property.

## Abstract

Enterprise AI needs a new kind of system of record: a context graph. The category is emerging quickly, and most takes on it focus on the read side — decision traces, process capture, semantic retrieval. We agree the category is real, but argue the durable shape of the system is set by its write and coordination behavior, not only its reads. We call this shape a branchable operational graph: a context graph designed for agents that read context, propose changes on branches, coordinate through commits and events, and safely merge into accepted state. The pattern is characterized by: (i) a typed graph model for company context, (ii) hybrid retrieval across relationships, text, vectors, events, and files, (iii) branch-local proposed writes with diff and merge, and (iv) governance, provenance, and point-in-time reconstruction as first-class system properties.

**The motivation is that agents change the shape of the database workload.** Human-facing systems mostly supported search, reporting, and direct application writes. Agentic systems add a new operator profile: software that reads context, reasons over it, waits, resumes, delegates, writes proposed changes, and coordinates with other agents and humans over long-running tasks. Existing architectures separate the relevant functions across search engines, vector databases, document stores, workflow tools, warehouses, CRMs, ticketing systems, and operational databases. This creates staleness, unreliable context assembly, weak provenance, duplicated state, and unsafe mutation paths.

We argue that these problems are not incidental product gaps, but symptoms of an architectural mismatch. Agents need a substrate where context, coordination, and governed change are part of the same system. [Omnigraph](https://github.com/ModernRelay/omnigraph) is our implementation of this design: an open, lakehouse-native, versioned knowledge substrate for enterprise AI.

## 1. Introduction

This memo argues that the knowledge architecture used by enterprises today will wane for agentic workloads and be replaced by a new architectural pattern, which we refer to as a branchable operational graph. This pattern combines the relational structure of a graph, the retrieval properties of search and vector systems, the durability and openness of the lakehouse, and the coordination primitives of version control.

The history of enterprise software has created a large number of systems of record. CRMs store customers and opportunities. Ticketing systems store support work. Document systems store narrative knowledge. Warehouses store historical facts. Search systems index text. Vector databases retrieve semantically similar passages. Workflow tools coordinate work between humans. These systems were built for useful reasons, and each optimized for a particular operator: an analyst, an application, a support rep, a salesperson, a data scientist, or a human searching for a document.

Agents introduce a different operator.

An agent does not merely query a database or retrieve a document. It assembles context, reasons across sources, acts under uncertainty, writes intermediate state, waits for new information, resumes later, coordinates with other agents, and proposes changes to shared business truth. It may operate over a customer, contract, codebase, incident, account plan, research question, or internal process for minutes, hours, or days. It may work in parallel with other agents and humans on the same underlying entities.

That changes the data management problem.

The read side becomes **context assembly**. The write side becomes **proposed mutation**. The shared-state side becomes **coordination**.

In existing enterprise architectures, these three functions are usually separated. Search and vector tools assemble context but do not own operational truth. Workflow tools coordinate activity but do not usually represent the full underlying state. CRMs and operational databases own accepted state but are not designed for fallible writers that need branches, diffs, and review. Warehouses and lakehouses preserve history but are not the live coordination layer for agent work.

The result is a two-tier, and often multi-tier, agent architecture: agents retrieve from indexes, write to application APIs, store memory in side tables, coordinate through workflow engines, and leave audit trails in logs or transcripts. This works for demonstrations. It is brittle for enterprises.

Specifically, current agentic architectures commonly suffer from five problems.

**Context reliability.** Agents need high-recall, high-precision, permission-safe context. In practice, context is assembled from disconnected systems with different freshness, schemas, ranking behavior, and permission models. Missing a relevant policy is a recall failure. Including irrelevant or misleading evidence is a precision failure. Returning stale indexed data is a freshness failure. Returning data outside the actor's authority is a governance failure.

**State staleness.** The state used for retrieval often lags behind the state of record. Documents, tickets, CRM records, product events, meeting notes, and warehouse tables move through separate ingestion and indexing paths. The agent may reason over an older copy of the world than the one it is about to modify.

**Unsafe mutation.** A traditional application usually knows what it wants to write. An agent often derives a hypothesis and should propose a change before that change becomes accepted state. Treating every agent output as a direct write to production systems is the wrong default for many enterprise workflows.

**Poor coordination.** Agents need to know what has already been accepted, what is being proposed, who or what is working on which part of the graph, where work conflicts, and which state changed since they last read it. Today this coordination is usually implemented outside the data layer, through queues, workflow engines, comments, logs, or ad hoc memory.

**Weak reconstruction.** Enterprises need to reconstruct what an agent saw, why it acted, what it proposed, what was reviewed, what was accepted, and which downstream actions followed. A transcript is not enough. The reconstructable unit is the state transition: a snapshot, a diff, an actor, evidence, policy, review, and commit history.
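The reconstructable unit described above can be sketched as a simple record. This is an illustrative data shape, not Omnigraph's actual schema; every field name here is an assumption chosen to mirror the list in the paragraph.

```python
from dataclasses import dataclass

# Hypothetical sketch of the reconstructable unit: a state transition,
# not a transcript. All field names and values are illustrative.
@dataclass(frozen=True)
class StateTransition:
    snapshot_id: str     # graph version the actor read from
    diff: dict           # proposed changes, keyed by entity id
    actor: str           # agent or human identity
    evidence: tuple      # provenance: entity/document ids consulted
    policy: str          # policy version enforced at read time
    review: str          # review decision: pending/accepted/rejected
    parent_commit: str   # position in commit history

t = StateTransition(
    snapshot_id="v41",
    diff={"account:acme": {"risk": "high"}},
    actor="agent:risk-scout",
    evidence=("ticket:1042", "contract:77"),
    policy="policy-v3",
    review="pending",
    parent_commit="c9f2",
)
```

The point of the shape is that audit questions ("what did the agent see, and under which policy?") become field lookups rather than log forensics.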

In this memo, we discuss the following technical question: **is it possible to build a database substrate for long-running agent systems that provides high-quality context assembly, safe proposed writes, state-layer coordination, and durable reconstruction over enterprise-owned data?**

We argue that this design is both feasible and increasingly necessary. The reason is not simply that agents are new. The reason is that agents collapse what used to be separate steps: reading context, deciding what to do, writing state, and coordinating future work. A database for agents must therefore treat reading, writing, and coordination as one loop.

## 2. Motivation: Why existing architectures are insufficient

The dominant enterprise pattern today is fragmentation followed by indexing. Operational systems produce records. Documents and conversations produce narrative context. Data platforms collect historical state. Search and vector systems build retrieval indexes over subsets of these sources. Agents then sit above this estate and call tools.

This architecture is attractive because it can be adopted incrementally. It is also structurally limited.

First, context assembly is not a simple retrieval problem. A useful agent context read often begins from an entity and then traverses relationships: account → contacts → opportunities → contracts → support tickets → product events → meeting notes → internal decisions → similar historical cases. Some of the relevant evidence is exact and relational. Some is semantic. Some is temporal. Some is embedded in files. Some is permission-sensitive. Some only matters because of its relation to other evidence.

A pure vector database does not solve this. A pure search engine does not solve this. A graph database alone does not solve this. A warehouse alone does not solve this. The workload requires graph traversal, scalar filters, full-text search, vector retrieval, ranking, permissions, provenance, and version awareness in one runtime.
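A toy in-memory sketch shows why these capabilities belong in one runtime: a single context read traverses the graph, applies a scalar filter, and ranks by a stand-in semantic score. All entities, edges, and scores below are invented for illustration; a real system would use graph, text, and vector indexes rather than dictionaries.

```python
# Hypothetical graph of an account and its neighborhood.
edges = {
    "account:acme": ["contact:jo", "opportunity:renewal", "ticket:1042"],
    "opportunity:renewal": ["contract:77"],
}
nodes = {
    "contact:jo": {"kind": "contact", "year": 2025},
    "opportunity:renewal": {"kind": "opportunity", "year": 2026},
    "ticket:1042": {"kind": "ticket", "year": 2024},
    "contract:77": {"kind": "contract", "year": 2026},
}
# Stand-in for vector similarity to the agent's question.
semantic_score = {"contract:77": 0.9, "opportunity:renewal": 0.7,
                  "ticket:1042": 0.4, "contact:jo": 0.2}

def context_read(start, min_year):
    seen, frontier = set(), [start]
    while frontier:                      # graph traversal
        node = frontier.pop()
        for nbr in edges.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    hits = [n for n in seen if nodes[n]["year"] >= min_year]  # scalar filter
    return sorted(hits, key=lambda n: -semantic_score[n])     # ranking

print(context_read("account:acme", 2025))
# → ['contract:77', 'opportunity:renewal', 'contact:jo']
```

No single-purpose system executes all three steps; stitching them across services is where staleness and permission gaps creep in.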

Second, agent writes are different from application writes. In an ordinary application, a user presses a button and trusted application logic mutates current state. In an agentic workflow, the system may infer that an account is at risk, propose new risk factors, attach evidence, suggest follow-up tasks, update an opportunity narrative, and recommend owner actions. Some of these changes should be accepted. Some should be edited. Some should be rejected. Some may conflict with another agent's work.

This points to branches, diffs, and merges. The correct default for many agent writes is not "mutate main"; it is "create proposed state."
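The "create proposed state" default can be sketched in a few lines: a branch is a copy-on-write view of main, writes land on the branch, and main only changes on an explicit merge after review. The class and method names are illustrative, not a real API.

```python
import copy

# Minimal sketch, assuming a dict-of-entities graph model (illustrative).
class Graph:
    def __init__(self):
        self.main = {"account:acme": {"risk": "low"}}
        self.branches = {}

    def branch(self, name):
        self.branches[name] = copy.deepcopy(self.main)

    def propose(self, branch, entity, field, value):
        self.branches[branch][entity][field] = value  # branch-local write

    def merge(self, branch):
        self.main = self.branches.pop(branch)  # accepted only after review

g = Graph()
g.branch("risk-review")
g.propose("risk-review", "account:acme", "risk", "high")
assert g.main["account:acme"]["risk"] == "low"   # main untouched so far
g.merge("risk-review")
assert g.main["account:acme"]["risk"] == "high"  # now accepted state
```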

Third, long-running work needs stable worlds. An agent may start from a snapshot, spend time gathering evidence, wait for a meeting transcript or approval, and resume later. During that time, the underlying company state may have changed. The system needs to answer basic database questions: which version of the world did the agent read from? Which changes occurred while it was working? Is the proposed write still valid? What conflicts with it? Can we replay or inspect the reasoning context?

Fourth, coordination belongs partly in the state layer. Workflow engines are useful for control flow: steps, retries, timers, approvals. But multi-agent coordination is not only control flow. It is also shared state. Agents need branch heads, commit history, locks or leases where appropriate, merge status, event feeds, actor identity, and conflict detection over the entities they are modifying.

Finally, governance must be pushed into the substrate. If an agent retrieves unauthorized context and then summarizes it, the policy failure has already happened. Permission checks, provenance, and audit cannot be merely post-hoc logging around model calls. They must be part of planning and execution for context reads and state transitions.

These requirements suggest that the agentic data problem is not solved by adding another memory store beside existing systems. It requires a new design point.

## 3. The branchable operational graph

A context graph stores the relational, evidence-bearing state of an organization for use by agents and humans. We define a branchable operational graph as a context graph designed for the agent operator: it stores typed enterprise context as graph state and provides native support for hybrid retrieval, proposed writes, versioning, diff and merge, events, provenance, and actor-aware governance.

The graph is operational because it is used to coordinate action, not merely to run offline analytics. It contains the entities and relationships that agents and humans work on: customers, people, companies, products, tickets, contracts, meetings, documents, tasks, decisions, systems, policies, evidence, and events.

It is branchable because agent work often begins as a proposed change. A branch represents a hypothesis about the world or a proposed operational update. The branch can be inspected, extended, compared, partially accepted, rejected, or merged.

It is lakehouse-native because enterprise AI systems should operate over durable, open, customer-owned data rather than forcing all knowledge into a proprietary application store. Object storage, open table formats, and modern query engines have made it practical to separate durable data ownership from compute and application logic. For Omnigraph, Lance and DataFusion provide useful primitives for this substrate: structured fields, vectors, text, blobs, versioned data, and query execution in a Rust-native system.

A branchable operational graph combines several properties that are usually split across systems: typed graph state, hybrid retrieval, branch-local proposed writes, versioning with diff and merge, coordination events, provenance, and actor-aware governance.

The architectural claim is that agents need these capabilities together. Context quality affects action quality. Proposed action becomes future context. Review and merge become coordination events. Coordination changes what later agents should read. The loop is continuous.

## 4. Implementing an Omnigraph system

One possible design for this system has three layers.

The first layer is durable open storage. Enterprise context should live in customer-owned infrastructure where possible, in formats that can be inspected, moved, governed, and queried outside a single application. This is the lesson of the lakehouse applied to agentic systems: the most important enterprise knowledge should not be trapped in a proprietary memory service. The storage substrate must support structured records, semi-structured objects, unstructured files, vectors, and versioned metadata.

The second layer is a transactional graph and metadata layer. This layer defines which objects, entities, relationships, indexes, and blobs are part of a graph version. It is responsible for schema, commits, branches, point-in-time reads, actor identity, provenance, and policy enforcement. In the same way that table metadata layers made object stores behave more like managed analytical systems, a graph metadata layer can make object-store-native enterprise knowledge behave like a coordinated operational system.

The third layer is a retrieval and coordination runtime. This runtime executes hybrid context reads, maintains auxiliary indexes, exposes APIs to agents and applications, and emits state transitions. It should support graph traversal, full-text search, vector search, scalar filtering, ranking, summarization inputs, event feeds, and diff generation. It should also expose enough structure that agents can reason over state without bypassing governance.

Several implementation choices matter.

**Versioning and branches.** Each accepted graph state should be addressable. Agent work should be able to occur on branches. Branches should carry proposed entity changes, relationship changes, evidence links, generated artifacts, and task state. A branch is not just a scratchpad; it is reviewable state.

**Diff and merge.** The system should be able to compute changes between branch and main. Some diffs are structural: a relationship was added, a field changed, a task was created. Some are semantic: a risk assessment changed because new evidence appeared. Merge therefore requires both conventional conflict detection and higher-level review UX.
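The structural half of this can be sketched directly: a field-level diff between main and a branch, reporting added entities, removed entities, and changed fields. Semantic diffs would sit on top of this output; the data below is invented for illustration.

```python
# Sketch of a structural graph diff (illustrative entity names and fields).
def graph_diff(main, branch):
    diff = {"added": [], "removed": [], "changed": {}}
    for eid in branch:
        if eid not in main:
            diff["added"].append(eid)
        elif branch[eid] != main[eid]:
            # Record (old, new) for each field that differs.
            diff["changed"][eid] = {k: (main[eid].get(k), v)
                                    for k, v in branch[eid].items()
                                    if main[eid].get(k) != v}
    diff["removed"] = [eid for eid in main if eid not in branch]
    return diff

main = {"account:acme": {"risk": "low", "owner": "jo"}}
branch = {"account:acme": {"risk": "high", "owner": "jo"},
          "task:call-exec": {"status": "draft"}}
print(graph_diff(main, branch))
# → {'added': ['task:call-exec'], 'removed': [],
#    'changed': {'account:acme': {'risk': ('low', 'high')}}}
```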

**Hybrid indexes.** Context reads require auxiliary data structures: vector indexes, text indexes, statistics, graph adjacency, permission indexes, and freshness metadata. These indexes may update asynchronously, but their freshness contracts must be visible to the agent. It should be possible to know whether a context set is fresh enough for the action being taken.
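A freshness contract can be sketched as a visible lag per index plus a tolerance declared by the read. The index names and lag values below are invented for illustration.

```python
import datetime as dt

# Hypothetical per-index lag behind accepted graph state.
index_lag = {
    "vector": dt.timedelta(hours=6),       # semantic index may lag
    "contracts": dt.timedelta(seconds=30),
    "tickets": dt.timedelta(minutes=2),
}

def fresh_enough(indexes, max_lag):
    # A context read declares the staleness it can tolerate.
    return all(index_lag[i] <= max_lag for i in indexes)

# Historical examples tolerate a stale semantic index...
assert fresh_enough(["vector"], dt.timedelta(days=1))
# ...but current contract terms must be near-live.
assert not fresh_enough(["vector", "contracts"], dt.timedelta(minutes=5))
assert fresh_enough(["contracts", "tickets"], dt.timedelta(minutes=5))
```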

**Policy-aware planning.** Permissions should be applied before context enters the model. If an actor cannot see a contract, an agent acting for that actor should not retrieve or summarize it. Policy should therefore affect graph traversal, search, ranking, and evidence packaging.
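The key property is that filtering happens during context assembly, not after. A minimal sketch, assuming simple group-based ACLs (the entity names and groups are invented):

```python
# Hypothetical ACLs: which groups may see each entity.
acl = {
    "contract:77": {"alice", "legal"},
    "ticket:1042": {"alice", "bob", "support"},
    "note:exec-sync": {"exec"},
}

def visible_context(candidates, principal_groups):
    # Drop entities before anything reaches the model: an unauthorized
    # item never becomes an "intermediate result".
    return [e for e in candidates if acl.get(e, set()) & principal_groups]

candidates = ["contract:77", "ticket:1042", "note:exec-sync"]
assert visible_context(candidates, {"bob", "support"}) == ["ticket:1042"]
assert visible_context(candidates, {"alice"}) == ["contract:77", "ticket:1042"]
```

Real enterprise policy is far richer (inheritance, source-specific rules, row- and field-level constraints), but the placement of the check is the point.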

**Provenance-bearing context.** A context set should not be only a list of chunks. It should preserve the entities, relationships, documents, timestamps, versions, and permissions that produced it. This is necessary for audit, review, and later reconstruction.

**Events as coordination.** Commits, branch creation, merge requests, conflicts, review decisions, and material state changes should be observable. Other agents can subscribe to these events and continue work from a consistent graph state.
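A toy publish/subscribe sketch of this pattern, with invented event names; a real system would persist the log and deliver events with graph-version cursors rather than calling handlers synchronously:

```python
from collections import defaultdict

# Illustrative event bus: commits, branches, and merges become
# observable facts other agents can react to.
class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, kind, handler):
        self.subscribers[kind].append(handler)

    def emit(self, kind, payload):
        for handler in self.subscribers[kind]:
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("branch.created", lambda p: seen.append(p["branch"]))
bus.emit("branch.created", {"branch": "risk-review", "base": "v41"})
bus.emit("merge.accepted", {"branch": "risk-review"})  # no subscriber yet
assert seen == ["risk-review"]
```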

## 5. Example workflow

Consider an enterprise agent investigating a strategic customer at risk.

The agent starts from the customer node. It traverses the account graph to find contacts, opportunities, contracts, support tickets, open tasks, product usage, meeting notes, and recent executive communications. It retrieves similar historical risk cases, searches for exact mentions of renewal blockers, filters by date and owner, and ranks evidence by relevance and authority. The result is a permission-safe context set tied to a specific graph snapshot.

The agent then creates a branch. On that branch it proposes a new risk assessment, links supporting evidence, identifies likely causes, suggests mitigation actions, and creates draft tasks for the account team. It may also propose updates to the opportunity narrative or customer health fields.

A second agent sees the branch event and adds information from contract terms, usage trends, and recent support escalations. A human reviews the graph diff. Some changes are accepted, some are edited, and some are rejected. The accepted changes merge into main. The merge emits events that downstream agents and workflows can consume.

Later, the organization can reconstruct the exact graph state the first agent saw, the evidence it used, the branch it created, the changes another agent added, the human review decision, and the final accepted state. This is different from having a transcript. It is an operational system of record for agentic work.
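The reconstruction step in this workflow rests on accepted states being append-only versions. A minimal sketch, assuming a dict-per-version model (all names illustrative):

```python
import copy

# Illustrative versioned store: every accepted state is addressable,
# so the exact world an agent read from can be re-materialized later.
class VersionedGraph:
    def __init__(self):
        self.versions = [{}]  # v0: empty graph

    def commit(self, changes):
        state = copy.deepcopy(self.versions[-1])
        state.update(changes)
        self.versions.append(state)
        return len(self.versions) - 1  # new version id

    def at(self, version):
        return self.versions[version]  # point-in-time read

g = VersionedGraph()
v1 = g.commit({"account:acme": {"risk": "low"}})
v2 = g.commit({"account:acme": {"risk": "high"}})
assert g.at(v1)["account:acme"]["risk"] == "low"   # what the agent saw
assert g.at(v2)["account:acme"]["risk"] == "high"  # accepted state now
```

A production system would store diffs or columnar versions rather than full copies, but the addressing contract is the same.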

## 6. Why the time is now

This architecture has become practical because several underlying changes have converged.

First, object storage has become the durable shared memory of the enterprise. It is cheap, highly durable, and increasingly the place where organizations expect to keep long-lived data. The economic and governance argument for customer-owned data has become stronger as AI systems touch more sensitive business context.

Second, open table and file formats have normalized the idea that data should be directly accessible by multiple engines. The lakehouse movement showed that organizations want both openness and management features: transactions, versioning, indexing, auditing, and performance over low-cost storage.

Third, modern retrieval workloads are no longer purely relational or purely semantic. Enterprise AI needs structured fields, graph relationships, text, embeddings, files, and event history in the same context path. Systems such as Lance point toward a useful storage model for multimodal, vector-aware, object-store-native data. DataFusion and similar query engines make it more realistic to build serious execution layers without reimplementing all database machinery from first principles.

Fourth, agentic applications are moving from demos to operational workflows. As soon as agents write to shared business state, the old database questions reappear: what commits atomically, what can run concurrently, what isolation level a long-running task reads from, what happens when two agents conflict, what survives a crash, and how to reconstruct the state that led to an action.

The category becomes necessary when these questions stop being edge cases.

## 7. Research questions and design challenges

The branchable operational graph design raises several open problems.

**Semantic merge.** Merging graph state is harder than merging rows. Two agents may update different fields but make incompatible claims. They may attach different evidence to the same conclusion. They may create duplicate tasks or conflicting recommendations. The system needs structural conflict detection, but it also needs review interfaces and model-assisted semantic comparison.

**Context quality measurement.** Retrieval for agents should be measured differently from human search. The system needs metrics for recall, precision, freshness, permission safety, evidence coverage, and provenance completeness. A context read is successful if it supports a safe and correct action, not merely if a user clicks a result.

**Index freshness contracts.** Some indexes can lag. That is acceptable only if the agent and reviewer understand the freshness boundary. A customer-risk workflow may tolerate a stale semantic index for historical examples but not stale contract terms or open support incidents.

**Governance in query planning.** Enterprise permissions are often complex, inherited, and source-specific. Pushing policy into graph traversal and hybrid retrieval is technically difficult, but necessary. Unauthorized context cannot be treated as a harmless intermediate result.

**Graph-level transactions.** Agent actions often span multiple entities: an account, opportunity, tasks, owners, evidence links, and follow-up events. Partial mutation can be worse than failure. The system needs atomicity at the graph-change level, not merely at the row or document level.
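The all-or-nothing property can be sketched as staging a multi-entity change, validating the staged state, and only then publishing it. The invariant and entity names below are invented for illustration.

```python
import copy

# Sketch of a graph-level atomic commit (illustrative model).
def atomic_commit(state, changes, validate):
    staged = copy.deepcopy(state)
    staged.update(changes)
    if not validate(staged):
        return state, False   # reject: original state untouched
    return staged, True       # publish the whole change set

def validate(s):
    # Toy invariant: every task must reference an existing account.
    return all(t["account"] in s for k, t in s.items()
               if k.startswith("task:"))

state = {"account:acme": {"risk": "low"}}
bad = {"task:call": {"account": "account:ghost"}}
state, ok = atomic_commit(state, bad, validate)
assert not ok and "task:call" not in state  # nothing partially applied

good = {"account:acme": {"risk": "high"},
        "task:call": {"account": "account:acme"}}
state, ok = atomic_commit(state, good, validate)
assert ok and state["account:acme"]["risk"] == "high"
```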

**Boundary between database and workflow.** Not every control-flow concern belongs in the database. Timers, retries, human approvals, and external side effects may live in workflow systems. But branch heads, commits, conflicts, proposed state, accepted state, and graph events belong close to the data. The boundary should be explicit.

**Hot entities and concurrent agents.** Important customers, incidents, or codebases may attract many agents and humans at once. The system needs concurrency control, leases or optimistic conflict detection, and usable merge flows for high-contention graph regions.
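Optimistic conflict detection on a hot entity reduces to a compare-and-swap on the branch head: a merge succeeds only if the head the writer observed is still current, and the loser must rebase and retry. A minimal single-threaded sketch with invented commit ids:

```python
# Illustrative compare-and-swap on a branch head.
class BranchHead:
    def __init__(self, commit):
        self.commit = commit

    def compare_and_swap(self, expected, new):
        if self.commit != expected:
            return False      # someone merged first: conflict detected
        self.commit = new
        return True

head = BranchHead("c1")
# Two agents both read head c1, then race to merge.
assert head.compare_and_swap("c1", "c2-agent-a") is True
assert head.compare_and_swap("c1", "c2-agent-b") is False  # must rebase
assert head.commit == "c2-agent-a"
```

A real engine would make the swap atomic under concurrent access and pair it with leases or merge queues for the highest-contention regions.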

These problems define the category boundary. A system starts to belong in this category when it makes hybrid context retrieval, typed operational graph state, branch-local proposed writes, graph-level commits, state-layer coordination, diff and merge, point-in-time reconstruction, provenance, actor-aware governance, and durable object-store history native rather than bolted on.

## 8. Relation to existing systems

Several existing system categories solve important parts of this problem.

Vector databases provide semantic nearest-neighbor retrieval, but typically do not provide typed operational state, graph-level transactions, branch-local proposed writes, or merge semantics.

Search engines provide ranked text retrieval, but they generally do not own accepted business state or coordinate long-running mutation.

Graph databases provide relationship-local queries, but they are not usually designed as lakehouse-native, multimodal, branchable coordination layers for agents.

OLTP databases provide reliable application mutation, but they assume trusted application logic writing current state, not fallible agents proposing changes over long-running tasks.

Warehouses and lakehouses provide durable analytical history, open storage, and increasingly rich management features, but they are not usually the live operational coordination substrate for agents.

Workflow engines provide control flow, but not a governed, versioned, queryable model of enterprise context and proposed state.

The branchable operational graph does not replace all of these systems immediately. Instead, it provides the missing substrate that lets agents read, write, and coordinate over enterprise context with database-grade guarantees. In some deployments it will begin as an index and proposal layer over existing systems. Over time, as more work is mediated by agents, it can become the operational knowledge substrate itself.

## 9. Conclusion

Agents need a database that treats context, coordination, and governed change as one system.

The current enterprise architecture separates these concerns across too many layers. Context is retrieved from indexes. Truth lives in applications. Coordination lives in workflows. History lives in logs and warehouses. Governance is applied inconsistently across the path. This separation creates staleness, unsafe writes, weak reconstruction, and poor multi-agent coordination.

A branchable operational graph is a new design point for this workload. It stores company context as typed graph state, assembles context through hybrid retrieval, lets agents write proposed changes on branches, supports review through diffs and merges, coordinates through commits and events, and preserves the ability to reconstruct what happened from a point-in-time view of the world.

Omnigraph is Modern Relay's implementation of this architecture. Its goal is to provide the context graph for enterprise AI: an open, lakehouse-native substrate where agents and humans can assemble context, coordinate work, and safely merge proposed changes into accepted operational truth.

The core claim is simple. As agents become long-running participants in enterprise work, the database can no longer be only a place where applications store state after decisions have been made. It must become the substrate where context is assembled, proposed action is reviewed, coordination happens, and accepted truth evolves.


# Open Roles

# AI Engineer

Canonical URL: https://modernrelay.com/careers/ai-engineer

Location: San Francisco, CA or Barcelona, Spain

Type: Full-time

Reports to: CTO

Apply: careers@modernrelay.com

## About Modern Relay

Modern Relay is the source of truth for enterprise AI. As AI moves from chat to action, companies need a shared view of what's real, the governance to change it, and the infrastructure they control.

We've built a graph engine where humans and agents share one governed context. The ontology models every entity and relationship in the customer's domain. Git-like branching and merging let agents work in parallel under human review, with the query layer operating directly on the graph.

Every change to the graph is versioned, reviewed, and reversible, with all of it running on the customer's infrastructure. We believe this is the future of enterprise AI, and we're building it.

## Job summary

You might thrive at Modern Relay if you:

- Have shipped production LLM systems and can walk the agent, retrieval, evaluation, and serving layers end to end
- Think in terms of state, not prompts. How a system learns over time, how humans intervene, how confidence is scored.
- Already use Claude and Codex as your default tooling
- Write Rust, or are ready to. The systems layer isn't something you hand off.
- Prefer open-ended problems over ones with established answers

You'll work across the agent layer and the Rust engine underneath. Ship features with customers, then generalize them back into the core. Your week might span an architecture call with the founders, pairing with a Forward Deployed Engineer on an enterprise deployment, and walking a customer's CIO through what you built.

## Responsibilities

- Build the Tasks layer: reactive agent workflows that propose governed changes to the context graph
- Extend the MCP server so Claude and OpenAI agents read and write through typed, branched proposals
- Own the human-in-the-loop review surface that turns those proposals into auditable commits
- Design evaluation frameworks for long-horizon tasks. Decide when to automate, when to escalate.
- Contribute to the open-source Rust engine: schema compiler for .pg files, .gq query parser, DataFusion-backed query planner, CSR/CSC graph indexes
- Ship what runs at on-prem customer sites, including the observability to debug it remotely

## What you bring

- 3+ years shipping production LLM systems you can explain end to end
- Depth in Python and/or TypeScript, with fluency in agent frameworks, RAG, and eval tooling
- Rust isn't required. Appetite to learn it is. You'll pair with engineers already writing it.
- Experience with human-in-the-loop feedback, reward shaping, or long-horizon task decomposition
- Comfort at a customer whiteboard, explaining a governance model to a non-technical stakeholder
- Direct communication. Problems surfaced early, solutions proposed in the same sentence.

## Benefits

- Founder-tier equity
- Competitive cash, weighted toward equity
- Unlimited token access to Claude, Codex, and every AI tool to bring your wildest ideas to reality
- Small, flat team
- Build alongside the founders
- San Francisco or Barcelona, in person
- Health, dental, and vision insurance
- Flexible time off

## Apply now

If this sounds like you, email careers@modernrelay.com with the subject line "Application: AI Engineer". Attach a resume, point us at work you're proud of, and tell us what you'd build first.

We read every application. We write back within a week.


# Forward Deployed Engineer

Canonical URL: https://modernrelay.com/careers/forward-deployed-engineer

Location: San Francisco, CA or Barcelona, Spain

Type: Full-time

Reports to: CEO

Apply: careers@modernrelay.com

## About Modern Relay

Modern Relay is the source of truth for enterprise AI. As AI moves from chat to action, companies need a shared view of what's real, the governance to change it, and the infrastructure they control.

We've built a graph engine where humans and agents share one governed context. The ontology models every entity and relationship in the customer's domain. Git-like branching and merging let agents work in parallel under human review, with the query layer operating directly on the graph.

Every change to the graph is versioned, reviewed, and reversible, with all of it running on the customer's infrastructure. We believe this is the future of enterprise AI, and we're building it.

## Job summary

You might thrive at Modern Relay if you:

- Sit with a customer stakeholder in the morning and write production code the same afternoon
- Own a problem from discovery to handoff. Thin slices don't interest you.
- Translate messy organizational reality into typed schemas without losing what matters
- Have run a 30-day deployment before and know what week two actually feels like
- Enjoy the handoff as much as the build. Documentation, training, and the moment a customer runs it themselves.
- Already use Claude and Codex as your default tooling

Thirty days from discovery to handoff. You own every piece: the interviews, the ontology, the pipeline, the application, and the training.

## Responsibilities

- Run the 30-day playbook: days 1 to 7 discovery and system audit, 7 to 12 ontology design in .pg schema, 12 to 20 data pipelines, 20 to 25 .gq query library and UI, 25 to 30 handoff
- Map decision points across stakeholder interviews and draft the ontology a domain expert will recognize
- Build ETL from whatever customer systems are on the ground (regulatory, clinical, patent, LIMS, ERP, CRM), including the ones nobody wants to touch
- Ship the application layer: .gq queries, Claude chat integrations, dashboards, the interface that makes the knowledge useful this week
- Run the handoff: docs, training, the transition from you-running-it to them-running-it
- Feed every lesson back to Engineering and Product so the next deployment is 10 percent less work

## What you bring

- 5+ years full-stack engineering, ideally with time at a top consultancy or an enterprise software company that shipped real deployments
- Demonstrated ability to design ontologies for domains you didn't know a week ago
- Python and TypeScript fluency, working knowledge of Rust (or appetite to learn), command of SQL
- Customer-facing confidence: translating a 12-table schema into a three-slide board narrative
- Travel tolerance of 25 to 50 percent, depending on the deployment
- Security clearance or the ability to obtain one is a plus for defense-adjacent work

## Benefits

- Founder-tier equity
- Competitive cash, weighted toward equity
- Unlimited token access to Claude, Codex, and every AI tool to bring your wildest ideas to reality
- Small, flat team
- Build alongside the founders
- San Francisco or Barcelona, in person
- Health, dental, and vision insurance
- Flexible time off

## Apply now

If this sounds like you, email careers@modernrelay.com with the subject line "Application: Forward Deployed Engineer". Attach a resume, point us at work you're proud of, and tell us what you'd build first.

We read every application. We write back within a week.


# Deployment Strategist

Canonical URL: https://modernrelay.com/careers/deployment-strategist

Location: San Francisco, CA or Barcelona, Spain

Type: Full-time

Reports to: CEO

Apply: careers@modernrelay.com

## About Modern Relay

Modern Relay is the source of truth for enterprise AI. As AI moves from chat to action, companies need a shared view of what's real, the governance to change it, and the infrastructure they control.

We've built a graph engine where humans and agents share one governed context. The ontology models every entity and relationship in the customer's domain. Git-like branching and merging let agents work in parallel under human review, with the query layer operating directly on the graph.

Every change to the graph is versioned, reviewed, and reversible, with all of it running on the customer's infrastructure. We believe this is the future of enterprise AI, and we're building it.

## Job summary

You might thrive at Modern Relay if you:

- Translate between a customer's C-suite problem and a typed schema, and back again, without losing either side
- Know one industry deeply: regulatory, clinical, finance, cybersecurity, industrials, or defense. Enough to walk into a workflow and name the five most common data systems without Googling.
- Turn a customer hunch into a working prototype the same week
- Have written a 10-page technical document that ended up in a CEO's hands, and were proud to send it
- Read code to judge what's actually buildable in the next 30 days, even if you don't ship it yourself
- Hold strong opinions about which of a customer's 200 decisions are worth modeling, and can defend them at a whiteboard

Deployment Strategists synthesize disconnected streams of customer context into a structure engineers can build against. You go onsite, read the room, read the data, and come back with the spec that defines a 30-day deployment. You decide what's worth building.

## Responsibilities

- Run customer discovery: executive interviews, workflow audits, decision-point mapping across a single vertical
- Identify the 3 to 5 data connections per deployment that pay for the engineering cost
- Draft ontology sketches in plain English plus pseudo-schema that the Forward Deployed Engineers turn into .pg files and production pipelines
- Own the customer relationship across the 30-day deployment and the post-handoff year. Quarterly business reviews are your job, not sales'.
- Author deployment reports that double as both customer documentation and our next vertical's sales case study
- Feed what you learn into new product proposals, new vertical expansion, and pricing

## What you bring

- 5+ years in consulting, product, or customer-facing technical roles at companies that shipped real software
- Deep working knowledge in at least one of: life sciences, regulatory, cybersecurity, finance, industrials, defense. Deep enough to interrogate a workflow without a primer.
- Comfort with SQL, enough Python to script an audit, enough code-reading to judge engineering feasibility on a 30-day timeline
- Writing that's honest, structured, and free of business-speak
- Willingness to travel 25 to 50 percent and handle the occasional customer call at an inconvenient timezone
- Intellectual honesty about what you don't know, and the curiosity to close the gap before the next discovery call

## Benefits

- Founder-tier equity
- Competitive cash, weighted toward equity
- Unlimited token access to Claude, Codex, and every AI tool to bring your wildest ideas to reality
- Small, flat team
- Build alongside the founders
- San Francisco or Barcelona, in person
- Health, dental, and vision insurance
- Flexible time off

## Apply now

If this sounds like you, email careers@modernrelay.com with the subject line "Application: Deployment Strategist". Attach a resume, point us at work you're proud of, and tell us what you'd build first.

We read every application. We write back within a week.


# Legal

# Terms of Service

Canonical URL: https://modernrelay.com/terms

Last updated: January 10th, 2026

Contact: legal@modernrelay.com

## Acceptance of Terms

The Terms constitute a legally binding agreement between the user and EQTR, Inc., a Delaware corporation doing business as ModernRelay.

By creating an account, accessing, or using services, users acknowledge they have read, understood, and agree to be bound by these Terms. Those entering on behalf of an organization represent they have authority to bind that organization.

Effective Date: The Terms take effect as of whichever is earlier: the date of first access/use, or the date of acceptance.

## Description of Service

ModernRelay provides an AI-native knowledge management platform designed for pharmaceutical and life sciences organizations. The Service includes but is not limited to:

- Cloud-based knowledge management and collaboration tools
- AI-powered analytics and insights
- Document management and workflow automation
- Integration capabilities with third-party systems
- API access for authorized integrations

The company reserves the right to modify, suspend, or discontinue any aspect of the Service at any time, with reasonable notice.

## User Accounts

### 3.1 Account Registration

Users must create an account with accurate, current, and complete information and are responsible for maintaining credential confidentiality and all account activities.

### 3.2 Account Security

You agree to:

- Use strong, unique passwords for your account
- Enable multi-factor authentication when available
- Immediately notify us of any unauthorized access or security breach
- Not share your account credentials with unauthorized parties
- Log out from shared or public devices after each session

### 3.3 Account Responsibilities

Users are solely responsible for all content, data, and activities associated with their account. The Company is not liable for loss or damage from failure to maintain account security.

## Acceptable Use Policy

You agree not to use the Service to:

- Violate any applicable laws, regulations, or third-party rights
- Upload, transmit, or store malicious code, viruses, or harmful content
- Attempt to gain unauthorized access to the Service or related systems
- Interfere with or disrupt the integrity or performance of the Service
- Engage in any activity that could damage, disable, or impair the Service
- Reverse engineer, decompile, or disassemble any part of the Service
- Use automated systems to access the Service without authorization
- Transmit spam, chain letters, or other unsolicited communications
- Impersonate any person or entity or misrepresent your affiliation
- Collect or harvest user data without proper authorization

The company reserves the right to investigate and take action against violators, including removing content and terminating accounts.

## Data Protection & Security

### 5.1 Data Processing

Personal data processing is governed by the Privacy Policy. Enterprise customers are further governed by the Data Processing Agreement.

### 5.2 Security Measures

The company implements technical and organizational security measures including:

- Encryption of data in transit and at rest
- Access controls and authentication mechanisms
- Regular security assessments and penetration testing
- Employee security training and background checks
- Incident response and disaster recovery procedures

### 5.3 Compliance

The company maintains compliance with applicable data protection regulations, including GDPR for European users. Security practices align with SOC 2 Type II and ISO 27001 requirements.

### 5.4 Data Retention

Data is retained for the duration of the account and for a reasonable period thereafter, or longer where required by law. Upon termination, data will be deleted or anonymized per retention policies.

## Confidentiality

Each party agrees to maintain confidentiality of proprietary information disclosed in connection with the Service. Confidential information excludes information that:

- Is or becomes publicly available without breach of these Terms
- Was known to the receiving party prior to disclosure
- Is independently developed without use of confidential information
- Is rightfully obtained from a third party without restriction

This confidentiality obligation shall survive termination of these Terms for a period of three (3) years.

## Intellectual Property

### 7.1 Company Intellectual Property

The Service and all associated software, algorithms, interfaces, documentation, and content are protected by IP laws. The company retains all rights, title, and interest.

### 7.2 User Content

Users retain all rights to data and content uploaded ("User Content"). Users grant a limited license to process, store, and display User Content solely as necessary to provide the Service.

### 7.3 Feedback

Any feedback, suggestions, or ideas you provide regarding the Service may be used by us without restriction or compensation to you.

## Service Availability

The company strives for high availability but does not guarantee uninterrupted access. The Service may be temporarily unavailable due to:

- Scheduled maintenance (with reasonable advance notice)
- Emergency maintenance for security or stability issues
- Circumstances beyond our reasonable control
- Third-party service provider outages

Service level commitments are defined in separate agreements with enterprise customers.

## Fees and Payment

### 9.1 Subscription Fees

Access requires payment of applicable subscription fees per the order form or pricing page. Fees are non-refundable except as expressly stated in these Terms or required by law.

### 9.2 Payment Terms

Users agree to pay all fees when due. The company may suspend or terminate access for non-payment. Fees exclude applicable taxes, which users are responsible for.

### 9.3 Price Changes

Pricing may be modified with at least thirty (30) days advance notice. Changes take effect at the start of the next billing cycle following the notice period.

## Limitation of Liability

TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, the company and its affiliates, officers, directors, employees, and agents shall not be liable for any indirect, incidental, special, consequential, or punitive damages, including loss of profits, data, goodwill, or other intangible losses, resulting from:

- Your access to or use of (or inability to access or use) the Service
- Any conduct or content of any third party on the Service
- Unauthorized access, use, or alteration of your data
- Any other matter relating to the Service

Liability Cap: Total liability shall not exceed the greater of (a) the amount you paid us in the twelve (12) months preceding the claim, or (b) one hundred US dollars ($100).

## Indemnification

Users agree to indemnify, defend, and hold harmless the Company and its affiliates, officers, directors, employees, and agents from claims, liabilities, damages, losses, and expenses (including reasonable attorneys' fees) arising from:

- Your access to or use of the Service
- Your violation of these Terms
- Your violation of any third-party rights
- Your User Content

## Termination

### 12.1 Termination by You

Users may terminate at any time by contacting the company or using account settings. Access rights cease immediately upon termination.

### 12.2 Termination by Us

The company may suspend or terminate your access to the Service, without prior notice or liability, for any reason, including breaches of these Terms. Inactive accounts may also be terminated.

### 12.3 Effect of Termination

Upon termination, the right to use the Service ceases immediately. Provisions that by their nature should survive, including IP, limitation of liability, and indemnification, shall survive.

### 12.4 Data Export

Upon request made within thirty (30) days of termination, the company will provide a User Content export in a commonly used format, subject to payment of all outstanding fees.

## Changes to Terms

The company reserves the right to modify Terms at any time, with notice of material changes posted on the website and, where appropriate, via email.

Continued use after the effective date constitutes acceptance of modified Terms. Users who disagree must discontinue use.

Notice Period: For material changes, at least thirty (30) days advance notice will be provided before changes take effect, unless legally required sooner.

## Governing Law

Terms are governed by the laws of the State of Delaware, United States, without regard to its conflict of law provisions.

Disputes shall be resolved exclusively in state or federal courts located in Delaware, and users consent to personal jurisdiction there.

### 14.1 Dispute Resolution

Before initiating legal proceedings, users agree to attempt informal resolution first. Most disputes can be resolved through good-faith negotiation.

## General Provisions

### 15.1 Entire Agreement

These Terms, the Privacy Policy, and any applicable order forms or service agreements constitute the entire agreement regarding the Service.

### 15.2 Severability

If any provision is found unenforceable, the remaining provisions continue in full force and effect.

### 15.3 Waiver

Failure to enforce any right or provision does not constitute a waiver of that right or provision.

### 15.4 Assignment

Users may not assign or transfer these Terms without prior written consent. The company may assign these Terms without restriction in connection with a merger, acquisition, or sale of assets.

### 15.5 Force Majeure

Neither party is liable for failure or delay due to circumstances beyond reasonable control, including acts of God, natural disasters, war, terrorism, or government actions.


# Privacy Policy

Canonical URL: https://modernrelay.com/privacy

Last updated: June 12th, 2025

Contact: privacy@modernrelay.com

## Who We Are

ModernRelay is a trade name of EQTR, Inc., a Delaware C-Corp headquartered in San Francisco, California. The company provides an AI-native knowledge management platform designed for pharmaceutical and life sciences organizations, focusing on global medical affairs teams.

For questions or concerns about your privacy, contact us at: privacy@modernrelay.com

## Information We Collect

This policy applies only to data collected and processed by ModernRelay. It does not cover data stored within customer-managed platforms.

### 2.1 Information You Provide Directly

- Name
- Email address
- Notes, calendar entries, and CRM activity logs

### 2.2 Automatically Collected Information

- IP address
- Device/browser identifiers
- Usage metadata
- Timestamps of interactions

### 2.3 Data via Integrated Platforms

- Supabase (authentication, database storage)
- Stripe (payment metadata, invoicing)
- Vercel (website, backend API hosting, and delivery)
- PostHog (product analytics and usage metrics)

## How We Collect Information

- User-submitted forms
- Platform interactions and telemetry
- API-based integrations with email, calendar, and CRM systems
- Authentication and analytics services from Supabase and Vercel
- Data syncs explicitly authorized by clients via connected services

## How We Use the Information

- Provide, maintain, and improve our services
- Authenticate users securely
- Manage billing and subscriptions
- Support product development and analytics
- Respond to user inquiries or support requests
- Fulfill contractual or legal obligations

## Sharing of Information

We do not sell or rent your data. Limited data is shared with trusted third-party services under contractual agreements:

- Supabase: Auth, DB, Storage (US)
- Vercel: Hosting & API runtime (US)
- Stripe: Billing & payments (US)
- PostHog: Product analytics (US)
- Google: Email/calendar integration (US/EU)

All vendors are required to meet industry-standard security and confidentiality obligations.

## Data Retention and Security

Data is retained only as long as needed to fulfill the purposes described above or as legally required.

### Security Overview

- All data is encrypted in transit via TLS
- Supabase encrypts data at rest by default
- Role-based access control is enforced for internal systems
- Supabase Auth is used for secure authentication (password and third-party OAuth)
- Access to production data is restricted to authorized personnel and is logged
- Development and production environments are isolated
- ModernRelay's servers and backend APIs are hosted and deployed via Vercel's serverless infrastructure
- Monitoring and alerting are in place to detect anomalies
- We are aligning our practices with SOC 2 Trust Service Criteria and continuously assess risk

## Your Privacy Rights

The company supports global privacy principles and complies with applicable rights under the GDPR, CCPA/CPRA, and similar laws. You may:

- Access your personal data
- Request correction or deletion of your data
- Export your data in a portable format (data portability)
- Object to certain types of processing

To exercise your rights, email: privacy@modernrelay.com with the subject line: Privacy Request. Identity verification may be required.

## Children's Privacy

Our services are not directed to individuals under 13. The company does not knowingly collect data from children. If such data is discovered, it will be deleted promptly.

## International Users

Data is processed and stored in the United States. If you access the services from outside the U.S., you consent to the transfer of your data to the U.S., where different privacy protections may apply.

## Changes to This Policy

The policy may be updated to reflect changes to practices or for legal reasons. The latest version will always be available at modernrelay.com/privacy. Revisions are effective upon posting.

## Cookies and Tracking

The site and platform may use cookies or similar technologies to improve performance and analyze usage. You can control cookie preferences via your browser settings. Disabling cookies may affect functionality. Vercel and Supabase services do not currently respond to "Do Not Track" browser signals.

## Legal Basis for Processing

Personal data is processed based on:

- Your consent
- Contractual necessity
- Legitimate interests (e.g., service improvement, security)
- Legal obligations

## Data Breach Notification

In the event of a data breach, affected individuals and regulators will be notified as required under applicable law.

## Use of Subprocessors

Apart from the service providers listed above, we do not currently engage additional subcontractors. Any subprocessors engaged in the future (e.g., for hosting or payments) will be bound by confidentiality and data protection terms.

For enterprise customers requiring a formal data processing agreement, please review the Data Processing Agreement.

## Governing Law

This policy is governed by the laws of the State of Delaware, United States. Any disputes will be resolved in courts located in Delaware.

## Client System Access & Data Handling

ModernRelay does not directly access or store customer-controlled system data unless explicitly authorized for support purposes. All such access is time-limited, logged, and governed by internal controls. All client data remains the property of the client. ModernRelay does not claim ownership or reuse rights over any uploaded content or activity logs unless explicitly authorized.

When ModernRelay integrates with client systems, data access is governed by explicit client authorization. Clients must provide permission to initiate and define the scope of such data syncs. Clients may revoke access or modify permissions at any time through the integration settings or by contacting the support team. Only data necessary to support functionality will be accessed or stored, and all synced data is treated as confidential.

For additional questions, reach out to: privacy@modernrelay.com

