Plex Labs Research

The 1+1 Operating System

A research-backed framework for managing human-AI teams. Six components, a progressive Trust Ladder, and the management layer every AI deployment will eventually need.

Read the Full White Paper →

The Management Gap

AI agents are being deployed everywhere and managed nowhere. The frameworks we use today — OKRs, Agile, EOS — were built for all-human teams. They assume shared context, implicit trust, and synchronous accountability. AI agents violate all three.

88% of organizations use AI regularly, but only 13% see agents deeply integrated into workflows.
86% of CFOs have encountered hallucinated data from AI agents.
$830B erased from software stocks as markets priced in the AI agent disruption.
6% of companies fully trust AI agents for core business processes.
"Finance leaders will trust AI when they can audit it." — Journal of Accountancy, citing Maximor Finance AI Adoption Report, Feb 2026

Why Existing Frameworks Fail

Every management framework assumes three things that AI agents break.

🧠

Shared Context

Humans absorb context through osmosis — overhearing conversations, reading body language. AI agents have no ambient context. It must be engineered.

🤝

Implicit Trust

New employees start with baseline trust earned by being hired. AI agents start at zero — or negative, because they fail silently and confidently.

⏱️

Synchronous Accountability

Standups and reviews leverage social pressure. AI agents work at 3 AM and don't feel embarrassment. Accountability must be structural.

Agentic AI Is Not Automation

There are trade-offs to both. Automation isn't going away. But agentic AI is something new — and anyone can build with it.

⚙️

Automation

Follows a script. Step 1, step 2, step 3. If step 2 breaks, everything stops. Nobody's home to fix it. Want to change what it does? You're rewriting code.

Asks: "What are the instructions?"

Runs on Python scripts. Requires a developer every time something changes.

An Agent

Knows where it's going. Figures out the steps, hits a wall, tries another way. The instructions aren't code — they're plain English.

Asks: "What does done look like?"

Runs on file trees and plain language. Requires a clear thinker who can write good instructions.

An agent's "program" is a folder. Markdown files that say what it knows, what it's good at, and how it should behave. Skills are documents, not scripts. Context is a text file, not a database. You teach an agent the same way you'd onboard a new hire — you write things down and put them where they can find them.

Traditional automation requires a developer every time something changes. An agent requires a clear thinker who can write good instructions. That's not a small difference. That's a completely different skill set, and a completely different ceiling for what non-technical teams can build.

The Framework

The 1+1 OS provides six components organized around a progressive Trust Ladder. Together, they form a closed loop: Assign → Execute → Report → Triage → Reassign.

🔑

Keystones

The 3–5 things that matter right now. Not a backlog — a forcing function for priority.

Actions

Atomic, completable units with explicit ownership. Every action is assigned to a human or an AI — never ambiguous.

📡

Signals

Structured status updates that replace synchronous check-ins. Context without interruption.

💓

Pulse

The daily rhythm — a shared snapshot that keeps both parties synchronized without meetings.

🔄

TDR

Triage, Discuss, Resolve — the escalation protocol. What to do when the AI gets stuck or gets it wrong.

📘

Blueprint

The persistent knowledge base. Not documentation — a living context source that feeds every other component.

The Trust Ladder

AI autonomy should be earned, not assumed. The Trust Ladder provides five levels that govern how much independence an AI agent gets — and what guardrails remain.

L4
Autonomous
AI acts independently within defined boundaries. Human reviews outcomes, not actions.
L3
Supervised
AI executes multi-step workflows. Human approves plans and spot-checks results.
L2
Assisted
AI drafts and suggests. Human reviews and approves before execution.
L1
Reactive
AI responds to direct requests. No autonomous action.
L0
Manual
No AI involvement. Human does everything. The baseline.
"Trust in human-AI teams follows fundamentally different pathways than human-human teams. The timing, content, and methods for calibrating trust must be specific to each collaboration." — ACM Human-Autonomy Teaming Research, 2024

The Closed Loop

The six components form a continuous cycle. Work flows from Keystones → Actions → Execution → Signals → Pulse → TDR → back to Actions. The Blueprint feeds context into every step. Nothing falls through the cracks because nothing leaves the loop.

Assign → Execute → Report → Triage → Reassign

This is how you manage a team member that works 24/7, doesn't share your context, and can't be held accountable through a standup. You build the management into the system itself.
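As a rough illustration of what "building the management into the system" can mean, here is a minimal sketch of the loop as data. The types and the triage rule are assumptions for illustration; the white paper defines the actual protocol.

```python
# A minimal sketch of the closed loop: an Action carries explicit ownership,
# a Signal reports status, and triage either resolves the action or sends it
# back around the loop. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    owner: str                 # a named human or a named agent -- never ambiguous
    status: str = "assigned"

@dataclass
class Signal:
    action: Action
    outcome: str               # "done", "blocked", or "needs_review"
    notes: str = ""

def triage(signal: Signal) -> Action:
    """TDR step: resolve the action or reassign it to a human owner."""
    if signal.outcome == "done":
        signal.action.status = "resolved"
    else:
        signal.action.owner = "human"
        signal.action.status = "assigned"
    return signal.action
```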

Read the Full Framework

The white paper covers the complete research, all six components in depth, the Trust Ladder methodology, and what we're building at Plex Labs.

Download the White Paper (PDF) →