How OLi works

Five steps from desktop activity to help in the moment. No prompting. No intervention. OLi sees what you are working on and acts.

The short version

OLi watches what your people are doing on their computers, recognizes moments where it can help, and shows up with the right content or action — no prompts, no dashboards, no training. The result: 10% more productive time in 30 days, or you don’t pay.

01

Capture — the activity graph starts at the desktop

A lightweight agent on the user's machine records what application is active, how long they dwell, what content is on screen — structured activity records, not screenshots. On-device capture means raw data never leaves the machine unprocessed. The agent writes roughly 1M records per day across a typical deployment.

Each record is a structured tuple: timestamp, application, window title, dwell duration, detected content type. No keylogging. No full-screen capture. The signal is what the user is doing, not what the user is typing.
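For illustration, an activity record like the one described above can be sketched as a simple data structure. This is a minimal sketch — the field names and types are assumptions for this example, not Dataken's actual schema:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ActivityRecord:
    """One structured activity tuple -- no keystrokes, no screen pixels."""
    timestamp: float      # epoch seconds when the window became active
    application: str      # active application, e.g. "chrome.exe"
    window_title: str     # title of the active window
    dwell_seconds: float  # how long the window held focus
    content_type: str     # detected content class, e.g. "compliance_form"

# Example record as it might leave the device -- already structured,
# never raw screen content.
record = ActivityRecord(
    timestamp=1718000000.0,
    application="chrome.exe",
    window_title="Quarterly Compliance Form",
    dwell_seconds=42.5,
    content_type="compliance_form",
)
```

Because the agent emits structured tuples rather than screenshots, downstream systems only ever see fields like these.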

02

Graph — five years of real human work

Activity records flow into the activity graph — Dataken's first-party dataset of real human work. Over 10 billion records across 5+ years of continuous operation. This is the asset no other agent platform has: a deep, longitudinal record of how people actually work on their computers.

Built on Apache Spark. Per-tenant data boundaries — your graph is yours. Anonymized at the architectural layer, not as an afterthought. The graph is what makes OLi's suggestions specific rather than generic.

03

Recognize — OLi sees the work, not just the window

Pattern recognition runs against the activity graph in real time. OLi detects friction (three tab switches in 90 seconds on a compliance form), recognizes tasks (a department timesheet opening), and identifies context (a knowledge-base article that matches the active task). Recognition is the trigger — nothing happens until OLi sees something worth acting on.

Recognition rules are configured per tenant in the rules registry. A healthcare company's rules detect EHR workflows. An insurance company's rules detect claims processing. Same agent, different context, different actions.
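To make the friction example concrete, here is a minimal sketch of how a rule like "three tab switches in 90 seconds on a compliance form" could be checked. The function name, thresholds, and event shape are illustrative assumptions, not the actual rules-registry format:

```python
from collections import deque

WINDOW_SECONDS = 90   # assumed sliding-window length from the example
SWITCH_THRESHOLD = 3  # assumed tab-switch count that signals friction

def detect_friction(events, window=WINDOW_SECONDS, threshold=SWITCH_THRESHOLD):
    """Return True if `threshold` switches onto a compliance form fall
    within any `window`-second span. `events` is an ordered list of
    (timestamp, content_type) tuples from the activity stream."""
    recent = deque()
    for ts, content_type in events:
        if content_type != "compliance_form":
            continue
        recent.append(ts)
        # Drop switches that fell out of the sliding window.
        while recent and ts - recent[0] > window:
            recent.popleft()
        if len(recent) >= threshold:
            return True
    return False

# Three switches within 80 seconds -> friction detected.
frustrated = detect_friction([(0, "compliance_form"),
                              (30, "compliance_form"),
                              (80, "compliance_form")])

# The same three switches spread over 200 seconds -> no trigger.
calm = detect_friction([(0, "compliance_form"),
                        (100, "compliance_form"),
                        (200, "compliance_form")])
```

A tenant's registry would hold many rules of this shape — same detection loop, different thresholds and content types per tenant.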

04

Act — help lands at the point of work

When OLi recognizes a moment worth acting on, it surfaces help as a desktop toast — a small notification at the point of work. Today that is usually content: a micro-learning refresh, a knowledge-base article, a break reminder, a motivation nudge. Increasingly it is skills: real actions OLi takes for the user, like building SOWs from a timesheet and routing them through Zoho Sign.

Skills are tenant-customized. They integrate with your stack — your EHR, your CRM, your document signing workflow. This is what separates OLi from a generic AI assistant: the action is wired to how your organization actually works.
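The trigger-to-skill wiring can be pictured as a small dispatch table. Everything here is hypothetical — the trigger name, the registry, and the SOW handler are stand-ins for tenant configuration, not Dataken's actual skill API:

```python
from typing import Callable, Dict

SkillFn = Callable[[dict], str]
SKILLS: Dict[str, SkillFn] = {}

def skill(trigger: str):
    """Register a skill function under a recognition trigger."""
    def register(fn: SkillFn) -> SkillFn:
        SKILLS[trigger] = fn
        return fn
    return register

@skill("timesheet_opened")
def build_sow(context: dict) -> str:
    # A real deployment would draft the SOW from timesheet data and
    # route it through a signing workflow such as Zoho Sign.
    return f"SOW drafted for {context['department']}; routed for signature"

def act(trigger: str, context: dict) -> str:
    """Dispatch the tenant-configured skill for a recognized moment."""
    handler = SKILLS.get(trigger)
    return handler(context) if handler else "no-op"
```

Per-tenant customization then amounts to registering different handlers under the same dispatch mechanism — a healthcare tenant wires EHR skills, an insurance tenant wires claims skills.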

05

Learn — the graph gets smarter with every deployment

Every interaction feeds back into the activity graph. OLi's suggestions get more specific over time — not because of some vague machine learning claim, but because the recognition patterns are refined against real usage data. Five years of continuous operation means Dataken's models have seen work patterns that no new entrant can replicate.

Privatized LLM inference by default. Any skill or Ask OLi call that invokes an LLM uses the provider's zero-retention, no-training-on-tenant-data mode. An open-source isolated-deployment option is available for security-sensitive tenants.

Privacy is architecture, not policy.

Activity-graph anonymization. Privatized LLM inference by default. Per-tenant data boundaries. On-device capture where possible. These are not promises — they are how the system is built.

Read the full privacy story →

10% more productive time in 30 days. Or you don’t pay.

Book a 30-minute demo