# Lens Documentation
Lens is a log routing agent that filters, enriches, and routes your logs intelligently — reducing what you send to expensive destinations like Datadog or Splunk by 40–90%.
## Quickstart

Get from zero to routing logs in under 5 minutes:

1. Go to the Agents tab in your dashboard and click New Agent. Copy the agent key.
2. Run the one-line installer — it downloads the binary and writes a systemd service.

   ```bash
   curl -sSL https://your-domain/install.sh | bash -s YOUR_AGENT_KEY
   ```

3. In the Pipelines tab, click New Pipeline. Add rules to filter or tag your logs, then set send/archive/drop thresholds.
4. In the Destinations tab, connect your log backend (Datadog, Elasticsearch, S3, webhook, etc.).
5. Check the Overview tab for real-time volume, cost savings, and routing decisions.
## Installing the Agent

### One-line install (Linux, systemd)

```bash
curl -sSL https://your-domain/install.sh | bash -s AGENT_KEY
```

This installs the agent as a systemd service that starts on boot. The agent binary is placed at /usr/local/bin/lens-agent.
### Docker

```bash
curl -sSL https://your-domain/install/docker/AGENT_KEY -o lens-agent.yml
docker compose -f lens-agent.yml up -d
```
### Kubernetes (DaemonSet)

```bash
kubectl apply -f https://your-domain/install/k8s/AGENT_KEY
```

This deploys a DaemonSet that runs one agent per node, tailing /var/log/pods.
### Manual (any platform)

```bash
# Linux amd64
curl -fsSL https://your-domain/download/lens-agent-linux-amd64 -o lens-agent
chmod +x lens-agent
AGENT_KEY=your_key DASHBOARD_URL=https://your-domain ./lens-agent
```

Supported platforms: linux/amd64, linux/arm64, darwin/amd64, darwin/arm64.
### Environment variables

| Variable | Required | Default | Description |
|---|---|---|---|
| AGENT_KEY | ✓ | — | Your agent key from the dashboard |
| DASHBOARD_URL | ✓ | — | URL of your Lens dashboard |
| LISTEN_OTLP_HTTP | | :4318 | OTLP HTTP receiver port |
| LISTEN_FLUENTD | | :24224 | Fluentd TCP receiver port |
| LISTEN_SYSLOG | | off | Syslog UDP/TCP address (e.g. :514) |
| FILETAIL_GLOBS | | — | Comma-separated file globs to tail (e.g. /var/log/*.log) |
| WAL_DIR | | /var/lib/lens-agent/wal | Write-ahead log directory |
| PROMETHEUS_ADDR | | :9090 | Prometheus metrics port |
| LENS_AUTO_UPDATE | | false | Enable automatic agent binary updates |
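As a rough sketch of how these settings resolve (environment override wins over the documented default, and the two required variables fail fast when absent — the resolution logic here is an assumption, not the agent's actual code):

```python
import os

# Defaults copied from the table above.
DEFAULTS = {
    "LISTEN_OTLP_HTTP": ":4318",
    "LISTEN_FLUENTD": ":24224",
    "WAL_DIR": "/var/lib/lens-agent/wal",
    "PROMETHEUS_ADDR": ":9090",
    "LENS_AUTO_UPDATE": "false",
}

def resolve_config(env=os.environ):
    """Merge environment overrides onto the documented defaults."""
    cfg = {k: env.get(k, v) for k, v in DEFAULTS.items()}
    # AGENT_KEY and DASHBOARD_URL have no default: fail fast if missing.
    for required in ("AGENT_KEY", "DASHBOARD_URL"):
        if required not in env:
            raise RuntimeError(f"{required} must be set")
        cfg[required] = env[required]
    return cfg
```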
## Creating Your First Pipeline
A pipeline is a set of rules that evaluate each incoming log and decide what to do with it.
- Go to Pipelines → New Pipeline
- Choose an input type (OTLP, Syslog, file, etc.)
- Add rules — each rule has conditions and an action
- Set thresholds: logs above the Send score go to your destination; between Archive and Send go to cheap storage; below Archive are dropped
- Assign agents to the pipeline
- Save and enable
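The two-threshold decision in the steps above can be sketched as follows (function name, score scale, and boundary behavior are illustrative, not the agent's actual internals):

```python
def route(score: float, send_threshold: float, archive_threshold: float) -> str:
    """Decide a log's fate from its rule-adjusted score.

    As documented: at or above Send -> destination,
    between Archive and Send -> cheap storage,
    below Archive -> dropped. (Inclusive boundaries are an assumption.)
    """
    if score >= send_threshold:
        return "send"
    if score >= archive_threshold:
        return "archive"
    return "drop"
```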
## Pipelines & Rules

### Rule actions

| Action | Description |
|---|---|
| boost | Increase the log's score (more likely to be sent) |
| reduce | Decrease the log's score (more likely to be archived/dropped) |
| force_send | Always send this log, bypassing thresholds |
| force_archive | Always archive this log |
| drop | Discard the log immediately |
| sample | Keep only N% of matching logs (e.g. keep 10% of debug logs) |
| parse_json | Parse a field's value as JSON and merge into the event |
| add_field | Add a static key=value field to the event |
| remove_field | Remove a field from the event |
| redact | Replace a field's value with [REDACTED] |
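To make the transform actions concrete, here are illustrative implementations of three of them; the real agent's exact semantics (error handling, case sensitivity) may differ:

```python
import json
import random

def redact(event: dict, field: str) -> dict:
    """Replace a field's value with [REDACTED], as the redact action does."""
    if field in event:
        event[field] = "[REDACTED]"
    return event

def parse_json(event: dict, field: str) -> dict:
    """Parse a field's string value as JSON and merge it into the event."""
    raw = event.get(field)
    if raw is None:
        return event
    try:
        parsed = json.loads(raw)
    except ValueError:
        return event  # not valid JSON: leave the event untouched
    if isinstance(parsed, dict):
        del event[field]
        event.update(parsed)
    return event

def sample(keep_pct: float, rng=random.random) -> bool:
    """Return True to keep the event, False to drop it (keep N% overall)."""
    return rng() * 100 < keep_pct
```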
### Condition operators

| Operator | Description |
|---|---|
| = | Exact match |
| != | Not equal |
| contains | Substring match (case-insensitive) |
| !contains | Does not contain |
| starts_with | Prefix match |
| regex | Regular expression match |
| > < >= <= | Numeric comparison |
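A minimal evaluator for these operators might look like the sketch below; type coercion and matching details are assumptions, not the agent's documented behavior:

```python
import re

def matches(value, op: str, target) -> bool:
    """Evaluate one condition against a field value."""
    if op == "=":
        return value == target
    if op == "!=":
        return value != target
    if op == "contains":      # substring match, case-insensitive per the table
        return str(target).lower() in str(value).lower()
    if op == "!contains":
        return str(target).lower() not in str(value).lower()
    if op == "starts_with":
        return str(value).startswith(str(target))
    if op == "regex":
        return re.search(target, str(value)) is not None
    if op in (">", "<", ">=", "<="):
        v, t = float(value), float(target)
        return {">": v > t, "<": v < t, ">=": v >= t, "<=": v <= t}[op]
    raise ValueError(f"unknown operator: {op}")
```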
## Adding a Destination
Go to Destinations → New Destination and choose a type:
| Type | Description |
|---|---|
| Datadog | HTTP log intake — set your API key and site (datadoghq.com or datadoghq.eu) |
| Elasticsearch | Index-based — set host, port, index, and optional API key |
| S3 | Object storage — set bucket, region, prefix, and AWS credentials |
| Splunk HEC | Splunk HTTP Event Collector — set URL and token |
| New Relic | Log API — set your ingest license key |
| Loki | Grafana Loki push API — set URL |
| Webhook | Any HTTP endpoint — set URL and optional headers |
## Agents

Each agent has a unique key that authenticates it to the dashboard. The agent:

- Polls the dashboard every 30 seconds for pipeline config changes (hot reload — no restart needed)
- Batches log events before sending to destinations (up to 500 events or 5 seconds, whichever comes first)
- Writes failed sends to a local WAL and retries every 30 seconds
- Reports metrics (volume, savings, WAL depth) back to the dashboard every minute
- Exposes Prometheus metrics on :9090/metrics
- Has a health endpoint at :8080/health
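The batching policy above (flush at 500 events or after 5 seconds, whichever comes first) can be sketched like this; class and method names are illustrative, not taken from the agent:

```python
import time

class Batcher:
    """Buffers events and signals when the documented flush condition is met."""
    MAX_EVENTS = 500
    MAX_AGE_SECONDS = 5.0

    def __init__(self, clock=time.monotonic):
        self.clock = clock      # injectable clock makes the policy testable
        self.events = []
        self.opened_at = None

    def add(self, event) -> bool:
        """Buffer an event; return True if the batch should flush now."""
        if not self.events:
            self.opened_at = self.clock()
        self.events.append(event)
        return self.should_flush()

    def should_flush(self) -> bool:
        if not self.events:
            return False
        return (len(self.events) >= self.MAX_EVENTS
                or self.clock() - self.opened_at >= self.MAX_AGE_SECONDS)

    def drain(self):
        """Hand back the buffered batch and reset."""
        batch, self.events = self.events, []
        return batch
```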
## Alerts

Go to Alerts to create metric-based alert rules. Available metrics:

| Metric | Description |
|---|---|
| logs_per_min | Log line throughput |
| bytes_per_min | Raw byte volume |
| drop_rate_pct | Percentage of logs being dropped |
| error_rate_pct | Percentage of logs with severity=ERROR |
Add notification channels (email, Slack, PagerDuty) in Settings → Alerts to receive notifications when rules fire.
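An alert rule is essentially a threshold check against one of these metrics; the rule shape and operator set below are assumptions for illustration:

```python
# Comparison operators assumed to mirror the pipeline condition operators.
OPS = {
    ">": lambda v, t: v > t,
    "<": lambda v, t: v < t,
    ">=": lambda v, t: v >= t,
    "<=": lambda v, t: v <= t,
}

def rule_fires(metrics: dict, metric: str, op: str, threshold: float) -> bool:
    """True when the current value of `metric` crosses the rule's threshold."""
    return OPS[op](metrics.get(metric, 0.0), threshold)
```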
## Plans & Limits
| Feature | Free | Starter | Pro | Enterprise |
|---|---|---|---|---|
| Agents | 3 | 10 | Unlimited | Unlimited |
| Pipelines | 5 | 20 | Unlimited | Unlimited |
| Destinations | 2 | 5 | Unlimited | Unlimited |
| Daily volume | 5 GB | 50 GB | 500 GB | Unlimited |
| Retention | 7 days | 30 days | 90 days | Custom |
| SLA | — | — | 99.9% | 99.99% |
| Price/mo | $0 | $29 | $99 | $299 |
## Self-Hosting

Lens runs as a Docker Compose stack. Requirements: a Linux server with Docker and a public domain (for HTTPS).

```bash
# Download the deploy package and run:
./deploy.sh
```

The setup wizard prompts for your domain, database passwords, and optional integrations (OAuth, Stripe, SMTP, Anthropic).
### Updating

```bash
./deploy.sh update
```

### Logs

```bash
./deploy.sh logs
```
## API Keys

Generate API keys in Settings → API Keys. Keys begin with lns_ and can be used as Bearer tokens:

```bash
curl -H "Authorization: Bearer lns_your_key" \
  https://your-domain/api/v1/pipelines
```