Building a Deterministic Security Buffer for Modern APIs
Why Tracehound sits between detection and response, not in front of your application

Most security tooling for APIs forces teams into a bad choice.
Either you put a decision-heavy control directly in the runtime path and accept the risk that security logic becomes an availability problem, or you log after the fact and accept that the most important forensic details may already be incomplete, noisy, or tampered with.
Tracehound was built to reject that trade-off.
It is not a WAF. It is not a SIEM. It is not an APM tool. It is not a detection engine.
Tracehound is a deterministic runtime security buffer for modern applications. Its job is narrower, but also stricter: accept explicit threat signals from external detectors, quarantine the resulting Evidence, preserve an AuditChain, and do all of that without turning the security layer into a self-inflicted denial-of-service vector.
The boundary matters
A lot of security products become confusing because they try to do too many jobs at once.
Detection is one job. Containment is another. Evidence preservation is another. Operational response is another.
Tracehound draws a hard line around those responsibilities.
External systems decide whether a Scent is suspicious. That decision can come from a WAF, a rule engine, a SIEM correlation flow, or a custom detector. Tracehound does not second-guess that decision and it does not invent one of its own.
Once a Scent arrives with explicit threat metadata, Tracehound takes over the part most teams usually underinvest in:
Bounded runtime handling
Deterministic Evidence creation
Quarantine storage with explicit limits
Tamper-evident AuditChain custody
Isolated Hound analysis outside the hot path
That separation is the core design decision.
What Tracehound actually does
At a high level, the flow looks like this:
An external detector classifies a Scent.
The Agent receives the Scent synchronously.
If the Scent carries no threat signal, the result is clean.
If the Scent carries threat metadata, the Agent creates Evidence and stores it in Quarantine.
The AuditChain records the operational custody of that event.
A Hound child process can analyze the Evidence asynchronously, outside the runtime hot path.
That architecture matters for two reasons.
First, it keeps the runtime path explainable. You can reason about what happens under pressure because the core behavior is deterministic.
Second, it preserves a clean trust boundary. Raw payload bytes stay inside Quarantine. Outside that boundary, runtime code operates on metadata, Signatures, and explicit status values rather than free-floating payload access.
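The tamper-evident property of the AuditChain is worth making concrete. Here is a minimal hash-chain sketch in that spirit: each entry commits to its predecessor, so editing history invalidates every later link. The entry format is my own illustration, not Tracehound's actual log format.

```typescript
import { createHash } from "node:crypto";

interface ChainEntry {
  event: string;
  prevHash: string; // commits to the entire history before this entry
  hash: string;
}

function append(chain: ChainEntry[], event: string): void {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  const hash = createHash("sha256").update(prevHash + event).digest("hex");
  chain.push({ event, prevHash, hash });
}

function verify(chain: ChainEntry[]): boolean {
  // Recompute every link; any retroactive edit breaks the chain.
  let prev = "genesis";
  for (const entry of chain) {
    const expected = createHash("sha256").update(prev + entry.event).digest("hex");
    if (entry.prevHash !== prev || entry.hash !== expected) return false;
    prev = entry.hash;
  }
  return true;
}
```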
The primitives are simple on purpose
Tracehound uses a small vocabulary because vague language causes vague systems.
Scent is the incoming unit of metadata entering the Agent.
Evidence is the quarantined artifact with cryptographic integrity.
Quarantine is the bounded storage area for Evidence.
AuditChain is the tamper-evident operational log.
Hound is the isolated child process used for analysis.
Signature is the content-derived deterministic identifier.
This sounds like naming discipline, but it is really boundary discipline.
When a system says "event," "alert," "payload," "artifact," and "incident object" interchangeably, it usually means the interfaces are already leaking across responsibilities. Tracehound keeps those responsibilities deliberately narrow.
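To make the vocabulary concrete, here is one way the six primitives could be rendered as TypeScript shapes. This is an interpretation of the definitions above, not the published @tracehound/core types:

```typescript
type Signature = string;               // content-derived, deterministic identifier

interface Scent {                      // incoming unit of metadata entering the Agent
  id: string;
  timestamp: number;
  threat?: { category: string };       // present only when an external detector flagged it
}

interface Evidence {                   // quarantined artifact with cryptographic integrity
  scentId: string;
  signature: Signature;
}

interface Quarantine {                 // bounded storage area for Evidence
  maxCount: number;
  maxBytes: number;
  store(evidence: Evidence): boolean;  // false when a bound would be exceeded
}

interface AuditEntry {                 // one link of the tamper-evident operational log
  event: string;
  prevHash: string;
  hash: string;
}

interface Hound {                      // isolated child process used for analysis
  analyze(evidenceId: string): Promise<void>;
}
```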
The hard constraints
Tracehound is opinionated in ways that matter operationally:
1. Decision-free
Tracehound does not detect threats. That sounds like a limitation until you operate security systems in production.
Detection models change constantly. Rules change. Threat intelligence changes. False positives change. If you embed those shifting semantics inside your runtime buffer, the buffer becomes unstable.
Tracehound keeps the data plane boring on purpose.
2. Deterministic
The hot path should not depend on heuristics, probabilistic scoring, or hidden asynchronous behavior.
If two identical Scents enter the system under identical configuration, the system should behave the same way. That property becomes extremely valuable during incident review.
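A content-derived Signature is the simplest way to see the property. Assuming a SHA-256 digest over a fixed canonical ordering (the real scheme inside @tracehound/core may differ), identical inputs always yield identical identifiers:

```typescript
import { createHash } from "node:crypto";

function signature(payload: { method: string; path: string; body: string }): string {
  // Deterministic: field order is fixed explicitly before hashing, so the
  // same content always produces the same digest.
  const canonical = `${payload.method}\n${payload.path}\n${payload.body}`;
  return createHash("sha256").update(canonical).digest("hex");
}
```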
3. Fail-open
Security tooling must not become a denial-of-service vector against the application it is meant to protect.
If Tracehound is degraded, the host application must keep running. That does not mean "pretend nothing happened." It means degradation is explicit, bounded, and designed around host survivability rather than brittle perfectionism.
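One way to picture fail-open behavior is a wrapper that converts buffer failures into an explicit degraded status instead of an exception that propagates into the host request path. This is a sketch of the principle, not Tracehound's internal mechanism:

```typescript
type InterceptResult =
  | { status: "clean" }
  | { status: "quarantined" }
  | { status: "degraded"; reason: string }; // explicit, bounded failure — never silent

function failOpenIntercept(
  intercept: () => "clean" | "quarantined",
): InterceptResult {
  try {
    return { status: intercept() };
  } catch (err) {
    // The security layer must not take the application down with it.
    return {
      status: "degraded",
      reason: err instanceof Error ? err.message : String(err),
    };
  }
}
```

The degraded branch is the whole point: the host keeps serving, and the failure is recorded as a first-class status rather than swallowed.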
4. Payload-less outside Quarantine
This is one of the most important design choices.
Too many systems casually let raw payloads leak into logs, dashboards, side channels, retries, and helper utilities. That is not just untidy engineering. It is a compliance and evidence-integrity problem.
Tracehound keeps raw payload access inside Quarantine and treats everything outside that boundary as metadata-only.
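The boundary is easy to encode structurally. In this sketch (my own illustration, not the library's types), anything crossing out of Quarantine is a projection that has no field capable of carrying the raw bytes:

```typescript
interface ScentLike {
  id: string;
  timestamp: number;
  payload: { method: string; path: string; body: unknown };
}

interface EvidenceMetadata {
  scentId: string;
  method: string;
  path: string;
  bodyBytes: number; // size only — never the bytes themselves
}

function toMetadata(scent: ScentLike): EvidenceMetadata {
  // The raw body stays behind the Quarantine boundary; only its size leaves.
  return {
    scentId: scent.id,
    method: scent.payload.method,
    path: scent.payload.path,
    bodyBytes: new TextEncoder().encode(JSON.stringify(scent.payload.body)).length,
  };
}
```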
A minimal integration example
This is the shape of the model in practice:
import { createTracehound, generateSecureId } from '@tracehound/core'
import type { Scent } from '@tracehound/core'
import type { Request } from 'express'

const th = createTracehound({
  quarantine: { maxCount: 1000, maxBytes: 100_000_000 },
  rateLimit: { windowMs: 60_000, maxRequests: 100 },
})

function buildScent(req: Request): Scent {
  // Detection stays external: externalDetector stands in for whatever
  // adapter wraps your WAF, rule engine, or custom detector.
  const threat = externalDetector(req)
  return {
    id: generateSecureId(),
    timestamp: Date.now(),
    source: {
      ip: req.ip,
      userAgent: req.headers['user-agent'],
    },
    payload: {
      method: req.method,
      path: req.url,
      body: req.body,
    },
    threat,
  }
}

const result = th.agent.intercept(buildScent(req))
The important part is not the code sample itself. The important part is the contract:
external detection remains external
the Agent stays synchronous
the Quarantine stays bounded
the runtime path never depends on a remote scoring system to stay safe
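The "Quarantine stays bounded" part of that contract can be sketched as a simple admission check. Whether Tracehound rejects or evicts at the limit is configuration I am not asserting here; the point is that both bounds are enforced before anything is stored:

```typescript
interface QuarantineConfig { maxCount: number; maxBytes: number }

function admit(
  state: { count: number; bytes: number },
  config: QuarantineConfig,
  evidenceBytes: number,
): boolean {
  // Both limits are checked up front; nothing is stored past either bound.
  if (state.count + 1 > config.maxCount) return false;
  if (state.bytes + evidenceBytes > config.maxBytes) return false;
  state.count += 1;
  state.bytes += evidenceBytes;
  return true;
}
```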
Where Tracehound fits in a real stack
A practical deployment usually looks like this:
Cloud or edge controls flag suspicious traffic.
Application-side logic converts the signal into a Scent.
Tracehound preserves Evidence and operational custody.
Downstream systems consume metadata, alerts, or archived artifacts as needed.
That makes Tracehound especially relevant for teams that already have detection but do not trust their evidence path.
If your current setup tells you something suspicious happened but cannot give you deterministic runtime handling, bounded Quarantine behavior, or a trustworthy AuditChain, then you do not really have a complete forensic layer.
When not to use it
Tracehound is not the right product if you want:
an inline system that invents its own threat verdicts
a dashboard-first observability tool
a semantic exploit detection engine
a replacement for your existing WAF or SIEM
It is a better fit when you want a strict layer between detection and response that can preserve Evidence under pressure without collapsing into undefined behavior.
The short version
WAFs catch threats. Tracehound preserves evidence.
That may sound narrower than the average security platform pitch. It is. But the narrower the contract, the stronger the guarantees can become.
If your team cares about deterministic runtime behavior, tamper-evident operational custody, and fail-open security design, that is the surface Tracehound is trying to make rigorous.
Repository: https://github.com/tracehound/tracehound
Website: https://tracehoundlabs.com/