Security research, systems design, and the work between them.
I prefer the parts of the stack that stay invisible until something breaks, and the parts of a team that stay quiet until it matters.
My work has moved, over time, from finding failures to designing systems that can survive them.
The early work was low-level: vulnerability discovery, malware analysis, systems internals, and the implementation details that usually matter only once something fails. That background still shapes how I look at AI-driven systems: through boundaries, assumptions, logs, and failure modes.
Today I care most about trust boundaries in AI-driven products, browser and agent security, and evidence trails that survive after the demo is over. I like architectures that make dangerous behavior easier to see and harder to hide.
Much of my current work is about bringing that security thinking into scalable consumer-product systems, where new ideas need room to move without breaking the trust boundaries around them.
With small teams, I have built malware analysis pipelines, early product infrastructure, internal security tooling, and architecture documents. Some work became products. Some became patches. Some became diagrams that changed how a team made decisions.
How I work
I prefer systems that are understandable, correct, and defensible, whether they are small internal tools or large consumer products. I write things down because good technical decisions should outlast the meeting. I distrust dashboards when they replace understanding.
I like architecture that can be explained clearly, tested directly, and defended after something goes wrong.
What I have helped build
Security foundations for early-stage products. Malware analysis and automation pipelines. Internal tools that turn one-off research into repeatable engineering loops. Architecture notes for AI and browser security. Patches that needed quiet, careful handling. Writing that makes hard technical work easier to reason about.
A more complete CV is available on request.