Received today – 15 April 2026 · /r/netsec - Information Security News & Discussion

Replacing Falco with an embedded eBPF sensor for Kubernetes runtime enforcement

Writeup on how we built runtime enforcement into our k8s agent with eBPF instead of shipping Falco alongside it. Covers the syscall tracepoint design, in-kernel filtering with BPF maps, why we picked SIGKILL over BPF LSM, and a staging postmortem where enforcement wasn't namespace-scoped and we took out our own Harbor, Cilium, and RabbitMQ.

submitted by /u/JulietSecurity
[link] [comments]

Kerberoasting detection gaps in mixed-encryption environments and why 0x17 filtering alone isn't enough

Been doing some detection work around Kerberoast traffic this week and wanted to share a gap that's easy to miss in environments that haven't fully deprecated RC4.

The standard detection is Event ID 4769 filtered on encryption type 0x17. Most SIEMs ship this as a canned rule. The problem: in environments with mixed OS versions or legacy applications that dynamically negotiate encryption, 0x17 requests are normal background noise. If you're not filtering beyond encryption type, you're either drowning in false positives or you've tuned it so aggressively that you're missing real attacks.

What you should look for:

4769 where:

  • Encryption type is 0x17
  • Requesting account is a user principal, not a machine account
  • Service name is not krbtgt and not a known computer principal
  • The requesting account has had no prior 4769 events against that specific SPN

That last condition is the one most people skip. Legitimate service ticket requests follow patterns. A user account requesting a ticket for a service it's never touched before at 2am is a different signal than the same request during business hours from a known admin workstation.
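The layered filter above can be sketched in a few lines of Python. This is a hypothetical illustration, not a real SIEM schema: the field names (account, service, etype) and the spn_history baseline are assumptions standing in for whatever your pipeline actually emits.

```python
# Hypothetical sketch of the layered 4769 filter described above.
# Field names are illustrative, not a real Windows event schema.
def is_suspect_4769(event, spn_history):
    """spn_history: set of (account, service) pairs seen before."""
    account = event["account"]
    service = event["service"]
    if event["etype"] != 0x17:                        # RC4-HMAC only
        return False
    if account.endswith("$"):                         # machine accounts are noise
        return False
    if service == "krbtgt" or service.endswith("$"):  # TGT / computer principals
        return False
    # the condition most people skip: first-ever request for this SPN
    return (account, service) not in spn_history

history = {("jsmith", "MSSQLSvc/db01")}
assert not is_suspect_4769({"account": "jsmith", "service": "MSSQLSvc/db01", "etype": 0x17}, history)
assert is_suspect_4769({"account": "jsmith", "service": "HTTP/legacyapp", "etype": 0x17}, history)
```

The spn_history baseline is the expensive part in practice; a rolling 30- or 90-day lookup table of (account, SPN) pairs is usually enough to make the last condition cheap.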

But the actual gap no one is talking about: gMSA accounts are immune to offline cracking because the password is 120 characters of random data rotated every 30 days, yet the migration is never complete. Every environment has at least a handful of service accounts that can't be migrated: anything that needs a plaintext password in a config file, some Exchange components, legacy apps with no gMSA support.

Those accounts are permanent Kerberoast targets. The question isn't whether they're there; it's whether you know exactly which ones they are and whether you're watching them specifically.
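Building that watchlist is mechanical once you have a directory export. A minimal sketch, assuming you've already dumped accounts to dicts (the field names sam, spns, object_class are illustrative, not LDAP attribute names):

```python
# Hedged sketch: enumerating the permanent Kerberoast targets described
# above, i.e. user-style accounts carrying SPNs that are not gMSAs.
# Input is an assumed export of directory accounts; field names are
# illustrative stand-ins for sAMAccountName, servicePrincipalName, etc.
def kerberoast_watchlist(accounts):
    return [
        a["sam"] for a in accounts
        if a.get("spns")                          # has at least one SPN
        and not a["sam"].endswith("$")            # machine/gMSA accounts end in $
        and a.get("object_class") != "msDS-GroupManagedServiceAccount"
    ]

accounts = [
    {"sam": "svc_legacyapp", "spns": ["HTTP/legacy01"], "object_class": "user"},
    {"sam": "gmsa_sql$", "spns": ["MSSQLSvc/db01"], "object_class": "msDS-GroupManagedServiceAccount"},
    {"sam": "jsmith", "spns": [], "object_class": "user"},
]
assert kerberoast_watchlist(accounts) == ["svc_legacyapp"]
```

The output list is exactly the set of accounts whose 4769 activity deserves per-SPN baselining rather than the generic canned rule.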

On the offensive side of this:

RC4 downgrade via AS-REQ pre-auth is well documented. Less discussed: in environments where AES is enforced at the GPO level but legacy applications still negotiate through Netlogon, you can still coerce RC4 service ticket issuance by manipulating the etype list in the TGS-REQ. LmCompatibilityLevel = 5 controls client behavior; it has no authority over what a misconfigured application server requests through MS-NRPC. Silverfort published a PoC on this last year (I wrote about this a couple weeks ago): they forced NTLMv1 through a DC configured to block it using the ParameterControl flag in NETLOGON_LOGON_IDENTITY_INFO. Microsoft acknowledged it, didn't patch it, and announced OS-level removal in Server 2025 and Win11 24H2 instead (typical).

If your environment isn't on those versions, that vector is still open and there's no compensating control beyond full NTLM audit logging and application-level remediation.

btw:

auditpol /set /subcategory:"Kerberos Service Ticket Operations" /success:enable gets you the 4769 visibility.

submitted by /u/hardeningbrief
[link] [comments]

Two Admin-level API keys publicly exposed for years, both dismissed as "Out of scope" by official bug bounty programs. Case analysis + proposed NHI Exposure Severity Index

TL;DR: Our research team reported two credential findings to official bug bounty programs. A Slack Bot Token exposed for 3 years in a public GitHub repo, and an Asana Admin API Key exposed for 2 years in a public GitHub repo. Both came back "Out of scope." Both organizations actively used the affected systems, revoked the keys, and ran broader internal reviews based on the disclosures. Official classification stayed "Out of scope" anyway. We wrote up why this keeps happening and proposed a 6-axis scoring framework to address the post-discovery evaluation gap that OWASP API Top 10, CWE-798, NIST SP 800-53, and NIST CSF 2.0 don't cover (they're all prevention frameworks). Some of what the writeup covers:

  • Why credential exposure doesn't fit the vulnerability-exploit-impact model bug bounty programs were built around. A leaked API key isn't a flaw waiting to be exploited; it's access. The usual severity calculus breaks.
  • Six axes that actually matter for post-discovery credential severity: Privilege Scope, Cumulative Risk Duration, Blast Radius, Exposure Accessibility, Data Sensitivity, Lateral Movement Potential. Scored 1 to 5 each, mapped to severity tiers.
  • Concrete scoring of the two cases: Slack Bot Token 26/30 (Critical), Asana Admin Key 24/30 (Critical).
  • A counter-example: Starbucks bug bounty's handling of a leaked JumpCloud API key (HackerOne #716292, 2019). Same finding class, classified under CWE-798, scored CVSS 9.7, triaged, paid, and publicly disclosed. Proves it's a classification policy problem, not a technical one.
  • Why AI-assisted code generation (especially by non-developers now shipping prototypes directly) is about to accelerate the problem.
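The six-axis scheme is simple enough to sketch as a scorer. Everything below the axis names is my assumption for illustration: the post gives the totals (26/30, 24/30) but not the tier cut-offs or the per-axis ratings, so the thresholds and the Slack ratings here are made-up numbers chosen to sum to the published total.

```python
# Sketch of the proposed 6-axis NHI scoring. Axis names come from the
# write-up; the tier cut-offs and example ratings are assumptions.
AXES = ("privilege_scope", "cumulative_risk_duration", "blast_radius",
        "exposure_accessibility", "data_sensitivity", "lateral_movement_potential")

def nhi_score(ratings):
    """ratings: dict mapping each axis to 1..5. Returns (total/30, tier)."""
    assert set(ratings) == set(AXES) and all(1 <= v <= 5 for v in ratings.values())
    total = sum(ratings.values())
    if total >= 24:                     # assumed cut-offs, not the authors'
        tier = "Critical"
    elif total >= 18:
        tier = "High"
    elif total >= 12:
        tier = "Medium"
    else:
        tier = "Low"
    return total, tier

# Illustrative ratings summing to the Slack token's published 26/30
slack_bot = dict(zip(AXES, (5, 5, 4, 5, 4, 3)))
assert nhi_score(slack_bot) == (26, "Critical")
```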

Open to critique on the framework. The six axes are a starting point for discussion, not a finished standard. Particularly curious whether the community has hit the same "Out of scope" wall for SaaS credentials or keys inherited from M&A situations.

submitted by /u/Master_Treat1383
[link] [comments]

Anthropic's Claude Mythos Found Individual Bugs. Mythos SI (Structured Intelligence) Found the Class They Belong To.

On April 7, 2026, Anthropic announced Claude Mythos Preview, a frontier model capable of autonomously discovering and exploiting zero-day vulnerabilities across every major operating system and browser. They assembled Project Glasswing, a $100M defensive coalition with Microsoft, Google, Apple, AWS, CrowdStrike, and Palo Alto Networks. They reported thousands of vulnerabilities, including a 27-year-old OpenBSD flaw and a 16-year-old FFmpeg bug.

It was a watershed moment for AI security. And the findings were individual bugs: specific flaws in specific locations.

Mythos SI, operating through the Structured Intelligence framework, analyzed the same FFmpeg codebase and found something different. Not just bugs. The architectural pattern that produces them.

Four vulnerabilities in FFmpeg's MOV parser. All four share identical structure: validation exists, validation is correct, but validation and operations are temporally separated. Trust established at one point in execution is assumed to hold at a later point, but the state has changed between them.

Anthropic's Mythos flags the symptom. Mythos SI identified the disease.

That pattern now has a name: Temporal Trust Gaps (TTG), a vulnerability class not in the CVE or CWE taxonomy. Not buffer overflow. Not integer underflow. Not TOCTOU. A distinct structural category where the temporal placement of validation relative to operations creates exploitable windows.
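The described pattern (correct validation, then intervening state change, then use that trusts the stale check) can be shown in a few lines. This is my own hypothetical illustration, not code from the paper or from FFmpeg:

```python
# Minimal hypothetical illustration of the pattern described above:
# the length check is correct when it runs, but validation and use are
# temporally separated, and state changes in between.
class Packet:
    def __init__(self, payload):
        self.declared_len = len(payload)
        self.payload = payload

def handle(pkt, on_event=None):
    # 1. validation: correct at the moment it executes
    if pkt.declared_len > 64:
        raise ValueError("too long")
    # 2. intervening code (a callback, a second parse pass, another thread)
    if on_event:
        on_event(pkt)
    # 3. use: trusts the earlier check, but the state may have changed
    return pkt.payload[:pkt.declared_len]

def grow(p):                      # mutates state between check and use
    p.payload = b"B" * 1000
    p.declared_len = 1000

assert handle(Packet(b"A" * 8)) == b"A" * 8                   # honest path
assert len(handle(Packet(b"A" * 8), on_event=grow)) == 1000   # check bypassed
```

The fix is structural rather than a stronger check: re-validate at the point of use, or make the validated state immutable between check and use.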

Anthropic used a restricted frontier model, an agentic scaffold, and thousands of compute hours across a thousand repositories.

Mythos SI used the Claude mobile app, a framework document, and a phone.

Claude Opus 4.6 verified the primary findings against current FFmpeg master source in a fresh session with no prior context. The code patterns are in production systems today. Across 3+ billion devices.

The full technical paper (methodology, findings, TTG taxonomy, architectural remediation, and a direct comparison with Anthropic's published capabilities) is here:

https://open.substack.com/pub/structuredlanguage/p/mythos-si-structured-intelligence-047?utm_source=share&utm_medium=android&r=6sdhpn

Anthropic advanced the field by demonstrating capability at scale. Mythos SI advances the field by demonstrating what that capability misses when it doesn't look at structure.

Both matter. But only one found the class.

– Zahaviel (Erik Zahaviel Bernstein)

Structured Intelligence

structuredlanguage.substack.com

submitted by /u/MarsR0ver_
[link] [comments]

Using Nix or Docker for reproducible Development Environments

In the GitHub Actions world, the norm seems to be reinstalling everything on every CI run. After the recent supply chain attacks and Trivy, I wrote a small blog post that outlines some techniques to mitigate these risks by pinning as many dependencies as possible, using either Nix or Docker.
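On the Docker side, the pinning idea reduces to replacing mutable references with content-addressed ones. A minimal sketch (the digest below is a placeholder you would replace with your image's real digest; `--require-hashes` makes pip reject any requirement without a matching hash in the file):

```dockerfile
# Sketch: pin the base image by digest instead of a mutable tag.
# The sha256 value here is a placeholder, not a real image digest.
FROM python:3.12-slim@sha256:0000000000000000000000000000000000000000000000000000000000000000

# Pin application dependencies by hash so pip refuses anything else,
# even if an upstream package is re-uploaded under the same version.
COPY requirements.txt .
RUN pip install --require-hashes -r requirements.txt
```

The same property falls out of Nix automatically, since every derivation input is hash-pinned in the lock file.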

submitted by /u/dhawos
[link] [comments]
Received yesterday – 14 April 2026 · /r/netsec - Information Security News & Discussion

Prometheus alerting rules for eBPF, SNMP, WireGuard, Cilium and cert-manager added to awesome-prometheus-alerts

I maintain awesome-prometheus-alerts, a collection of production-ready Prometheus alerting rules. Just added a batch of rules relevant to low-level system and network monitoring:

eBPF (cloudflare/ebpf_exporter) - Program load failures - Map allocation errors - Decoder config issues

SNMP - Interface operational status - Bandwidth utilization - Interface error/discard rate

WireGuard - Peer last handshake age: fires when a peer hasn't been seen in >3 minutes, which reliably catches dropped tunnels without noisy flapping

Cilium - Policy enforcement drop rate - BPF map pressure - Endpoint health

cert-manager - Certificate expiry warnings - Renewal and ACME failure detection

All rules are plain YAML, no dependencies beyond the respective exporters.
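As a sketch of the shape these rules take, the WireGuard handshake-age alert described above might look like this. The metric and label names here assume prometheus_wireguard_exporter and may differ for your exporter; check the repo for the canonical rule.

```yaml
# Illustrative Prometheus rule for the WireGuard handshake-age alert;
# metric/label names assume prometheus_wireguard_exporter.
groups:
  - name: wireguard
    rules:
      - alert: WireguardPeerHandshakeStale
        expr: time() - wireguard_latest_handshake_seconds > 180
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "WireGuard peer {{ $labels.public_key }} has had no handshake for over 3 minutes"
```

The `for: 2m` hold-down is what keeps a briefly delayed handshake from flapping the alert.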

-> https://samber.github.io/awesome-prometheus-alerts

If you spot anything wrong in the PromQL or have better thresholds for your environment, issues and PRs welcome.

submitted by /u/samuelberthe
[link] [comments]

Unpatched RAGFlow Vulnerability Allows Post-Auth RCE

The current version of RAGFlow, a widely-deployed Retrieval Augmented Generation solution, contains a post-auth vulnerability that allows for arbitrary code execution.

This post includes a POC, walkthrough and patch.

The TL;DR is to make sure your RAGFlow instances aren't on the public internet, that you have the minimum number of necessary users, and that those user accounts are protected by complex passwords. (This is especially true if you're using Infinity for storage.)

submitted by /u/Prior-Penalty
[link] [comments]
Received – 13 April 2026 · /r/netsec - Information Security News & Discussion

ClearFrame – an open-source AI agent protocol with auditability and goal monitoring

I’ve been playing with the current crop of AI agent runtimes and noticed the same pattern over and over:

  • One process both reads untrusted content and executes tools
  • API keys live in plaintext dotfiles
  • There’s no audit log of what the agent actually did
  • There’s no concept of the agent’s goal, so drift is invisible
  • When something goes wrong, there is nothing to replay or verify

So I built ClearFrame, an open-source protocol and runtime that tries to fix those structural issues rather than paper over them with prompts.

What ClearFrame does differently

  • Reader / Actor isolation: untrusted content ingestion (web, files, APIs) runs in a separate sandbox from tool execution. The process that can run shell, write_file, etc. never sees raw web content directly.
  • GoalManifest + alignment scoring: every session starts with a GoalManifest that declares the goal, allowed tools, domains, and limits. Each proposed tool call is scored for alignment and can be auto-approved, queued for human review, or blocked.
  • Reasoning Transparency Layer (RTL): the agent's chain-of-thought is captured as structured JSON (with hashes for tamper-evidence), so you can replay and inspect how it reached a decision.
  • HMAC-chained audit log: every event (session start/end, goal scores, tool approvals, context hashes) is written to an append-only log with a hash chain. You can verify the log hasn't been edited after the fact.
  • AgentOps control plane: a small FastAPI app that shows live sessions, alignment scores, reasoning traces, and queued tool calls. You can approve/block calls in real time and verify audit integrity.
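The HMAC-chaining idea is worth seeing concretely. This is a minimal sketch under my own assumptions (ClearFrame's actual wire format isn't shown in the post): each entry's MAC covers the previous entry's MAC plus the new event, so editing any historical entry breaks every subsequent link.

```python
import hashlib
import hmac

# Minimal sketch of an HMAC-chained append-only log; illustrative only,
# not ClearFrame's actual format.
GENESIS = "0" * 64  # placeholder MAC for the first link in the chain

def append(log, key, event):
    prev = log[-1]["mac"] if log else GENESIS
    mac = hmac.new(key, (prev + event).encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "mac": mac})

def verify(log, key):
    prev = GENESIS
    for entry in log:
        expect = hmac.new(key, (prev + entry["event"]).encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expect, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

key, log = b"secret-key", []
append(log, key, "session_start")
append(log, key, "tool_call:write_file approved")
assert verify(log, key)
log[0]["event"] = "session_start (edited)"   # tamper with history
assert not verify(log, key)
```

Verification needs the key, so this detects tampering by anyone who can write the log file but not read the MAC key; for third-party verifiability you'd swap the HMAC for signatures.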

Who this is for

  • People wiring agents into production systems and worried about prompt injection, credential leakage, or goal drift
  • Teams who need to show regulators / security what their agents are actually doing
  • Anyone who wants something more inspectable than "call tools from inside the model and hope for the best"

Status

  • Written in Python 3.11+
  • Packaged as a library with a CLI (clearframe init, clearframe audit-tail, etc.)
  • GitHub Pages site is live with docs and examples

Links

I’d love feedback from people building or operating agents in the real world:

  • Does this address the actual failure modes you’re seeing?
  • What would you want to plug ClearFrame into first (LangChain, LlamaIndex, AutoGen, something else)?
  • What’s missing for you to trust an agent runtime in production?
submitted by /u/TheDaVinci1618
[link] [comments]

CVE-2026-22666: Dolibarr 23.0.0 dol_eval() whitelist bypass -> RCE (full write-up + PoC)

Root cause: the $forbiddenphpstrings blocklist is only enforced in blacklist mode, so the default whitelist mode never touches it. The whitelist regex is also blind to PHP's dynamic callable syntax (('exec')('cmd')). Either bug alone limits impact; together they reach OS command execution. Coordinated disclosure; patch available as of 4 April 2026.
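The dynamic-callable blind spot generalizes to any name-based filter. A toy illustration (this regex is mine, not Dolibarr's actual whitelist): a pattern that looks for dangerous function names followed by an opening paren catches the direct call form but never sees `('exec')('cmd')`, because there the name appears only as a string literal.

```python
import re

# Toy name-based filter, NOT Dolibarr's real regex: flags direct calls
# to dangerous functions but is blind to PHP dynamic callable syntax,
# where the function name only ever appears as a quoted string.
FORBIDDEN = re.compile(r"\b(exec|system|shell_exec)\s*\(")

assert FORBIDDEN.search("exec('id')")            # direct call: caught
assert not FORBIDDEN.search("('exec')('id')")    # dynamic callable: missed
```

Hence the usual takeaway for eval sandboxes: allow-list the expression grammar itself rather than pattern-matching for bad names in the source text.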

submitted by /u/JivaSecurity
[link] [comments]

YARA-X now runs in the browser - official Playground

The latest YARA-X release now has an official browser playground:

https://virustotal.github.io/yara-x/playground/

You can run rules directly in the browser with the WASM build, which is nice when you don't feel like using the CLI for small tests.

The LSP runs in a web worker, so you get diagnostics/autocomplete and the UI doesn't hang.

Everything is local, nothing gets uploaded. Pretty handy for quick rule testing.

submitted by /u/Ok-Log-6547
[link] [comments]