Push llm event address #3664
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.
@@ Coverage Diff @@
## master #3664 +/- ##
==========================================
- Coverage 62.20% 62.08% -0.12%
==========================================
Files 141 141
Lines 13352 13352
Branches 1746 1746
==========================================
- Hits 8305 8290 -15
- Misses 4256 4269 +13
- Partials 791 793 +2
See 4 files with indirect coverage changes.
Continue to review the full report in Codecov by Sentry.
Benchmarks [ tracer ]
Benchmark execution time: 2026-03-02 11:56:00
Comparing candidate commit ba5e2ab in PR branch.
Found 3 performance improvements and 27 performance regressions! Performance is the same for 163 metrics, 1 unstable metric.
scenario:ComposerTelemetryBench/benchTelemetryParsing
scenario:ContextPropagationBench/benchExtractHeaders128Bit
scenario:ContextPropagationBench/benchExtractHeaders64Bit
scenario:ContextPropagationBench/benchExtractTraceContext128Bit
scenario:ContextPropagationBench/benchExtractTraceContext64Bit
scenario:ContextPropagationBench/benchInject128Bit
scenario:ContextPropagationBench/benchInject64Bit
scenario:HookBench/benchHookOverheadInstallHookOnFunction
scenario:HookBench/benchHookOverheadInstallHookOnMethod
scenario:HookBench/benchHookOverheadTraceFunction
scenario:HookBench/benchHookOverheadTraceMethod
scenario:HookBench/benchWithoutHook
scenario:LogsInjectionBench/benchLogsInfoInjection-opcache
scenario:MessagePackSerializationBench/benchMessagePackSerialization
scenario:MessagePackSerializationBench/benchMessagePackSerialization-opcache
scenario:PDOBench/benchPDOBaseline
scenario:PHPRedisBench/benchRedisBaseline
scenario:SamplingRuleMatchingBench/benchGlobMatching1
scenario:SamplingRuleMatchingBench/benchGlobMatching2
scenario:SamplingRuleMatchingBench/benchGlobMatching3
scenario:SamplingRuleMatchingBench/benchGlobMatching4
scenario:SamplingRuleMatchingBench/benchRegexMatching1
scenario:SamplingRuleMatchingBench/benchRegexMatching2
scenario:SamplingRuleMatchingBench/benchRegexMatching3
scenario:SamplingRuleMatchingBench/benchRegexMatching4
scenario:SpanBench/benchDatadogAPI
scenario:TraceAnnotationsBench/benchTraceAnnotationOverhead
scenario:TraceFlushBench/benchFlushTrace
scenario:TraceSerializationBench/benchSerializeTrace
Force-pushed cff1eed to 1ae39aa (compare)
Force-pushed f363215 to 3fadba0 (compare)
Force-pushed 3fadba0 to 6028af2 (compare)
cataphract left a comment:
I'd wait for a production rule to be ready (if it isn't already), update the recommended.json files, and then also write an integration test. This would validate the correctness of the address and its parameters.
I have the rule https://github.com/DataDog/appsec-event-rules/pull/265, but it's not merged yet. Also, we would need to mock the openai library's HTTP call. Do we already have a system for that in the integration tests?
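For reference, one way to stub the HTTP call without a dedicated harness: the openai-php client accepts any PSR-18 HTTP client via its factory, so a Guzzle MockHandler can serve a canned response. This is a minimal sketch under that assumption; the repo's actual integration-test infrastructure may work differently:

```php
<?php
// Sketch: queue a canned chat-completion response so no real request
// ever reaches api.openai.com. Requires guzzlehttp/guzzle and openai-php/client.
use GuzzleHttp\Client;
use GuzzleHttp\Handler\MockHandler;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Psr7\Response;

$mock = new MockHandler([
    new Response(200, ['Content-Type' => 'application/json'], json_encode([
        'id'      => 'chatcmpl-test',
        'object'  => 'chat.completion',
        'model'   => 'gpt-4o-mini',
        'choices' => [[
            'index'         => 0,
            'message'       => ['role' => 'assistant', 'content' => 'Hello!'],
            'finish_reason' => 'stop',
        ]],
        'usage'   => ['prompt_tokens' => 5, 'completion_tokens' => 2, 'total_tokens' => 7],
    ])),
]);

$client = OpenAI::factory()
    ->withApiKey('test-key')
    ->withHttpClient(new Client(['handler' => HandlerStack::create($mock)]))
    ->make();

// The instrumented request path now runs against the canned response,
// so the pushed WAF addresses can be asserted without network access.
$result = $client->chat()->create([
    'model'    => 'gpt-4o-mini',
    'messages' => [['role' => 'user', 'content' => 'Hi']],
]);
```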
int getPort() {
    PORT
}
The same effect could be achieved in Groovy by not making PORT private (with no access modifiers, Groovy generates getters/setters for properties).
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3c92b6746e
Force-pushed e713572 to 277957f (compare)
cataphract left a comment:
Seems better now, pending test success, but see some new comments.
Description
This PR enables AppSec capabilities when using the openai-php client. The implementation pushes AppSec addresses to the WAF, and the resulting events are eventually reported to the backend. It is the PHP implementation of the cross-AppSec LLM usage detection RFC for detecting and monitoring OpenAI SDK usage per endpoint.
The instrumentation wraps the OpenAI PHP SDK's request methods and captures LLM-related signals on each call: the model being used, input and output token counts, the request type (completion, chat, embedding, etc.), and whether the call succeeded or failed. These are emitted as AppSec events.
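As a rough illustration of the mechanism (not the exact code in this PR), dd-trace-php's `\DDTrace\hook_method` can wrap an SDK method and read the LLM signals out of the response in a posthook. The class/method names below are assumptions, the address name is made up, and `push_llm_address` is a hypothetical stand-in for whatever internal API forwards data to the WAF:

```php
<?php
// Sketch only: hook a (hypothetical) transport method of the OpenAI PHP
// SDK and extract LLM signals after each call. Requires the ddtrace extension.
\DDTrace\hook_method(
    'OpenAI\\Transporters\\HttpTransporter', // assumed class name
    'requestObject',                          // assumed method name
    null, // no prehook needed
    function ($This, $scope, $args, $retval) {
        // Decode $retval into an array of response data; details elided.
        $data = [];
        $signals = [
            'model'         => $data['model'] ?? null,
            'input_tokens'  => $data['usage']['prompt_tokens'] ?? null,
            'output_tokens' => $data['usage']['completion_tokens'] ?? null,
        ];
        // Hypothetical helper and made-up address name: pushes the signals
        // so WAF rules (e.g. appsec-event-rules#265) can match on them.
        push_llm_address('server.ai.llm.request', $signals);
    }
);
```

The posthook receives the bound object, scope, arguments, and return value, which is enough to record both the request type and whether the call succeeded or threw.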
More info in the RFC:
API Endpoints: AI usage
Reviewer checklist