docs: add audit log export to S3 documentation #24486

ryan-crabbe merged 2 commits into litellm_ryan-march-23
Conversation
Greptile Summary

This PR adds an "Export Audit Logs to External Storage" section to the Audit Logs documentation page, covering S3 as the first supported backend via the `s3_v2` callback.

Key changes:
Issues found:
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| docs/my-website/docs/proxy/multiple_admins.md | Adds an "Export Audit Logs to External Storage" section with an S3 example; documented key path format is consistent with the s3_v2.py implementation, but IAM-role credential optionality and the in-memory buffering data-loss caveat are not mentioned. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Admin as Proxy Admin
    participant Proxy as LiteLLM Proxy
    participant AL as audit_logs.py
    participant CB as S3Logger (s3_v2)
    participant S3 as AWS S3 Bucket
    Admin->>Proxy: Management API call (create/update/delete key/team/user)
    Proxy->>AL: create_audit_log_for_update(request_data)
    AL-->>AL: check store_audit_logs + premium_user
    AL->>CB: async_log_audit_log_event(payload)
    CB-->>CB: build s3_object_key\n(s3_path/audit_logs/YYYY-MM-DD/HH-MM-SS_id.json)
    CB-->>CB: append to in-memory log_queue
    Note over CB: Flushes when batch_size reached\nor flush_interval expires
    CB->>S3: PUT s3://<bucket>/<key> (SigV4 signed)
    S3-->>CB: 200 OK
```
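The object-key layout shown in the diagram can be sketched as a small helper. This is a hypothetical illustration of the documented `s3_path/audit_logs/YYYY-MM-DD/HH-MM-SS_id.json` format, not LiteLLM's actual code; the function name is invented:

```python
from datetime import datetime, timezone
import uuid

def build_s3_object_key(s3_path: str, log_id: str) -> str:
    """Sketch of the documented key layout:
    <s3_path>/audit_logs/YYYY-MM-DD/HH-MM-SS_<id>.json
    """
    now = datetime.now(timezone.utc)
    return (
        f"{s3_path.rstrip('/')}/audit_logs/"
        f"{now:%Y-%m-%d}/{now:%H-%M-%S}_{log_id}.json"
    )

# One object per audit event, grouped by UTC date prefix.
key = build_s3_object_key("litellm-audit", uuid.uuid4().hex)
```

Date-based prefixes like this make it cheap to list or expire a single day's logs with standard S3 prefix queries.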
Reviews (2): Last reviewed commit: "Update docs/my-website/docs/proxy/multip..."
:::info
Both `store_audit_logs: true` and `audit_log_callbacks` must be set. If `store_audit_logs` is not enabled, the callbacks will not fire.
:::
Enterprise License requirement not re-stated for the export feature
The :::info callout only mentions that both store_audit_logs: true and audit_log_callbacks must be set. However, the underlying implementation (audit_logs.py, line 175) also gates callback dispatch behind the premium_user check:
```python
if premium_user is not True:
    return  # callbacks never fire without an Enterprise License
```

While the top-level :::tip at the document's beginning covers this, a user reading only the S3 export section may set everything up and wonder why no files appear in S3. Adding a brief note here would prevent that confusion, e.g.:
:::info
Requires an Enterprise License. Both `store_audit_logs: true` and `audit_log_callbacks` must be set. If `store_audit_logs` is not enabled, the callbacks will not fire.
:::
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
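For context, the two settings referenced in the callout would sit together in the proxy config. This is only a sketch: the `litellm_settings` placement is assumed from LiteLLM's usual config layout, and the exact callback value is not confirmed by this PR:

```yaml
litellm_settings:
  store_audit_logs: true      # without this, audit log callbacks never fire
  audit_log_callbacks: ["s3"] # assumed value; check the docs page for the exact callback name
```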
## Export Audit Logs to External Storage

You can export audit logs to an external storage backend (e.g. S3) in addition to storing them in the database. Logs are batched and uploaded asynchronously, so they do not block your proxy requests.
In-memory buffer means logs can be lost on crash or restart
The sentence "Logs are batched and uploaded asynchronously" is accurate, but omits an important operational caveat: the batch queue (`self.log_queue`) is in-memory only (see s3_v2.py lines 110 and 276). Any audit logs buffered between flush intervals that have not yet been uploaded to S3 will be permanently lost if the proxy process crashes or is restarted.
For an audit/compliance feature this is a meaningful data-loss scenario. Consider documenting it explicitly, e.g.:
You can export audit logs to an external storage backend (e.g. S3) in addition to storing them in the database. Logs are batched and uploaded asynchronously, so they do not block your proxy requests. Note: because the queue is held in memory, any logs not yet flushed will be lost if the proxy process restarts unexpectedly.
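The data-loss window can be illustrated with a minimal in-memory batcher. This is a generic sketch of the pattern, not the actual s3_v2.py code; the class and attribute names (other than `log_queue`, borrowed from the review comment) are invented:

```python
import time

class InMemoryBatcher:
    """Buffers events in memory; flushes when batch_size or flush_interval is hit."""

    def __init__(self, batch_size: int = 512, flush_interval: float = 10.0):
        self.log_queue: list = []
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self._last_flush = time.monotonic()
        self.flushed: list = []  # stands in for successful S3 uploads

    def log(self, event: dict) -> None:
        self.log_queue.append(event)
        if (len(self.log_queue) >= self.batch_size
                or time.monotonic() - self._last_flush >= self.flush_interval):
            self.flush()

    def flush(self) -> None:
        self.flushed.extend(self.log_queue)  # in real life: PUT batch to S3
        self.log_queue.clear()
        self._last_flush = time.monotonic()

b = InMemoryBatcher(batch_size=3)
b.log({"id": 1})
b.log({"id": 2})
# If the process crashed here, the two events still in b.log_queue would be
# lost: nothing has been written to "S3" (b.flushed) yet.
b.log({"id": 3})  # reaches batch_size -> flush() empties the queue
```

The failure mode is inherent to any in-memory buffer: durability is only as good as the flush that has already happened.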
```yaml
s3_bucket_name: my-audit-logs-bucket # AWS Bucket Name
s3_region_name: us-west-2 # AWS Region
s3_aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID
s3_aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY
s3_path: litellm-audit # [OPTIONAL] prefix path in the bucket
```
IAM role credentials shown as if required
s3_aws_access_key_id and s3_aws_secret_access_key are both Optional[str] in S3Logger.__init__ (see s3_v2.py lines 41–42). The BaseAWSLLM.get_credentials call underneath also supports IAM instance profiles, ECS task roles, and environment-variable-based credentials automatically.
Users running LiteLLM on AWS EC2, ECS, or Lambda with an attached IAM role do not need to supply these keys at all. Without a note to that effect, they may think explicit credentials are always required and unnecessarily expose or mismanage them.
Consider marking these two fields as [OPTIONAL] in the same style as s3_path, or adding a brief sentence such as: "Omit s3_aws_access_key_id and s3_aws_secret_access_key when using IAM instance profiles or task roles."
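As a concrete version of the suggested note, the credential lines can simply be dropped when a role is attached. This sketch reuses the field names from the snippet above and assumes the default AWS credential chain resolves the role:

```yaml
# Running on EC2/ECS/Lambda with an attached IAM role: omit the static keys
# and let the default AWS credential chain (instance profile / task role /
# environment variables) supply them.
s3_bucket_name: my-audit-logs-bucket # AWS Bucket Name
s3_region_name: us-west-2 # AWS Region
s3_path: litellm-audit # [OPTIONAL] prefix path in the bucket
```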
Type
📖 Documentation

Changes
Adding the audit logs export feature to the docs.