feat(profiling): Support profiles tagged for many transactions #1444
```rust
impl TransactionMetadata {
    pub fn valid(self) -> bool {
```
```diff
-    pub fn valid(self) -> bool {
+    pub fn valid(&self) -> bool {
```
I don't think valid needs to take ownership of the instance here.
It's just reading the fields and checking whether they're valid.
This way you won't have to clone every time just to check if the TransactionMetadata is valid, as is currently done here.
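A minimal sketch of the suggested change, using a trimmed-down hypothetical `TransactionMetadata` (the real struct in relay-profiling has more fields):

```rust
// Hypothetical, trimmed-down TransactionMetadata to illustrate the borrow-based check.
pub struct TransactionMetadata {
    pub name: String,
    pub relative_start_ns: u64,
    pub relative_end_ns: u64,
}

impl TransactionMetadata {
    // Taking `&self` instead of `self`: callers can check validity
    // without cloning or giving up ownership of the value.
    pub fn valid(&self) -> bool {
        !self.name.is_empty() && self.relative_start_ns < self.relative_end_ns
    }
}

fn main() {
    let transaction = TransactionMetadata {
        name: "launch".to_string(),
        relative_start_ns: 0,
        relative_end_ns: 100,
    };
    // No `.clone()` needed, and `transaction` is still usable after the call.
    assert!(transaction.valid());
    println!("{} is valid", transaction.name);
}
```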
relay-profiling/src/android.rs
```rust
for transaction in &profile.transactions {
    if !transaction.clone().valid() {
```
Related to my previous comment: if you change the valid method to take a reference to the instance, we don't need to clone each transaction just to check its validity.
relay-profiling/src/cocoa.rs
```rust
for transaction in profile.transactions.iter() {
    if !transaction.clone().valid() {
```
Here too: changing the valid method to accept a reference to the instance means we don't need to clone each transaction to check its validity.
iker-barriocanal left a comment
Some code for the android and cocoa profiles seems very similar; at some point we may want to do something about that.
```rust
!self.id.is_nil()
    && !self.trace_id.is_nil()
    && !self.name.is_empty()
    && self.relative_start_ns < self.relative_end_ns
```
Should we also require relative cpu start time to be smaller than end time, when they exist (for android)?
To keep it simple, I'll check `relative_cpu_start_ms <= relative_cpu_end_ms`, as I don't have the platform available in that struct to make the check Android-only.
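A sketch of what the extended check could look like. The field names follow the comment above, but the exact types and defaults in relay-profiling may differ; here the CPU fields are assumed to default to 0 on platforms that don't report them:

```rust
// Hypothetical struct; CPU fields assumed to be 0 when not reported (e.g. on cocoa).
pub struct TransactionMetadata {
    pub name: String,
    pub relative_start_ns: u64,
    pub relative_end_ns: u64,
    pub relative_cpu_start_ms: u64,
    pub relative_cpu_end_ms: u64,
}

impl TransactionMetadata {
    pub fn valid(&self) -> bool {
        !self.name.is_empty()
            && self.relative_start_ns < self.relative_end_ns
            // `<=` rather than `<`, so profiles with both CPU fields at 0 still pass.
            && self.relative_cpu_start_ms <= self.relative_cpu_end_ms
    }
}

fn main() {
    let t = TransactionMetadata {
        name: "launch".to_string(),
        relative_start_ns: 0,
        relative_end_ns: 100,
        relative_cpu_start_ms: 0,
        relative_cpu_end_ms: 0,
    };
    // A cocoa-style transaction with no CPU times is still considered valid.
    assert!(t.valid());
}
```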
```rust
fn set_transaction(&mut self, transaction: &TransactionMetadata) {
    self.transaction_name = transaction.name.clone();
```
Why clone the string?
I guess that's the only way I thought I could copy the value. The Copy trait can't be implemented for the TransactionMetadata type, and since I'm passing a shared reference, that's kind of the only way the compiler would let me off the hook. Do you have another suggestion?
Nope, sounds good to me. I was just curious.
```rust
fn substract_to_timestamp(time: Time, duration: Duration) -> Time {
    match time {
```
Dang how many different times hahah
For real, this time struct for Android is driving me crazy.
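As a rough illustration of why the function above matches on the time value: the android trace format can represent a timestamp in more than one way, so the subtraction has to preserve the variant. The `Time` enum below is a hypothetical stand-in, not the actual type from the android trace format:

```rust
use std::time::Duration;

// Hypothetical stand-in for the android trace `Time` type, which can carry
// timestamps on different clocks.
#[derive(Debug, PartialEq)]
enum Time {
    Wall(Duration),
    Monotonic(Duration),
}

// Subtract a duration from a timestamp while preserving its variant,
// mirroring the shape of `substract_to_timestamp` in the diff above.
fn substract_to_timestamp(time: Time, duration: Duration) -> Time {
    match time {
        Time::Wall(t) => Time::Wall(t.saturating_sub(duration)),
        Time::Monotonic(t) => Time::Monotonic(t.saturating_sub(duration)),
    }
}

fn main() {
    let t = substract_to_timestamp(
        Time::Wall(Duration::from_nanos(100)),
        Duration::from_nanos(30),
    );
    assert_eq!(t, Time::Wall(Duration::from_nanos(70)));
}
```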
```rust
match serde_json::to_vec(&profile) {
    Ok(payload) => Ok(payload),
    Err(_) => Err(ProfileError::CannotSerializePayload),
}
// ...
Ok(items)
```
At this point, outside the for loop, all new profiles have empty transactions. Should we also clear the transactions for the current profile?
The current profile is discarded so I don't think it matters.
Co-authored-by: Iker Barriocanal <[email protected]>
Indeed, they are very similar. I think that's fine for now; we're already planning to rewrite profiling to properly decouple profiles from transactions, and this code will end up deprecated.
In order to support concurrent transactions for mobile platforms (`cocoa`, `android`), we want to start tagging profiles for many transactions. This is a workaround before we move to a proper continuous profiling architecture, where we stream chunks of profiles.

In this workaround, a profile is collected as long as at least one transaction is still running. All transaction metadata is collected and sent in a `transactions` array, and `relay` duplicates the profile data for each transaction, filters out samples/events that are not for that transaction, and sends it to Kafka as a regular profile (attached to one transaction only) so the rest of the pipeline doesn't need to change.

This PR implements this workaround: it loops over the transaction metadata, duplicates the profile data, filters it, and adds an item of type `Profile` to the envelope to be sent to Kafka afterwards. If a profile comes in with no transaction metadata, it is rejected. If a profile comes in the old format, it is processed as usual. Other platforms are not affected by this workaround.
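At a high level, the per-transaction expansion described above could be sketched as follows. The struct and helper names here are illustrative, not the actual relay-profiling API; the assumption is that a sample belongs to a transaction when its relative timestamp falls inside the transaction's window:

```rust
// Illustrative types; the real relay-profiling structs are richer.
#[derive(Clone)]
struct Sample {
    relative_ns: u64,
}

#[derive(Clone)]
struct TransactionMetadata {
    name: String,
    relative_start_ns: u64,
    relative_end_ns: u64,
}

#[derive(Clone)]
struct Profile {
    transactions: Vec<TransactionMetadata>,
    samples: Vec<Sample>,
}

// For each transaction, duplicate the profile, keep only the samples that fall
// inside that transaction's window, and attach exactly one transaction to the copy.
fn expand(profile: &Profile) -> Vec<Profile> {
    let mut items = Vec::new();
    for transaction in &profile.transactions {
        let mut new_profile = profile.clone();
        new_profile.samples.retain(|s| {
            transaction.relative_start_ns <= s.relative_ns
                && s.relative_ns < transaction.relative_end_ns
        });
        new_profile.transactions = vec![transaction.clone()];
        items.push(new_profile);
    }
    items
}

fn main() {
    let profile = Profile {
        transactions: vec![
            TransactionMetadata { name: "a".into(), relative_start_ns: 0, relative_end_ns: 50 },
            TransactionMetadata { name: "b".into(), relative_start_ns: 40, relative_end_ns: 100 },
        ],
        samples: vec![Sample { relative_ns: 10 }, Sample { relative_ns: 60 }],
    };
    // One new profile per transaction, each with only its own samples.
    let items = expand(&profile);
    assert_eq!(items.len(), 2);
    assert_eq!(items[0].samples.len(), 1);
    assert_eq!(items[1].samples.len(), 1);
}
```

Each resulting item would then be serialized and attached to the envelope as a regular single-transaction `Profile`, which is why the rest of the pipeline doesn't need to change.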