Conversation
processedTitle.Length.ToString() is replaced by an overload taking CultureInfo.InvariantCulture (same for processedTitle)
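The change described above appears to concern culture-invariant number formatting. A minimal sketch of the difference, using a hypothetical metric name (the variable value is an assumption, not taken from the PR):

```csharp
using System;
using System.Globalization;

class CultureExample
{
    static void Main()
    {
        string processedTitle = "gauge.name"; // hypothetical value for illustration

        // Culture-sensitive: formatting depends on the current thread culture.
        string cultureDependent = processedTitle.Length.ToString();

        // Culture-invariant: formatted identically regardless of the host locale,
        // which matters when the string is sent over the wire.
        string invariant = processedTitle.Length.ToString(CultureInfo.InvariantCulture);

        Console.WriteLine(invariant); // "10"
    }
}
```

For plain integers the two usually produce the same text, but using the invariant culture makes the intent explicit and keeps analyzers (e.g. CA1305) quiet.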
truthbk
left a comment
I like this, some similarities with the Java approach, but looks even cleaner to me.
I just have a doubt regarding the concurrent queue where we drop the SerializedMetric: it looks like we will always hold a reference, so the GC will never clean those up. Obviously that's part of the goal, reusing those buffers, but I'm wondering whether under heavy load the RSS of the application might balloon up a bit.
Let me know your thoughts.
```csharp
public void Dispose()
{
    Builder.Clear();
    _pool.Enqueue(this);
}
```
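The diff above suggests a pattern where disposing a SerializedMetric resets its buffer and returns the instance to a shared pool instead of releasing it. A self-contained sketch reconstructed from the diff (the class shape and field names are assumptions):

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;

// Hypothetical reconstruction of the pooled serialization buffer.
class SerializedMetric : IDisposable
{
    private readonly ConcurrentQueue<SerializedMetric> _pool;

    public StringBuilder Builder { get; } = new StringBuilder();

    public SerializedMetric(ConcurrentQueue<SerializedMetric> pool) => _pool = pool;

    public void Dispose()
    {
        // Reset the buffer and hand the instance back for reuse,
        // avoiding a fresh allocation on the next serialization.
        Builder.Clear();
        _pool.Enqueue(this);
    }
}

class Demo
{
    static void Main()
    {
        var pool = new ConcurrentQueue<SerializedMetric>();
        var metric = new SerializedMetric(pool);
        metric.Builder.Append("my.counter:1|c");
        metric.Dispose();

        // The same instance is now available for reuse from the pool.
        pool.TryDequeue(out var reused);
        Console.WriteLine(ReferenceEquals(metric, reused)); // True
        Console.WriteLine(reused.Builder.Length);           // 0
    }
}
```

The design trade-off discussed in this thread follows directly from the sketch: every instance ever enqueued stays reachable through the queue, so nothing pooled is ever collected.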
Interesting approach, I like it.
```csharp
internal class SerializerHelper
{
    private static readonly string[] EmptyArray = new string[0];
    private readonly ConcurrentQueue<SerializedMetric> _pool = new ConcurrentQueue<SerializedMetric>();
```
There is no upper bound on this queue, right? It should eventually reach a state of equilibrium, but that state is dependent on load. I'm wondering if this is perhaps too optimistic an approach?
Indeed, if the processing of the metrics is not fast enough, it can lead to running out of memory. I will fix it.
> I just have a doubt regarding the concurrent queue where we drop the SerializedMetric, looks like we will always hold a reference so the GC will not clear those up. Obviously that's part of the goal, to reuse those buffers, but I'm wondering if under heavy loads the RSS of the application might balloon up a little bit.
If we limit the size of the pool, it should be OK. What do you think?
I am working on creating a generic pool to limit the number of allocated objects. I am doing it in a separate PR as it introduces several changes.
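A generic bounded pool like the one described would cap memory by dropping returned instances once the pool is full, leaving the excess for the GC. A hypothetical sketch, not the actual implementation from the follow-up PR:

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;
using System.Threading;

// Hypothetical bounded object pool: returns beyond the cap are dropped,
// so the pool's memory footprint stays bounded under heavy load.
class ObjectPool<T> where T : class, new()
{
    private readonly ConcurrentQueue<T> _items = new ConcurrentQueue<T>();
    private readonly int _maxSize;
    private int _count;

    public ObjectPool(int maxSize) => _maxSize = maxSize;

    public T Rent()
    {
        if (_items.TryDequeue(out var item))
        {
            Interlocked.Decrement(ref _count);
            return item;
        }
        // Pool empty: allocate a fresh instance.
        return new T();
    }

    public void Return(T item)
    {
        // Only keep the instance if we are still under the cap.
        if (Interlocked.Increment(ref _count) <= _maxSize)
        {
            _items.Enqueue(item);
        }
        else
        {
            Interlocked.Decrement(ref _count);
            // Dropped: the GC reclaims it, bounding RSS growth.
        }
    }
}

class PoolDemo
{
    static void Main()
    {
        var pool = new ObjectPool<StringBuilder>(maxSize: 2);
        pool.Return(new StringBuilder());
        pool.Return(new StringBuilder());
        pool.Return(new StringBuilder()); // over the cap: dropped

        Console.WriteLine(pool.Rent() != null); // True
    }
}
```

Note that .NET also ships Microsoft.Extensions.ObjectPool, whose DefaultObjectPool applies the same drop-when-full policy; rolling a small pool by hand, as sketched here, avoids the extra dependency.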
…nto olivierg/ReduceMemoryAllocation
This PR reduces memory allocations. Sending a counter is now around twice as fast as before.
Reviewing commit by commit is recommended.