Real world benchmarks #44

@dzmitry-lahoda

Description

Goals:

  1. Avoid real performance regressions

  2. Automate testing without false positives

  3. Allow experimenting with improvements against the container's own previous versions.

  4. Provide other containers with an example of such testing, so containers can stay viable longer against pure functional composition.

  5. Allow reasonable measurement of the implementation of https://bitbucket.org/dadhi/dryioc/issues/197

Means:

  1. Cover several scenarios (web site, web server, desktop, mobile, CLI, networking server, database, nano-services/actors) [2]. Check the Java world for documented cases. Document each case and the reasoning behind its object graph.
  2. Generate all classes via T4 templates (.tt), not manual coding.
  3. Reference several versions of the DI container, plus the latest, in the csproj; e.g. major or specified versions downloaded from NuGet.
  4. Run BenchmarkDotNet (BDN) to get structured output against each chosen previous version.
  5. Apply proper statistical comparison measures to avoid false negatives caused by fluctuations (need to recall the CS article I have seen about this and act accordingly). Compute stats on {moment0, moment1, moment2} * {gc, mem, time, cpu} * {workload1, workload2, ..., workloadX}. Possibly prune outliers and rerun on failure.
  6. Set up and document each assertion and its reasoning so it is easy to tune.
  7. Run tests on several machines/VMs (and under load, e.g. while gaming) to verify the stats/comparisons are done right.
  8. Use complex container features that are not exercised by tests covering many containers.
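The outlier-pruning and comparison idea in step 5 could be sketched roughly like this. This is a minimal, stdlib-only Python sketch of the statistics, not DryIoc or BDN code: `prune_outliers`, `regressed`, the 1.5×IQR pruning rule, and the z=1.96 threshold are all illustrative assumptions standing in for whatever BDN-output parser and significance test the issue ultimately adopts.

```python
import statistics

def prune_outliers(samples):
    # Drop points outside 1.5 * IQR of the quartiles -- a common pruning rule,
    # assumed here as a stand-in for whatever rule the benchmark harness picks.
    qs = statistics.quantiles(samples, n=4)
    q1, q3 = qs[0], qs[2]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in samples if lo <= x <= hi]

def regressed(baseline, candidate, z=1.96):
    # Flag a regression only when the candidate mean is worse than the baseline
    # mean by more than z standard errors of the difference (normal approximation),
    # so ordinary run-to-run fluctuation does not trip the assert.
    b, c = prune_outliers(baseline), prune_outliers(candidate)
    mb, mc = statistics.fmean(b), statistics.fmean(c)
    se = (statistics.variance(b) / len(b) + statistics.variance(c) / len(c)) ** 0.5
    return (mc - mb) > z * se
```

The same check would be run per metric and per workload from the {moments} * {gc, mem, time, cpu} * {workloads} grid, and a failed cell could trigger the prune-and-rerun step before the build is marked as regressed.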

Will not:

  1. Store and compare historical data.
  2. Test other containers.
  3. Run real code (hosting HTTP or reading from storage).

Links:

#27

danielpalme/IocPerformance#103
