MooBench Benchmarking

This page provides information on the benchmarking of the inspectIT Agent performed by Jan Waller from the University of Kiel. The benchmarking was done with the MooBench tool. The page also presents our impressions of the document as well as points that should be discussed.

Document

The document provided to us by Jan Waller can be downloaded here

Impressions and Discussions

Please add your impressions here after reading the document.

ISE

  • On page 149 Jan mentions the hash function we employ during data collection, which they had to change in order to perform the experiments. I still don't get what he is referring to (see the first sketch after this list for my reading of it). I quote: "Furthermore, we encounter additional challenges with high workloads: InspectIT employs a hash function when collecting its monitoring records. Due to the present implementation, it is only capable of storing one monitoring record per millisecond per monitored method. Thus, the MooBench approach of rapidly calling the same method in a loop is not easily manageable. In order to benchmark higher workloads, we also change this behavior to one monitoring record per nanosecond per monitored method. At least with our employed hardware and software environment, this small change proves to be sufficient. Otherwise, the hash function could easily be adapted to use a unique counter instead of a timestamp." In the conclusion he also mentions it as the first focus for improving performance: "The gravest adjustment is required in the employed hash function. But this behavior can also be considered a bug and might be fixed in future versions. Otherwise, at least a baseline evaluation of the monitoring tool is feasible."
    • What exactly is this hash function used for?
    • What does "unique counter vs. timestamp" mean?
    • They write that they changed this for their experiments; if so, how exactly?
  • The paper states that under heavy load inspectIT crashes due to heavy garbage collection. They also say that the ListSizeStrategy used in the tests was configured with a value of 1,000,000, which is just too much. Everything collected as monitoring data is added to lists (only via soft references, I think), so with a monitored method that finishes in 0 ms we will of course constantly be in garbage collection (see the second sketch after this list). Thus, I don't consider this a serious problem.
  • My first impression from the results was that having the timer and isequence sensors on a monitored method costs 161 microseconds. In fact that figure is for the whole recursion of depth 10, so it comes down to roughly 16 microseconds (161 µs / 10) per monitored method with both sensors active. I don't see that as too high.
  • The experiments show that we introduce higher overhead when the CMR is not there than when it is. We should tackle this; for me it is a real problem. We should never be slower when the CMR is not even there.
  • The experiments described in section 11.1.4 are not quite clear to me. It seems they introduced some Kieker-based components there.
  • It seems that they changed quite a lot of our code while doing the experiments. So do they have our source code or not?
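
Our current reading of the hash function issue, as a minimal hypothetical Java sketch (the class and method names below are our own assumptions, not the actual agent code): if the key of a monitoring record is built from the method id plus a millisecond timestamp, then a tight benchmark loop produces many invocations within the same millisecond and they all map to the same key, so only one record per millisecond per method survives. A nanosecond timestamp (the change Jan made) or a unique counter (his suggested fix) gives every invocation its own key.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the record-key behavior described on page 149.
public class RecordKeySketch {

    // One entry per key: with a millisecond-based key, all invocations of the
    // same method within one millisecond overwrite each other here.
    private final ConcurrentHashMap<String, Object> records = new ConcurrentHashMap<>();

    private final AtomicLong counter = new AtomicLong();

    // Key as we understand the present implementation:
    // at most one record per monitored method per millisecond.
    String millisecondKey(long methodId) {
        return methodId + "-" + System.currentTimeMillis();
    }

    // The adjustment made for the benchmarks: nanosecond resolution.
    String nanosecondKey(long methodId) {
        return methodId + "-" + System.nanoTime();
    }

    // The alternative Jan suggests: a unique counter instead of a timestamp,
    // so every invocation gets its own record no matter how fast it returns.
    String counterKey(long methodId) {
        return methodId + "-" + counter.incrementAndGet();
    }

    void store(String key, Object record) {
        records.put(key, record); // same key => the previous record is replaced
    }
}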
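
For the garbage-collection point, a second hypothetical sketch (again our own simplification, not the actual ListSizeStrategy code): a size-based sending strategy with a threshold of 1,000,000 keeps up to a million records alive in a list between sends, and with a monitored method that returns in ~0 ms that list fills extremely fast, so allocation and collection of the buffered records keeps the garbage collector busy for the whole run.

import java.util.ArrayList;
import java.util.List;

// Hypothetical simplification of a size-based sending strategy.
public class SizeBasedBufferSketch {

    private static final int SEND_THRESHOLD = 1_000_000; // value used in the experiments

    private final List<Object> buffer = new ArrayList<>();

    // Called once per finished monitored invocation.
    void addRecord(Object record) {
        buffer.add(record);
        if (buffer.size() >= SEND_THRESHOLD) {
            send();
        }
    }

    private void send() {
        // Until this point all 1,000,000 records are kept reachable; clearing
        // them releases a large object graph at once, and the cycle repeats
        // immediately because the monitored method finishes in ~0 ms.
        buffer.clear();
    }
}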
