
MooBench Benchmarking


This page provides information on the benchmarking of the inspectIT Agent performed by Jan Waller from the University of Kiel. The benchmarking was done with the MooBench tool. The page also presents our views on the document as well as discussion points that should be addressed.

Document

The document provided to us by Jan Waller can be downloaded here.

Impressions and Discussions

Please add your impressions here after reading the document.

ISE

  • On page 149, Jan mentions the hash function we employ during data collection, which they had to change in order to perform the experiments. I still don't get what he is referring to. I quote: "Furthermore, we encounter additional challenges with high workloads: InspectIT employs a hash function when collecting its monitoring records. Due to the present implementation, it is only capable of storing one monitoring record per millisecond per monitored method. Thus, the MooBench approach of rapidly calling the same method in a loop is not easily manageable. In order to benchmark higher workloads, we also change this behavior to one monitoring record per nanosecond per monitored method. At least with our employed hardware and software environment, this small change proves to be sufficient. Otherwise, the hash function could easily be adapted to use a unique counter instead of a timestamp." In the conclusion he also names it as the first focus for improving performance: "The gravest adjustment is required in the employed hash function. But this behavior can also be considered a bug and might be fixed in future versions. Otherwise, at least a baseline evaluation of the monitoring tool is feasible."
    • What is this hash thing? (See the first sketch after this list.)
    • What does "unique counter" vs. "timestamp" mean?
    • They wrote that they fixed this; if so, how?
  • The paper states that under heavy load inspectIT crashes due to heavy garbage collection. They also say that the ListSizeStrategy used in the tests had a value of 1,000,000, which is just too much. Everything collected as monitoring data is added to lists, and although I think those are only soft references, with monitored methods finishing in ~0 ms we will of course always be in garbage collection (see the second sketch after this list). Thus, I don't take this as a serious problem.
  • My first impression from the results was that having the timer and isequence sensors on a monitored method costs 161 microseconds. But in fact that figure is for the whole recursion of depth 10, so we can conclude it is ~17 microseconds per method call with both sensors active. I don't see that as too high.
  • The experiments show that we introduce higher overhead when the CMR is not there than when it is. We should tackle this; for me it's a problem. We should never be slower when the CMR isn't even there.
  • The experiments described in section 11.1.4 are not quite clear to me. It seems they introduced some Kieker-based functionality there.
  • It seems that they changed a lot in our code while doing the experiments. So do they have our source code or not?
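
To make the hash question concrete, here is a minimal, self-contained sketch of the behavior the paper describes. The class names and the exact hashCode/equals composition are our assumptions (we do not have the modified sources), but the mechanism matches MHU's explanation below: a hashCode/equals derived from a millisecond-resolution timestamp collapses all records created for the same method within the same millisecond, whereas a unique counter keeps every record.

import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical record keyed the way the paper describes: method id + timestamp.
// java.sql.Timestamp inherits hashCode() from java.util.Date, which is based on
// getTime() and therefore has millisecond resolution.
class TimestampKeyedRecord {
    final long methodId;
    final long timestampMillis = System.currentTimeMillis();

    TimestampKeyedRecord(long methodId) {
        this.methodId = methodId;
    }

    @Override
    public int hashCode() {
        return (int) (methodId ^ timestampMillis);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof TimestampKeyedRecord)) return false;
        TimestampKeyedRecord r = (TimestampKeyedRecord) o;
        return methodId == r.methodId && timestampMillis == r.timestampMillis;
    }
}

// The "unique counter" variant Jan suggests: every record is distinct,
// independent of how fast the monitored method is called.
class CounterKeyedRecord {
    private static final AtomicLong SEQ = new AtomicLong();
    final long methodId;
    final long sequence = SEQ.incrementAndGet();

    CounterKeyedRecord(long methodId) {
        this.methodId = methodId;
    }

    @Override
    public int hashCode() {
        return (int) (methodId ^ sequence);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof CounterKeyedRecord)) return false;
        CounterKeyedRecord r = (CounterKeyedRecord) o;
        return methodId == r.methodId && sequence == r.sequence;
    }
}

public class HashCollisionDemo {
    public static void main(String[] args) {
        Set<TimestampKeyedRecord> byTimestamp = new HashSet<>();
        Set<CounterKeyedRecord> byCounter = new HashSet<>();
        // MooBench-style tight loop: the same method is "monitored" many times,
        // usually all within a single millisecond.
        for (int i = 0; i < 1000; i++) {
            byTimestamp.add(new TimestampKeyedRecord(42L));
            byCounter.add(new CounterKeyedRecord(42L));
        }
        // Timestamp-keyed records created in the same millisecond are equal and
        // collapse to one entry; counter-keyed records are all kept.
        System.out.println("timestamp-keyed stored: " + byTimestamp.size());
        System.out.println("counter-keyed stored:   " + byCounter.size());
    }
}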
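
And to illustrate the garbage-collection point: a hypothetical sketch of a list-size-based sending strategy, not inspectIT's actual ListSizeStrategy code. With the threshold at 1,000,000 and monitored methods finishing in ~0 ms, the buffer fills with short-lived record objects far faster than batches can be sent off; soft references only let the JVM drop data under memory pressure, they don't remove the allocation churn.

import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a list-size-based sending strategy (placeholder names).
// Records are buffered until the list reaches a configured size, then the
// whole batch is handed off for sending to the CMR.
class ListSizeBuffer<T> {
    private final int listSize; // e.g. 1,000,000 in Jan's experiments
    private final List<SoftReference<T>> buffer = new ArrayList<>();

    ListSizeBuffer(int listSize) {
        this.listSize = listSize;
    }

    synchronized void add(T record) {
        // Every monitored invocation allocates a record and a SoftReference;
        // under MooBench-style load this is a steady stream of young objects
        // that keeps the garbage collector permanently busy.
        buffer.add(new SoftReference<>(record));
        if (buffer.size() >= listSize) {
            send(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    private void send(List<SoftReference<T>> batch) {
        // Hand-off to the CMR would happen here; omitted in this sketch.
    }
}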

MHU

  • In addition to ISE's points:
    • About section 11.1.4: yes, it looks like they added an enable/disable switch for whether a sensor is active (collects data) and also for whether it sends the data. Even if that is not much code, it will have an influence on the 17 microseconds per method call.
    • Hashcode function: I think this has something to do with the resolution of java.util.Date.getTime(), which returns the number of milliseconds since January 1, 1970, 00:00:00 GMT. DefaultData.java uses the hashCode function of java.sql.Timestamp, which uses the hashCode function of its superclass. That superclass is java.util.Date, which calls its getTime() method, so two records for the same method within the same millisecond end up with the same hash (the behavior illustrated in the first sketch above).
  • Wrote an e-mail to Jan Waller asking him to provide the modified agent and CMR.
  • What I might be able to do is use benchIT to re-test. We would not be able to use exactly the same method (due to the recursion), but it should be possible to measure the execution time of one method (maybe with calls to other methods, not benchIT test methods) without inspectIT, with inspectIT running (Agent + CMR), and with only the Agent; see the sketch below.
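
A rough sketch of the measurement loop such a re-test could use (plain Java, independent of benchIT's actual harness API; all names here are placeholders): run it once without inspectIT, once with only the Agent attached, and once with Agent + CMR, then compare the per-call averages.

// Minimal timing loop for the proposed re-test. Run it plain, then with
// -javaagent pointing at the inspectIT agent (Agent only), then with the
// CMR running as well, and compare the printed per-call averages.
public class OverheadProbe {

    // The method to monitor: does a little work and calls a helper method,
    // standing in for "one method with calls to other methods".
    static long monitoredMethod(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += helper(i);
        }
        return sum;
    }

    static long helper(int i) {
        return i * 31L;
    }

    public static void main(String[] args) {
        final int warmup = 100_000;  // let the JIT compile everything first
        final int calls = 1_000_000;

        for (int i = 0; i < warmup; i++) {
            monitoredMethod(10);
        }

        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < calls; i++) {
            sink += monitoredMethod(10);
        }
        long elapsed = System.nanoTime() - start;

        // 'sink' is printed so the JIT cannot eliminate the calls entirely.
        System.out.println("avg per call: " + (elapsed / calls) + " ns (sink=" + sink + ")");
    }
}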