Data serialization possibilities for inspectIT storage solution
To be able to store inspectIT data on disk, the data has to be serialized. However, Java's native serialization comes with high overhead and many other problems, so I examined the different possibilities that can be used to persist data on disk effectively. Effectively means as fast as possible, with the resulting file as small as possible. Since serialization to binary data takes less space than any human-readable (or other) format, I only analyzed the possibilities that serialize data into an "array of bytes".
Generally there are three different solutions that I believe are acceptable:
- Cross-language data transformation libraries: Protocol Buffers or Thrift
- Fast Java serialization library Kryo
- Externalizable Interface
Data transformation libraries
Both Google and Facebook developed cross-language data transformation libraries to fit their needs for transferring data between the different languages they use. Google created Protocol Buffers and Facebook created Thrift (although there is word that Thrift was actually developed by ex-Google employees who moved to Facebook). Thrift was started as an open-source project right away and was later given to the Apache foundation, while Google open-sourced Protocol Buffers with version 2.0.
The approach both libraries take is much the same. Both define an IDL for describing the structure of the data that needs to be transferred. These IDLs are more or less alike: both provide general types that exist in most programming languages, and neither supports polymorphism, but there are some small differences. For example, I like that Thrift defines types like Map, Set and List that map directly to HashMap, HashSet and ArrayList in Java, while Protocol Buffers only has a List kind of collection. Furthermore, Thrift also allows defining services in the IDL, so that transferring the data can be easier (but we don't need this at all). Here is an example of such a definition:
Code Block
message Person {
    required int32 id = 1;
    required string name = 2;
    optional string email = 3;
}
Or in Thrift:

Code Block
struct UserProfile {
    1: i32 uid,
    2: string name,
    3: string blurb
}
So basically, for each class that we wish to serialize with these libraries, we have to create a similar definition. From this definition, code is generated via the compilers provided by Protocol Buffers and Thrift. It is possible to generate code for any supported language, Java included of course, and these generated classes know how to serialize and de-serialize objects. We can thus call this a kind of static serialization, because all the definitions are known in advance, and this is the basic reason why it is much faster than Java serialization or other serialization libraries that use reflection.
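To give an impression, here is a minimal sketch of serializing the UserProfile struct from above with the Thrift-generated Java class. TSerializer, TDeserializer and TBinaryProtocol come from the Thrift Java library; the setter names are my assumption, derived from the IDL field names:

Code Block
import org.apache.thrift.TException;
import org.apache.thrift.TSerializer;
import org.apache.thrift.TDeserializer;
import org.apache.thrift.protocol.TBinaryProtocol;

public class ThriftExample {
    public static void main(String[] args) throws TException {
        // fill the generated object (setters assumed from the IDL field names)
        UserProfile profile = new UserProfile();
        profile.setUid(1);
        profile.setName("John");
        profile.setBlurb("Some text");

        // serialize to a byte array using the binary protocol
        TSerializer serializer = new TSerializer(new TBinaryProtocol.Factory());
        byte[] bytes = serializer.serialize(profile);

        // de-serialize back into a fresh object
        UserProfile copy = new UserProfile();
        new TDeserializer(new TBinaryProtocol.Factory()).deserialize(copy, bytes);
    }
}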
What I definitely did not like are the generated classes (I only produced them for Protocol Buffers). They look somewhat ugly, have many inner classes, and are confusing. Not to mention that for what I would call a very simple definition, the compiler generated a class with 2,000 lines. Furthermore, there is a usage problem with everything that you want to serialize. For example, for our TimerData we would first need to create an object of a generated TimerDataProtos class (which basically has the same getters and setters as our class and is the one that can actually be serialized fast), then "clone" the data from our object to this one and do the serialization, and do the same thing in the other direction when de-serializing. That seems like a lot of new classes and a lot of new code.
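Using the Person definition from above, this cloning step would look roughly like the sketch below. The outer class name PersonProtos and the OurPerson class are my own assumptions for illustration; the builder, toByteArray() and parseFrom() calls are the standard Protocol Buffers Java API:

Code Block
import com.google.protobuf.InvalidProtocolBufferException;

public class ProtobufExample {

    // Hypothetical stand-in for one of our own data classes (e.g. TimerData).
    static class OurPerson {
        private final int id;
        private final String name;
        private final String email;
        OurPerson(int id, String name, String email) {
            this.id = id; this.name = name; this.email = email;
        }
        int getId() { return id; }
        String getName() { return name; }
        String getEmail() { return email; }
    }

    public static byte[] serialize(OurPerson ours) {
        // "clone" the data from our own object into the generated one
        PersonProtos.Person proto = PersonProtos.Person.newBuilder()
            .setId(ours.getId())
            .setName(ours.getName())
            .setEmail(ours.getEmail())
            .build();
        // the generated class knows how to turn itself into bytes
        return proto.toByteArray();
    }

    public static OurPerson deserialize(byte[] bytes) throws InvalidProtocolBufferException {
        // parse the generated object, then copy the data back into our own class
        PersonProtos.Person proto = PersonProtos.Person.parseFrom(bytes);
        return new OurPerson(proto.getId(), proto.getName(), proto.getEmail());
    }
}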
What I definitely like is the backward and forward compatibility that these two libraries provide out of the box if some basic rules are followed. Compatibility is achieved by associating a number with each field (see the sample definitions above). New fields can thus be added to the definition without any problems, and when a field should be removed in a new class version, the definition simply has to keep the old field, so that old data can still be read (the field will just be ignored). A sketch of such an evolved definition follows.
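For example, the Person definition from above could evolve like this while old data stays readable (the phone field and the comments are my own illustration):

Code Block
message Person {
    required int32 id = 1;
    required string name = 2;
    optional string email = 3;    // no longer used, but kept so old data can still be read
    optional string phone = 4;    // new field with a new number; old readers simply skip it
}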
Although the two libraries are quite similar, I found a nice table that compares them:
...
It is clear that Thrift wins on functionality, and this is probably a consequence of having people who already worked on Protocol Buffers development, actually creating it, so they were able to add any missing functionality straight away. On the other side, Thrift lacks any kind of documentation, and this is a serious problem. Except for one very good white paper, it is very difficult to find any other information, tutorials, explanations, etc.
Kryo library
While I was trying to find out which of the two, Thrift or Protocol Buffers, delivers better performance, I stumbled upon a nice benchmark of different serialization libraries (http://code.google.com/p/thrift-protobuf-compare/wiki/Benchmarking). The results there are presented in four charts:
- Total time: create object, serialize and de-serialize
- Serialize time
- De-serialize time
- Byte length after serialization
Kryo is at or near the top of all of these lists, so I supposed it was worth checking out. What I found is a very small and very nice library that really has some nice features. So let's jump to the features right away:
...
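Beyond the feature list, here is a minimal sketch of how the library is used to write an object to a file and read it back. This assumes the Kryo 2-style API with the Output and Input classes, and a hypothetical Person class of our own:

Code Block
import java.io.FileInputStream;
import java.io.FileOutputStream;
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

public class KryoExample {

    // Hypothetical data class; by default Kryo needs a no-arg constructor.
    static class Person {
        int id;
        String name;
        String email;
        Person() {}
        Person(int id, String name, String email) {
            this.id = id; this.name = name; this.email = email;
        }
    }

    public static void main(String[] args) throws Exception {
        Kryo kryo = new Kryo();
        // registering the class up front gives smaller output and faster lookups
        kryo.register(Person.class);

        // serialize: write the object to a file
        Output output = new Output(new FileOutputStream("person.bin"));
        kryo.writeObject(output, new Person(1, "John", "john@example.com"));
        output.close();

        // de-serialize: read the object back
        Input input = new Input(new FileInputStream("person.bin"));
        Person person = kryo.readObject(input, Person.class);
        input.close();
    }
}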
Externalizable Interface
This would be the last solution, using no additional libraries. It should provide very fast serialization, but it requires a lot of work and can also be error prone.
It consists of making all our classes implement the Externalizable interface, thus implementing these two methods:
Code Block
public void writeExternal(ObjectOutput out) throws IOException;
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException;
In these methods we have to create the byte representation on our own, using the basic write and read methods provided by ObjectOutput and ObjectInput.
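A minimal sketch of what this looks like for a hypothetical Person class (the field names are my own illustration; note that the read order must exactly match the write order, which is one source of the error-proneness mentioned above):

Code Block
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class Person implements Externalizable {
    private int id;
    private String name;
    private String email;

    // a public no-arg constructor is required for Externalizable
    public Person() {}

    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(id);
        out.writeUTF(name);
        out.writeUTF(email);
    }

    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        // fields must be read in exactly the same order they were written
        id = in.readInt();
        name = in.readUTF();
        email = in.readUTF();
    }
}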
I don't see any advantage of this over option 2, the Kryo library. Forward and backward compatibility would also need to be implemented by hand. Thus, I think there is no reason to consider it a valid option for us.
Interesting links
Discover the secrets of the Java Serialization API – Basic introduction
Comparison of data serialization formats – Wikipedia
Protocol Buffers – Official page
Getting started with Apache Thrift
Thrift vs ProtocolBuffers
Kryo Project – Official page
Java serialization benchmarking
Java NIO