#1 2013-04-18 04:19:04

bsanlang
Member
Registered: 2013-04-17
Posts: 1

google snappy compression

Hello:
  Can you implement Google Snappy compression for the ORM?

Offline

#2 2013-04-18 05:43:37

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,659
Website

Re: google snappy compression

But there is already zip/deflate and our SynLZ algorithm for Delphi clients.

Snappy/zippy is very close to our SynLZ algorithm (and from my benchmarks, SynLZ is faster for compression), and much more complex to implement, with no existing pure Delphi version.
Not worth it, IMHO.

There are other areas for speed improvement in the framework, e.g. distributed work, or improved multi-threaded processing with lockless writing.
If you publish some code, we will include it, of course!

We already investigated the LZ4 and QuickLZ algorithms, but they were not worth it - see http://synopse.info/forum/viewtopic.php?id=568

I would not add complexity and dependencies unless there is a noticeable server-side gain over SynLZ, or the ability to use it with browsers - which is not the case here.

Offline

#3 2013-04-19 15:42:23

Therinor
Member
Registered: 2012-01-20
Posts: 5

Re: google snappy compression

ab wrote:

We already investigated the LZ4 and QuickLZ algorithms, but they were not worth it - see http://synopse.info/forum/viewtopic.php?id=568

But was the comparison fair?
You seem to have compared a file I/O operation with a pure in-memory benchmark, giving an obvious advantage to the second one.

Offline

#4 2013-04-19 16:07:03

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,659
Website

Re: google snappy compression

Therinor wrote:

You seem to have compared a file I/O operation with a pure in-memory benchmark, giving an obvious advantage to the second one.

If you take a look at the source code of the LZ4 benchmark, you will see that it tests the compression purely in memory.
https://code.google.com/p/lz4/source/br … nk/bench.c
In fact, in this code, the reported value is the fastest of all iterations, with no high-precision timing, whereas in our benchmarks we compute the average and use high-resolution timing.
This bench.c is therefore wrong for small or medium blocks of memory (up to 1 MB, e.g.), since its lack of high-resolution timing makes it less accurate than our test: reporting the fastest iteration gives an error factor close to the timer resolution - for the C function gettimeofday(), I suspect around 10 or 18 ms - whereas the QueryPerformanceCounter API we use for the SynLZ benchmark is much more precise.
So IMHO it is your comment which is pretty unfair.

All three algorithms are very close to each other - they differ only in small implementation patterns, and are mostly based on their LZO ancestor.
So you may only find a few percent of speed difference for compression. Decompression is a bit more diverse, and there SynLZ may be slower than the others - but always faster than zip.
But in our case, it is compression speed which matters, and SynLZ is almost always the fastest. Only in x64 mode will it be slower than Snappy, since there is no asm-optimized version yet, just a pascal version, which is still pretty fast.

Offline
