Yo!
I didn't find anything with a quick search (or didn't understand what I found).
Would it be possible to use SynLZCompress1() to compress data in, let's say, 10 KB chunks instead of the whole data at once?
I have a use case where I would like to avoid holding a single large piece of data in memory all at once (or where there is a possibility of that).
To be clear: I would like to get the same result from the chunks as I would if I had all the data in one large buffer. So the data would be fed through the same pipeline, just not in one go. (So not just small individual pieces of compressed data, but the same binary output as if I had compressed everything at once. I hope someone understands this.)
-Tee-
What makes SynLZ efficient is that it compresses a whole in-memory buffer at once.
It is not the only algorithm designed this way. Lizard, for instance, has the same feature/limitation.
See https://github.com/inikep/lizard/blob/l … _format.md
So there is no stream-oriented interface for SynLZ.
What you could do is cut the main data into chunks.
This is what we do e.g. in FileSynLZ(), which uses 128 MB compression chunks.
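For instance, something like this untested sketch (it assumes the SynLZcompress1() and SynLZcompressdestlen() signatures from SynLZ.pas; the two 4-byte length prefixes per chunk are just an illustrative framing, not the actual FileSynLZ file format):

    uses
      Classes, SynLZ;

    const
      CHUNK = 10 * 1024; // 10 KB input chunks, as in the question

    procedure CompressInChunks(Source, Dest: TStream);
    var
      src: array[0 .. CHUNK - 1] of AnsiChar;
      dst: array of AnsiChar;
      srcLen, dstLen: integer;
    begin
      SetLength(dst, SynLZcompressdestlen(CHUNK)); // worst-case compressed size
      repeat
        srcLen := Source.Read(src, CHUNK); // next uncompressed chunk
        if srcLen <= 0 then
          break;
        dstLen := SynLZcompress1(@src, srcLen, pointer(dst));
        Dest.WriteBuffer(srcLen, SizeOf(srcLen)); // chunk header: plain size
        Dest.WriteBuffer(dstLen, SizeOf(dstLen)); // chunk header: packed size
        Dest.WriteBuffer(pointer(dst)^, dstLen);  // compressed payload
      until srcLen < CHUNK;
    end;

Note that each chunk is compressed independently, so the output is not byte-identical to a single SynLZCompress1() call over the whole buffer: back-references cannot cross a chunk boundary, and the decompressor has to walk the same chunk headers.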
OK,
I could do something like that.
If the calling code is something like this:

    while HasData do
      Compress(GetFewBytesOfData);

then Compress() could keep an internal buffer which is flushed when it gets full, and once more at the end.
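A rough, untested sketch of what I mean (TChunkedWriter and its method names are made up; the per-chunk header follows the sketch above):

    uses
      Classes, SynLZ;

    const
      CHUNK = 10 * 1024;

    type
      TChunkedWriter = class
      private
        fDest: TStream;
        fBuf: array of AnsiChar;
        fUsed: integer;
        procedure FlushChunk;
      public
        constructor Create(aDest: TStream);
        procedure Write(const Data; Len: integer);
        procedure Flush; // call once, after the last Write()
      end;

    constructor TChunkedWriter.Create(aDest: TStream);
    begin
      fDest := aDest;
      SetLength(fBuf, CHUNK);
    end;

    procedure TChunkedWriter.FlushChunk;
    var
      dst: array of AnsiChar;
      dstLen: integer;
    begin
      if fUsed = 0 then
        exit; // nothing buffered yet
      SetLength(dst, SynLZcompressdestlen(fUsed));
      dstLen := SynLZcompress1(pointer(fBuf), fUsed, pointer(dst));
      fDest.WriteBuffer(fUsed, SizeOf(fUsed));   // chunk header: plain size
      fDest.WriteBuffer(dstLen, SizeOf(dstLen)); // chunk header: packed size
      fDest.WriteBuffer(pointer(dst)^, dstLen);  // compressed payload
      fUsed := 0;
    end;

    procedure TChunkedWriter.Write(const Data; Len: integer);
    var
      p: PAnsiChar;
      n: integer;
    begin
      p := @Data;
      while Len > 0 do
      begin
        n := CHUNK - fUsed; // free space left in the internal buffer
        if n > Len then
          n := Len;
        Move(p^, fBuf[fUsed], n);
        inc(fUsed, n);
        inc(p, n);
        dec(Len, n);
        if fUsed = CHUNK then
          FlushChunk; // buffer full: compress and emit one chunk
      end;
    end;

    procedure TChunkedWriter.Flush;
    begin
      FlushChunk; // emit the final partial chunk, if any
    end;

Then the caller would just call Write() for each incoming piece of data and Flush() once after the last one.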
I'll take a look at the file compression routine.
Thanks man.
-Tee-