#1 2011-10-05 06:52:44

mjustin
Member
Registered: 2010-12-14
Posts: 3

TSynLog with support for multiple processes writing to the same log

On Stack Overflow I read about TSynLog. I know a Java logging framework which implements a so-called "prudent mode" that locks the file using newer, operating-system-specific APIs to allow parallel writing (see http://stackoverflow.com/questions/7138 … plemented)

Would this be possible as an improvement in TSynLog? As I understand the logback source code, a lock is acquired on a region at the end of the log file and released after writing.

I have seen Delphi code on Stack Overflow that accesses the locking API, and I will try to find it again if this is of interest. I am also using the log4d library (at SourceForge) and would like to implement it there too, as it is our in-house standard logging library...

Regards
Michael Justin

Update:

http://stackoverflow.com/questions/1916 … -in-delphi

shows how a lock can be acquired
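For illustration, the "prudent mode" pattern described above (lock, append at end of file, release) can be sketched roughly as follows. This is a hedged Python sketch using POSIX advisory locks, not TSynLog or logback code; on Windows the corresponding calls are LockFileEx/UnlockFileEx, and all names here are made up for the example:

```python
import fcntl
import os

def prudent_append(path: str, line: str) -> None:
    """Append one log line, holding an exclusive lock while writing."""
    # Open in append mode so each write lands at the current end of file.
    with open(path, "a", encoding="utf-8") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)   # block until this process owns the file
        try:
            f.write(line + "\n")
            f.flush()
            os.fsync(f.fileno())        # make the record visible to other processes
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)

log_path = "/tmp/prudent_demo.log"
prudent_append(log_path, "hello from process %d" % os.getpid())
```

Because every process re-acquires the lock and re-seeks to the end for each record, interleaved writes from several processes cannot corrupt each other, at the cost of a lock round-trip per line.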

Last edited by mjustin (2011-10-05 06:57:50)


#2 2011-10-05 07:56:15

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,659
Website

Re: TSynLog with support for multiple processes writing to the same log

I want to get rid of this global file locking.
Some support people here do not like the fact that the log file cannot be read until the application is closed.
And they are right: I've just made a small modification to allow read sharing on the .log file.
See http://synopse.info/fossil/info/738156dbd9

Parallel / concurrent writing is not allowed in the current implementation of TSynLog.
For four reasons:
- I did not need this feature - and parallel logs tend to be difficult to follow;
- You can implement such parallel logging by publishing some kind of service which uses the main logging file on behalf of other processes - without changing how TSynLog works (this "shared service" would be implemented on top of it);
- The TSynLog logging can be shared among threads or even be thread-specific - there is a critical section which allows safe multi-threaded access;
- File-level locking via the LockFile/UnlockFile APIs is expensive in terms of speed, and is known to be buggy over networks: a logging system will be much slower if it uses such locking - my 2nd solution ("service"-based) sounds much less expensive (especially if you use a GDI-message-based transmission, which is very lightweight on a local computer).

Other implementations are welcome.
But I don't want the logging feature to become slow because of this new feature.

I may add a GDI-message-based multi-process logging feature.
It would work only locally on the same PC, but not over a network or from a Windows service (since Vista/Seven).
There are some points to be checked, especially the timing resolution, which must be handled on the running process side - in fact the messages would only carry textual content to the main logging process.

What do you think about that?


#3 2011-10-05 15:27:27

mjustin
Member
Registered: 2010-12-14
Posts: 3

Re: TSynLog with support for multiple processes writing to the same log

Thanks for your detailed answer! Here is the use case:

x users working with y applications on z terminal servers, every app needs to write a log file.

Possible solutions:

* every user session writes its own log file (see the disadvantage?)
* one log per application (exe1 -> exe1.log, exe2 -> exe2.log) does not work with basic file open modes like fmShareDenyWrite
* all apps write into one big log file (good for browsing)

Other options:

* write to a central application server with a UDP based service, write to central files from there (this is what I am implementing at the moment).

But I think that writing to the same file from different processes would not be slow. Even if the logging speed is lower with file locking, it is still better than no logging; in production mode the apps only log events every couple of seconds.
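The central-service approach mentioned above can be sketched minimally like this (illustrative Python with a made-up host and port, not the actual implementation): one collector process is the sole writer of the central log, so no cross-process file locking is needed, and every application just sends datagrams.

```python
import socket

HOST, PORT = "127.0.0.1", 51400  # hypothetical collector address

def send_log(sock: socket.socket, msg: str) -> None:
    # Fire-and-forget: the sender never blocks on the collector,
    # but UDP datagrams can be silently dropped.
    sock.sendto(msg.encode("utf-8"), (HOST, PORT))

def collect_one(server: socket.socket) -> str:
    # The collector is the single writer of the central log file,
    # so it can append with plain buffered I/O, no locking required.
    data, _addr = server.recvfrom(65535)
    return data.decode("utf-8")

# Local demonstration: collector and sender in one process.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind((HOST, PORT))
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_log(client, "app1: user logged in")
record = collect_one(server)
```

In a real deployment the collector would run as its own process and loop over `collect_one`, appending each record to the central file.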


#4 2011-10-05 16:54:02

ab
Administrator
From: France
Registered: 2010-06-21
Posts: 14,659
Website

Re: TSynLog with support for multiple processes writing to the same log

Of course, I understand your use case. A terminal server makes things both easier and more complex at the same time.

Over a network, the central logging service does make sense, as you are currently implementing.
But remember that UDP can lose packets, whereas TCP ensures that you receive all data. This may be an issue for error tracking.
Also consider regrouping all logging data on each client, then sending it at once every .. ms to the main server. This will be faster.
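That batching suggestion could be sketched like this (illustrative Python with arbitrary thresholds; the transport is left abstract so it could be the UDP service discussed earlier):

```python
import time

class BatchingLogger:
    """Buffer log lines and flush them in one send when either a
    size threshold or a time interval is reached (example values)."""

    def __init__(self, transport, max_lines=100, max_age_s=0.05):
        self.transport = transport      # callable taking one text blob
        self.max_lines = max_lines
        self.max_age_s = max_age_s
        self.buf = []
        self.oldest = None              # monotonic time of oldest buffered line

    def log(self, line: str) -> None:
        if not self.buf:
            self.oldest = time.monotonic()
        self.buf.append(line)
        if (len(self.buf) >= self.max_lines
                or time.monotonic() - self.oldest >= self.max_age_s):
            self.flush()

    def flush(self) -> None:
        if self.buf:
            self.transport("\n".join(self.buf))  # one network send per batch
            self.buf.clear()

# Demonstration with a list standing in for the network transport.
sent = []
logger = BatchingLogger(sent.append, max_lines=3)
for i in range(7):
    logger.log("event %d" % i)
logger.flush()  # drain the remainder on shutdown
```

The point of the design is that seven log calls here cost only three sends; over a network that amortization is where the speed-up comes from.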

A true SOA architecture with a Client-Server design would make it easier, of course:
- Client-level operations on each client PC;
- Server centralized operations.
But I suspect it is too late to change the design.



Powered by FluxBB