#1 Re: mORMot 1 » Stop the war! » 2022-02-24 17:24:52


We all woke up in a different world today.
This whole situation is so sad and unbelievable.
By starting this war Putin proved to the world that he is nothing less than the next Adolf Hitler.
He is totally insane and I'm sure he won't stop once he's done with Ukraine. Just listen to his spoken and written words...
It was a massive mistake by the rest of the world to "tolerate" the annexation of Crimea in 2014.

My grandma was a refugee escaping from Ukraine at the end of WW2.
My mom was born in Germany, and so was I. But 100% Ukrainian blood runs through my veins.

This all just feels so unreal...

Pavlo, stay safe!!!

Слава Україні! - Героям слава! (Glory to Ukraine! Glory to the heroes!)

#2 Re: mORMot 1 » SOA: ServiceUndefine/DeleteInterface » 2022-01-20 10:39:33

ab wrote:

Code added into TRestServerUriContext.ExecuteSoaByMethod slows down execution of regular methods

I guess that's because of unnecessary MethodIndex recalculation after each callback.

Which testcase/benchmark did you use to come to that conclusion? I'll use that test for comparing my numbers and doing optimizations.

ab wrote:

... and I am not sure it is very stable about the actual methods process.
It is very unsafe to play with the method indexes once the server is initialized, because a lot of code rely on them to be stable enough.

I just had a deeper dive into the mORMot2 sources. You are right, the current changes are not stable enough.
I'll try to do more modifications to make it stable and fast (including a heavy multi-threaded TestCase), just out of curiosity, as soon as I find some time.
If everything is stable without any performance penalties, then I'll do another pull request. If not, then I'll forget about dynamic service registration. wink

#3 Re: mORMot 1 » SOA: ServiceUndefine/DeleteInterface » 2022-01-17 14:22:44


That would be handy. Otherwise everyone has to revert his local *.lpi file(s) before committing. That's quite error-prone.

Please also add to gitignore:

I merged my cloned branch with mORMot2:master.
One of the last commits causes mormot2tests to fail to compile (too many params)...
See pull request for fix: https://github.com/synopse/mORMot2/pull … 14c64369a4

#4 Re: mORMot 1 » SOA: ServiceUndefine/DeleteInterface » 2022-01-17 10:25:57


All requested changes are done, committed and pushed for review.

Maybe *.lpi files should be added to git-ignore-files?

#5 mORMot 1 » SOA: ServiceUndefine/DeleteInterface » 2022-01-17 08:30:25

Replies: 7

Hi Arnaud,

please have a look at my GitHub pull request at https://github.com/synopse/mORMot2/pull/72

I need the ability to switch Interface implementation factories at runtime.
TestCase is added.

Kind regards,

#6 mORMot 1 » Getting rid of TSQLRestServerDB giant (read) lock » 2021-05-03 11:45:38

Replies: 4


currently there is a giant lock in TSQLRestServerDB database access.
Whenever you are writing OR reading something via the ORM Layer you will find yourself trapped in that lock (DB.Lock/DB.LockJSON/DB.Unlock/DB.UnlockJSON).
When using SQLite3 as your backend DB, this makes sense for writing: SQLite only allows a single concurrent writer anyway.
But imho this is very bad for reading from the db.
As long as ALL of your SELECT statements are executed fast and TSQLRestServerDB's internal JSON result caches aren't flushed too often (because of INSERT/UPDATE statements) you won't notice any delay.
But things start to become very slow if you have some not-so-fast db reads and a lot of database write operations flushing the cache all the time.

-> Every single SELECT statement will block all other concurrent db access until that SELECT statement's result data is fetched.

In my use-case scenario this is exactly what happens. There are some non-predictable and therefore non-optimizable SELECT statements coming from "user-land" taking 1 second or more to execute. If those statements are triggered by 10 concurrent client sessions in parallel, then I have an unresponsive DB for 10*1 seconds. All other tasks (be it background jobs accessing the db, or other users trying to read/write anything from the db) have to wait for all other db work to finish. This results in unresponsive "system hangs".

I'm currently in the stage of implementing and testing multiple reader/single writer SQLite3 db access in my codebase.
The idea is to have one single main writer connection, but multiple reader connections (one connection per (reader) thread).

This is what I've done so far:

- Add a ThreadSafe SQLite3 connection pool to TSQLRestServerDB.
- Assign one of those pooled connections to TSQLRestServerDB as its main connection.
- Make all those pooled SQLite connections use "SQLITE_OPEN_SHAREDCACHE", "WAL mode" and "PRAGMA read_uncommitted=true;", allowing a multiple-reader/single-writer pattern without "SQLITE 262/Table is locked" errors.
- In TSQLRestServerDB.MainEngineList() use a thread safe connection from pool instead of main connection. Only a short cache-lookup-lock is required here, the DB.LockJSON() call is gone.
- Make each pooled connection use its own prepared-statement cache.
- Make each pooled connection use a common JSON Result cache shared by all connections.

First results look promising so far...

Are there any opinions why it SHOULDN'T be done like that? Are there any drawbacks I didn't think of?
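For illustration outside of mORMot, here is a minimal Python sketch of the same multiple-reader/single-writer idea (stdlib sqlite3 only; file and table names are made up, this is not the mORMot connection pool): with WAL mode enabled, pooled reader connections are not blocked by an open write transaction.

```python
import sqlite3, tempfile, os

# WAL mode needs a real file, not :memory:
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None: autocommit mode, we issue BEGIN/COMMIT ourselves
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")  # enable write-ahead logging
writer.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
writer.execute("INSERT INTO docs (body) VALUES ('first')")

# A pool of independent reader connections, one per worker thread in a real server
readers = [sqlite3.connect(path) for _ in range(3)]

# Start a write transaction but do not commit yet...
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO docs (body) VALUES ('second')")

# ...readers are not blocked: WAL serves them the last committed snapshot
for r in readers:
    assert r.execute("SELECT COUNT(*) FROM docs").fetchone()[0] == 1

writer.execute("COMMIT")
for r in readers:
    r.close()
```

With a rollback journal instead of WAL, the readers above would block (or fail with "database is locked") while the write transaction is open.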


#7 Re: mORMot 1 » Revision 2.x of the framework » 2020-03-06 22:40:04

ab wrote:

@Javierus "split the project in subprojects"... in fact I was going into the opposite direction.
A lot of units will always be shared among projects.
And due to that, we have regularly a lot of problems with users having e.g. SynCommons.pas from SynPDF project and mORMot project in their path, and failing to compile.
So my idea would be to remove separate projects, and download and use mORMot as main source. If units are refactored into smaller and well-scoped units, it would be just fine to use only SynPDF or SynMustache and not the REST/ORM/SOA part.

I'm totally with you over here. Splitting into smaller units is a good idea, but don't split into sub-projects. That's worth nothing and only introduces new problems. One project to rule them all ... wink

About compiler support:
Hmm, well, that's a difficult decision...
Delphi 5 and Kylix support is no longer needed, for sure. I think no one is seriously using those compilers in combination with mORMot.
But then things get complicated. I guess there are some users for every other version...
Let's wait for the survey results on which versions are worth supporting in mORMot2.
We're using Delphi 10.3 and D7 (only one older project left) over here. But the D7 project has to be merged to 10.3 anyway.
The current mORMot codebase is very stable and mature in general. Only bugfixes in 1.18 version from now on will be perfectly fine.

Maybe it's even worth the big cut to support Delphi 10.3+ and FPC only?
Being able to use current language features in mORMot2 would be nice, wouldn't it?

But that won't work in reality I guess wink

#8 Re: mORMot 1 » Relay server scenario » 2020-01-14 13:10:29


Hi Daniel,

without going into too much detail, I would implement it as such:

- Define relay-user-accounts on your relay server. Store the info to which in-office-server that account should connect to.
- Let in-office-server connect to your relay server.
- Establish a web-socket connection to relay-server.
- Let home-clients connect to that relay-server.
- Intercept the http request from the client, store the request in the relay-session, let the client wait here in blocking mode
- Send websocket-notification to the in-office-server
- Load the original client-request from relay-server, process/reassign http headers if required an let the in-office-server handle that request
- Intercept the http reply and redirect it to the relay-server
- Send reply to home-client.

Nice to see that you are evaluating mORMot for that task; I'm sure it could be done using mORMot. wink


#9 Re: mORMot 1 » Store PDF as Blobs in SQlite - performance issues? » 2019-12-17 17:08:15

Vitaly wrote:

Could you share details of your experience, please?

Well, let's start like this: if you really want to store large blob data in a database, then SQLite is one of your best friends for doing so, because of its basic design. It's a big advantage to have your storage engine running in-process for that task. Put that large blob table into its own db file and use data-sharding principles if possible. Don't put that table in your main db file.
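As a language-neutral illustration of the "own db file" advice (a sketch with made-up names, not mORMot code): SQLite's ATTACH lets you keep blob payloads in a second database file while the metadata stays in the main file, so the main file's pages are never cluttered by blob data.

```python
import sqlite3, tempfile, os

d = tempfile.mkdtemp()
main_db = os.path.join(d, "main.db")
blob_db = os.path.join(d, "blobs.db")

con = sqlite3.connect(main_db)
con.execute("ATTACH DATABASE ? AS blobs", (blob_db,))

# Metadata lives in the main file, payloads in the attached file
con.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE blobs.payload (doc_id INTEGER PRIMARY KEY, data BLOB)")

con.execute("INSERT INTO doc (id, name) VALUES (1, 'report.pdf')")
con.execute("INSERT INTO blobs.payload (doc_id, data) VALUES (1, ?)",
            (b"%PDF-1.4 ..." * 100,))  # dummy payload standing in for a real PDF
con.commit()

# Cross-database joins work transparently
row = con.execute("SELECT d.name, length(p.data) FROM doc d "
                  "JOIN blobs.payload p ON p.doc_id = d.id").fetchone()
```

The blob file can then be backed up, vacuumed, or sharded independently of the main database.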

A short story with some oversimplified theory:

Once upon a time in database land, there was a performance issue with a Firebird RDBMS based 3-tier app. That must have been about 12 years ago.

Everything worked like a charm, but once one specific table had reached about 250,000 records containing 180 GB of blob data, things started getting really weird... Indexes were fine, db settings were fine, page size was fine, cache options were fine, etc...
Adding new records to that large blob table was fine, selects were not that fast but ok, updates were fine.

But: working with other heavily used tables in that db became ultra slow.

In the end it came down to simple physics...

We had other tables in that db with a lot of text blob data of various lengths (0-16k bytes, iso8859-1 codepage).

In the end, every SQL server has to store its data in a file or a set of files. There is no magic in how SQL servers work. A flat record file containing an array of fixed-length records, paired with a well-implemented b-tree for indexing, will always be the most performant way of accessing data. This is, ultimately, what all SQL servers do: they store fixed-length data blocks with a bit of metadata and your data as payload. Depending on the implementation of the SQL server, some or many tuning options and clever optimizations come into play. You can tune page and cache sizes, there are algorithms calculating expected table growth, etc.

Data on disk is physically stored like that in sequence:

[Table A, Record 1]
[Table A, Record 2]
[Table A, Record 3]
[Table B, Record 1]
[Table B, Record 2] 
[Table C, Record 1]
[Table A, Record 4]
[Table C, Record 2]

I think you get it: data is appended at the end of the file in creation order.

RDBMSs try to arrange records of the same type/table as close together as possible.

[Table A, Record 1]
[Table A, Record 2]
[Table A, Record 3]
  EMPTY BLOCK for 10 new Table A records
[Table B, Record 1]
[Table B, Record 2] 
  EMPTY BLOCK for 10 new Table B records
[Table C, Record 1]
  EMPTY BLOCK for 10 new Table C records
[Table A, Record xxx]

If you have a lot of variable-length data records (with text blobs, big varchars, etc.) and lots of varied inserts, those reserved blocks become unbalanced.
This results in your data being spread all over your db pages.

The physical HDD heads have to move a lot to collect all the data from the db pages on disk. Modern SSD drives don't suffer from that problem as badly as mechanical drives because of their architecture, but even SSDs have to do a lot of work in such a case. If many concurrent users request that data all the time, the HDD will go crazy. This is what happened in my case.

Things get even worse the more data is inserted into the db.

Well, in Firebird RDBMS there is a way to reorganize the ODS (on-disk structure): do a simple DB backup and restore. When restoring the db, all the table data is written back in sequence, so all Table A records reside next to each other on disk.

That helped a lot and improved performance, but we were forced to do those backup/restore jobs quite regularly.

It was absolutely no fun. It took a lot of time to backup/restore those 200 GB of data. The data was still growing, and soon we reached a point where such backup/restore tasks took longer than the users' out-of-office hours.

The DB became unmaintainable.

The solution was to store those blobs outside the db; that fixed all those problems.

So, to come to a conclusion: whether it is a good idea to store large blobs in the db depends a lot on your use-case and data types.
My general advice is: don't do it.
Do it only if you know exactly what you are doing and why.

#10 Re: mORMot 1 » Store PDF as Blobs in SQlite - performance issues? » 2019-12-09 19:23:30

mine1961 wrote:

What are your experiences with large blobs in SQlite?

My experience with LARGE blob data:

Don't do it.

Store metadata in the DB and large content outside the DB in files. But be careful not to store too many files in one folder (at least under Windows). File I/O will become incredibly slow once a certain number of files per folder is reached.
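One common mitigation for the too-many-files-per-folder problem is to shard the files over nested subfolders derived from a hash of the blob name. A hypothetical Python sketch (all names made up):

```python
import hashlib
from pathlib import Path

def blob_path(root: Path, blob_id: str) -> Path:
    """Spread files over 256*256 subfolders using a hash prefix,
    so each directory holds only a tiny fraction of all blobs."""
    h = hashlib.sha1(blob_id.encode()).hexdigest()
    return root / h[:2] / h[2:4] / blob_id

root = Path("/tmp/blobstore")  # example storage root
p = blob_path(root, "invoice-2019-001.pdf")
# yields something like /tmp/blobstore/xx/yy/invoice-2019-001.pdf,
# keeping per-directory entry counts low even with millions of blobs
```

The two-level prefix keeps directory listings fast on NTFS and most other filesystems; the hash also distributes sequentially named files evenly.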

#11 Re: mORMot 1 » Business rules example » 2019-12-09 16:05:32

Javierus wrote:

Is your advice taking all the mORMot framework, but the ORM part?

No, you got me wrong here.

My advice is to use the ORM part as a simple data storage layer only. All the domain specific functions should reside in higher level services. Don't use TSQLRecords in your domain logic. The domain should know nothing about TSQLRecords.

If you are in a large/complicated domain, then keep the CQRS services responsible for storing TSQLRecord* data as dumb as possible. By following mORMot's DDD principles you end up with a TSQLProductRecord = class(TSQLRecord)... and a corresponding TProductRecord = class(TSynPersistent). TSQLProductRecord is your ORM storage class; TProductRecord is your PODO (plain old Delphi object). Your business rules should always work on TProductRecord, not on TSQLRecord* classes.

E.g.: sometimes it's not enough to store an item array only as a Variant in your entity. Searching for specific items would mean loading all entities and then searching each entity for the array item.
If you have another higher-level service in front of your CQRS service, then its .Add method could utilize various CQRS services and split your PODO values across different TSQLRecords...

I hope you get what I mean.

#12 Re: mORMot 1 » Business rules example » 2019-12-03 19:04:02


You should go for the DDD way with interface based services.
Have a look at the documentation about DDD. It's an excellent starting point.
We are developing quite a large project based on mORMot over here using this principle, and it works like a charm!
Try to encapsulate those business rules inside CQRS services, or even in higher-layered "parent" services that consume those CQRS services themselves. Expose only high-level services as the api. Don't base your business rules on TSQLRecords but on business PODOs. That's my advice for building complex servers based on mORMot.

#13 Re: mORMot 1 » Access Violation when running a query with a huge ammount of data » 2019-11-22 08:16:06


There's simply not enough memory available for such an amount of data in a 32-bit process. You could switch to 64-bit, but this sounds like a design issue to me. Do you really have to fetch all that data at once?

#14 Re: mORMot 1 » Unexpect query results after REINDEX » 2019-04-30 09:34:51



try to execute the following statements using your REST connection or via SynDBExplorer:

pragma integrity_check;

Do not use any other tools for write operations to your DB.
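For reference, the integrity check can be run from any SQLite client, not just SynDBExplorer. A small Python illustration (table names made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
con.execute("CREATE INDEX idx_v ON t(v)")
con.executemany("INSERT INTO t (v) VALUES (?)", [("a",), ("b",)])
con.commit()

# Returns a single row 'ok' when the database file and all indexes are
# consistent; otherwise one row per detected problem
result = con.execute("PRAGMA integrity_check").fetchall()
```

After a suspicious REINDEX or an externally modified file, anything other than a single `ok` row means the file needs repair (typically dump and reload).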

#15 Re: mORMot 1 » LZMA/2 compression unit » 2019-04-04 10:06:58


InnoSetup has built-in support for LZMA file compression using a dll. Have a look at https://github.com/jrsoftware/issrc/blo … s/LZMA.pas
Another option would be to go for translating headers and directly linking the C .obj file.

#16 Re: mORMot 1 » [issue] Serious issue with THttpApiWebSocketServer and Safari Browser » 2018-04-19 08:20:42

mpv wrote:

@oz - thanks for debugging. Now I understand the problem. I will refactor HttpApiWebSocket code on weekend. In additional to your proposal we need clean fPendingForClose sessions, because fProtocol.RemoveConnection(fIndex) will put the session in PendingForClose, but since we never receive close event in case of safari pending session list will grow to infinity...

You're welcome, I'm glad I was able to help/contribute...

I've just checked it again; you're wrong about the PendingForClose list growing to infinity.
fProtocol.RemoveConnection(fIndex) puts the connection into PendingForClose, that's right.
But fState is set to wsClosedByClient after that.
THttpApiWebSocketConnection.ProcessActions is called by TSynThreadPoolHttpApiWebSocketServer.Task.
Inside TSynThreadPoolHttpApiWebSocketServer.Task, the pending connection is deleted by the following block because fState=wsClosedByClient:

  if conn.fState in [wsClosedByClient,wsClosedByServer,wsClosedByGuard,wsClosedByShutdown] then begin
    if conn.fState = wsClosedByClient then
      conn.Close(conn.fCloseStatus, Pointer(conn.fBuffer), length(conn.fBuffer));

Adding the following debug output proves this:

  writeln(conn.Protocol.fPendingForClose.Count, ' pending connections.'); // output: "0 pending connections."

Kind regards,

#17 Re: mORMot 1 » [issue] Serious issue with THttpApiWebSocketServer and Safari Browser » 2018-04-18 12:18:23


Pavel, I think I tracked the problem down and have a fix available...

I've added a breakpoint to "function THttpApiWebSocketConnection.ProcessActions(ActionQueue: WEB_SOCKET_ACTION_QUEUE): boolean;":

The following two actions happen when closing the tab with Firefox:



The following single action happens when closing the tab with Safari:

THttpApiWebSocketConnection.ReadData... is called.
Inside this function, Err = ERROR_HANDLE_EOF, and Disconnect is called.

I've changed procedure ReadData into a function:

function THttpApiWebSocketConnection.ReadData(const WebsocketBufferData): integer;
var Err: HRESULT;
    fBytesRead: cardinal;
    aBuf: WEB_SOCKET_BUFFER_DATA absolute WebsocketBufferData;
begin
  result := 0;
  if fWSHandle = nil then
    exit;
  Err := Http.ReceiveRequestEntityBody(fProtocol.fServer.FReqQueue, fOpaqueHTTPRequestId, 0,
    aBuf.pbBuffer, aBuf.ulBufferLength, fBytesRead, @self.fOverlapped);
  case Err of
    NO_ERROR: ; //
    ERROR_HANDLE_EOF: // was: Disconnect; now report the closed connection to the caller
      result := -1;
    // todo: close connection on other error codes
  end;
end;

and the calling function:

function THttpApiWebSocketConnection.ProcessActions(ActionQueue: WEB_SOCKET_ACTION_QUEUE): boolean;
// ... excerpt: only the receive branch is shown ...
    case Action of
      // ...
      begin
        for i := 0 to ulDataBufferCount - 1 do
          if ReadData(Buffer[i]) = -1 then begin
            fState := wsClosedByClient;
            fCloseStatus := 1001; // Buffer[0].Reserved1;
            EWebSocketApi.RaiseOnError(hCompleteAction,
              WebSocketAPI.CompleteAction(fWSHandle, ActionContext, 0));
            result := False;
            exit;
          end;
        fLastActionContext := ActionContext;
        result := False;
      end;
      // ...
    end;

With this fix everything works as expected. Could you have a look at the fix and incorporate it into trunk if it's ok?


#18 Re: mORMot 1 » [issue] Serious issue with THttpApiWebSocketServer and Safari Browser » 2018-04-18 09:18:15


With Safari it is much the same as with Firefox/Chrome.
Here is what I did:

Open new tab, enter url: "http://omacwin10:8888"
-> Page is loaded
Console log:

Event {isTrusted: true, type: "open", target: WebSocket, currentTarget: WebSocket, eventPhase: 2, …}

Network/Websocket log:

WebSocket-connection established

-> Enter "Test1", click Send
Console output:

MessageEvent {isTrusted: true, origin: "ws://omacwin10:8888", lastEventId: "", source: null, data: "Test1", …}

Network/Websocket log:

[out]: Test1
[in]: Test1

-> Reload page:
Console output:


Network/Websocket log:

Connection closed

So, it basically looks as if nothing is sent by Safari. The connection is simply closed by the browser without any data being sent. And that closed connection seems not to be recognized by the http.sys API based server.

Kind regards,

#20 Re: mORMot 1 » Issue & Fix: TSQLRecordMany.ManyAdd() using TSQLRestBatch » 2018-04-17 14:34:39


@ab: is there any chance to apply this little fix?

#21 mORMot 1 » [issue] Serious issue with THttpApiWebSocketServer and Safari Browser » 2018-04-17 14:32:36

Replies: 6

there is a serious issue with THttpApiWebSocketServer connections consumed by Safari browser clients. I've tested with the latest macOS desktop browser and an iPhone.

Steps to reproduce:
1. Open "Project31WinHttpEchoServer.dpr".
2. Change the value "localhost" in the constructor TSimpleWebsocketServer.Create to your (external) hostname, to be able to test with external devices.
3. Also add ",true" to register the hostname within http.sys:

constructor TSimpleWebsocketServer.Create;
begin
  fServer := THttpApiWebSocketServer.Create(false, 8, 1);
  fServer.AddUrl('','8888', False, 'PUT YOUR PUBLIC HOSTNAME HERE', true);
  fServer.AddUrlWebSocket('whatever', '8888', False, 'PUT YOUR PUBLIC HOSTNAME HERE', true);
  // ManualFragmentManagement = false - so the server will join all packet fragments
  // automatically and call onMessage with the full message content
  fServer.RegisterProtocol('meow', False, onAccept, onMessage, onConnect, onDisconnect);
  fServer.OnRequest := onHttpRequest;
end;

4. Compile and run the project.

5. Open Firefox, navigate to "http://hostname:8888"
-> console shows "HTTP request to /"
"Connected 0"

6. Reload page
-> console shows "Disconnected 0 1001"
-> Ok.

7. Now open Safari and repeat steps 5 and 6.
-> no disconnect message appears
It seems as if those invalid connections aren't handled properly.

8. Enter "send 0" [ENTER] in console

An external exception is fired in procedure THttpApiWebSocketConnection.InternalSend, and from then on things start going wrong. In production, all of our calls to "THttpApiWebSocketConnection.Send" are protected by "try...except" blocks, but this doesn't help. Windows decides to kill the running server process as soon as this situation with undetected closed connections appears.

@mpv / @ab: Could you have a look at this please!?

Kind regards,

#22 Re: mORMot 1 » Cookies and expire by date on web application » 2018-01-17 12:59:33


You can specify your desired timeout in TMVCSessionWithCookies.Initialize(PRecordData,PRecordTypeInfo: pointer; SessionTimeOutMinutes: cardinal): integer;
Param: SessionTimeoutMinutes.

#24 Re: mORMot 1 » How to use THTTPAPIWebSocketServer with TSQLRestServer ? » 2017-11-22 10:21:28

mpv wrote:

Yes, but a thin layer should be implemented: onAccept, onMessage, onConnect, onDisconnect as in sample
But I don't work with TSQLRestServer in my projects, so - contribution is welcome.

BTW performance of HTTP.SYS is very good - it's our stress test result:
https://unitybase.info/api/server-V41/i … yGraph.png

Those are quite impressive stats! Could you share some details about the stress test? http.sys thread pool size, use of http or https, how many physical clients, or was it tested on the local machine only...

#25 mORMot 1 » Enhanced URI routing for method-based services » 2017-11-06 12:24:05

Replies: 19


I need to build some kind of "external api" for my server, and I want to enhance custom URI routing to logically group method-based services by functionality using TSQLRestServerDB.ServiceMethodRegister.


This results in the following URLs:


But those URIs should rather be accessible as:




I got this working by changing the following method:

function TSQLRestServerURIContext.URIDecodeREST: boolean;
// ... excerpt ...
  slash := PosEx(RawUTF8('/'),URI);
  if slash>0 then begin
    URI[slash] := #0;
    Par := pointer(URI);
    if (Table<>nil) and (Par^ in ['0'..'9']) then
      // "ModelRoot/TableName/TableID/URIBlobFieldName"
      TableID := GetNextItemInt64(Par,'/') else
      TableID := -1; // URI like "ModelRoot/TableName/MethodName"
    URIBlobFieldName := Par;
    if Table<>nil then begin
      j := PosEx('/',URIBlobFieldName);
      if j>0 then begin // handle "ModelRoot/TableName/URIBlobFieldName/ID"
        TableID := GetCardinalDef(pointer(PtrInt(URIBlobFieldName)+j),cardinal(-1));
        // ...
      end;
    end;
    if (Table=nil) and (slash>0) then   // <----------- do not truncate URI if there was a slash, but no Table.
      URI[slash]:='/' else
      // ...

The idea is to not truncate the URI if no Table is found. My tests show that this is working properly. Are there any hidden side-effects which I'm not aware of?

Looks like I'd been too fast... static MVC URIs are broken after this patch, maybe more. So this is not working.

#26 mORMot 1 » JWT Authentication » 2017-10-30 10:17:24

Replies: 1

Hi Arnaud,
I need to implement JWT-only authentication for one of my projects. I can rely on the bricks already present, and first tests show it works great. But I'm asking myself whether it wouldn't be better to implement a TSQLRestServerAuthenticationJWT for better integration with the general mORMot authentication scheme. What's your opinion on that topic? Any advice would be appreciated.
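As a side note, and independent of whatever a TSQLRestServerAuthenticationJWT would look like, the JWT HS256 mechanics themselves are small enough to sketch with stdlib primitives only (illustrative Python, not the mORMot API; helper names are made up):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwt_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def jwt_verify(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):  # constant-time compare
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))

token = jwt_hs256({"sub": "user42", "role": "admin"}, b"server-secret")
assert jwt_verify(token, b"server-secret")["sub"] == "user42"
```

A production implementation would also validate the `alg` header and standard claims such as `exp`/`nbf`; this sketch only shows the signing/verification core.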
Thanks, oz.

#27 Re: mORMot 1 » Multiple Mormot based servers accessing the same SQLite database » 2017-09-29 10:11:15


Well explained! We are using the same approach over here.
Just a little question: is it safe to let one TSQLRestServer instance be served by different THttpServers at the same time? Meaning one WebSocket server for server-to-server communication and one http.sys server for an Admin MVC client frontend? I remember trying that some time ago and facing problems, as far as I can remember. But as far as I understand, it should be possible by design.

#28 Re: mORMot 1 » Query on lot of records » 2017-09-15 06:00:42


Imho, using id>=? and id<? statements for paging is not an option because of deleted records. Imagine there are 100 records in your table and paging should load 10 records per page. Loading page 3 results in a query like: id>=20 and id<30. Everything will be ok until records are deleted. If records 20 to 30 are deleted, then you will get an empty result. If records 23 and 24 are deleted, then this query will return only 8 records instead of 10.
So, go for LIMIT statements.
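The difference is easy to demonstrate with any SQL engine; a small sqlite3 sketch following the numbers from the example above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t (id) VALUES (?)",
                [(i,) for i in range(1, 101)])  # records 1..100
con.execute("DELETE FROM t WHERE id IN (23, 24)")  # two deleted records
con.commit()

# id-range "page 3": silently returns 8 rows instead of 10
by_range = con.execute(
    "SELECT id FROM t WHERE id >= 20 AND id < 30").fetchall()

# LIMIT/OFFSET paging: still a full page as long as enough rows remain
by_limit = con.execute(
    "SELECT id FROM t ORDER BY id LIMIT 10 OFFSET 20").fetchall()

assert len(by_range) == 8
assert len(by_limit) == 10
```

Note that LIMIT/OFFSET gets slower for large offsets since the engine must skip the preceding rows; keyset pagination ("WHERE id > :last ORDER BY id LIMIT 10") avoids both problems when you can carry the last-seen id between pages.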

#29 Re: mORMot 1 » Issue with TPropInfo.SameValue for classes with nested ObjArrays » 2017-09-14 13:01:09


Sorry for replying late, the issue is fixed indeed. Thank you!

#30 mORMot 1 » Issue & Fix: TSQLRecordMany.ManyAdd() using TSQLRestBatch » 2017-09-14 12:59:41

Replies: 3

Hi Arnaud,

there's an issue with TSQLRecordMany.ManyAdd() when used via TSQLRestBatch.
Current code:

function TSQLRecordMany.ManyAdd(aClient: TSQLRest; aSourceID, aDestID: TID;
  NoDuplicates: boolean; aUseBatch: TSQLRestBatch): boolean;
begin
  result := false;
  if (self=nil) or (aClient=nil) or (aSourceID=0) or (aDestID=0) or
     (fSourceID=nil) or (fDestID=nil) then
    exit; // invalid parameters
  if NoDuplicates and
     (InternalIDFromSourceDest(aClient,aSourceID,aDestID)<>0) then
      exit; // this TRecordReference pair already exists
  fSourceID^ := aSourceID;
  fDestID^ := aDestID;
  if aUseBatch<>nil then
    result := aUseBatch.Add(self,true)<>0 else   // Issue here
    result := aClient.Add(self,true)<>0;
end;

aUseBatch.Add returns the batch index or -1 on error, but the code checks for <>0, so it is always false for the first element.

Fixed code:

function TSQLRecordMany.ManyAdd(aClient: TSQLRest; aSourceID, aDestID: TID;
  NoDuplicates: boolean; aUseBatch: TSQLRestBatch): boolean;
begin
  result := false;
  if (self=nil) or (aClient=nil) or (aSourceID=0) or (aDestID=0) or
     (fSourceID=nil) or (fDestID=nil) then
    exit; // invalid parameters
  if NoDuplicates and
     (InternalIDFromSourceDest(aClient,aSourceID,aDestID)<>0) then
      exit; // this TRecordReference pair already exists
  fSourceID^ := aSourceID;
  fDestID^ := aDestID;
  if aUseBatch<>nil then
    result := aUseBatch.Add(self,true)<>-1 else   // Fix
    result := aClient.Add(self,true)<>0;
end;

Could you apply this fix to the current sources, please?!


#31 mORMot 1 » Issue with TPropInfo.SameValue for classes with nested ObjArrays » 2017-08-25 12:26:57

Replies: 3

Hi Arnaud,
I think there's an issue with TPropInfo.SameValue.
I have a simple DTO class inheriting from TSynPersistent. This DTO class has a published TSomethingObjArray property. When calling TPropInfo.SameValue, the function crashes with an access violation in the ObjectEquals method call concerning the ObjArray.

function TPropInfo.SameValue(Source: TObject; DestInfo: PPropInfo; Dest: TObject): boolean;
    tkDynArray: begin
      if daS.Count=daD.Count then
        if DynArrayIsObjArray and
           ((@self=DestInfo) or DestInfo^.DynArrayIsObjArray) then begin
          for i := 0 to daS.Count-1 do
            if not ObjectEquals(PObjectArray(daS.Value)[i],PObjectArray(daD.Value)[i]) then // ->> ACCESS VIOLATION 

It looks as if the PObjectArray(daS.Value) cast is invalid.
I could fix that issue by changing:

if not ObjectEquals(PObjectArray(daS.Value)[i],PObjectArray(daD.Value)[i]) then

to:

if not ObjectEquals(TObject(daS.ElemPtr(i)^),TObject(daD.ElemPtr(i)^)) then

Could you please have a look at that issue?

Thanks, oz.

#32 mORMot 1 » How to integrate node.js NPM modules to SyNode? » 2017-08-23 09:07:10

Replies: 1

Hi all,
I'd like to integrate some node.js NPM packages into one of my projects using the SyNode/SpiderMonkey engine. Now I'm searching for some information on how to achieve that.
My current goal is to integrate the "jsreport-core" (https://www.npmjs.com/package/jsreport-core) package with all of its dependencies and use that in one of my SOA Services.
This is a minimalistic JavaScript example function which I'm trying to execute in SyNode:

var jsreport = require('jsreport-core')()
jsreport.init().then(function () {
   return jsreport.render({
       template: {
           content: '<h1>Hello {{:foo}}</h1>',
           engine: 'jsrender',
           recipe: 'phantom-pdf'
       },
       data: {
           foo: "world"
       }
   }).then(function(resp) {
     //prints pdf with headline Hello world
   })
}).catch(function(e) {
   // handle error
})

It fails on the first "require" call, not being able to find the "jsreport-core" package.
For a quick test I copied the "jsreport-core" folder from my local NPM installation to my application's bin dir and changed

var jsreport = require('jsreport-core')()

to:

var jsreport = require('./jsreport-core')()

Now evaluation fails with the message "cannot find module bluebird", which is one of "jsreport-core"'s required packages.
My questions are:
- Is it right to copy all those required NPM packages to application's bin directory?
- What needs to be done to "register" all required packages in SyNode so that "require('jsreport-core')" function calls do work?
- Are there any other tasks required to be done for integrating such NPM packages to SyNode?
- Is there some kind of faq/documentation about how to integrate NPM packages to SyNode?

Kind regards,

#33 Re: mORMot 1 » Websockets Newbie - How share port 80 with iis » 2017-05-20 15:15:33


Hi, the current WebSocket implementation requires exclusive port usage. You cannot share the same port with your IIS instance when using any mode other than http.sys. Just specify another port in the constructor.

#34 Re: mORMot 1 » TSQLHttpServer with custom ThreadClass » 2017-05-17 15:53:15

ab wrote:

How the thread is created depends on how the HTTP server is implemented.
It may be a from a thread pool, or not. It may inherit from TSynThread, but not directly.

Let the logic depend from this implementation detail is a wrong idea.
There are other cleaner options available, focusing on resources, not threads.

Your design is clearly breaking the Single Responsibility OOP Principle.

The cleanest way to handle such case may be to use a sicClientDriven interface-based service...

I'm with Arnaud here. What you need sounds like some kind of resource pool, not a thread pool. If using interface-based services in sicClientDriven mode is not an option, then there's no other way than using custom resource pools, I guess.
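For illustration, a minimal thread-safe resource pool might look like this (a hypothetical Python sketch, not tied to mORMot's threading classes):

```python
import queue
import threading
from contextlib import contextmanager

class ResourcePool:
    """Hand out pre-created resources to threads; block when exhausted."""
    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    @contextmanager
    def acquire(self, timeout=None):
        res = self._pool.get(timeout=timeout)  # blocks until one is free
        try:
            yield res
        finally:
            self._pool.put(res)  # always return it, even on error

# Usage: share 2 expensive resources among 8 worker threads
pool = ResourcePool(factory=lambda: object(), size=2)
results = []

def worker():
    with pool.acquire(timeout=5) as res:
        results.append(id(res))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
```

The point is that worker threads come and go freely (whatever HTTP server thread happens to run the request), while the scarce resources are owned by the pool, not by any particular thread.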

#35 Re: mORMot 1 » Turn MVC View Caching in DEBUG mode off » 2017-05-09 08:11:23


Did you have a look at TMVCRunWithViews.SetCache? It looks like a good starting point for further investigation...
Cheers, oz.

#36 Re: mORMot 1 » Public-key Asymmetric Cryptography via SynECC » 2016-10-18 19:25:31


Any news about this? I just can't wait to get @ab pushed to merge everything! wink

#37 Re: mORMot 1 » Issue with TDynArrayHashed: failed Assertions/mem leaks » 2016-10-17 14:55:25


No worries, you're doing a great job maintaining this huge and fantastic project!
I'm the one to blame for releasing a build of our software without testing it enough wink
Luckily it was good enough to "eat" the exception without any further harm in this situation.
I tried to fix the root of this issue by myself, but had to give up after 10 minutes because a quick fix was required to get the system up again asap. I've been into mORMot for about a year now, but there are still parts which are difficult to get into. smile
I've already pushed a new version for this customer with your fix included.

#39 Re: mORMot 1 » Issue with TDynArrayHashed: failed Assertions/mem leaks » 2016-10-17 08:00:32


I just helped myself by introducing an empty try...except...end block in TDynArrayHashed.HashAdd() to "eat" the exception. Things seem to work, my logs are full of "Assertion failed" exceptions, but at least the system on my customer's server is up again. Obviously this is not the solution.

#40 Re: mORMot 1 » Issue with TDynArrayHashed: failed Assertions/mem leaks » 2016-10-17 07:09:01


There's an issue again with TDynArrayHashed.HashAdd() which I've discovered in production.
My setup:
A Firebird 2.5 DB is being accessed from an SOA service directly through SynDBZeos.
Everything works as expected the first time. Then at the 2nd/3rd call to the same method, the problems start:
The assertion "assert(result<0);" fails.

procedure TDynArrayHashed.HashAdd(const Elem; aHashCode: Cardinal; var result: integer);
var n,cap: integer;
begin
  n := Count;
  SetCount(n+1); // reserve space for a void element in array
  cap := Capacity;
  if cap*2-cap shr 3>=fHashsCount then
  {$ifdef UNDIRECTDYNARRAY}with InternalDynArray do{$endif} begin
    // fHashs[] is too small -> recreate
    if fCountP<>nil then
      dec(fCountP^); // don't rehash the latest entry (which may not be set)
    ReHash;
    if fCountP<>nil then
      inc(fCountP^);
    result := HashFind(aHashCode,Elem); // fHashs[] has changed -> recompute
    assert(result<0); // <- this is the assertion which fails
  end;
  with fHashs[-result-1] do begin // HashFind returned negative index in fHashs[]
    Hash := aHashCode;
    Index := n;
  end;
  result := n;
end;


TDynArrayHashed.HashAdd((no value),14531870,0)
TDynArrayHashed.FindHashedForAdding((no value),True,14531870)
TSQLDBConnectionProperties.Execute('select "CommunicationID",...

This is quite a serious bug for me because it happens in a production environment. Arnaud, could you please have a look at TDynArrayHashed?

#41 Re: SyNode » Beta release of SyNode - ES6 JavaScript + NPM modules » 2016-10-12 12:45:25


If so, then a quick test using the Delphi 7 compiler & FullDebugMode should work without any problems. Does it?

#42 Re: mORMot 1 » Feature request: modification to TSynLog.CreateLogWriter » 2016-10-12 11:02:54


I think Arnaud means:

     for retry := 0 to 2 do begin
        for i := 1 to 10 do
          try
            // ... try to create the log writer here ...
            break; // success
          except
            on Exception do
              ; // ignore the error and try again
          end;
        // ...
     end;

#44 Re: mORMot 1 » Master/Slave replications - tips :) » 2016-10-11 12:20:18

warleyalex wrote:

I just had a closer look at this post

It seems there is some interesting concepts about sync that I think could be applied to.

I just had a quick look at the article...
This looks like an interesting starting point for building fully offline-aware master/slave replication. The basic concept is similar to what we have been doing over here (in a legacy Java application) for years.
The "difficult" part would be to implement the ID clash logic regarding foreign keys for tables having any kind of relations (FK constraints).

I'd recommend NOT adding "row_guid", "row_timestamp" and "sentToServerOK" columns to every single table participating in replication, but introducing ONE single new table which would in fact act as a changelog:

  ActionFlag (insert/update/delete)
  SentToServerOk (only used on slave-side)

Every single ORM write operation (insert/update/delete) results in adding a new row to this table (if there's no entry with the same TableName/Index and ID properties already present). I think it should be possible to use/extend mORMot's audit-trail features for this.

The great benefit of using such a single "changelog" table is that it only requires one single SELECT statement to get/send all ORM modifications from/to the master. Furthermore, there is no need to care about referential integrity for the INSERT statement order, because sorting by TSQLRecordSyncChangelog.ID (autoinc) will always result in the right SQL statement order. This means that the sync mechanism is decoupled from the DB model, which makes it kind of generic. This whole thing could also easily be extended to handle synchronization of non-DB objects (pictures, uploads, whatever...).
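To make the idea more concrete, such a changelog table could be sketched as a plain TSQLRecord. This is only an illustration for discussion: besides ActionFlag and SentToServerOk from the list above, all field and type names are assumptions, not existing mORMot API:

```pascal
type
  TSyncAction = (saInsert, saUpdate, saDelete);

  // hypothetical sketch of the single changelog table discussed above
  TSQLRecordSyncChangelog = class(TSQLRecord)
  protected
    fTableIndex: integer;      // assumed: index of the modified table in the model
    fRecordID: TID;            // assumed: ID of the modified row in that table
    fActionFlag: TSyncAction;  // insert/update/delete
    fSentToServerOk: boolean;  // only used on slave-side
  published
    property TableIndex: integer read fTableIndex write fTableIndex;
    property RecordID: TID read fRecordID write fRecordID;
    property ActionFlag: TSyncAction read fActionFlag write fActionFlag;
    property SentToServerOk: boolean read fSentToServerOk write fSentToServerOk;
  end;
```

A slave would then simply SELECT all rows with SentToServerOk=false, ordered by ID, to replay pending changes on the master.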

Just my two cents for discussion...

#45 Re: mORMot 1 » Issue with TJSONSerializer.RegisterObjArrayForJSON +DEMO » 2016-10-11 10:17:37


You're welcome, I'm happy my demo program helped!
I just did some tests, the issue is gone, thanks. smile

#46 Re: mORMot 1 » Last Modify bring a bug! » 2016-10-11 10:02:50

ab wrote:

In fact, TECDHEProtocol.CreateFrom is not enough.

Yeah, I know, but at least it fixed the compilation issue smile

#47 Re: mORMot 1 » Last Modify bring a bug! » 2016-10-11 07:58:26


I don't have a compiler at hand right now, but check whether the following change helps to get the sources compiled:

function TECDHEProtocol.Clone: IProtocol;
begin
  result := TECDHEProtocol.CreateFrom(self);
end;

#48 Re: mORMot 1 » TSynLog and x64 dll _TExitDllException muting » 2016-10-08 10:11:46


If EExternalException is all you can get out of those exceptions, then there is no way to use the current ExceptionIgnore implementation without breaking existing code. But this feature could easily be implemented based on Exception.Message parsing. The following idea is open for discussion:

- The problem is that EExternalException is all you have here.
- Adding EExternalException to TSynLogFamily.ExceptionIgnore is a bad idea, because we want to exclude only some special EExternalExceptions, not all of them.
- I'd suggest adding the following public procedure to TSynLogFamily:

procedure AddExceptionIgnoreWithCallback(const aExceptionClass: TClass; const aIgnoreCallback: TSynLogIgnoreExceptionCallback);

with TSynLogIgnoreExceptionCallback being defined as:

TSynLogIgnoreExceptionCallback=function(const aMessage: string):boolean;

AddExceptionIgnoreWithCallback can use some kind of DynArray for storing records with the callback function pointer and exception class fields. That private DynArray can easily be checked for EExternalException class matches; then the function(aMessage: string): boolean callback will be called to check for exclusion. Your custom function would look like:

function MyExceptionTextMatch(const aMessage: string): boolean;
begin
  result := aMessage='whatever, leftstr(), ....';
end;

What do you think?
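For discussion, the storage and lookup side of the proposal might be sketched like this. All names besides TSynLogIgnoreExceptionCallback are proposals, not existing mORMot API:

```pascal
type
  // record stored in the private DynArray mentioned above
  TSynLogIgnoreException = record
    ExceptionClass: TClass;
    Callback: TSynLogIgnoreExceptionCallback;
  end;
  TSynLogIgnoreExceptions = array of TSynLogIgnoreException;

// hypothetical check to be performed by TSynLog before logging an exception
function IsIgnoredByCallback(const aList: TSynLogIgnoreExceptions;
  E: Exception): boolean;
var i: integer;
begin
  for i := 0 to high(aList) do
    if E.InheritsFrom(aList[i].ExceptionClass) and
       aList[i].Callback(E.Message) then begin
      result := true; // class matched and the message callback agreed
      exit;
    end;
  result := false;
end;
```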

#49 Re: mORMot 1 » Newbie, stuck at the 1st hurdle » 2016-10-05 11:52:15


What's the return value of "Server.Add(Customer, true);"? Not "0"?

#50 Re: mORMot 1 » Master/Slave replications - tips :) » 2016-10-05 11:26:35


After dealing with replication/synchronization scenarios for 20+ years now, there is one thing I can say and advise to anyone from my experience:
This kind of functionality is very, very domain-specific and depends a hell of a lot on your use case. If you have to support such offline read/write services in your application, then don't start implementing this functionality in your DB or ORM layer. Do it in your business or application layers. Don't think in tables, think in objects.
Replication at the DB/ORM layer is painful in my experience. Of course you can do it, but sooner or later you will face nasty little problems which will force you into dirty code hacks, polluting your domain model.
Furthermore, mORMot is not a very good candidate for such offline replication services at the DB/ORM layer because of its numerical ID design, imho. But from my point of view this is not a limitation, because:
1. It's not a good idea to start building sync/replication services at the DB/ORM layer anyway.
2. mORMot is absolutely great in its DDD capabilities, and that's the right place to start building sync services.

I see the possibility to create some kind of generic object-based sync/replication services for various scenarios with the help of mORMot's DDD toolbox, but that's a huge and difficult task.

I have some questions about your solution. Your model is 1 master, N slaves, right? If so, how do you deal with ID conflicts? Do you use some kind of master/slave ID converter, fixed-size ID ranges (e.g. slave 1: 0-1M, slave 2: 1M-2M, ...), or some kind of custom ID handling based on GUIDs or other non-numerical field types?
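To illustrate the fixed-size ID range option from the question above: a sketch only, with a hypothetical helper and an assumed range of 1 million IDs per slave, matching the example numbers:

```pascal
const
  SLAVE_ID_RANGE = 1000000; // assumed: 1 million IDs reserved per slave

// hypothetical helper returning the first/last TID a slave may use locally
procedure SlaveIDRange(aSlaveIndex: integer; out aFirst, aLast: TID);
begin
  aFirst := TID(aSlaveIndex)*SLAVE_ID_RANGE+1; // slave 0 -> 1, slave 1 -> 1000001, ...
  aLast := aFirst+SLAVE_ID_RANGE-1;
end;
```

Such ranges avoid clashes on the master, but of course cap the number of rows each slave can ever insert.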
