Hi Arnaud,
please have a look at pull request https://github.com/synopse/mORMot2/pull/281
Background: I need to call RunProcess() with waitfor=false. The returned process handle is of no use in this case; I need the process id of the created process.
Kind regards,
Orest.
I just found the problem on my side. The enum value was read using Prop.GetValue() instead of Prop.GetValueVariant(). Using Prop.GetValueVariant() works as expected.
Thanks.
Hi Arnaud,
there seems to be a regression in mormot.core.rtti.pas, procedure TRttiCustomProp.SetValueVariant().
procedure TRttiCustomProp.SetValueVariant(Data: pointer; var Source: TVarData);
var
  u: pointer;
begin
  if Prop <> nil then
    Prop.SetValue(TObject(Data), variant(Source)) // for class properties
  else
  begin
    u := nil; // use a temp UTF-8 conversion with records
    if Source.VType > varNull then
    begin
      VariantToUtf8(variant(Source), RawUtf8(u));
      if not SetValueText(Data, RawUtf8(u)) then
        ClearValue(Data, {freenestedobjects=}true);
    end;
    FastAssignNew(u);
  end;
  VarClearProc(Source);
end;
The last line, VarClearProc(Source);, raises an EVariantInvalidOpError when copying tkEnumeration type properties. If I remove the VarClearProc(Source) call completely, everything works as expected, with no memory leaks. Why does VarClearProc(Source) have to be called at all?
We all woke up in a different world today.
This whole situation is so sad and unbelievable.
By starting this war, Putin proved to the world that he is nothing less than the next Adolf Hitler.
He is totally insane and I'm sure he won't stop once he's done with Ukraine. Just listen to his spoken and written words...
It was a massive mistake by the rest of the world to "tolerate" the annexation of Crimea in 2014.
My grandma was a refugee escaping from Ukraine at the end of WW2.
My mom was born in Germany, and so was I. But 100% Ukrainian blood runs through my veins.
This all just feels so unreal...
Pavlo, stay safe!!!
Слава Україні! - Героям слава! (Glory to Ukraine! Glory to the heroes!)
Code added into TRestServerUriContext.ExecuteSoaByMethod slows down execution of regular methods
I guess that's because of unnecessary MethodIndex recalculation after each callback.
Which testcase/benchmark did you use to come to that conclusion? I'll use that test for comparing my numbers and doing optimizations.
... and I am not sure it is very stable regarding the actual method process.
It is very unsafe to play with the method indexes once the server is initialized, because a lot of code relies on them being stable enough.
I just had a deeper dive into mORMot2 sources. You are right, current changes are not stable enough.
I'll try to do more modifications to make it stable and fast (including a heavy multi-threaded TestCase), just out of curiosity, as soon as I find some time.
If everything is stable without any performance penalties, then I'll open another pull request. If not, then I'll forget about dynamic service registration.
Would be handy. Otherwise everyone has to revert their local *.lpi file(s) before committing. That's quite error-prone.
Edit:
Please also add to gitignore:
tests/data/*
tests/*.log
tests/*.res
Edit2:
I merged my cloned branch with mORMot2:master.
One of the last commits causes mormot2tests to fail to compile (too many params)...
See pull request for fix: https://github.com/synopse/mORMot2/pull … 14c64369a4
All requested changes are done, committed and pushed for review.
Maybe *.lpi files should be added to git-ignore-files?
Hi Arnaud,
please have a look at my GitHub pull request at https://github.com/synopse/mORMot2/pull/72
Info:
I need the ability to switch Interface implementation factories at runtime.
TestCase is added.
Kind regards,
oz.
Hi,
currently there is a giant lock in TSQLRestServerDB database access.
Whenever you are writing OR reading something via the ORM Layer you will find yourself trapped in that lock (DB.Lock/DB.LockJSON/DB.Unlock/DB.UnlockJSON).
When using SQLite3 as your backend DB, this makes sense for writing: SQLite only allows a single concurrent writer anyway.
But imho this is very bad for reading from the db.
As long as ALL of your SELECT statements are executed fast and TSQLRestServerDB's internal JSON result caches aren't flushed too often (because of INSERT/UPDATE statements) you won't notice any delay.
But things start to become very slow if you have some not-so-fast db reads and a lot of database write operations flushing the cache all the time.
-> Every single SELECT statement will block all other concurrent db access until that SELECT statement's result data is fetched.
In my use-case scenario this is exactly what happens. There are some non-predictable and therefore non-optimizable SELECT statements coming from "user-land" that take 1 second+ to execute. If those statements are triggered by 10 concurrent client sessions in parallel, then I have an unresponsive DB for 10*1 seconds. All other tasks (be it background jobs accessing the db, or other users trying to read/write anything from the db) have to wait for all the other db work to finish. This results in unresponsive "system hangs".
I'm currently in the stage of implementing and testing multiple reader/single writer SQLite3 db access in my codebase.
The idea is to have one single main writer connection, but multiple reader connections (one connection per (reader) thread).
This is what I've done so far:
- Add a ThreadSafe SQLite3 connection pool to TSQLRestServerDB.
- Assign one of those pooled connections as TSQLRestServerDB's main connection.
- Make all those pooled SQLite connections use "SQLITE_OPEN_SHAREDCACHE", "WAL mode" and "PRAGMA read_uncommitted=true;", allowing a multiple reader/single writer pattern without "SQLITE 262/Table is locked" errors.
- In TSQLRestServerDB.MainEngineList() use a thread safe connection from pool instead of main connection. Only a short cache-lookup-lock is required here, the DB.LockJSON() call is gone.
- Make each pooled connection use its own prepared statement cache.
- Make each pooled connection use a common JSON Result cache shared by all connections.
First results look promising so far...
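Just to illustrate the idea (unit and class names below are simplified placeholders, not the actual mORMot types), the per-thread reader pool boils down to something like this:

unit SQLite3ReaderPool;

interface

uses
  Windows, SyncObjs, Generics.Collections;

type
  // stand-in for the real SQLite3 connection wrapper
  TSQLite3ReaderConnection = class
  private
    fFileName: string;
  public
    constructor Create(const aFileName: string);
    property FileName: string read fFileName;
  end;

  // hands out one reader connection per calling thread, created lazily
  TSQLite3ReaderPool = class
  private
    fLock: TCriticalSection;
    fByThread: TObjectDictionary<TThreadID, TSQLite3ReaderConnection>;
    fFileName: string;
  public
    constructor Create(const aFileName: string);
    destructor Destroy; override;
    function Acquire: TSQLite3ReaderConnection;
  end;

implementation

constructor TSQLite3ReaderConnection.Create(const aFileName: string);
begin
  inherited Create;
  fFileName := aFileName;
  // the real implementation would open the file with SQLITE_OPEN_SHAREDCACHE,
  // switch to WAL mode and set PRAGMA read_uncommitted=true here
end;

constructor TSQLite3ReaderPool.Create(const aFileName: string);
begin
  inherited Create;
  fFileName := aFileName;
  fLock := TCriticalSection.Create;
  fByThread := TObjectDictionary<TThreadID, TSQLite3ReaderConnection>.Create([doOwnsValues]);
end;

destructor TSQLite3ReaderPool.Destroy;
begin
  fByThread.Free;
  fLock.Free;
  inherited;
end;

function TSQLite3ReaderPool.Acquire: TSQLite3ReaderConnection;
var
  id: TThreadID;
begin
  id := GetCurrentThreadId;
  fLock.Enter;
  try
    // each thread keeps its own connection (and thus its own statement cache)
    if not fByThread.TryGetValue(id, Result) then
    begin
      Result := TSQLite3ReaderConnection.Create(fFileName);
      fByThread.Add(id, Result);
    end;
  finally
    fLock.Leave;
  end;
end;

end.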
Are there any opinions on why it SHOULDN'T be done like that? Are there any drawbacks I didn't think of?
Cheers,
oz.
@Javierus "split the project in subprojects"... in fact I was going into the opposite direction.
A lot of units will always be shared among projects.
And due to that, we have regularly a lot of problems with users having e.g. SynCommons.pas from SynPDF project and mORMot project in their path, and failing to compile.
So my idea would be to remove separate projects, and download and use mORMot as main source. If units are refactored into smaller and well-scoped units, it would be just fine to use only SynPDF or SynMustache and not the REST/ORM/SOA part.
I'm totally with you over here. Splitting into smaller units is a good idea, but don't split into sub-projects. That gains nothing and only introduces new problems. One project to rule them all ...
About compiler support:
Hmm, well, that's a difficult decision...
Delphi 5 and Kylix support is no longer needed, for sure. I think no one is seriously using those compilers in combination with mORMot.
But then things get complicated. I guess there are some users for every other version...
Let's wait for the result of the survey on which versions are worth supporting in mORMot2.
We're using Delphi 10.3 and D7 (only one older project left) over here. But the D7 project has to be migrated to 10.3 anyway.
The current mORMot codebase is very stable and mature in general. Having only bugfixes in the 1.18 version from now on would be perfectly fine.
Maybe it's even worth making the big cut and supporting Delphi 10.3+ and FPC only?
Being able to use current language features in mORMot2 would be nice, wouldn't it?
But that won't work in reality I guess
Hi Daniel,
without going into too much detail, I would implement it as such (see the sketch after the list):
- Define relay-user-accounts on your relay server. Store the info about which in-office-server that account should connect to.
- Let in-office-server connect to your relay server.
- Establish a web-socket connection to relay-server.
- Let home-clients connect to that relay-server.
- Intercept the HTTP request from the client, store the request in the relay-session, and let the client wait here in blocking mode
- Send websocket-notification to the in-office-server
- Load the original client-request from the relay-server, process/reassign HTTP headers if required, and let the in-office-server handle that request
- Intercept the http reply and redirect it to the relay-server
- Send reply to home-client.
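Very roughly, the state the relay has to keep per pending client request would be something like this (the record and its fields are just an illustration, not existing mORMot types):

uses
  SyncObjs,    // TEvent
  SynCommons;  // RawUTF8

type
  // one instance per home-client request currently waiting on the relay
  TRelaySession = record
    AccountName: RawUTF8;     // relay-user-account, maps to one in-office-server
    ClientRequest: RawUTF8;   // intercepted HTTP request (headers + body)
    ResponseReady: TEvent;    // signaled once the in-office-server has replied
    ServerResponse: RawUTF8;  // reply finally sent back to the home-client
  end;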
Nice to see that you are evaluating mORMot for that task; I'm sure it could be done using mORMot.
Cheers,
Orest.
Could you share details of your experience, please?
Well, let's start like this: if you really want to store large blob data in a database, then SQLite is one of your best friends for doing so, because of its basic design. It's a big advantage to have your storage engine running in-process for that task. Put that large blob table into its own db file and use data-sharding principles if possible. Don't put that table in your main db file.
A short story with some oversimplified theory:
Once upon a time in database land, there was a performance issue with a Firebird RDBMS based 3-tier app. That must have been about 12 years ago.
Everything worked like a charm, but once one specific table had reached about 250,000 records containing 180 GB of blob data, things started getting really weird... Indexes were fine, db settings were fine, page size was fine, cache options were fine, etc.
Adding new records to that large blob table was fine, selects were not that fast but OK, updates were fine.
But: working with other heavily used tables in that db became ultra slow.
In the end it came down to simple physics...
We had other tables in that db with a lot of text blob data of various lengths (0-16k bytes, ISO 8859-1 codepage).
In the end, every SQL server has to store its data in a file or a set of files. There is no magic in how SQL servers work. A flat record file containing an array of fixed-length records, paired with a well implemented B-tree for indexing, will always be the most performant way of accessing data. This is what all SQL servers do in the end: they store fixed-length data blocks with a bit of metadata and your data as payload. Depending on the implementation of the SQL server, some or many tuning options and clever optimizations come into play. You can tune page and cache sizes, there are algorithms calculating expected table growth, etc.
Data on disk is physically stored like that in sequence:
[Table A, Record 1]
[Table A, Record 2]
[Table A, Record 3]
[Table B, Record 1]
[Table B, Record 2]
[Table C, Record 1]
[Table A, Record 4]
[Table C, Record 2]
...
I think you get it: data is appended at the end of the file in creation order.
RDBMSs try to arrange records of the same type/table as close together as possible:
[Table A, Record 1]
[Table A, Record 2]
[Table A, Record 3]
[
EMPTY BLOCK for 10 new Table A records
]
[Table B, Record 1]
[Table B, Record 2]
[
EMPTY BLOCK for 10 new Table B records
]
[Table C, Record 1]
[
EMPTY BLOCK for 10 new Table C records
]
[Table A, Record xxx]
...
If you have a lot of variable-length data records (with text blobs, big varchars, etc.) and lots of varied inserts, those reserved blocks become unbalanced.
This results in your data being spread around just about everywhere in your db pages.
The physical HDD heads have to do a lot of movement to collect all the data from the db pages on disk. Modern SSD drives don't suffer from this as badly as mechanical drives because of their architecture, but even SSDs have to do a lot of work in such a case. If there are many concurrent users requesting that data all the time, the HDD will go crazy. This is what happened in my case.
Things get even worse the more data is inserted into the db.
Well, in Firebird RDBMS there is a way to reorganize the ODS (on-disk structure): do a simple DB backup and restore. When restoring the db, all the table data is written back in sequence, so all Table A records reside next to each other on disk.
That helped a lot and improved performance, but we were forced to do those backup/restore jobs quite regularly.
It was absolutely no fun to do so. It took a lot of time to backup/restore those 200 GB of data. The data was still growing, and soon we reached a point where such backup/restore tasks took longer than the users' out-of-office hours.
The DB became unmaintainable.
Solution was to store those blobs outside of db, that fixed all those problems.
So, to come to a conclusion: whether it is a good idea to store large blobs in the db depends a lot on your use-case and data types.
My general advice is: don't do it.
Do it only if you know exactly what you are doing and why.
What are your experiences with large blobs in SQLite?
My experience with LARGE blob data:
Don't do it.
Store metadata in the DB and the large content outside the DB in files. But be careful not to store too many files in one folder (at least under Windows); file I/O becomes incredibly slow once a certain number of files per folder is reached.
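A simple way around the files-per-folder limit is to derive a small fixed set of subfolders from the record ID, e.g. like this (folder layout and function name are just an example):

uses
  SysUtils;

// spreads the blobs over 256*256 subfolders so no single folder grows too large
function BlobFileName(const aBaseDir: string; aDocID: Int64): string;
begin
  Result := Format('%s%2.2x\%2.2x\%d.blob',
    [IncludeTrailingPathDelimiter(aBaseDir),
     aDocID and $FF, (aDocID shr 8) and $FF, aDocID]);
end;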
Is your advice taking all the mORMot framework, but the ORM part?
No, you got me wrong here.
My advice is to use the ORM part as a simple data storage layer only. All the domain-specific functions should reside in higher-level services. Don't use TSQLRecords in your domain logic; the domain should know nothing about TSQLRecords.
If you are in a large/complicated domain, then keep the CQRS services that are responsible for storing TSQLRecord* data as dumb as possible. By following mORMot's DDD principles you end up with a TSQLProductRecord = class(TSQLRecord)... and a corresponding TProductRecord = class(TSynPersistent). TSQLProductRecord is your ORM storage class, TProductRecord is your PODO (plain old Delphi object). Your business rules should always work on TProductRecord, not on TSQLRecord* classes.
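A minimal sketch of that separation (the Name/Price properties are made up for the example):

uses
  SynCommons, mORMot; // RawUTF8, TSynPersistent, TSQLRecord

type
  // ORM storage class - only known by the persistence/CQRS layer
  TSQLProductRecord = class(TSQLRecord)
  private
    fName: RawUTF8;
    fPrice: Currency;
  published
    property Name: RawUTF8 read fName write fName;
    property Price: Currency read fPrice write fPrice;
  end;

  // PODO used by the domain logic - knows nothing about the ORM
  TProductRecord = class(TSynPersistent)
  private
    fName: RawUTF8;
    fPrice: Currency;
  published
    property Name: RawUTF8 read fName write fName;
    property Price: Currency read fPrice write fPrice;
  end;

The CQRS service is the only place that maps between the two, field by field or via whatever object-copy helper you trust.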
E.g.: sometimes it's not enough to store any kind of item array as a Variant only in your entity. Searching for specific items would then mean loading all entities and searching each entity for the array item.
If you have another higher-level service in front of your CQRS service, then its .Add method could utilize various CQRS services and split your PODO values across different TSQLRecords...
I hope you get what I mean.
You should go for the DDD way with interface based services.
Have a look at the documentation about DDD. It's an excellent starting point.
We are developing quite a large project based on mORMot over here using this principle, and it works like a charm!
Try to encapsulate those business rules inside CQRS services, or even in higher-layered "parent" services which consume those CQRS services themselves. Expose only high-level services as the API. Don't base your business rules on TSQLRecords but on business PODOs. That's my advice for building complex servers based on mORMot.
There's simply not enough memory available for such an amount of data under X32. You could switch to X64, but this sounds like a design issue to me. Do you really have to fetch all that data at once?
Hi,
try to execute the following two statements using your REST connection or via SynDBExplorer:
pragma integrity_check;
reindex;
Do not use any other tools for write operations to your DB.
Hi,
InnoSetup has built-in support for LZMA file compression using a DLL. Have a look at https://github.com/jrsoftware/issrc/blo … s/LZMA.pas
Another option would be to translate the headers and directly link the C .obj file.
@oz - thanks for debugging. Now I understand the problem. I will refactor the HttpApiWebSocket code on the weekend. In addition to your proposal, we need to clean up the fPendingForClose sessions, because fProtocol.RemoveConnection(fIndex) will put the session in PendingForClose, but since we never receive a close event in the case of Safari, the pending session list will grow to infinity...
You're welcome, I'm glad to be able to help/contribute...
I've just checked it again; you're wrong about the PendingForClose list growing to infinity.
fProtocol.RemoveConnection(fIndex) puts the connection into PendingForClose, that's right.
But fState is set to wsClosedByClient after that.
THttpApiWebSocketConnection.ProcessActions is called by TSynThreadPoolHttpApiWebSocketServer.Task.
Inside TSynThreadPoolHttpApiWebSocketServer.Task, the pending connection is deleted by the following block because fState=wsClosedByClient:
if conn.fState in [wsClosedByClient,wsClosedByServer,wsClosedByGuard,wsClosedByShutdown] then begin
  conn.DoOnDisconnect;
  if conn.fState = wsClosedByClient then
    conn.Close(conn.fCloseStatus, Pointer(conn.fBuffer), length(conn.fBuffer));
  conn.Disconnect;
  EnterCriticalSection(conn.Protocol.fSafe);
  try
    conn.Protocol.fPendingForClose.Remove(conn);
  finally
    LeaveCriticalSection(conn.Protocol.fSafe);
  end;
  Dispose(conn);
end;
Adding the following debug output proves this:
writeln(conn.Protocol.fPendingForClose.Count, ' pending connections.'); -> 0 pending connections
Kind regards,
oz.
Pavel, I think I traced the problem down and have a fix available...
I've added a breakpoint to "function THttpApiWebSocketConnection.ProcessActions(ActionQueue: WEB_SOCKET_ACTION_QUEUE): boolean;":
The following two actions happen when closing the tab with Firefox:
1) WEB_SOCKET_INDICATE_RECEIVE_COMPLETE_ACTION
BufferType = WEB_SOCKET_CLOSE_BUFFER_TYPE
2) WEB_SOCKET_SEND_TO_NETWORK_ACTION
The following single action happens when closing the tab with Safari:
1) WEB_SOCKET_RECEIVE_FROM_NETWORK_ACTION
THttpApiWebSocketConnection.ReadData... is called.
Inside this function -> Err= ERROR_HANDLE_EOF -> Disconnect is called.
I've changed the ReadData procedure into a function:
function THttpApiWebSocketConnection.ReadData(const WebsocketBufferData): integer;
var
  Err: HRESULT;
  fBytesRead: cardinal;
  aBuf: WEB_SOCKET_BUFFER_DATA absolute WebsocketBufferData;
begin
  result := 0;
  if fWSHandle = nil then
    exit;
  Err := Http.ReceiveRequestEntityBody(fProtocol.fServer.FReqQueue, fOpaqueHTTPRequestId, 0,
    aBuf.pbBuffer, aBuf.ulBufferLength, fBytesRead, @self.fOverlapped);
  case Err of
    ERROR_HANDLE_EOF:
      begin
        // Disconnect;
        result := -1;
      end;
    ERROR_IO_PENDING: ; //
    NO_ERROR: ; //
    else
      // todo: close connection
  end;
end;
and the calling function
function THttpApiWebSocketConnection.ProcessActions(ActionQueue: WEB_SOCKET_ACTION_QUEUE): boolean;
...
  case Action of
    ...
    WEB_SOCKET_RECEIVE_FROM_NETWORK_ACTION: begin
      for i := 0 to ulDataBufferCount - 1 do
        if ReadData(Buffer[i]) = -1 then begin
          EnterCriticalSection(fProtocol.fSafe);
          try
            fProtocol.RemoveConnection(fIndex);
          finally
            LeaveCriticalSection(fProtocol.fSafe);
          end;
          fState := wsClosedByClient;
          fBuffer := '';
          fCloseStatus := 1001; // Buffer[0].Reserved1;
          EWebSocketApi.RaiseOnError(hCompleteAction, WebSocketAPI.CompleteAction(fWSHandle, ActionContext, 0));
          result := False;
          exit;
        end;
      fLastActionContext := ActionContext;
      result := False;
      exit;
    end;
    ...
...
With this fix everything works as expected. Could you have a look at the fix and incorporate it into trunk if it's OK?
Cheers,
oz.
With Safari it is quite the same as with Firefox/Chrome.
Here is what I did:
Open a new tab, enter the URL: "http://omacwin10:8888"
-> Page is loaded
Console log:
Event {isTrusted: true, type: "open", target: WebSocket, currentTarget: WebSocket, eventPhase: 2, …}
Network/Websocket log:
WebSocket-connection established
-> Enter "Test1", click Send
Console output:
MessageEvent {isTrusted: true, origin: "ws://omacwin10:8888", lastEventId: "", source: null, data: "Test1", …}
Network/Websocket log:
[out]: Test1
[in]: Test1
-> Reload page:
Console output:
[NOTHING LOGGED]
Network/Websocket log:
Connection closed
So it basically looks as if nothing is sent by Safari. The connection is simply closed by the browser without any data being sent. And it looks as if that closed connection isn't recognized by the http.sys API based server.
Kind regards,
oz.
Thanks Arnaud!
@ab: is there any chance to apply this little fix?
Hi,
there is a serious issue with THttpApiWebSocketServer connections consumed by Safari browser clients. I've tested with the latest macOS desktop browser and an iPhone.
Steps to reproduce:
1. Open "Project31WinHttpEchoServer.dpr".
2. Change value "localhost" in constructor TSimpleWebsocketServer.Create to your (external) hostname for being able to test it with external devices.
3. Also add ",true" for registering hostname within http.sys
constructor TSimpleWebsocketServer.Create;
begin
  fServer := THttpApiWebSocketServer.Create(false, 8, 1);
  fServer.AddUrl('', '8888', False, 'PUT YOUR PUBLIC HOSTNAME HERE', true);
  fServer.AddUrlWebSocket('whatever', '8888', False, 'PUT YOUR PUBLIC HOSTNAME HERE', true);
  // ManualFragmentManagement = false - so the server will join all packet fragments
  // automatically and call onMessage with the full message content
  fServer.RegisterProtocol('meow', False, onAccept, onMessage, onConnect, onDisconnect);
  fServer.RegisterCompress(CompressDeflate);
  fServer.OnRequest := onHttpRequest;
  fServer.Clone(8);
end;
4. Compile and run Project
5. Open Firefox, navigate to "http://hostname:8888"
-> console shows "HTTP request to /"
"Connected 0"
6. Reload page
-> console shows "Disconnected 0 1001"
-> Ok.
7. Now open Safari and repeat steps 5 and 6.
-> no disconnect message appears
It seems as if those invalid connections aren't handled properly.
8. Enter "send 0" [ENTER] in console
An external exception is raised in procedure THttpApiWebSocketConnection.InternalSend, and from then on things start going wrong. In production, all of our calls to "THttpApiWebSocketConnection.Send" are protected by "try...except" blocks, but this doesn't help. Windows decides to kill the running server process as soon as this situation with undetected closed connections appears.
@mpv / @ab: Could you have a look at this please!?
Kind regards,
oz.
You can specify your desired timeout in TMVCSessionWithCookies.Initialize(PRecordData,PRecordTypeInfo: pointer; SessionTimeOutMinutes: cardinal): integer;
Param: SessionTimeoutMinutes.
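For example, assuming Session is your TMVCSessionWithCookies instance and TSessionInfo is your own session record (both names are placeholders, only Initialize and its signature are taken from above), a two-hour session would be started like this:

var
  sessionID: integer;
  info: TSessionInfo; // your own session record
begin
  // third parameter = session lifetime in minutes
  sessionID := Session.Initialize(@info, TypeInfo(TSessionInfo), 120);
end;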
Thanks for the information, @mpv!
Yes, but a thin layer should be implemented: onAccept, onMessage, onConnect, onDisconnect as in the sample.
But I don't work with TSQLRestServer in my projects, so a contribution is welcome. BTW, the performance of HTTP.SYS is very good - this is our stress test result:
https://unitybase.info/api/server-V41/i … yGraph.png
Those are quite impressive stats! Could you share some details about the stress test? http.sys thread pool size, use of http or https, how many physical clients, or was it tested on the local machine only...
Hi,
I need to build some kind of "external API" for my server. I need to enhance the custom URI routing to logically group method-based services by functionality, using TSQLRestServerDB.ServiceMethodRegister.
Example:
ServiceMethodRegister('MyMethodBasedApiMethod_Number_1',MyMethodCallback1,true);
This results in the following URL:
localhost/root/MyMethodBasedApiMethod_Number_1
But those URIs should rather be accessible as:
localhost/root/api/Method/Number1
using:
ServiceMethodRegister('api/Method/Number1',MyMethodCallback1,true);
I got this working by changing the following method:
function TSQLRestServerURIContext.URIDecodeREST: boolean;
...
  slash := PosEx(RawUTF8('/'),URI);
  if slash>0 then begin
    URI[slash] := #0;
    Par := pointer(URI);
    InternalSetTableFromTableName(Par);
    inc(Par,slash);
    if (Table<>nil) and (Par^ in ['0'..'9']) then
      // "ModelRoot/TableName/TableID/URIBlobFieldName"
      TableID := GetNextItemInt64(Par,'/') else
      TableID := -1; // URI like "ModelRoot/TableName/MethodName"
    URIBlobFieldName := Par;
    if Table<>nil then begin
      j := PosEx('/',URIBlobFieldName);
      if j>0 then begin // handle "ModelRoot/TableName/URIBlobFieldName/ID"
        TableID := GetCardinalDef(pointer(PtrInt(URIBlobFieldName)+j),cardinal(-1));
        SetLength(URIBlobFieldName,j-1);
      end;
    end;
    if (Table=nil) and (slash>0) then // <----------- do not truncate URI if there was a slash, but no Table
      URI[slash] := '/' else
      SetLength(URI,slash-1);
...
The idea is to not truncate the URI if no Table is found. My tests show that this is working properly. Are there any hidden side-effects which I'm not aware of?
Edit:
Looks like I'd been too fast... static MVC URIs are broken after this patch, maybe even more. So this is not working.
Hi Arnaud,
I need to implement JWT-only authentication for one of my projects. I can rely on the bricks already present right now; first tests show it is working great. But I wonder whether it would be better to implement a TSQLRestServerAuthenticationJWT for better integration with the general mORMot authentication scheme. What's your opinion on that topic? Any advice would be appreciated.
Thanks, oz.
Well explained! We are using the same approach over here.
Just a little question: is it safe to let one TSQLRestServer instance be served by different THttpServers at the same time? Meaning one WebSocket server for server-to-server communication and one http.sys server for an admin MVC client frontend? I remember trying that some time ago and facing problems, as far as I can remember. But as far as I understand, it should be possible by design.
Imho it is not an option to use id>=? and id<? statements for paging because of deleted records. Imagine there are 100 records in your table and paging should load 10 records per page. Loading page 3 results in a query like: id>=20 and id<30. Everything will be OK until records get deleted. If records 20 to 30 are deleted, you will get an empty result. If records 23 and 24 are deleted, this query will return only 8 records instead of 10.
So, go for LIMIT statements.
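A minimal sketch of LIMIT/OFFSET based paging (FormatUTF8 is from SynCommons; the ORDER BY column is just an example):

uses
  SynCommons;

const
  PAGE_SIZE = 10;

// aPage is 0-based; OFFSET simply skips rows, so deleted IDs cannot
// produce short or empty pages
function PageClause(aPage: integer): RawUTF8;
begin
  Result := FormatUTF8('ORDER BY ID LIMIT % OFFSET %',
    [PAGE_SIZE, aPage * PAGE_SIZE]);
end;

The resulting clause can then be appended to the WHERE/ORDER part of your query.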
Sorry for replying late, the issue is fixed indeed. Thank you!
Hi Arnaud,
there's an issue with TSQLRecordMany.ManyAdd() when used via TSQLRestBatch.
Current code:
function TSQLRecordMany.ManyAdd(aClient: TSQLRest; aSourceID, aDestID: TID;
  NoDuplicates: boolean; aUseBatch: TSQLRestBatch): boolean;
begin
  result := false;
  if (self=nil) or (aClient=nil) or (aSourceID=0) or (aDestID=0) or
     (fSourceID=nil) or (fDestID=nil) then
    exit; // invalid parameters
  if NoDuplicates and
     (InternalIDFromSourceDest(aClient,aSourceID,aDestID)<>0) then
    exit; // this TRecordReference pair already exists
  fSourceID^ := aSourceID;
  fDestID^ := aDestID;
  if aUseBatch<>nil then
    result := aUseBatch.Add(self,true)<>0 else // Issue here
    result := aClient.Add(self,true)<>0;
end;
aUseBatch.Add returns the batch index, or -1 on error, but the code checks for <> 0, so it is always false for the first element.
Fix:
function TSQLRecordMany.ManyAdd(aClient: TSQLRest; aSourceID, aDestID: TID;
  NoDuplicates: boolean; aUseBatch: TSQLRestBatch): boolean;
begin
  result := false;
  if (self=nil) or (aClient=nil) or (aSourceID=0) or (aDestID=0) or
     (fSourceID=nil) or (fDestID=nil) then
    exit; // invalid parameters
  if NoDuplicates and
     (InternalIDFromSourceDest(aClient,aSourceID,aDestID)<>0) then
    exit; // this TRecordReference pair already exists
  fSourceID^ := aSourceID;
  fDestID^ := aDestID;
  if aUseBatch<>nil then
    result := aUseBatch.Add(self,true)<>-1 else // Fix
    result := aClient.Add(self,true)<>0;
end;
Could you apply this fix to the current sources, please?
Thanks,
oz.
Hi Arnaud,
I think there's an issue with TPropInfo.SameValue.
I have a simple DTO class inheriting from TSynPersistent. This DTO class has a published TSomethingObjArray property. When calling TPropInfo.SameValue, the function crashes with an access violation in the ObjectEquals call for the ObjArray.
function TPropInfo.SameValue(Source: TObject; DestInfo: PPropInfo; Dest: TObject): boolean;
...
  tkDynArray: begin
    GetDynArray(Source,daS);
    DestInfo^.GetDynArray(Dest,daD);
    if daS.Count=daD.Count then
      if DynArrayIsObjArray and
         ((@self=DestInfo) or DestInfo^.DynArrayIsObjArray) then begin
        for i := 0 to daS.Count-1 do
          if not ObjectEquals(PObjectArray(daS.Value)[i],PObjectArray(daD.Value)[i]) then // ->> ACCESS VIOLATION
...
It looks as if the PObjectArray(daS.Value) cast is invalid.
I could fix that issue by changing:
if not ObjectEquals(PObjectArray(daS.Value)[i],PObjectArray(daD.Value)[i]) then
to:
if not ObjectEquals(TObject(daS.ElemPtr(i)^),TObject(daD.ElemPtr(i)^)) then
Could you please have a look at that issue?
Thanks, oz.
Hi all,
I'd like to integrate some Node.js NPM packages into one of my projects using the SyNode/SpiderMonkey engine. Now I'm searching for some information on how to achieve that.
My current goal is to integrate the "jsreport-core" (https://www.npmjs.com/package/jsreport-core) package with all of its dependencies and use that in one of my SOA Services.
This is a minimalistic JavaScript example function which I'm trying to execute in SyNode:
var jsreport = require('jsreport-core')()
jsreport.init().then(function () {
  return jsreport.render({
    template: {
      content: '<h1>Hello {{:foo}}</h1>',
      engine: 'jsrender',
      recipe: 'phantom-pdf'
    },
    data: {
      foo: "world"
    }
  }).then(function(resp) {
    // prints pdf with headline Hello world
    console.log(resp.content.toString())
  });
}).catch(function(e) {
  console.log(e)
})
It fails on the first "require" call not being able to find "jsreport-core" package.
For a quick test I copied "jsreport-core" folder from my local NPM installation to my application bin dir and changed
var jsreport = require('jsreport-core')()
to:
var jsreport = require('./jsreport-core')()
Now evaluation fails with the message "cannot find module bluebird", which is one of the packages required by "jsreport-core".
My questions are:
- Is it right to copy all those required NPM packages to application's bin directory?
- What needs to be done to "register" all required packages in SyNode so that "require('jsreport-core')" function calls do work?
- Are there any other tasks required to be done for integrating such NPM packages to SyNode?
- Is there some kind of faq/documentation about how to integrate NPM packages to SyNode?
Kind regards,
oz.
Hi, the current WebSocket implementation requires exclusive port usage. You cannot share the same port with your IIS instance when using any mode other than http.sys. Just specify another port in the constructor.
How the thread is created depends on how the HTTP server is implemented.
It may be from a thread pool, or not. It may inherit from TSynThread, but not directly. Letting the logic depend on this implementation detail is a wrong idea.
There are other, cleaner options available, focusing on resources, not threads. Your design is clearly breaking the Single Responsibility OOP principle.
The cleanest way to handle such case may be to use a sicClientDriven interface-based service...
I'm with Arnaud here. What you need sounds like some kind of resource pool, not a thread pool. If using interface-based services in sicClientDriven mode is not an option, then there's no other way than using custom resource pools, I guess.
Cheers.
Hi,
did you have a look at TMVCRunWithViews.SetCache? Looks like a good starting point for further investigation...
Cheers, oz.
@mpv:
Any news about this? I just can't wait to get @ab pushed to merge everything!
No worries, you're doing a great job maintaining this huge and fantastic project!
I'm the one who is to blame for releasing a build of our software without testing it enough
Gladly it was good enough to "eat" the exception without any further harm in this situation.
I tried to fix the root of this issue myself, but had to give up after 10 minutes because a quick fix was required to get the system up again asap. I've been into mORMot for about a year now, but there are parts that are still difficult to get into.
I've already pushed a new version for this customer with your fix included.
Please check http://synopse.info/fossil/info/05f616d381
Issue is fixed with this version, thanks a lot!
I just helped myself by introducing an empty try...except...end block in TDynArrayHashed.HashAdd() to "eat" the exception. Things seem to work, my logs are full of "Assertion failed" exceptions, but at least the system on my customer's server is up again. Obviously this is not the solution, though.
There's an issue again with TDynArrayHashed.HashAdd() which I've discovered in production.
My Setup:
A Firebird 2.5 DB is being accessed from an SOA service directly through SynDBZeos.
Everything works as expected the first time. Then, at the 2nd/3rd call to the same method, the problems start.
The assertion "assert(result<0);" fails.
procedure TDynArrayHashed.HashAdd(const Elem; aHashCode: Cardinal; var result: integer);
var n,cap: integer;
begin
  n := Count;
  SetCount(n+1); // reserve space for a void element in array
  cap := Capacity;
  if cap*2-cap shr 3>=fHashsCount then
    {$ifdef UNDIRECTDYNARRAY}with InternalDynArray do{$endif} begin
      // fHashs[] is too small -> recreate
      if fCountP<>nil then
        dec(fCountP^); // don't rehash the latest entry (which may not be set)
      ReHash;
      if fCountP<>nil then
        inc(fCountP^);
      result := HashFind(aHashCode,Elem); // fHashs[] has changed -> recompute
      assert(result<0);
    end;
  with fHashs[-result-1] do begin // HashFind returned negative index in fHashs[]
    Hash := aHashCode;
    Index := n;
  end;
  result := n;
end;
Callstack:
TDynArrayHashed.HashAdd((no value),14531870,0)
TDynArrayHashed.FindHashedForAdding((no value),True,14531870)
TDynArrayHashed.AddAndMakeUniqueName('CommunicationID')
TSQLDBZEOSStatement.ExecutePrepared
TSQLDBConnectionProperties.Execute('select "CommunicationID",...
This is quite a serious bug for me because it happens in a production environment. Arnaud, could you have a look at TDynArrayHashed please?
Maybe related to http://synopse.info/forum/viewtopic.php?id=3468 ?
If so, then a quick test using Delphi 7 compiler & FullDebugMode should work without any problems. Does it?
I think Arnaud means:
...
for retry := 0 to 2 do begin
  for i := 1 to 10 do
    try
      ...
      break;
    except
      on Exception do
        SleepHiRes(100);
    end;
...
Just have a look at: http://synopse.info/files/html/Synopse% … #TITLE_231 for a starting point...
I just had a closer look at this post
It seems there are some interesting concepts about sync in there that I think could be applied.
...
I just had a quick look at the article...
This looks like an interesting starting point for building full offline-aware master/slave replication. The basic concept is similar to what we have been doing over here (in a legacy Java application) for years.
The "difficult" part would be implementing the ID clash logic regarding foreign keys for tables that have any kind of relations (FK constraints).
I'd recommend NOT adding "row_guid", "row_timestamp" and "sentToServerOK" to every single table participating in replication, but introducing ONE single new table which would in fact act as a changelog:
SyncChangeLog:
- ID
- TableName/TableIndex
- RowID
- RowTimestamp
- ActionFlag (insert/update/delete)
- SentToServerOk (only used on slave-side)
Every single ORM write operation (insert/update/delete) results in adding a new row to this table (if there's no entry with the same TableName/Index and ID already present). I think it should be possible to use/extend mORMot's audit-trail features for this.
The great benefit of using such a single "changelog" is that it requires only one SELECT statement to get/send all ORM modifications from/to the master. Furthermore, there is no need to care about referential integrity for the INSERT statement order, because sorting by TSQLRecordSyncChangelog.ID (autoinc) will always result in the right SQL statement order. This means the sync mechanism is decoupled from the DB model, which makes it kind of generic. This whole thing could also easily be extended to handle synchronization of non-DB objects (pictures, uploads, whatever...).
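As a rough sketch, such a changelog could look like this as a mORMot ORM class (field types are only a suggestion; the ID of the modified record goes into its own field because RowID itself is reserved for the primary key):

uses
  SynCommons, mORMot; // TID, TTimeLog, TSQLRecord

type
  TSyncAction = (saInsert, saUpdate, saDelete);

  // one row appended for every ORM write operation, on master and slave alike
  TSQLRecordSyncChangelog = class(TSQLRecord)
  private
    fTableIndex: integer;     // index of the modified table in the model
    fChangedRowID: TID;       // ID of the modified record
    fRowTimestamp: TTimeLog;  // when the change happened
    fActionFlag: TSyncAction; // insert/update/delete
    fSentToServerOk: boolean; // only used on the slave side
  published
    property TableIndex: integer read fTableIndex write fTableIndex;
    property ChangedRowID: TID read fChangedRowID write fChangedRowID;
    property RowTimestamp: TTimeLog read fRowTimestamp write fRowTimestamp;
    property ActionFlag: TSyncAction read fActionFlag write fActionFlag;
    property SentToServerOk: boolean read fSentToServerOk write fSentToServerOk;
  end;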
Just my two cents for discussion...
You're welcome, I'm happy my demo program helped!
I just did some tests, the issue is gone, thanks.
In fact, TECDHEProtocol.CreateFrom is not enough.
Yeah, I know, but at least it fixed the compilation issue
I don't have a compiler at hand right now, but try whether the following change helps to get the sources compiled:
function TECDHEProtocol.Clone: IProtocol;
begin
  result := TECDHEProtocol.CreateFrom(self);
end;