Thank you for your kindness and prompt answer, AB. Really.
By the way, the rule for posting Object Pascal code is a little hard to follow. How much is "some", and how much is "huge"? It is probably not only me; other members, especially newcomers, may be puzzled by it too. I think it would be helpful to state the rule more precisely, such as no more than 25 lines or no more than 1 KB of code, or something similar.
I am trying to create an index for a table in MSSQL2018. I use the following code to do it; everything is OK until I try to execute the SQL statement:
vSuccess := vRestServer.Execute(vSQL);
The SQL statement itself is correct: I can execute it in Microsoft SQL Server Management Studio 2017 without any problem.
Can anybody point me to the right way to do this?
Thanks in advance.
procedure TFormDBServer.CreateIndexIDC; // Create Index for column 'IDC'
var
vSQL: string;
vModel: TSQLModel;
vSuccess: Boolean;
vRestServer: TSQLRestServer;
vDialog: TConnectExternalDB;
vProps: TSQLDBConnectionProperties;
begin
vModel := CreateDataModel('root');
vDialog := TConnectExternalDB.Create(Self); // unit SynDBExplorerClasses;
try
if vDialog.Connect(vModel, vProps) then // vProps.DBMS --> dMSSQL
begin
vSuccess := VirtualTableExternalRegisterAll(vModel, vProps);
vRestServer := TSQLRestServerDB.Create(vModel, vProps.DatabaseName);
try
with vRestServer do
begin
CreateMissingTables;
vSQL := vProps.SQLAddIndex('CardAppInfo', ['IDC'], False, False); // vSQL --> CREATE INDEX NDXCardAppInfoIDC ON CardAppInfo(IDC)
if (vSQL <> '') then
vSuccess := vRestServer.Execute(vSQL); // Error prompt: Virtual Table may not be Indexed, extended_error=1
end;
finally
FreeAndNil(vRestServer);
end;
end;
finally
FreeAndNil(vDialog);
FreeAndNil(vModel);
end;
end;
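For the record, the workaround I am considering (not verified) is to send the CREATE INDEX statement straight to the external connection through SynDB, instead of through the REST server, which routes it to the SQLite3 virtual-table layer and raises "Virtual Table may not be Indexed". It assumes TSQLDBConnectionProperties.ExecuteNoResult is suitable for such a DDL statement:
// sketch only: replaces the failing vRestServer.Execute(vSQL) call above
vSQL := vProps.SQLAddIndex('CardAppInfo', ['IDC'], False, False);
if vSQL <> '' then
begin
  vProps.ExecuteNoResult(vSQL, []); // run the CREATE INDEX on MSSQL itself, not on the virtual table
  vSuccess := True;
end;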
I hit the case where BatchSend returns HTTP 500 or 501, caused by using BatchAdd with ForceID incorrectly.
Something like this:
aPeople: TSQLPeople;
Client: TSQLRestClientURI;
ResultsArray: TIDDynArray;
.....
aPeople.IDValue := 1000;
Client.BatchStart(nil, 1000);
Client.BatchAdd(aPeople, True, True); // SendData=True, ForceID=True -> queued with ID=1000
Client.BatchAdd(aPeople, True, True); // same ID queued again -> reproduces the duplicate key in the People table
Client.BatchSend(ResultsArray);
After this BatchSend fails, all subsequent BatchStart..BatchSend calls fail as well, and the server malfunctions, consuming nearly all available CPU.
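For reference, this is how I fill the batch now to avoid colliding IDs; just a sketch of my own usage, assigning a distinct IDValue before each BatchAdd with ForceID=True:
aPeople.IDValue := 1000;
Client.BatchStart(nil, 1000);
Client.BatchAdd(aPeople, True, True);  // queued with ID=1000
aPeople.IDValue := 1001;               // assign a new, unique ID before the next add
Client.BatchAdd(aPeople, True, True);  // queued with ID=1001 -> no duplicate key on BatchSend
Client.BatchSend(ResultsArray);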
Thank you, AB. I really appreciate your help, and the wonderful mORMot. I am thinking of using a local SQLite3 database on the client side, and then submitting the changes to the server side in a batch.
My problem is that the HTTP request of BatchSend sometimes returns 408 (HTTP request timeout). It is a little strange, as the clients and the server are located on the same computer.
Thank you, @AB.
I think if it works this way by design, it is a design flaw of TSQLRestBatch. Is there any solution for this? Data loss is always a big issue for customers.
After reading TSQLRestClientURI.BatchSend, I think it would be better to free fBatchCurrent only after the HTTP request has succeeded:
function TSQLRestClientURI.BatchSend(var Results: TIDDynArray): integer;
begin
if self<>nil then
try
result := BatchSend(fBatchCurrent,Results);
finally
// free fBatchCurrent only on success, so that BatchSend can be called again after a failure
if (Result = HTTP_SUCCESS) then
FreeAndNil(fBatchCurrent);
end else
result := HTTP_BADREQUEST;
end;
@AB, what do you think about this? Will it work? Thanks in advance.
The Batch ability of TSQLRestClientURI is very powerful. But when BatchSend fails (e.g. its result is not HTTP_SUCCESS), how should we deal with that situation properly?
I have tried simply calling BatchSend again, but that does not seem to work at all.
Any help is appreciated.
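For now I work around it on the application side with a retry loop that rebuilds the whole batch after each failure. This is only a sketch: FillPendingBatch is a hypothetical helper of mine that re-issues BatchStart/BatchAdd for the records that are still pending on the client.
function TryBatchSend(aClient: TSQLRestClientURI; aMaxRetries: integer): boolean;
var
  lResults: TIDDynArray;
  lRetry: integer;
begin
  result := false;
  for lRetry := 1 to aMaxRetries do
  begin
    FillPendingBatch(aClient); // hypothetical: BatchStart + BatchAdd of all not-yet-sent records
    if aClient.BatchSend(lResults) = HTTP_SUCCESS then
    begin
      result := true;
      exit;
    end;
    Sleep(500 * lRetry); // simple back-off before rebuilding and retrying
  end;
end;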
Hi, @AB. I have another feature request related to the empty WhereCondition. Most back-end databases provide a TRUNCATE command that deletes all rows from a table with minimal logging and finishes much faster. Could you add TRUNCATE support for the back-end databases? Thank you!
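In the meantime, my workaround sketch is to send the statement directly to the external connection through SynDB; this assumes the connected user has the rights for TRUNCATE and that ExecuteNoResult is suitable for it (dbo.People is just an example table name):
Props.ExecuteNoResult('TRUNCATE TABLE dbo.People', []);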
Hi, @ab, I think some refactoring is also needed for function TSQLRestStorageInMemory.AddOne(Rec: TSQLRecord; ForceID: boolean;
const SentData: RawUTF8): TID. If the incoming rec is out of order and ForceID is true, the whole table gets sorted. If many incoming recs are like this, the sorting takes quite a long time.
I think perhaps some flags could be added, such as BeginUpdate ... EndUpdate, to control whether the sorting takes place; see the usage sketch below.
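This is how I imagine the proposed API would be used. BeginUpdate/EndUpdate do not exist in mORMot today, they are only my suggestion, and Storage/Recs are placeholders for the in-memory storage instance and the incoming records:
Storage.BeginUpdate;  // suggested: suspend the re-sort done on each out-of-order AddOne
try
  for i := 0 to High(Recs) do
    Server.Add(Recs[i], True, {ForceID=}True); // many records with pre-assigned IDs
finally
  Storage.EndUpdate;  // suggested: re-sort the in-memory list once, here
end;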
e.g.
OleDB:
'DRIVER={SQL Server Native Client 11.0};Server=SERVER\SQLEXPRESS;Database=master;Uid=user;Pwd=password;MARS_Connection=yes'
ODBC:
'DRIVER={ODBC Driver 13 for SQL Server};Server=SERVER\SQLEXPRESS;Database=master;Uid=user;Pwd=password;MARS_Connection=yes'
Thank you, @miab3.
My backend is MSSQL 2016. I tried the above strings, and they do not seem to work.
Then I compiled the whole ZEOS package V7.3-alpha. Using the TZConnection and TZQuery components, I found that the value of the Url parameter of function TZDriverManager.GetConnectionWithParams(const Url: string; Info: TStrings): IZConnection;
is a quite long string: zdbc:OleDB://127.0.0.1:1433/Provider=SQLOLEDB.1'#9'Password=XXXX'#9'Persist Security Info=True'#9'User ID=YYYY'#9'Initial Catalog=ZZZZ'#9'Data Source=127.0.0.1?username=UUUU;password=PPPP'
So I entered the following string in the SynDBExplorer UI: OleDB://127.0.0.1:1433/Provider=SQLOLEDB.1, and found that when SynDBZeos calls
GetConnectionWithParams, the Url is just about the same:
zdbc:OleDB://127.0.0.1:1433/Provider=SQLOLEDB.1?username=UUUU;password=PPPP;controls_cp=CP_UTF8
TZConnection and TZQuery work under Win64. But when Zeos is combined with mORMot, Zeos raises a lot of exceptions, such as EZSQLException OLEDB Error 80040E14 and "Can't found the datatype CLOB". I use the RawUTF8 datatype for UTF-8 strings.
What is wrong with my usage of SynDBZeos and Zeos?
I just downloaded ZEOS Lib 7.3 from SourceForge, integrated it with mORMot, and it compiles under XE8; everything seems OK. I am trying to connect to MSSQL by choosing the connection parameters in the Update Connection Settings dialog of SynDBExplorerClasses. What should be specified for the Server input? I tried "IP", "IP,1433", and "IP:1433" as the Server parameter without success. So what should be specified for the Server and Database parameters?
Thanks in advance.
The empty where condition now works well for deleting the whole table. Thank you very much, @AB. I still have no idea what happens inside OleDB.
I met another problem under Win64 (platform Win10, the backend DB is MSSQL 2016).
// aRecord: TSQLPeople;
// RemoteRestServer: TRestServerRemoteDB;
// The Win32 version works perfectly, but the same call raises an EOleDBException under Win64 (other SQL statements like delete or select work fine).
calling: RemoteRestServer.Add(aRecord, True);
Tracing:
(1) mORMot.pas 35221 function TSQLRest.InternalAdd(Value: TSQLRecord;
(2) mORMot.pas 42378 function TSQLRestServer.EngineAdd(TableModelIndex:
(3) mORMotDB.pas 1252 function TSQLRestStorageExternal.EngineAdd(TableModelIndex:
(4) mORMotDB.pas 1266 result := ExecuteFromJSON(SentData,soInsert,0);
(5) mORMotDB.pas 1846 function TSQLRestStorageExternal.ExecuteFromJSON(
(6) mORMotDB.pas 1888 Query := fProperties.NewThreadSafeStatementPrepared(SQL,false);
// the generated SQL statement is an Insert statement, and everything is correct.
(7) mORMotDB.pas 1896 Query.ExecutePrepared;
An EOleDBException is raised with the message 'TOleDBConnection: Parameter Error.' // the original message text is in Chinese (the OS locale)
// TSynLog Information as follows ('参数错误。' means 'Parameter Error.' in Chinese.):
20170608 22102846 ERROR "EOleDBException(02F735D0)":{"EOleDBException(02F735D0)":{"Message":"TOleDBConnection: 参数错误。"}} stack trace API 00931E8D 00D88C6C 00B4C0B2 00B47D60 00A0DAC9 009E21B8 00DAD572 00DE00AD 005F34D9 00629E0B 0062AD94 00410281 005F2BF6 005FA3A7 00629855 005F26D2 005FA59E 005FB898 00410281 005F2BF6 005FA3A7 005F959C 0053E826 00007FFB8C401C24 00007FFB8C40125E 00007FFB8C400FF5 00007FFB8392B022 00007FFB8393FAF8 00007FFB8C401C24 00007FFB8C4017BB
Thank you, AB.
The modification makes some progress, but it still does NOT work.
tracing:
mORMotDB.pas 1351 if ExecuteInlined('delete from % where %',[fTableName,SQLWhere],false)=nil ....
I got an EOleDBException at SynOleDB.pas 2396 procedure TOleDBConnection.OleDBCheck(aStmt:
From the SQL Server Profiler, I got the following SQL statement: delete from dbo.People where
It seems we still need to check whether SQLWhere is empty before calling ExecuteInlined.
The following modification works:
    end; // line 1348 of mORMotDB.pas
    exit;
  end else
  if SQLWhere = '' then begin
    if ExecuteInlined('delete from %',[fTableName],false)=nil then
      exit;
  end else
    if ExecuteInlined('delete from % where %',[fTableName,SQLWhere],false)=nil then
      exit;
I am using mORMot 1.18.3675.
// fRemoteRestServer := TSQLRestServerDB.Create(fRemoteModel, True);
// tracing after calling RemoteRestServer.Delete(TSQLPeople, ''): at (6), the (SQLWhere = '') condition triggers the exit.
(1) mORMot 38807 function TSQLRestServer.Delete(
(2) mORMot 38817 result := EngineDeleteWhere(TableIndex,SQLWhere,IDs);
(3) mORMot 42430 function TSQLRestServer.EngineDeleteWhere(
(4) mORMot 42440 Rest := GetStaticTableIndex(TableModelIndex);
Rest <> nil
(5) mORMotDB 1314 function TSQLRestStorageExternal.EngineDeleteWhere(
(6) mORMotDB 1322 if (IDs=nil) or (SQLWhere='') or
(TableModelIndex<0) or (Model.Tables[TableModelIndex]<>fStoredClass) then
exit;
It seems that deleting all the rows of a table is not handled consistently across the different kinds of TSQLRestServer.
For example, I want to delete all the rows in the table TSQLPeople.
If the RestServer is an instance of TSQLRestServerFullMemory or one of its descendants, I have to call RestServer.Delete(TSQLPeople, ''); // the where condition is an empty string.
If the RestServer is an instance of TSQLRestServerDB (for me, the backend DB server is MSSQL 2016), I have to call RestServer.Delete(TSQLPeople, '1=1'); // the where condition has to be an always-true expression.
From the viewpoint of SQL syntax, a where condition like '1=1' seems more natural to me. In the meantime I use a small helper like the sketch below.
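This is just my own workaround, assuming that a call with the "wrong" convention simply returns False and does nothing:
function DeleteAllRows(aServer: TSQLRestServer; aTable: TSQLRecordClass): boolean;
begin
  // try the in-memory convention first, then the external/SQL convention
  result := aServer.Delete(aTable, '') or aServer.Delete(aTable, '1=1');
end;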
@EMartin, I have made some modifications to SynRestDataset: an instance of TSQLHttpClient is injected into TSynRestDataset, which can fulfill all the CRUD tasks. I think that using an instance of TSQLRest to access the data is a more mORMot-like way.
All the details can be accessed by this URI: https://gist.github.com/anonymous/d0700 … 7b928c9eaf
Unfortunately, with this modification, RESTful services are not supported anymore. But the CommandText of TSynRestDataset can be used in a more Delphi-like way, such as:
SynRestDataset.CommandText := 'SELECT * FROM BioLife '
+ 'ORDER BY Species_No ';
In order to support accessing the image data from the blob field, a helper unit, ImageLoader, is provided.
I found a bug in TSynRestSQLDataSet.InternalInitFieldDefs when handling a TEXT field of unlimited length (such as the Notes field, defined by: property Notes: RawUTF8 read fNotes write fNotes;): lFieldDef.Size gets set to 20 (the default FieldWidth of DB.ftString). I do not know how to fix this properly; a rough idea is sketched below.
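A rough idea of a fix, assuming the FieldDefs loop can tell that the column has no declared length; lColumnLength and the surrounding names are hypothetical and I have not tested this:
// inside TSynRestSQLDataSet.InternalInitFieldDefs, after lFieldDef has been filled:
if (lFieldDef.DataType = ftString) and (lColumnLength = 0) then
begin
  lFieldDef.DataType := ftMemo; // unbounded TEXT should not be capped at 20 characters
  lFieldDef.Size := 0;
end;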
@EMartin, this sounds better, and I hope ab can add the new implementation to the ThirdParty demo in the near future. I also hope that SynRestDataSet eventually becomes a part of Synopse mORMot!
Sorry for crashing the web server. I did read the rules of this forum before, but I do not know exactly how much counts as "huge". I will pay attention to this rule anyway. I can access GitHub, and I will try to learn gist later.
The modifications to the TSynRestSQLDataSet class, implemented in unit SynRestVCL, follow:
TSynRestSQLDataSet = class(TSynBinaryDataSet)
private
fRestClient: TSQLRest; // mORMot broker handling the delete, insert, and update events
procedure SetupRestClient;
procedure TearDownRestClient;
protected
.....
public
/// initialize the instance
constructor Create(AOwner: TComponent); override;
destructor Destroy; override;
function Execute(const aSQL: RawUTF8): boolean; overload;
function RetrieveBlob(Table: TSQLRecordClass; aID: TID;
const BlobFieldName: RawUTF8; out BlobData: TSQLRawBlob): boolean; overload;
function UpdateBlob(Table: TSQLRecordClass; aID: TID;
const BlobFieldName: RawUTF8; BlobData: TStream): boolean; overload;
property DeletedID: TID read fDeletedID write fDeletedID;
property InsertedID: TID read fInsertedID;
.....
end;
constructor TSynRestSQLDataSet.Create(AOwner: TComponent);
begin
inherited Create(AOwner);
fRestClient := nil;
end;
destructor TSynRestSQLDataSet.Destroy;
begin
TearDownRestClient;
inherited;
end;
procedure TSynRestSQLDataSet.SetCommandText(const Value: string);
begin
if (Value <> fCommandtext) then
begin
fCommandText := Value;
ParseCommandText;
SetupRestClient; // create fRestClient, an instance of TSQLRest;
end;
end;
The implementation of these three methods of TSynRestSQLDataSet is quite simple: they just delegate the call to the broker, fRestClient.
function TSynRestSQLDataSet.Execute(const aSQL: RawUTF8): boolean;
begin
Result := fRestClient.Execute(aSQL);
end;
function TSynRestSQLDataSet.RetrieveBlob(Table: TSQLRecordClass; aID: TID;
const BlobFieldName: RawUTF8; out BlobData: TSQLRawBlob): boolean;
begin
Result := fRestClient.RetrieveBlob(Table, aID, BlobFieldName, BlobData);
end;
function TSynRestSQLDataSet.UpdateBlob(Table: TSQLRecordClass; aID: TID;
const BlobFieldName: RawUTF8; BlobData: TStream): boolean;
begin
Result := fRestClient.UpdateBlob(Table, aID, BlobFieldName, BlobData);
end;
The most significant part of TSynRestSQLDataSet is function TSynRestSQLDataSet.PSExecuteStatement:
{$ifdef ISDELPHIXE3} // note: I moved the original local functions to the private section of TSynRestSQLDataSet
function TSynRestSQLDataSet.PSExecuteStatement(const ASQL: string;
AParams: TParams): Integer;
var DS: TDataSet;
begin
DS := nil;
result := PSExecuteStatement(ASQL,AParams,DS);
DS.Free;
end;
function TSynRestSQLDataSet.PSExecuteStatement(const ASQL:
string; AParams: TParams; var ResultSet: TDataSet): Integer;
{$else}
function TSynRestSQLDataSet.PSExecuteStatement(const ASQL: string;
AParams: TParams; ResultSet: Pointer): Integer;
{$endif}
var
lJSON: SockString;
lOccasion: TSQLOccasion;
fID: TID;
fFieldNames: RawUTF8;
fRec: TSQLRecord;
begin // only execute writes in current implementation
Result := -1;
if IsTableFromService then
DatabaseError('Cannot apply updates from a service');
lOccasion := GetSQLOccasion(aSQL);
case lOccasion of
soDelete:
begin
fID := fDeletedID; // fDeletedID is set by instance of TSynRestDataSet
if fRestClient.Delete(GetSQLRecordClass, fID) then
Result := 1;
end;
soInsert:
begin
lJSON := SQLFieldsToJSON(soInsert, fFieldNames, aSQL, '(', ') ', aParams);
fRec := GetSQLRecordClass.CreateFrom(lJSON);
try
fInsertedID := fRestClient.Add(fRec, fRec.RecordProps.FieldBitsFromCSV(fFieldNames), True);
if fInsertedID > 0 then
begin
Result := 1;
AfterInsert(Self); // Update the ID field in the instance of TSynRestDataSet
end;
finally
fRec.Free;
end;
end;
soUpdate:
begin
lJSON := SQLFieldsToJSON(soUpdate, fFieldNames, aSQL, 'set ', 'where ', aParams);
fRec := GetSQLRecordClass.CreateFrom(lJSON);
try
fID := aParams.ParamByName('ID').Value;
fRec.IDValue := fID; // fRec.ID is readonly, fRec.IDValue is writable
if fRestClient.Update(fRec, fRec.RecordProps.FieldBitsFromCSV(fFieldNames)) then
Result := 1;
finally
fRec.Free;
end;
end
end;
end;
And I have added an extra parameter, aFieldNames, to function SQLFieldsToJSON in order to return the related simple column names.
I cannot show the code here because of the forum's posting restrictions (the prompt says something like: "[i] was opened within itself, this is not allowed"). The related simple column names are returned in CSV form.
function TSynRestSQLDataSet.SQLFieldsToJSON(const aSQLOccasion: TSQLOccasion;
var aFieldNames: RawUTF8;
const aSQL, aAfterStr, aBeforeStr: string; aParams: TParams): SockString;
To many of us Delphi developers, TDataSet is a great helper for designing the GUI. And the RESTful way is better than the DB-connection-based way to retrieve contents from the back-end database; that is one reason why we enjoy using mORMot so much.
I think the TSynRestDataSet class, contributed by EMartin and implemented in the unit SynRestMidasVCL, would be even more useful if it worked in a more mORMot-like way, so I have made some changes to TSynRestSQLDataSet and TSynRestDataSet over the last few days. Thanks, @EMartin, for your contribution.
At present, after inserting a new row into the table, the ID field is NOT updated in TSynRestDataSet, and blob fields are not handled yet. My main idea is that using an instance of TSQLRest as the broker to handle the insert, delete, and update requests, instead of using the TWinHTTP instance directly, would be a more mORMot-like way and should be better, IMHO.
Two event handlers are added in the private section of TSynRestDataSet to handle the deletion and insertion events respectively.
TSynRestDataSet = class(TCustomClientDataSet)
private
procedure DoOnBeforeDelete(aDataSet: TDataSet);
procedure DoOnAfterInsert(aDataSet: TDataSet);
.......
public
......
function Execute(const aSQL: RawUTF8): boolean;
function RetrieveBlob(Table: TSQLRecordClass;
const BlobFieldName: RawUTF8; out BlobData: TSQLRawBlob): boolean; // the ID is taken from the current record
function UpdateBlob(Table: TSQLRecordClass;
const BlobFieldName: RawUTF8; BlobData: TStream): boolean; // the ID is taken from the current record
.....
end;
procedure TSynRestDataSet.DoOnAfterInsert(aDataSet: TDataSet);
begin
if not (State in [dsEdit, dsInsert]) then Edit;
// InsertedID is a newly added attribute of TSynRestSQLDataSet; it is assigned just before this event handler is called.
FieldByName('ID').AsInteger := TSynRestSQLDatasetHack(aDataSet).InsertedID;
end;
procedure TSynRestDataSet.DoOnBeforeDelete(aDataSet: TDataSet);
begin
// fDataSet is an instance of TSynRestSQLDataSet, and DeletedID is a newly added attribute of TSynRestSQLDataSet
fDataSet.DeletedID := FieldByName('ID').AsInteger;
end;
These two event handlers are attached to the instance of TSynRestDataSet when it is created:
constructor TSynRestDataSet.Create(AOwner: TComponent);
begin
.......
fDataSet.AfterInsert := Self.DoOnAfterInsert;
Self.BeforeDelete := Self.DoOnBeforeDelete;
end;
The three methods, Execute, RetrieveBlob, and UpdateBlob, are delegated to the instance of TSynRestSQLDataSet:
function TSynRestDataSet.Execute(const aSQL: RawUTF8): boolean;
begin
Result := fDataSet.Execute(aSQL);
end;
function TSynRestDataSet.RetrieveBlob(Table: TSQLRecordClass;
const BlobFieldName: RawUTF8; out BlobData: TSQLRawBlob): boolean;
var
fID: TID;
begin
fID := FieldByName('ID').AsInteger;
Result := fDataSet.RetrieveBlob(Table, fID, BlobFieldName, BlobData);
end;
function TSynRestDataSet.UpdateBlob(Table: TSQLRecordClass;
const BlobFieldName: RawUTF8; BlobData: TStream): boolean;
var
fID: Integer;
begin
fID := FieldByName('ID').AsInteger;
Result := fDataSet.UpdateBlob(Table, fID, BlobFieldName, BlobData);
end;
@zed, yes, I have read that as well, thank you. I met the truncation problem when I tried the third-party demo, FishFactSyn, provided by EMartin.
Whether 4 or 8 bytes get stored in the blob field actually depends on whether the server is a 32-bit or a 64-bit program, i.e. on SizeOf(Pointer). So it seems to me that EvaF's solution is better.
Here I just want ab to pay attention to this problem and provide a solution, that's all. :=)
It seems to me that you are right, @EvaF.
@ab, Could you please check the SynOleDB unit? Thank you.
procedure TOleDBStatement.ExecutePrepared; // line 1818 in unit SynOleDB
BI^.pwszDataSourceType := 'DBTYPE_WVARCHAR'; // line 1919 in unit SynOleDB: change BI. --> BI^.
BI^.dwFlags := BI^.dwFlags or DBPARAMFLAGS_ISNULLABLE; // line 1920 in unit SynOleDB: change BI. --> BI^.
// To fix the truncated blob contents, insert the following line after line 1931 of unit SynOleDB,
// by-passing line 1948 (BI^.ulParamSize := B^.cbMaxLen;) when the field type is ftBlob:
BI^.ulParamSize := length(P^.VBlob);
@EMartin, finally we get to the reason why the SynRestMidasVCL demo, FishFactSyn, does not work properly.
In TSynRestSQLDataSet (SynRestVCL), the ID is obtained by the following statements:
lID := aParams[0].Value; // soDelete: about line 750
or
lID := aParams.ParamByName('ID').Value; // soUpdate: about line 769
In both cases, lID is not the real row ID of TSQLBiolife but the value of Species_No. The reason is that the ID column does not exist in the SELECT statement. So we should add ID as the first column of the SELECT statement, like this:
SynRestDataset.CommandText := 'http://LocalHost:8080/root/BioLife?'
+ 'select=ID,Species_No,Category,Common_Name,Species_Name,'
+ 'Length_cm,Length_in,Graphic,Notes,Som&sort=Species_No';
The memory leak still exists in the FishFactSyn demo, because unit FFactWin has no clean-up handler such as the following one
(with fSQLModel declared as a private field of TForm1):
procedure TForm1.FormDestroy(Sender: TObject);
begin
FreeAndNil(fSQLModel);
FreeAndNil(SynRestDataset);
end;
@EMartin, there is a memory leak in TSynRestDataSet (SynRestMidasVCL), as it does not release the objects created in its constructor. There should be a corresponding destructor in TSynRestDataSet, like this:
destructor TSynRestDataSet.Destroy;
begin
fProvider.DataSet := nil;
FreeAndNil(fDataSet);
FreeAndNil(fProvider);
inherited;
end;
@ab, there should be a destructor in TSynDBDataSet (SynDBMidasVCL) as well; a similar sketch follows.
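Something along these lines, assuming TSynDBDataSet owns fDataSet and fProvider fields the same way TSynRestDataSet does (I have not checked the exact field names):
destructor TSynDBDataSet.Destroy;
begin
  fProvider.DataSet := nil; // detach before freeing, as above
  FreeAndNil(fDataSet);
  FreeAndNil(fProvider);
  inherited;
end;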
@EMartin, I have found the reason why TSynRestSQLDataSet does not work under XE3 and above: the commented-out PSExecuteStatement overloads SHOULD BE IMPLEMENTED, just like in TSynDBDataSet. After this modification, the FishFactSyn demo can insert, delete, and update, though it still does not work correctly. I will give the reasons in a following post.
TSynRestSQLDataSet = class(TSynBinaryDataSet)
protected
fBaseURL: RawUTF8;
......
{$ifdef ISDELPHIXE3}
function PSExecuteStatement(const ASQL: string; AParams: TParams): Integer; overload; override;
function PSExecuteStatement(const ASQL: string; AParams: TParams; var ResultSet: TDataSet): Integer; overload; override;
{$else}
function PSExecuteStatement(const ASQL: string; AParams: TParams; ResultSet: Pointer=nil): Integer; overload; override;
{$endif}
......
end;
......
{$ifdef ISDELPHIXE3}
function TSynRestSQLDataSet.PSExecuteStatement(const ASQL: string;
AParams: TParams): Integer;
var DS: TDataSet;
begin
DS := nil;
result := PSExecuteStatement(ASQL,AParams,DS);
DS.Free;
end;
...........
Hi, everybody!
I am trying to create a load-balancing proxy server that accepts all HTTP requests and redirects them to other web servers implemented with the mORMot TSQLHttpServer class, where the requests are handled by method-based or interface-based services.
In the mORMot overview (1. Synopse mORMot Overview, about page 70), it says "SOA endpoints are configured automatically for each published interface on both server and client sides, and creating a load-balancing proxy is a matter of one method call."
I searched the mORMot documentation and this forum for something like "load balan" or "load-balan", and found no reference to this topic, nor a sample application implementing such a proxy server. Can anybody give me some hints? What I have sketched so far is shown below.
Many thanks in advance.
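For reference, here is a rough sketch of what I currently have in mind: a front-end TSQLHttpServer publishing a TSQLRestServerRemoteDB that simply forwards everything to one back-end node through a TSQLHttpClient. The back-end address and port are examples, and this is not yet real load balancing; choosing one of several back-end clients per request is exactly the part I am missing.
var
  Model: TSQLModel;
  Backend: TSQLHttpClient;        // connection to ONE of the worker TSQLHttpServer nodes
  Proxy: TSQLRestServerRemoteDB;  // REST server that forwards incoming calls to the backend
  FrontEnd: TSQLHttpServer;       // front-end HTTP listener
begin
  Model := TSQLModel.Create([], 'root');
  Backend := TSQLHttpClient.Create('192.168.0.10', '8080', Model);
  Proxy := TSQLRestServerRemoteDB.Create(Backend);
  FrontEnd := TSQLHttpServer.Create('80', [Proxy], '+', useHttpApiRegisteringURI);
  FrontEnd.AccessControlAllowOrigin := '*';
  ...
end;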
Is this a typo in the document (several lines before 21.2.2. Additional safety)?
Be aware than this per-table access rights depend on the table order as defined ...
I think it should be read:
Be aware that this per-table access rights depend on the table order as defined ...
It really works. Thank you! I really enjoy using mORMot!
There were many mistakes in my last post. Sorry for the wrong bug report! This time, I have done the tests for both MSSQL and SQLite3; the results are listed below.
Test MSSQL data set: 15 rows in total, of which 13 rows satisfy the query condition.
(I read the note in mORMot documentation 9.2.4: "Note that parenthesis and mixed AND OR expressions are not handled yet.")
1. Query from Server:
Case 1:
mORMot Query(Complex Query Syntax with Parenthesis):
fPeopleList := RestServer.RetrieveList(TSQLPeople,
'(DepartNo=?) AND (WorkingStatus=?) LIMIT ?',
[20, 1, 10]);
MSSQL Profiler: (note: ID is the first column, no TOP clause in the T-SQL)
exec sp_executesql N'select ID,DepartNo,BirthDate,Name,WorkingStatus from dbo.People where DepartNo=@P1
and WorkingStatus=@P2',N'@P1 bigint,@P2 bigint',20,1
ResultSet: 10 rows (the LIMIT parameter seems to work; did mORMot apply a local filter?)
Case 2:
mORMot Query (Simple Query Syntax):
fPeopleList := RestServer.RetrieveList(TSQLPeople,
'DepartNo=? AND WorkingStatus=? LIMIT ?',
[20, 1, 10]);
MSSQL Profiler: (note: ID is the last column, no TOP clause in the T-SQL)
exec sp_executesql N'select DepartNo,BirthDate,Name,WorkingStatus,ID from dbo.People where DepartNo=@P1
and WorkingStatus=@P2',N'@P1 bigint,@P2 bigint',20,1
ResultSet: 13 rows (The LIMIT Parameter does NOT work).
2. Query from Client:
Case 3:
mORMot Query(Complex Query Syntax with Parenthesis):
fPeopleList := AppServer.RetrieveList(TSQLPeople,
'(DepartNo=?) AND (WorkingStatus=?) LIMIT ?',
[20, 1, 10]);
HTTP Server InContent:
SELECT ID,DepartNo,BirthDate,BirthDate,Name,WorkingStatus FROM People WHERE (SmsProxyNo=:(20):) AND (SmsStatus=:(1):) LIMIT :(10):
MSSQL Profiler: (note: NO T-SQL at all)
ResultSet: no ResultSet
Case 4:
mORMot Query (Simple Query Syntax):
fPeopleList := AppServer.RetrieveList(TSQLPeople,
'DepartNo=? AND WorkingStatus=? LIMIT ?',
[20, 1, 10]);
HTTP Server InContent:
SELECT ID,DepartNo,BirthDate,BirthDate,Name,WorkingStatus FROM People WHERE DepartNo=:(20): AND WorkingStatus=:(1): LIMIT :(10):
MSSQL Profiler: (note: ID is the first column, NO TOP Clause)
exec sp_executesql N'select ID,DepartNo,BirthDate,Name,WorkingStatus,ID from dbo.People where DepartNo=@P1
and WorkingStatus=@P2',N'@P1 bigint,@P2 bigint',20,1
ResultSet: 13 rows (The LIMIT Parameter does NOT work).
-------------------------------------------------------------------------------------------------------------------------------------------
Test SQLite3 data set: 15 rows in total, of which 13 rows satisfy the query condition
Redoing the last 4 cases:
ResultSet: 10 rows in every case; no problem with SQLite3.
It is a simple query; the expected T-SQL is: SELECT TOP 200 * FROM People WHERE DepartNo=30 AND WorkingStatus=3
When I run the query from the client side, I get the wrong T-SQL:
....
AppServer: TSQLHttpClient;
fWorkingPeople: TObjectList;
....
fWorkingPeople := AppServer.RetrieveList(TSQLPeople,
'DepartNo=? AND WorkingStatus=? LIMIT ?',
[30, 3, 200]);
// In the MSSQL Profiler, I get: SELECT * FROM People WHERE DepartNo=30 AND WorkingStatus=3
.......
When I run the same query from the server side, I get the correct T-SQL:
....
TCustomHttpServer = class(TSQLHttpServer)
....
end;
RestServer: TSQLRestServerDB;
HttpServer: TCustomHttpServer;
fWorkingPeople: TObjectList;
.......
// do the query from the server side, the T-SQL is correct :
fWorkingPeople := RestServer.RetrieveList(TSQLPeople,
'DepartNo=? AND WorkingStatus=? LIMIT ?',
[30, 3, 200]);
.......
// In the MSSQL Profiler, I get: SELECT TOP 200 * FROM People WHERE DepartNo=30 AND WorkingStatus=3
Thank you so much, ab.
There is good news and bad news. The good news is that it does work for inserting. I tested inserting from both the server and the client side; after inserting more than 3 million rows into the table, the memory consumed by the server is only about 35 MB. Terrific!
The bad news is that when retrieving from a table with a massive number of rows, the memory consumed by the server is still dramatically large. When I retrieve the top 200 rows from a table with more than 1 million rows, the server consumes more than 400 MB of memory. I monitored the MSSQL Profiler and found that when the retrieve command is sent from the client, the LIMIT parameter of RetrieveList does not work (MSSQL receives a query without the expected "TOP 200" clause). I think this is another reason why the server consumes so much memory (and the memory is not released automatically after the retrieval completes).
Still on the "memory growing" problem: this time, I wanted to see where the memory is allocated.
From the Task Manager, it is very clear that when I call RestServer.Add(aPeople, True) four times, the working set grows by 4 KB.
I traced into
procedure TOleDBStatement.ExecutePrepared; // SynOleDB, line 1560
begin
.......
SetLength(ParamsStatus,fParamCount); // line 1626
OleDBConnection.OleDBCheck(self, // line 1627: after this executes, the working set grows by 4 KB
(fCommand as IAccessor).CreateAccessor(
DBACCESSOR_PARAMETERDATA,fParamCount,Pointer(fParamBindings),0,
fDBParams.HACCESSOR,pointer(ParamsStatus)),ParamsStatus);
fDBParams.cParamSets := 1;
....
// 3.2 ExpectResults=false (e.g. SQL UPDATE) -> leave fRowSet=nil
OleDBConnection.OleDBCheck(self, // line 1648
fCommand.Execute(nil,DB_NULLGUID,fDBParams,@fUpdateCount,nil));
.....
end;
Memory is allocated in
(fCommand as IAccessor).CreateAccessor( // line 1628
or
fCommand.Execute(nil,DB_NULLGUID,fDBParams,@fUpdateCount,nil) // line 1649
to be continued...
(Note: I got a posting error, something like: "[i]" was opened within itself, this is not allowed. Even after commenting out all the code I could not pass the check, so I had replaced every "[" with "#"; the brackets are restored below.)
When FreeAndNil(fCache); is executed, the process goes here:
destructor TRawUTF8List.Destroy; // SynCommons, line 43571
begin
  Capacity := 0;
  inherited;
end;
When Capacity := 0; is called, magic things happen here:
procedure TRawUTF8List.SetCapacity(const Value: PtrInt); // SynCommons, line 43832
var i: integer;
begin
  if self<>nil then begin
    if Value<=0 then begin
      fList := nil;
      if fObjects<>nil then begin
        if fObjectsOwned then
          for i := 0 to fCount-1 do
            fObjects[i].Free; // when i = fCount-1, the magic happens!
        fObjects := nil;
      end;
      fCount := 0;
      ......
    end;
  end;
end;
When i = fCount-1, fObjects[i].Free releases all the lost memory!
I just do not know how fObjects[fCount-1] became such a big monster! I hope this can help you catch the "memory growing" bug. :-)
(Note: the same posting error again, so "[" had been replaced with "#"; the brackets are restored below.)
----------------------- Memory Release ---------------------------
When I call FreeAndNil(Props); the process goes to:
procedure TSQLDBConnection.Disconnect; // SynDB, line 3941
var i: integer;
    Obj: PPointerArray;
begin
  InternalProcess(speDisconnected);
  if fCache<>nil then begin
    InternalProcess(speActive);
    try
      Obj := fCache.ObjectPtr;
      if Obj<>nil then
        for i := 0 to fCache.Count-1 do
          TSQLDBStatement(Obj[i]).FRefCount := 0; // force clean release
      FreeAndNil(fCache); // release all cached statements
    finally
      InternalProcess(speNonActive);
    end;
  end;
end;
Hi, ab.
I have discovered something interesting, but I cannot understand what happens to fCache. Here is the tracing story. This time, I paid attention to the fCache member of class TSQLDBConnection.
TSQLDBConnection = class // SynDB, line 1604
fCache: TRawUTF8ListHashed;
end;
(1) When connecting to the external database server, the RestServer tries to get the datetime from the database engine:
function TSQLDBConnection.GetServerDateTime: TDateTime; // SynDB.pas
This adds 'select GETDATE()' to fCache.
(2) On the first call to RestServer.Add(aPeople, True), the RestServer tries to get the max ID of the related table:
function TSQLRestStorageExternal.EngineLockedNextID: TID; // mORMot.pas, line 980
This adds 'select max(ID) from dbo.People' to fCache.
(3) Still during the first call to RestServer.Add(aPeople, True):
function TSQLRest.Add(aTable: TSQLRecordClass; const aSimpleFields: array of const; // mORMot, line 30358
function TSQLDBConnection.NewStatementPrepared(const aSQL: RawUTF8; ....); // SynDB, line 4029
This adds 'insert into dbo.People (...) values (...)' to fCache.
After that, no matter how many times I call RestServer.Add, no more objects are added to fCache.
Conclusion: only 3 prepared statements are stored in fCache in my case.
Thanks for your reply, ab.
I am sorry for the joke "Props stole all the lost memory..."; please don't be unhappy about what I said. I am not familiar with programming against the OleDB interfaces, but I think perhaps we need to release the ICommandText somewhere after using it: as long as its RefCount is not zero, the OleDB provider will not release the memory associated with that ICommandText reference. Props is the owner of the OleDB provider, so it has the responsibility to do this.
If there is no solution for the "memory growing" problem in the short term, can we release the Props periodically as a workaround? Something like the sketch below is what I mean.
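Only a sketch: the constructor parameters are whatever was used for the original Props, no request must be in flight while the swap happens, and the external table registration / REST server would probably need to be rebuilt around the new instance as well.
procedure TFormRestServer.RecreateProps;
var
  lOld: TSQLDBConnectionProperties;
begin
  lOld := Props;
  Props := TOleDBMSSQLConnectionProperties.Create(
    fServerName, fDatabaseName, fUserID, fPassword); // same settings as the original Props
  FreeAndNil(lOld); // per my test above, freeing the old Props gives the lost memory back
end;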
Hi, ab,
After trying the next few lines of code, I found that the secret is in TOleDBConnection.
Props: TSQLDBConnectionProperties;
......
After connecting to the external database with this Props, I just do the simplest thing: call RestServer.Add to add massive numbers of rows to the external database (the consumed memory keeps increasing), and then I intentionally free this Props:
procedure TFormRestServer.btnFreeConnectionPoolClick(Sender: TObject);
begin
Props.ClearConnectionPool; // line 1
Props.CleanupInstance; // line 2
FreeAndNil(Props); // line 3
end;
After the first two lines are executed, the memory consumed by the server does not change. But when line 3 is executed, all the lost memory is released!
So I traced into here (line 5987 of unit SynDB):
destructor TSQLDBConnectionPropertiesThreadSafe.Destroy;
begin
inherited Destroy; // line 1
fConnectionPool.Free; // line 2
DeleteCriticalSection(fConnectionCS); // line 3
end;
After the second line, fConnectionPool.Free, is called, all the consumed memory is released (you can see it clearly in the Task Manager).
So it is very clear that Props holds on to all the lost memory and keeps track of it very well.
A hint: after calling Props.NewThreadSafeStatementPrepared, do we need to do something like an unprepare?
Or Query.ExecutePrepared followed by an unprepare?
Did you try with ODBC?
Yes, I tried to connect to the external database with ODBC, but only partially succeeded.
I connect to the MSSQL database with ODBC successfully, as I can get all the table names of the external database. But when I try to get some rows from one table (from the server side) like this:
PeopleList := RestServer.RetrieveList(TSQLPeople, '', [], '');
I got an exception: TODBCStatement - TODBCLib error: [07009] [Microsoft][ODBC SQL Server Driver] Invalid descriptor index (0)
All the RetrieveList calls from the client side fail as well.
On the server side, I tried to Add some rows to the People table, but the following call always returns 0 in aID, without any error message. Nothing is added to the database.
aPeople := TSQLPeople.Create;
try
aID := RestServer.Add(aPeople, True);
finally
aPeople.Free;
end;
I have done some tests on this problem (with mORMot versions 1.18.1597, 1.18.1632 and 1.18.1655). I did not retrieve data rows; I only inserted massive numbers of rows into the database.
On the Server side:
Props: TSQLDBConnectionProperties;
// get the Props like in sample 12 - SynDB Explorer
.....
if (Props.DBMS <> dSQLite) then
VirtualTableExternalRegisterAll(Model, Props); // Register all tables of the model to be external if we don't use SQLite3
//Props.UseCache := False; // It's READONLY! How can I disable the Cache??
fDbFileName := ChangeFileExt(paramstr(0),'.db3');
try
RestServer := TSQLRestServerDB.Create(Model, fDbFileName, False);
with RestServer do
begin
CreateMissingTables;
Cache.Clear;
DB.UseCache := False;
AcquireWriteMode := amBackgroundThread;
AcquireWriteTimeOut := 30;
end;
// Create the HTTP Server for Client Connection
HttpServer := TCustomHttpServer.Create(AppServerPort, [RestServer], '+', useHttpApiRegisteringURI);
HttpServer.AccessControlAllowOrigin := '*'; // allow cross-site AJAX queries
....
On the Client Side:
Model := CreateSampleDataModel(ConstHttpRoot);
AppServer := TSQLHttpClient.Create(AppServerIP, AppServerPort, Model);
.......
// Use BatchStart, BatchAdd, BatchSend to insert data rows into the mORMot server.
.......
Test results (the memory allocated by the server side was watched in the Task Manager of Win7 64-bit while the programs were running):
(1) If the Props is connected to the internal SQLite3 engine, then after inserting more than 2 million rows into 2 tables, the total memory allocated by the server is about 75,000 KB (maximum).
The mORMot server works absolutely fine, and with the FastMM4 memory manager running in FullDebugMode, no memory leak is reported.
(2) If the selected Props is connected to the external MSSQL engine, the memory consumed by the server seems proportional to the number of rows added to the external MSSQL database. Some test data is listed below:
Rows inserted    Memory consumed (KB)
            0                   6,180
       25,200                  57,160
      118,620                 186,256
      253,576                 346,664
      421,526                 609,227
      563,164                 817,312
      657,664                 948,559
      711,214               1,017,168
.... Eventually, the server stops responding to client requests, and all the remaining data is lost.
And with the FastMM4 memory manager running in FullDebugMode, still no memory leak is reported.
(So it is not a classic leak. The problem seems related to TOleDBMSSQLConnectionProperties; I tried replacing it with TOleDBMSSQL2008ConnectionProperties to connect to MSSQL 2008, but nothing changed.)
By the way, there seems to be a bug in 1.18.1655: when exiting the server program, there is an AV in SynDB TSQLDBConnectionPropertiesThreadSafe.EndCurrentThread, which does not happen in previous versions of mORMot.
Hi, ab,
I read all the messages at http://dba.stackexchange.com/a/4969. It seems to me that most people agree that keeping the row size as small as possible really matters (table size, index page size, database size, time taken for backups, logs, etc. are all connected to the row size).
mORMot is a very good framework, and the performance is really impressive. I think most of us select mORMot as a software foundation not just for toy programs; we want to build real applications. For a real working table, suppose every row takes 40 extra bytes: 100 million rows would take about 3800 MB of extra disk space, and would consume more memory for caching as well. So my suggestion is: could mORMot provide a data type mapping list for table creation, so that we can adjust which DB data type each Delphi data type is mapped to, according to our own expectations (perhaps a list for the cardinal data types would be enough)?
I really like the CreateMissingTables feature and would not like to lose it. We want both the fish and the bear's paw at once, as the saying goes.
Dewen
Thank you so much, ab!
That's the answer! The real problem is that I did NOT register the URI. This puzzled me for more than one week, and I could not find the reason by myself!
After I ran the server (which hosts the service) with administrator rights once, the RemoteIP is now correct.
1.17 is clearly deprecated and should not be used any more.
Time to move to 1.18 or even 2.0.
Hi ab, do you mean the Synopse framework 2.0 is available at this moment? That would be wonderful!
Thanks for your reply, ab.
I registered the service on the client side by means of a TSQLRestServerRemoteDB; this client connects to another remote AppServer through TSQLHttpClient and runs on yet another remote server.
When the AppServer and the client are both running, I call the service function by entering the GET command in the address box of Firefox running on my laptop.
//code fragments of the client side:
TCustomHttpServer = class(TSQLHttpServer)
protected
/// override the server response - must be thread-safe
function Request(Ctxt: THttpServerRequest): cardinal; override;
end;
....................
Model: TSQLModel;
AppServer: TSQLHttpClient;
RestServer: TSQLRestServerRemoteDB;
HttpServer: TCustomHttpServer;
.......................
Model := CreateDataModel(AppServerHttpRoot); // AppServerHttpRoot, AppServerIP, AppServerPort and HttpServicePort are Properties.
AppServer := TSQLHttpClient.Create(AppServerIP, AppServerPort, Model);
RestServer := TSQLRestServerRemoteDB.Create(AppServer);
RestServer.ServiceRegister(TAddService, [TypeInfo(IAddService)], sicShared);
HttpServer := TCustomHttpServer.Create(HttpServicePort, [RestServer], '+', useHttpApiRegisteringURI);
HttpServer.AccessControlAllowOrigin := '*'; // allow cross-site AJAX queries
Is this a bug? RemoteIP should NOT be the server's IP.
I am using Synopse framework 1.18.1500 under XE7. I call the service from Firefox.
RemoteIP in THttpServerRequest and TServiceRunningContext is not what I expected.
When the following line of code is called from the implementation of the service function:
RemoteIP := FindIniNameValue(pointer(ServiceContext.Request.Call.InHead),'REMOTEIP: ');
I get the callee's IP instead of the caller's IP! (When calling a service deployed on a remote server from another client machine, RemoteIP is the server's IP, not the client's IP.)
================================ Details ==============================
In function TCustomHttpServer.Request(Ctxt: THttpServerRequest): cardinal;
Ctxt.Method: GET
Ctxt.URL: /root/AddService/Add?A=30&B=45
Ctxt.InHeaders: Param:
InHeader:
Host: 103.30.231.6:9800
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
RemoteIP: 103.30.231.6
In the service function Add, I called:
RemoteIP := FindIniNameValue(pointer(ServiceContext.Request.Call.InHead),'REMOTEIP: ');
And I get 103.30.231.6 (Synopse Framework Version: 1.18.1500, Delphi XE7)
The REMOTEIP is the Callee's IP, not the Caller's IP.
Dear ab,
This time, I generated all the tables without any modification (you are right: for the problem reported in the first post, I had manually changed some bigint columns to smallint).
After I got rid of the parentheses in the where clause, I met another problem: the LIMIT clause has no effect anymore.
And yet another problem: when I connect to the MSSQL database with TSQLRestServerDB directly, RetrieveList returns rows. But when I add a client side, connect it to the server via TSQLHttpClient, and perform the same task with RetrieveList, stranger things happen: when I get rid of the parentheses in the where clause, I still get some rows (and the LIMIT clause has no effect either); when I keep the parentheses in the where clause, I get nothing at all, and fSmsList is nil.
I am sorry to bother you again; also, I cannot open the links you provided in your last answer.
Thanks for your wonderful work and help!
Dewen
Yes, after getting rid of the parentheses, it works. Thank you!
Hi, everybody!
I am a mORMot newbie. I am trying to use mORMot to retrieve data stored in an MSSQL database like this:
var
fSmsListMT: TObjectList;
begin
fSmsListMT := RestServer.RetrieveList(TSQLSMS_SubmitMT,
'(GatewayType=?) AND (SubmitStatus=?) LIMIT ?',
[SubmitManager.GatewayType, Ord(ssReady4Submit), StrToIntDef(edtSubmitCount.Text, 50)]);
.....
end;
If there are plenty of rows in the table, there is no problem. But after the last available row in the target table is consumed, the call to RestServer.RetrieveList always returns an empty result, even after I add quite a lot of new rows to the table with a third-party tool (such as a SQL script).
I traced the calling sequence and finally reached TSQLRestServerDB.MainEngineList in file mORMotSQLite3.pas (thanks for the open source again!). After executing
"result := DB.LockJSON(SQL,ReturnedRowCount);", I checked the result and ReturnedRowCount variables: even when there is no record in the cached table, result is not empty (it actually contains the header of the target table, with rowCount equal to 0), while the other variable, ReturnedRowCount, is nil.
When I changed the code as follows, TSQLRestServerDB.MainEngineList works as expected:
result := DB.LockJSON(SQL,ReturnedRowCount); // lock and try from cache
// if result='' then // Execute request if was not got from cache
if ReturnedRowCount = nil then // Execute request if was not got from cache
try
try
GetAndPrepareStatement(SQL,false);
MS := TRawByteStringStream.Create;
...........
Thanks in advance.
Best Regards,
Dewen