Take care MPV, we understand very well what Putin is doing and what his goal is. The rest of the world is just waking up, but Ukrainians and Poles have had their experiences with Putin. He is an extremely hypocritical dictator.
I'm from Poland, so I'm quite close to you and my thoughts are with the Ukrainian Nation.
Слава Україні! (Glory to Ukraine!)
Okay, all I have done so far was easy access using TRestServerDB and SQLite3. I know that plugging in MSSQL is supposed to be just as easy, but I am stumped at the moment and can't see how to do it, preferably without ZEOS.
It seems that ZEOS is not the best solution to connect to MSSQL, especially since AB has provided an excellent implementation for OLE DB.
In contrast, OLE DB itself is the most recommended and future-proof way to connect to MSSQL at this time.
Which RestServer class would I use, and how would I connect it to the SQL Server?
Of course, the TOleDBMSSQL2012ConnectionProperties class from the SynOleDB unit.
However, you can try my trivial solution, which uses the Microsoft OLE DB Driver for SQL Server, dedicated to MSSQL 2012 R2 and later.
mORMot uses an older solution, namely the SQL Server Native Client for OLE DB.
Of course, you must first download and install this driver on your computer:
https://www.microsoft.com/en-us/downloa … x?id=56730
And here's the whole unit for the connection using Microsoft OLE DB Driver for SQL Server, which I'm using at the moment developing a brand new server based on mORMot:
unit dFLEX.SynOLEDB.MSOLEDBSQL;

interface

uses
  SynOleDB;

type
  /// OleDB connection properties to Microsoft SQL Server 2012 R2 and later, via
  // SQL Server OLE DB driver (Microsoft OLE DB SQL / MSOLEDBSQL)
  TOleDBMSSQL2018ConnectionProperties = class(TOleDBMSSQL2012ConnectionProperties)
  protected
    /// will set the appropriate provider name, i.e. 'MSOLEDBSQL'
    // - will leave older 'SQLNCLI11' or 'SQLNCLI10' on Windows XP
    procedure SetInternalProperties; override;
  end;

implementation

{ TOleDBMSSQL2018ConnectionProperties }

procedure TOleDBMSSQL2018ConnectionProperties.SetInternalProperties;
begin
  inherited SetInternalProperties;
  fProviderName := 'MSOLEDBSQL';
end;

end.
PS.
Sorry, what I wrote applies to mORMot 1.18
But for 2.0 it will be identical, except that the unit will no longer be called SynOleDB.
OK, I looked in CreateHistory and understand more how it works.
And I already know that breaking this fTrackChangesHistoryTableIndexCount limit leads to problems.
Thanks AB for the explanation.
As the comment states, the data use a RecordRef to store the reference.
So there is a limitation to 64 tables.
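The 64-table ceiling follows directly from that packing. Here is a minimal sketch of the idea (in Python, with hypothetical helper names, not mORMot API; it assumes the RecordRef layout where the lowest 6 bits hold the model table index and the remaining bits hold the row ID):

```python
# Sketch of the RecordRef packing idea: the low 6 bits hold the model table
# index, the rest hold the row ID - so at most 2**6 = 64 tables can be addressed.
# record_ref/unpack_ref are illustrative names, not mORMot API.
def record_ref(table_index: int, row_id: int) -> int:
    if not 0 <= table_index < 64:
        raise ValueError("RecordRef can only address 64 tables")
    return (row_id << 6) | table_index

def unpack_ref(ref: int):
    # reverse the packing: mask out the table index, shift out the row ID
    return ref & 63, ref >> 6

print(unpack_ref(record_ref(10, 12345)))  # -> (10, 12345)
```

A table registered in the model at index 173 simply does not fit into 6 bits, which is why the tracking silently stops working for it.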
Hmmm... Just so we understand each other correctly, you are referring to the table count limitation for the TSQLRecord class and not TSQLRecordHistory?
If you exceed it, it is at your own risk; it is not tested on our side.
OK, I understand.
When I remove this limitation, in what cases can this mechanism cause problems?
If I want to have change history tracking for more than 64 tables then how can I solve it differently?
I'll just add that I'm not keeping it all in one History table, but in several.
Also, I used what you wrote AB in the documentation: 7.3.6 Redirect to an external TSQLRest
And I delegated the history to a separate instance of TSQLRestServerDB which works with a SQLite database.
Perhaps the target will be more instances and other extras, we'll see; my data model is large and complex...
I also have a lot of data changes (much, much more than a classic ERP system), but I MUST keep track of change history for all data.
Or am I doing something wrong ;-)
In the constructor code TSQLRestServer.Create(aModel: TSQLModel; aHandleUserAuthentication: boolean)
In the mORMot module on line 38,271 there is this code:
if fTrackChangesHistoryTableIndexCount > 64 then
  fTrackChangesHistoryTableIndexCount := 64; // rows are identified as RecordRef
I don't understand why this code is there, or how it relates to the comment next to it, but it has further implications.
I have registered a table in a model that has an index in DataModel = 173.
I associated my tables with change tracking and noticed that it sometimes works and sometimes not.
During debugging it turned out that in the method TSQLRestServer.InternalUpdateEvent in line 42790 there is such a code:
if cardinal(aTableIndex) < fTrackChangesHistoryTableIndexCount then begin
  TableHistoryIndex := fTrackChangesHistoryTableIndex[aTableIndex];
  if TableHistoryIndex >= 0 then
    DoTrackChanges(TableHistoryIndex);
end;
In my case, this condition is not met because fTrackChangesHistoryTableIndexCount is hard-coded to 64.
So the DoTrackChanges method will only be called for tables that have been registered in the DataModel with an index less than 64.
My table has index=173...
When I commented out this limit in the TSQLRestServer.Create constructor, everything seemed to work normally.
AB, could you shed more light on this case?
OK, I see.
Since such a call is invalid, don't you think an exception should be raised there?
procedure TSQLRecordPropertiesMapping.Init(Table: TSQLRecordClass;
  const MappedTableName: RawUTF8; MappedConnection: TObject;
  AutoComputeSQL: boolean; MappingOptions: TSQLRecordPropertiesMappingOptions);
begin
  // I added such a private field to protect against re-initialization.
  if fInitialized then
    Exit // raise an exception?
  else
    fInitialized := True;
  // ... and then the appropriate initialization code
end;
Of course I would prefer it to work as I wrote above
But if the convention is that such a call is invalid, then IMO an exception should definitely be raised there.
Otherwise you can hit some really weird problems, don't you think AB?
OK, you're right AB, and I apologize for misleading you; I've already gotten tangled up in all of this myself.
However, the problem exists, take a look please;
- When mapping like this:
VirtualTableExternalMap(DataModel, TeoDfRmAtomT, FConnectionProperties, 'tDfRmAtom', [rpmNoCreateMissingField, rpmClearPoolOnConnectionIssue])
  .MapField('ID', 'IdAtom')
  .MapField('TypeKW', 'Type');
TSQLRecordPropertiesMapping.Init only calls once.
However, if the mapping is done this way:
VirtualTableExternalMap(DataModel, TeoDfRmAtomT, FConnectionProperties, 'tDfRmAtom', [rpmNoCreateMissingField, rpmClearPoolOnConnectionIssue]).MapField('ID', 'IdAtom');
VirtualTableExternalMap(DataModel, TeoDfRmAtomT, FConnectionProperties, 'tDfRmAtom', [rpmNoCreateMissingField, rpmClearPoolOnConnectionIssue]).MapField('TypeKW', 'Type');
TSQLRecordPropertiesMapping.Init will be called every time.
Then the following call sequence occurs:
VirtualTableExternalMap -> VirtualTableExternalRegister -> TSQLModel.VirtualTableRegister -> ExternalDB(TSQLRecordPropertiesMapping).Init
And I use multiple VirtualTableExternalMap calls for the same TSQLRecord because I find it easier to write the code generator that way.
I solved the problem by setting an extra flag in TSQLRecordPropertiesMapping.Init to not allow the Init method to be called multiple times for the same TSQLRecordPropertiesMapping instance.
You should not publish the IDAtom field, even if it redirects to fID.
The ORM will create a new column, which is not what you expect.
You need to map the existing ID field as IDAtom in the DB.
Yes, you're right and I noticed that earlier, but I forgot about it while writing the database ORM class generator.
This is easily fixed by moving the IdAtom field declaration to the public section.
Then everything looks good to mORMot, and I achieve the expected effect.
But what about the mapping problem I described above?
Can you confirm if the problem exists, AB?
I'll answer these questions below, but please note that they are actually not very important.
What is important is that either the code of TSQLRecordPropertiesMapping.Init is wrong, or the call sequence when mapping fields is wrong - I do not know; this is a question for AB.
TSQLRecordPropertiesMapping contains the mapping information, in it there is a field named fRowIDFieldName which stores the name of the ID field (primary key in the database).
TSQLRecordPropertiesMapping.Init is called whenever MapFields is executed and it ALWAYS sets it like this: fRowIDFieldName := 'ID';
The first call to MapFields is correct, but each call thereafter assigns me fRowIDFieldName := 'ID';
And that's exactly the problem.
Could you explain the need for IdAtom published property? I just couldn't understand it.
First - this is not an ORM First type project, let's say it's a legacy database.
Why do I need IdAtom?
Because in my database, no primary key field is simply called ID (by the way, I consider it a conceptual error, but whatever).
It's always IdSomething, and that's how I want a field with that name in TSQLRecord, not ID.
Simply put.
TSQLRecord has ID support already, you can use this field easily. ID mapping will also work without additional published property...
First please take a good look at my TSQLRecord declaration, and you'll probably notice that I'm using the ID field - note the read and write of the IdAtom field.
wloochacz wrote:Besides, you can't declare a class property named Type because it's a Delphi/Pascal keyword.
As an option, you could use '&' for avoiding Delphi keyword and reducing mappings quantity
Yes, I know that, but it seems sloppy to me and I just don't like that kind of code.
Is this typo in post or in your code
A property named TypeKW has been declared in the class, which corresponds to a field in the table named Type.
Also, I don't see any typo in my post.
.MapField('TypeKW', 'Type')
This notation maps a class property named Type to a database table field named TypeKW.
That is, exactly the opposite of my example.
Besides, you can't declare a class property named Type because it's a Delphi/Pascal keyword.
vs
.MapField('Type', 'TypeKW')
This is the correct notation for my example and mORMot generates such a SelectSQL:
select
IdAtom as ID,IdAtom,
Name,
Type as TypeKW
from dbo.tDfRmAtom
So absolutely correct.
Hello,
Short version: calling MapField more than once for a given TSQLRecord will always reset RowIDFieldName to 'ID'.
And never mind that it was previously mapped to another field name...
Details:
I have a table like this (MS SQL):
CREATE TABLE dbo.tDfRmAtom (
  IdAtom dbo.uINT IDENTITY(1, 1) NOT NULL,
  Name dbo.uNAME_XL NOT NULL,
  Type dbo.uINT NOT NULL,
  CONSTRAINT PK_tDfRmAtom PRIMARY KEY CLUSTERED (IdAtom)
)
Which corresponds to the following TSQLRecord class:
TormtDfRmAtom = class(TBaseORMEntity)
strict private
  FName: RawUTF8;
  FTypeKW: Integer;
strict protected
  procedure SetName(const AValue: RawUTF8); virtual;
  procedure SetDescription(const AValue: TNullableUTF8Text); virtual;
  procedure SetCaption(const AValue: TNullableUTF8Text); virtual;
  procedure SetCustomData(const AValue: TNullableUTF8Text); virtual;
public
  class function dfGetPKFieldName(): RawUTF8; override;
  class function dfGetTableName(): RawUTF8; override;
published
  property IdAtom: TID read fID write fID;
  property Name: RawUTF8 read FName write SetName;
  property TypeKW: Integer read FTypeKW write FTypeKW;
end;
The registration is then executed like this:
VirtualTableExternalMap(DataModel, TormtDfRmAtom, FConnectionProperties, 'tDfRmAtom', [rpmNoCreateMissingField, rpmClearPoolOnConnectionIssue])
  .MapField('ID', 'IdAtom')
  .MapField('TypeKW', 'Type');
When running such code, mORMot reports an invalid SQL syntax exception...
The SQL that the TableHasRows method attempts to execute looks like this:
select top(1) ID from dbo.tDfRmAtom
And it should be like this:
select top(1) IdAtom as ID from dbo.tDfRmAtom
I found the cause in the TSQLRecordPropertiesMapping.Init method in the mORMot module.
The Init method is called for every field mapping.
And it always assigns:
  fRowIDFieldName := 'ID';
So the first mapping will correctly map ID to the IdAtom field name, but each subsequent mapping will again set fRowIDFieldName := 'ID'.
My fix was to assign a value to fRowIDFieldName only if it still contains an empty string.
Everything seems to be working correctly now.
AB, is this a bug?
If I may AB, I would still like to ask you to answer a question:
How to check if there was any problem while updating the database for REST/ORM layer?
As for the rest, well - it looks like my conclusions are correct....
But that's a medium problem, I can handle it, but I need to understand exactly how it all works.
Well, I have extensive experience with relational databases and hence some background in my understanding.
I need to switch over ;-)
I've already dug through the forum; luckily, you introduced nullable types.
ORM in mORMot is a new thing for me (I still think that relation handling in mORMot ORM is... questionable), and since we are rewriting Server, so I want to do it properly and according to mORMot philosophy.
Well, I'm still researching, testing and learning.
I just learned that getting the cache to work on the server side is not as transparent as you write about it in the manual.
For example, I can't use the CreateAndFillXXX methods; the cache only works when using the Retrieve methods.
There are really a lot of such cases, so thank you very much for your answers.
Let's keep going!
The logs should give you more information.
Does this mean that I can't find out in the runtime about an exception that occurred while the ORM was updating the data?
But the problem with NULL is that
- SQLite3 accepts to put a NULL or an 0 everywhere IIRC;
That doesn't seem like the right assumption to me, take a look please:
https://sqlite.org/nulls.html
and
https://stackoverflow.com/questions/251 … ger-fields
You can clearly see that SQLite does not treat null and 0 values as the same.
And very well, because it would be a disaster.
Even for foreign keys defined in SQLite as ON DELETE SET NULL, everything is OK.
Simply put, a null value is null, and a 0 value is 0 and thus null != 0.
Period.
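This is easy to verify directly against SQLite; here is a minimal sketch using Python's built-in sqlite3 module (the table and column names are made up for the demo):

```python
import sqlite3

# in-memory database, one nullable integer column
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.execute("INSERT INTO t (val) VALUES (NULL)")
conn.execute("INSERT INTO t (val) VALUES (0)")

# NULL and 0 are distinct: 'val = 0' matches only the 0 row,
# and only 'val IS NULL' matches the NULL row
zero_count = conn.execute("SELECT COUNT(*) FROM t WHERE val = 0").fetchone()[0]
null_count = conn.execute("SELECT COUNT(*) FROM t WHERE val IS NULL").fetchone()[0]
print(zero_count, null_count)  # -> 1 1
```

Note that `val = NULL` would match nothing at all, since any comparison with NULL evaluates to NULL.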
- there is no "null" support in the Delphi language - we have the variant nullable types, but it is for standard values, not joined tables.
OK, I understand.
I also understand that performing a single query in this case for a one-to-one relationship requires a LEFT JOIN, not an INNER JOIN.
I guess this is due to the mechanism of SQLite virtual tables, which admittedly (supposedly) support LEFT JOIN, but their performance in this case is terrible.
It looks like mORMot only supports INNER JOIN - is this correct, and is that what this is about?
I guess this is not supported by the JOIN feature of the ORM yet.
Understood.
So it is also impossible in this case to assign NULL value for IdUser field?
And is there any workaround for the problem?
OnUpdate event or something similar?
And please, tell me how to check if there was any problem while updating the database?
Right now the Update method simply returns False, but I don't know why this command failed...
Model: Database first.
Database: MSSQL 2014
Connection Properties: TOleDBMSSQL2012ConnectionProperties
I have the following data model:
TUser = class(TSQLRecord)
strict private
  FLogin: RawUTF8;
  FName: RawUTF8;
public
  property IdUser: TID read FID write FID;
published
  property Login: RawUTF8 index cudtNAME_M read FLogin write FLogin;
  property Name: RawUTF8 index cudtNAME_M read FName write FName;
end;
TEmployee = class(TSQLRecord)
strict private
  FBirthDate: TNullableDateTime;
  FFirstName: RawUTF8;
  FGender: Integer;
  FHireDate: TNullableDateTime;
  FIdEmployee: Int64;
  FIdUser: TUser;
  FIsEnabled: Boolean;
  FLastName: RawUTF8;
  FReleaseDate: TNullableDateTime;
public
  property IdEmployee: TID read fID write fID;
published
  property FirstName: RawUTF8 index cudtNAME_M read FFirstName write FFirstName;
  property LastName: RawUTF8 index cudtNAME_M read FLastName write FLastName;
  property BirthDate: TNullableDateTime read FBirthDate write FBirthDate;
  property Gender: Integer read FGender write FGender;
  property HireDate: TNullableDateTime read FHireDate write FHireDate;
  property ReleaseDate: TNullableDateTime read FReleaseDate write FReleaseDate;
  property IsEnabled: Boolean read FIsEnabled write FIsEnabled;
  property IdUser: TUser read FIdUser write FIdUser;
end;
There is a little trick in there with mapping the ID field to another name, but I don't think it matters in this case. I tested it on single entities and it is OK.
I just want to have a property in an entity with a name just like in the database.
Everything works fine for data reads, but I have a problem with updates.
I have code like this:
var
  lID: Int64;
  lEmployee: TEmployee;
begin
  lEmployee := TEmployee.CreateAndFillPrepare(ORM.SQLRestServerDB, [100]);
  try
    lEmployee.IsEnabled := False;
    if not ORM.SQLRestServerDB.Update(lEmployee) then
    begin
      // How to check the UPDATE error?
    end;
  finally
    lEmployee.Free;
  end;
end;
An employee with ID = 100 has a value in the IdUser = null field.
As you can see, I am not touching this value at all.
However, the update tries to put a value of 0 (instead of NULL) into the IdUser field, which is obviously not allowed because the foreign key checks this value.
Allowed values are NULL or an existing ID from the User table, but certainly not zero.
1. How do I implement this correctly?
2. How to add NULL values to such related fields?
3. And how to check for error on save?
Of course, if I declare TEmployee.IdUser as TNullableInteger then everything is OK.
It's just that that's not quite the point.
OK, perhaps we'll come back to this.
But first I need to get competent in ORM, so dear AB I will definitely have questions
I would just try the opposite: the SQLParser
(which is not a parser by the way, more a SQL constructor)
Well, no.
SQLBuilder can't modify existing SQL.
This object can, because it's built on top of a full parser.
And the fact that I showed how to build Where (you may have noticed the Clear method call in my example) and OrderBy, does not mean that it can only do that
should be a stand-alone class, in which you inject/create your SynDB statement when needed.
As I wrote earlier, SQLParser is a standalone class that doesn't depend on anything.
But there are functional problems with it and it is just inconvenient to use.
This is simple, convenient helper code that could just as well be implemented as a class helper. But it has its limitations, which I didn't agree with.
I would forget about TQuery.
This idea is better following SOLID principle, and the Single Responsibility principle, I think.
Too bad you want to forget about TQuery.
It's very simple and super convenient to use, which is exactly the opposite of ISQLDBStatement
I'll ask you straight - are you not planning to develop TQuery?
Just so we understand each other, my point is that the design and use of ISQLDBStatement is very far from the conventions that have been known for years.
I understand that it works perfectly with the ORM layer.
The problem is that I can't, or don't want to, always use the ORM.
For me, the ideal solution would not be an ORM, but a DataMapper.
DataMapper, where I could have full control over all SQL statements for an entity.
Is it even possible in mORMot to allow it to do this?
And if so, where would one start?
TQuery was just a wrapper to be compatible with the DB.pas homonymous class.
Yes I know that, I read the documentation
However, its compatibility with classic TQuery and further with TParam/TField is hmm... debatable.
Personally, I think that naming data types identically to those in the DB unit, while they provide a different contract and work differently, does more harm than good.
I understand the idea, but...
Besides, TQuery is exactly what e.g. FireDAC lacks: a lightweight, simple class serving primarily to read data as efficiently as possible, without the unnecessary overhead of TDataSet.
I wanted to extend it a bit and maybe also propose some changes for the benefit of the community.
But for that I need to have virtual methods in TQuery.
So the parameter array works as needed to call SynDB classes.
Exactly.
But not the way it works in every other DAC for Delphi that I know of.
And that's just the point.
Perhaps extending the native SynDB classes could make more sense.
Perhaps.
However, it's so different, and I have too little time to research the topic thoroughly, that I don't dare touch it without (your, AB?) support.
But OK; where should I look to implement something similar for native SynDB classes?
And I would rather see the SQL generation not be directly within the query/statement class, but as a stand-alone class - and unit.
I had a similar opinion when I started doing this, and this is exactly how it was implemented and can still be used.
However, various use cases in a real project showed that such a solution has limitations and is not very convenient to use. That's why SQLParser was aggregated into the TQuery class. At first it was an independent class, which is why I provided communication with TQuery through a dedicated interface (and, to be precise, it wouldn't be bad if TQuery inherited from TInterfacedObject or even TSingletonImplementation, but it's not necessary).
The limitation I came across is that it needs to remember the state of WHERE clauses for an instance of TQuery.
SQLParser has methods like Save/RestorePrimaryWhere so that when you modify the WHERE at runtime, you can simply and quickly return to the original query.
And besides, the syntax I showed is just convenient, and the overhead is practically negligible, also because SQLParser is created by Query only when it is needed. You don't use the parser - it won't be created.
Oh no, this is me thanking you for your work!
By the way; I would like to ask you to mark all (so also GetXXX methods from protected section) methods of SynDB.TQuery class as virtual.
I would like to use this class for my own purposes, but I need to be able to override these methods in my own child class.
Why do I need to do this?
And, for example, to build an SQL parser into SynDB.TQuery; I've already done that and it works like this:
lQuery.SQL.Text := 'select * from vPpSchedulerTask where IdppTask = :ID';
lQuery.SQLParser.Where
.Clear
.Add('DateStart', coAnd, opGreaterEqual)
.Add('DocNo', coAnd, opIsNotNull);
lQuery.SQLParser.OrderBy
.Add('DateStart', soDesc);
lQuery.SQLParser.CommitSQL([Now - 90], [ftDateTime]);
lQuery is an object based on TQuery, but brings support for an SQL parser.
This code will result in the construction of such SQL:
select * from vPpSchedulerTask
where DateStart >= :DateStart and DocNo is not null
order by DateStart desc
And since it's a parser (based on gaSQLParser) it can handle much more complex queries.
I have of course tested for joins, subqueries and even Common Table Expression (CTE).
By the way, I found out that parameters are created only after they are explicitly asked for via ParamByName.
Which raises the question of whether Params[index] makes sense, because it will only work properly if we explicitly request this parameter via ParamByName.
Strange...
Wouldn't it be better to create the parameter list after the SQL change, i.e. in the OnSQLChange method?
Hello all!
line 3809 in SynDB module (TQuery.Execute):
if P = nil then
  ESQLQueryException.Create('No SQL statement');
And it should be:
if P = nil then
  raise ESQLQueryException.Create('No SQL statement');
Dear AB, am I right?
@Javierus
Fair enough, in such a specific architecture.
So just put all needed mORMot units in every of your packages, and they will be fine.
It will be okay under one condition: that your application does not use two different BPLs each containing the same units - for example, the mORMot units.
For this reason, you cannot do as you wrote.
That's why, in my code, mORMot has been moved to a dedicated BPL, which is loaded when needed.
And I don't think that BPL is an anti-pattern - on the contrary, it is a very convenient solution, but you have to learn how to use it correctly.
I personally use an architecture that shares a large part of the code between the client and the service. Some of these shared modules are mORMot, the configuration, the DAL, etc.
De facto, my exe file is a BPL loader driven by the configuration.
And the configuration determines whether this application will be a server, a service or a desktop application.
BPLs have their advantages: encapsulating reusable code in one library, faster compilation, and even the possibility of incremental upgrades of the target solution (just download and replace specific libraries rather than one or more monolithic exe files).
And in Delphi, from a code usage point of view, it makes no difference whether you use BPLs or not. Everything works exactly the same and does not require any additional actions or other coding; you just add the unit to your uses clause and use the code you need from a specific library - very convenient.
OLEDB is Windows specific only.
For Linux, you must use ODBC, I guess...
Great news for SynOLEDB
"The OLE DB story at last has clarity. Microsoft has decided to undeprecate (a new word) OLE DB and will be releasing a new OLE DB provider, Microsoft OLE DB Driver for SQL Server (MSOLEDBSQL), in Q1 2018."
Source:
http://www.dbdelta.com/deprecated-sql-s … hnologies/
and
https://blogs.msdn.microsoft.com/sqlnat … ql-server/
Thank You, AB
@woolachz do you have the last commit (4299) ?
Delphi XE and very, very old mORMot - 1.18?
BTW - how to check the version of the mORMot? ;-)
You don't need to use TFDQuery.
Yes, indeed - you may use just the FireDAC Phys Layer.
FireDAC Phys Layer Samples
You may execute statements or fetch data from the server without TDataSet.
OK, ISQLDBRows is really fast and really simple.
Using TFDquery will be fine but in the end it's somehow the same approach as Datasnap (many allocations in memory -> many LOCK instructions -> slower performance).
I'm a power user of AnyDAC/FireDAC and, I'm sorry, this is not true.
AnyDAC / FireDAC is very fast. But you have to be able to use it, which is not so obvious.
It has really extensive capabilities (macros and an SQL preprocessor with registration of your own functions? No problem!), which can confuse beginners and even experienced users.
I use my own version of FireDAC, which I have equipped with many unique features, e.g. a built-in SQL parser/builder in TFDQuery.
Which allows me, for example, to change the WHERE clauses in this way:
// qReport is a TFDQuery
// Add to the where clause: (DateStart >= :DateStart and DateStart <= :DateStart)
qReport.SQLParser.Where.Clear
  .BeginGroup()
  .Add('DateStart', coAnd, opGreaterEqual)
  .Add('DateStart', coAnd, opLessEqual)
  .EndGroup();
// Commit changes to SQL and prepare the Query
qReport.SQLParser.CommitSQL([RepFrom, RepTo], [ftDateTime, ftDateTime]);
Or this way:
function TSopSalesOfferProdCalcLogic.GetdsTppTechRoutePattern: TboTppTechRoutePattern;
begin
  // TboTppTechRoutePattern is a strongly typed TFDQuery
  if not Assigned(FdsTppTechRoutePattern) then
  begin
    FdsTppTechRoutePattern := TboTppTechRoutePattern.Create(Self);
    FdsTppTechRoutePattern.SetFastForward(True, False, 15);
    // modify the Select SQL to fetch one row by Primary Key value
    FdsTppTechRoutePattern.SQLParser.Clear.Where.Add(FdsTppTechRoutePattern.PKFields);
    FdsTppTechRoutePattern.SQLParser.CommitSQL();
  end;
  Result := FdsTppTechRoutePattern;
end;
For me, it works fine:
Hello,
TObjectDynArrayWrapper/IObjectDynArray is really useful.
But the way it works by design makes it unsuitable for use in a server to populate DTO object arrays.
Why?
Because it always destroys the instances of the objects in the array that it wraps.
However, the method on the server must pass on the created objects, not destroy them when the wrapper is destroyed.
Therefore, I suggest adding the OwnObjects parameter to the TObjectDynArrayWrapper constructor.
This does not change the default behavior of the wrapper.
Something like that:
type
  TObjectDynArrayWrapper = class(TInterfacedObject, IObjectDynArray)
  private
    FOwnsObjects: Boolean;
  protected
    fValue: PPointer;
    fCount: integer;
  public
    /// initialize the wrapper with a one-dimension dynamic array of TObject
    // - if AOwnsObjects is True (the default), the wrapper destroys all objects
    // in the list when it is destroyed itself
    constructor Create(var aValue; AOwnsObjects: Boolean = True);
    /// delete all TObject instances, and release the memory
    // - is not to be called for most use, thanks to reference-counting memory
    // handling, but can be handy for quick release
    procedure Clear;
  end;
implementation

constructor TObjectDynArrayWrapper.Create(var aValue; AOwnsObjects: Boolean);
begin
  fValue := @aValue;
  FOwnsObjects := AOwnsObjects;
end;
procedure TObjectDynArrayWrapper.Clear;
var
  i: integer;
begin
  if fValue^ <> nil then
  begin
    // free all objects in the array if FOwnsObjects is True
    if FOwnsObjects then
      for i := fCount - 1 downto 0 do
        try
          TObjectDynArray(fValue^)[i].Free;
        except
          on Exception do;
        end;
    TObjectDynArray(fValue^) := nil; // set capacity to 0
    fCount := 0;
  end
  else if fCount > 0 then
    raise ESynException.Create('You MUST define your IObjectDynArray field BEFORE the corresponding dynamic array');
end;
AB, what do you think?
Hmdsadeghian,
Did you read it?
https://github.com/synopse/mORMot/tree/ … ical%20DDD
In example 04 you have shown how to combine DTO with SQLRecord...
https://github.com/synopse/mORMot/blob/ … sitory.pas
In fact, you are right and I thank you very much for your time.
Sorry for confusion, too much code, too little sleep: /
However, I lost a lot of time, because the exception that mORMot reports is far from the cause of the problem ...
Can we do anything about it in this case?
PS.
In my opinion, if a method returns an object via an out parameter, it should always create it with the default constructor.
mORMot does exactly the opposite.
Of course, if you have adopted this convention then there is nothing left to do with it.
Sample application:
1. Get array of TdtoPpTool (TdtoPpToolArray)
Everything is OK.
2. Get TdtoPpTool = AV on Delphi Client.
So, really it is not a bug? ;-)
SourceCode based on sample 14 (Project14ServerHttp):
http://www.dflex.pl/dflex-ftp/mORMot/mO … roblem.zip
Between Delphi clients, it will use the "extended" JSON syntax, as documented.
With a real REST client - if and only if there is an UserAgent defined in the HTTP headers - slower standard JSON will be emitted.
OK, how to control it?
How do I set up the server so that any client (Delphi program or real REST client) gets the same JSON content?
OK, I'm going to prepare a test, because we probably misunderstood each other...
As jacll writes, the AV is on the client side, in the Delphi program.
Hello!
DTO Class
type
  TdtoPpTool = class(TdtoBaseClass)
  strict private
    FIdTool: RawUTF8;
    FToolCode: RawUTF8;
    FToolConfDescription: RawUTF8;
    FToolName: RawUTF8;
  published
    property IdTool: RawUTF8 read FIdTool write FIdTool;
    property ToolCode: RawUTF8 read FToolCode write FToolCode;
    property ToolName: RawUTF8 read FToolName write FToolName;
  end;

  TdtoPpToolArray = array of TdtoPpTool;
Interface:
IdrsPpTask = interface(IInvokable)
  ['{82314755-AD47-4E0D-BBE8-322664E9061A}']
  function GetTool(out ACTR: TdtoPpTool): TrsAnswer;
end;
Class:
function TdrsPpTask.GetTool(out ACTR: TdtoPpTool): TrsAnswer;
begin
  ACTR := TdtoPpTool.Create;
  ACTR.IdTool := 'id';
  ACTR.ToolCode := 'code';
  Result.Reset(rsaOK);
end;
Server response:
20171125 18432413 ret mORMot.TSQLRestServerFullMemory(0C60AE80) {"result":[{IdTool:"fiuck",ToolCode:"code",ToolName:""},{"Status":0,"Msg":""}],"id":12632}
Server response, please take a look:
{
  IdTool: "id",
  ToolCode: "code",
  ToolName: ""
}
This is not VALID JSON.
The property names are not quoted. Of course, the client gets an AV during JSON-to-object conversion... (Why is there no "Invalid returned JSON content" exception?)
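The point is easy to check with any strict JSON parser; here is a minimal sketch using Python's json module (the payload mimics the server response above):

```python
import json

# unquoted property names, as in mORMot's "extended" JSON syntax
extended = '{IdTool:"id",ToolCode:"code",ToolName:""}'
# standard JSON with quoted property names
strict = '{"IdTool":"id","ToolCode":"code","ToolName":""}'

try:
    json.loads(extended)
    print("extended parsed")
except json.JSONDecodeError:
    print("extended rejected")  # -> extended rejected

print(json.loads(strict)["IdTool"])  # -> id
```

So any client using a strict parser will choke on the "extended" form, which is why the UserAgent-based switching matters here.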
Interestingly, if the out parameter is an array type (TdtoPpToolArray = array of TdtoPpTool;), then all is OK.
Please AB, fix it.
Or tell me how to fix this.
PS.
Looks like a similar problem to (thanks Jacek, I owe you a bottle ;-)):
https://synopse.info/forum/viewtopic.php?id=3763
I've just introduced TObjectDynArrayWrapper.Slice as you requested.
Thank you!
BTW - Slice?
Are you sure it's the right name? ;-)
In my opinion, it's more about compacting this array than slicing it...
Hello!
If I write the code (Delphi XE and the current mORMot) according to the documentation, i.e.:
var
  DA : IObjectDynArray; // defined BEFORE the dynamic array itself
  A  : array of TOA;
  i  : integer;
begin
  DA := TObjectDynArrayWrapper.Create(A);
  for i := 1 to 10 do
    DA.Add(TOA.Create(IntToStr(i)));
end;
I get the exception: You MUST define your IObjectDynArray field BEFORE the corresponding dynamic array.
If I write:
var
  A  : array of TOA;
  DA : IObjectDynArray; // defined BEFORE the dynamic array itself
  i  : integer;
begin
  DA := TObjectDynArrayWrapper.Create(A);
  for i := 1 to 10 do
    DA.Add(TOA.Create(IntToStr(i)));
end;
Everything is OK.
Is this a bug in the manual, or what?
In addition, please add a Compact method to the DynArrayWrapper. Something like this:
  IObjectDynArrayEx = interface(IObjectDynArray)
    ['{FD8E284A-9585-4FE8-96CB-0FB98350731C}']
    procedure Compact;
  end;

  TObjectDynArrayWrapperEx = class(TObjectDynArrayWrapper, IObjectDynArrayEx)
  public
    procedure Compact;
  end;

implementation

{ TObjectDynArrayWrapperEx }

procedure TObjectDynArrayWrapperEx.Compact;
begin
  SetLength(TObjectDynArray(fValue^), Count);
end;
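A minimal usage sketch (assuming the TOA class from the earlier snippet, and the declaration order that works - the array before the wrapper):

```pascal
var
  A  : array of TOA;
  DA : IObjectDynArrayEx;
  i  : integer;
begin
  DA := TObjectDynArrayWrapperEx.Create(A);
  for i := 1 to 10 do
    DA.Add(TOA.Create(IntToStr(i)));
  // the wrapper grows Length(A) in chunks ahead of Count;
  // Compact trims the allocated length down to the actual Count
  DA.Compact;
end;
```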
Sorry, I don't see any SELECT SQL above, except my own query ;-)
I see a nonsensical part of the SELECT (both GROUP BY and DISTINCT over the PublishedMonth field? Why?) with an incorrect LIMIT (MSSQL uses TOP) appended to the end of the SELECT clause, which is invalid in MSSQL.
@leus Not exactly.
In the above query, the ORM changed LIMIT into TOP, as expected by MSSQL. The MSSQL limitation is about combining TOP with DISTINCT.
This is not true.
A query like this:
select distinct top 10 i.ItemCode from tIvItem i
works like a charm.
But this query:
select top 10 distinct i.ItemCode from tIvItem i
has invalid syntax in MSSQL.
Thank you very much. I will try to use this.
<pl mode on>Adam, Arnaud does it faster than you are able to read it<pl mode off>
Arnaud Bouchez, YOU ARE a fu....g FANTASTIC robo-architect-blogger-TOP-coder!!! AMAZING! GREAT WORK!!!!
I'm jealous ;-)
itSDS wrote: is it possible to use it under FireMonkey (cross-platform)?
Not yet.
The crossplatform clients do not support WebSockets yet - we rely on Indy for the communication on FMX, and this library does not handle them...
Hmm... just a long shot (WebSockets based on Indy):
https://github.com/andremussche/DelphiWebsockets
Should be fixed by http://synopse.info/fossil/info/f89f7d899
It works now!
Thanks a lot for finding out the root cause!
Circumventing all limitations/issues of all Delphi compilers is really a nightmare, and very time consuming!
Thanks again
No, no - I thank you very much!
PS.
Now it only remains to implement interface-based callbacks, and it will be great! ;-)
Maybe an XE bug - I don't know...
But there is a workaround (SynCommons.pas):
In the class TSynCache, the property declaration:
  property Count: integer read fNameValue.Count;
I changed to:
  property Count: integer read GetCount;
where GetCount is:
function TSynCache.GetCount: integer;
begin
  Result := fNameValue.Count;
end;
And now it's working!
Arnaud, can you confirm this?
I have no problem to compile your package with Delphi XE7.
My IDE added the following references:
requires rtl, vcl, dbrtl, FireDACCommon, FireDACCommonDriver, FireDAC;
... and implicitly imports all mORMot units. That's normal.
OK, but the package does not work in Delphi XE.
Any idea, please?
Your code won't compile as such (the override in THttpDBServer).
There is a missing parent class, obviously.
I don't think so ;-)
Please check it yourself; you can download the package here:
http://www.dflex.pl/dflex-ftp/mORMot_Package.zip
I tried to create a package containing SynCommons.pas, SynDB.pas and mORMot.
No problem with Delphi 7 and Delphi XE7.
Strange...
Which version of the compiler are you using?
Delphi XE Pro
What if you put just SynCommons in your package?
No change...
Do you know which object/class this "Count" is a member of?
Unfortunately, no.
Delphi raises the error at SynCommons.pas(1).
Which is total bull...t, because the first line of SynCommons.pas does not contain any error...
Try to disable HASINLINE conditional in Synopse.inc.
Unfortunately, I then get exactly the same error.
When I try to create a package for mORMot:
[DCC Error] SynCommons.pas(1): E2234 Getter or setter for property 'Count' cannot be found
I'm using the latest NightlyBuild revision:
http://synopse.info/files/mORMotNightlyBuild.zip
Of course, I defined the USEPACKAGES conditional in my project options.
My package source:
package Package1;
{$R *.res}
{$ALIGN 8}
{$ASSERTIONS ON}
{$BOOLEVAL OFF}
{$DEBUGINFO ON}
{$EXTENDEDSYNTAX ON}
{$IMPORTEDDATA ON}
{$IOCHECKS ON}
{$LOCALSYMBOLS ON}
{$LONGSTRINGS ON}
{$OPENSTRINGS ON}
{$OPTIMIZATION ON}
{$OVERFLOWCHECKS OFF}
{$RANGECHECKS OFF}
{$REFERENCEINFO ON}
{$SAFEDIVIDE OFF}
{$STACKFRAMES OFF}
{$TYPEDADDRESS OFF}
{$VARSTRINGCHECKS ON}
{$WRITEABLECONST OFF}
{$MINENUMSIZE 1}
{$IMAGEBASE $400000}
{$IMPLICITBUILD ON}
{$DEFINE USEPACKAGES}
requires
  rtl;
contains
  HTTPDB.Server in 'HTTPDB.Server.pas';
end.
My unit code:
unit HTTPDB.Server;
interface
uses
  SynDBVCL,
  SynDB,
  SynDBDataset,
  SynDBFireDAC,
  SynDBRemote,
  SynVirtualDataset,
  SynCommons;
type
  THttpDbServer = class
  strict private
    FDBConnectionProp: TSQLDBConnectionProperties;
    FHttpServer      : TSQLDBServerHttpApi;
    function GetDBConnectionProp: TSQLDBConnectionProperties;
    function GetHttpServer: TSQLDBServerHttpApi;
  public
    procedure BeforeDestruction; override;
    property DBConnectionProp: TSQLDBConnectionProperties read GetDBConnectionProp;
    property HttpServer: TSQLDBServerHttpApi read GetHttpServer;
  end;
implementation
uses
  uADPhysIB;
procedure THttpDbServer.BeforeDestruction;
begin
  inherited;
  FHttpServer.Free;
  FDBConnectionProp.Free;
end;
function THttpDbServer.GetDBConnectionProp: TSQLDBConnectionProperties;
begin
  if not Assigned(FDBConnectionProp) then
    FDBConnectionProp := TSQLDBFireDACConnectionProperties.Create(
      'IB?CharacterSet=WIN1250', '127.0.0.1:C:\NX\DATA.FDB', 'SYSDBA', 'masterkey');
  Result := FDBConnectionProp;
end;

function THttpDbServer.GetHttpServer: TSQLDBServerHttpApi;
begin
  if not Assigned(FHttpServer) then
    FHttpServer := TSQLDBServerHttpApi.Create(DBConnectionProp, 'FB', '8092', 'user', 'pass');
  Result := FHttpServer;
end;
end.
Thank you for your help!
Pages: 1