I never said it would be easy
- Still I am confident a solution will turn up
Anyway, whichever solution is picked (the one with one table, the one with multiple joins for derived classes, or whatever else), it should be "hidden" behind my named collection. I talk to my named collection, CRUD-ing my objects. How the collection gets and fetches data is the ORM's task.
I don't like the blob solution that much either. But in the end, just like any other programmer, I've got to have a working solution, and I actually counted on the ORM taking this 'dirty work' out of my hands...
Meanwhile I'll take a deeper look into the TSQLRecordReference. Potentially it's a good alternative to the blob solution.
Anyway, I'm sure the TSQLPropInfo redesign you are currently refactoring is still very valuable, and can mainly be regarded as independent of the inheritance problem.
I certainly hope you (or we) can find a way to support these 'named collections of objects', however it is going to be implemented.
Highest regards
Hans
... I took a look at Eric Evans' 'Domain Driven Design Quickly', and think that storing more or less arbitrary objects (including their class type, inheritance path, and base and derived properties) is definitely in the domain of what an ORM should do.
So: what these classes and properties are intended for is very likely not in the ORM's domain, but storing and retrieving them is.
Why do I, you, we all need inheritance? - Because I, you and the rest of us are programming using object orientation!
My program will consist e.g. of simulation parameters, but the main application is only aware of the fact that there are simulation parameters, not what the specifics look like. Using a plugin system I force my modules to use the simulation parameters base class, but what is defined in the inherited class is not restricted (sure, there are limits). Still, I want ONE collection of all my simulation parameters, and not a separate collection for each derived class. I believe this fits the DDD model quite well...
If I want to separate the persistence layer from my app implementation layer, then I would want to add my parameter class as a property of the TSQLRecord. The store/retrieve routine should then be smart enough to store the entire parameter class + properties, even if it is one derived from the base storage class. This is what I hope the new property mapper will bring...
In this case it would work similar to a TCollection + TCollectionItem (equivalent to table and TSQLRecord) with a TPersistent-derived property in the TCollectionItem. I imagine something like this (untested):
type
  // parameter classes
  TMyParametersBase = class(TPersistent)
  published
    property SomeBaseProperty: integer ...
  end;
  TMySpecialParameters = class(TMyParametersBase)
  published
    property MySpecialProperty: string ...
  end;
  // parameter persistency through mORMot
  TMyParameterRecord = class(TSQLRecord)
  published
    property Params: TMyParametersBase read FParams write SetParams;
  end;

I can imagine some trouble exists when you have to create the TMyParameterRecord instance while the actual Params class is still unknown (it could be TMyParametersBase as well as TMySpecialParameters, even when the property is declared as TMyParametersBase). So... a very possible solution would be something like this:
TMyParameterRecord = class(TSQLRecord)
protected
  procedure WriteParams(aSQLColumn: ???);
  procedure ReadParams(aSQLColumn: ???);
  procedure DefineSQLProperties(aSQLInterface: ???);
public // NOT published!
  property Params: TMyParametersBase read FParams write SetParams;
end;

procedure TMyParameterRecord.DefineSQLProperties(aSQLInterface: ???);
begin
  aSQLInterface.AddProperty('Params', ftBlob, ReadParams, WriteParams, Params.HasData);
end;

In the example above my 1st idea is to store it as an object resource in a blob stream. Otherwise, maybe I would store my custom object as a JSON stream in the blob. This would make things a bit more standardized, though creating objects from the stream will be a bit hard I guess.
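For the JSON-in-blob variant, something along these lines could do the trick. This is an untested sketch: it assumes mORMot's ObjectToJSON/JSONToObject helpers plus the RTL's RegisterClasses/GetClass, and the '|' separator and both function names are mine:

function ParamsToBlob(aParams: TMyParametersBase): RawUTF8;
begin
  // store the actual class name in front, so the reader knows what to construct
  result := RawUTF8(aParams.ClassName)+'|'+ObjectToJSON(aParams);
end;

function BlobToParams(const aBlob: RawUTF8): TMyParametersBase;
var P: integer;
    C: TPersistentClass;
    valid: boolean;
begin
  P := Pos('|', aBlob);
  C := GetClass(string(copy(aBlob, 1, P-1))); // needs RegisterClasses() at startup
  result := TMyParametersBase(C.Create); // recreate the *actual* class, not the base one
  JSONToObject(result, pointer(copy(aBlob, P+1, maxInt)), valid);
end;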
In my current app I use a comparable solution to store my params and calculated results in a database. My params are all derived from TComponent and registered using RegisterClasses. The NotifyComponent procedure is overridden to put the loaded params component into the correct property/field. My TCollection is a TDataset, and my TCollectionItem is the "current" record.
BUT... in a perfect world...
Being able to store classes (and derived ones) in the same table would be swell. Well, from my POV we are not actually talking about a table but much more about a named collection of objects, which could be an object itself. This collection approach would produce far cleaner code on "my side" of the ORM, as all the handling of property mapping, loading and saving is handled by the ORM. In this case I can truly forget I am using an SQL DB, and just dump my collections of objects (of different derived classes) into the ORM layer. Whether the ORM requires one or ten tables to store my objects should not be my concern.
In the latter case, having a "true" ORM, it would of course be quite important that "loading" an object not only restores (base) property values into the "existing" object; it should also be able to restore the entire object, including its actual class type and its special properties. Thus not just the base type and base properties.
That it is required to register my classes (including the derived ones) in the ORM makes perfect sense; otherwise the ORM would have no way to generate metadata or SQL statements. As a matter of fact, the VCL and FMX use the same approach with the RegisterClasses calls. When reading a stream of objects, you never know what the next class type will be unless you have read the class header. That's probably why it is in the header.
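To illustrate the round-trip (pure RTL, untested sketch, using the class names from my earlier post):

// register once at startup, exactly like the VCL does for its form classes
RegisterClasses([TMyParametersBase, TMySpecialParameters]);
...
// later, when the stored "class header" has been read back:
var C: TPersistentClass;
...
C := GetClass('TMySpecialParameters'); // nil if the class was never registered
if C<>nil then
  Params := C.Create as TMyParametersBase;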
As stated before: How the ORM actually handles the base and derived classes in the model in order to store and retrieve them should be the task of the ORM, and be transparent to the created base and derived classes that require storing and retrieval.
With the $ifdef I did not think of changing the numerical type into a UUID or string; more like supporting platforms, where on one compiler NativeInt is defined and on another some different identifier...
Allowing GUID or string record IDs would seriously complicate the underlying code, and IMHO this is not a serious requirement.
I can foresee Int64 (or UInt64 for that matter) being used in the near future.
Ab,
Is it possible to have a construction like this
TMyBaseObject = class(TPersistent);
TMySpecialObject = class(TMyBaseObject);
TMySQLRecord = class(TSQLRecord)
published
  property MyObject: TMyBaseObject ...
end;

where TMyBaseObject actually is a TMySpecialObject, which is restored to the same type it had when it was saved, when I call TMySQLRecord.FillOne? This would also meet my demands, even though it complicates things a bit.
It's not the table name mapping that I am worrying about. To me it seems to make perfect sense to have a bunch of objects in one container (let's call this the table) that are not all of exactly the same class. The REST client/server should definitely be able to store and retrieve these objects, and also to instantiate them correctly, so if I go through a collection, the object class should "change".
I fear this requirement will imply quite a bit of rethinking of mORMot if it is not supported at all.
As stated before: to me, saving and loading objects of different derived classes is a very elementary requirement for an ORM.
The first thing I can think of is that SomeSQLRecord.FillOne should be changed into Rec := SomeSQLRecordCollection.GetNext, since it is impossible to keep the same object instance when the object type changes...
I fear this is a major setback for us as I really counted on this being in mORMot
Hans
At the moment I don't think we need more than 2G objects :\
But it's more about the fact that ID is an integer now, and might be something different in the future (Int64 like in SQLite3). So I think it would generally be a good idea to define a special type (nothing fancy) for the ID.
About GUIDs: it's not really a M$ thing, it is actually known as a UUID and used on many platforms: http://en.wikipedia.org/wiki/Globally_unique_identifier though the notation with curly braces seems to be a M$ thing. I believed it to be an M$ invention too, until we did some research...
About the ID being a simple integer: IMHO an ID, especially a generated ID, should be impossible to be negative, so it should be a UINT32 or UINT64. But from what I read in the docs, this can be a bit of a problem for JSON communication.
Typing it as Int32 or Int64 instead of integer ensures the same type is defined independently of the platform it is used on.
If you want the ID to be interchangeable with pointers, you'd better use NativeUInt. And again, defining your
TSQLRecordID = NativeUInt
will come in handy, because you can put it in $ifdefs if it needs to be a different type (or type identifier) on a different platform.
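Something along these lines (just a sketch of the idea; PtrUInt is FPC's spelling of a pointer-sized unsigned integer, so each compiler gets its own branch):

type
{$ifdef FPC}
  TSQLRecordID = PtrUInt;    // FPC's native pointer-sized unsigned integer
{$else}
  TSQLRecordID = NativeUInt; // Delphi: 4 bytes on Win32, 8 bytes on Win64
{$endif}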
Hans
Consider these two classes:

TSQLMyBaseClass = class(TSQLRecord)
...
published
  property MyBaseProp: integer read FMyBaseProp write FMyBaseProp;
end;
TSQLMySpecialClass = class(TSQLMyBaseClass)
...
published
  property MySpecialProp: integer read FMySpecialProp write FMySpecialProp;
end;

I have registered them both, using the same table name, on the server side. Unfortunately, I seem to be unable to get it working; e.g. adding records failed before because the client used a different SQL table name than the server.
So I tried setting the RecordProps.SQLTableName from within the model, but now I get an AV.
What am I missing here?
I definitely want my derived class and base class to appear in the same table, so I can iterate through a whole collection of objects of different derived types from the same base type. I believe this is one of the major requirements for an ORM... :s
Regards - Hans
In the sources I have, in SQLite3Commons.pas, at about position 3592, it says
property ID: integer read GetID write fID;
AFAIK this is not an Int64, so it needs to be refactored anyway. And if we have to, then I suggest also creating a separate type for it. If the type is still the same as the old one, as I suggested before, it won't break your existing code, though it would be wise to change your ID-related variables to the same type.
In the Delphi x64 compiler, integer is still defined as Int32. Maybe not in Lazarus, I don't know. I agree Int64 is a better choice for the ID anyway. Or a TGUID. (Where did I read this before?)
Hans
Hi Ab
I'm writing an object that requires a (DB) link to another object and found that the ID is declared directly as integer. I think it would be well-advised to create a special type identifier for the TSQLRecord.ID, so code won't break if the ID is later changed to e.g. a cardinal or Int64. (I know from the doc that JSON doesn't like cardinals, but not exactly why.)
It's a bit of work right now, but may help save a lot of work in the future.
type
  TSQLRecordID = integer;
  TSQLRecord = class(TObject)
  ...
    fID: TSQLRecordID;
  ...
    property ID: TSQLRecordID ...
  ...
  end;

I see. Thx for bearing with me :>
Pity :s, but I am still not convinced. I mean, D7 also needs to perform type checking when loading dfm files etcetera, doesn't it? That seems quite hard to me if it cannot access the 'actual' type data from within the TPropInfo structure.
Looking at this (from 2002!) http://www.nldelphi.com/forum/showthread.php?t=2900
It seemed to me you could get to the TypeInfo of a property way back then.
I cannot imagine the TypeInfo not being present in D7... Maybe it is missing in Lazarus? - One could in that case always fall back to a TypeInfo.Name comparison.
Anyway, I adjusted the code to avoid the TypeData function call present in XE3, and it still works like a charm.
type
  TMyCustomObject = class(TPersistent)
  private
    FMyGUID: TGUID;
  published
    property MyGUID: TGUID read FMyGUID write FMyGUID;
  end;
  TSomeCustomRecord = record
    MyInteger: integer;
  end;

procedure TForm1.Button1Click(Sender: TObject);
var
  Obj: TMyCustomObject;
  ObjTypeInfo, GUIDTypeInfo, CustomRecordTypeInfo: PTypeInfo;
  PropList: PPropList;
  PropCnt: integer;
begin
  GUIDTypeInfo := TypeInfo(TGUID);
  CustomRecordTypeInfo := TypeInfo(TSomeCustomRecord);
  Obj := TMyCustomObject.Create;
  try
    ObjTypeInfo := Obj.ClassInfo;
    PropCnt := GetPropList(ObjTypeInfo, PropList);
    if PropList[0].PropType^ = GUIDTypeInfo then // CHANGED FOR D7 compatibility
      MessageDlg('It''s a match for TGUID!', mtInformation, [mbOK], 0)
    else
      MessageDlg('It''s not a match for TGUID!', mtError, [mbOK], 0); // ERROR if here
    if PropList[0].PropType^ = CustomRecordTypeInfo then // CHANGED FOR D7 compatibility
      MessageDlg('It''s a match for TSomeCustomRecord!', mtError, [mbOK], 0) // ERROR if here
    else
      MessageDlg('It''s not a match for TSomeCustomRecord!', mtInformation, [mbOK], 0);
  finally
    Obj.Free;
  end;
  // these assignments keep the optimizer from dropping the variables prematurely
  GUIDTypeInfo := nil;
  PropCnt := 0;
  PropList := nil;
  ObjTypeInfo := nil;
end;

Ab, I have found a clean way of checking whether a property is of type GUID. I have tested this with XE3, but I am pretty sure this also works in the other versions... It's based on comparing the TypeData pointer rather than the TypeKind.
type
  TMyCustomObject = class(TPersistent)
  private
    FMyGUID: TGUID;
  published
    property MyGUID: TGUID read FMyGUID write FMyGUID;
  end;
  TSomeCustomRecord = record
    MyInteger: integer;
  end;

procedure TForm1.Button1Click(Sender: TObject);
var
  Obj: TMyCustomObject;
  ObjTypeInfo, GUIDTypeInfo, CustomRecordTypeInfo: PTypeInfo;
  PropList: PPropList;
  PropCnt: integer;
begin
  GUIDTypeInfo := TypeInfo(TGUID);
  CustomRecordTypeInfo := TypeInfo(TSomeCustomRecord);
  Obj := TMyCustomObject.Create;
  try
    ObjTypeInfo := Obj.ClassInfo;
    PropCnt := GetPropList(ObjTypeInfo, PropList);
    if PropList[0].PropType^.TypeData = GUIDTypeInfo^.TypeData then
      MessageDlg('It''s a match for TGUID!', mtInformation, [mbOK], 0)
    else
      MessageDlg('It''s not a match for TGUID!', mtError, [mbOK], 0); // ERROR if here
    if PropList[0].PropType^.TypeData = CustomRecordTypeInfo^.TypeData then
      MessageDlg('It''s a match for TSomeCustomRecord!', mtError, [mbOK], 0) // ERROR if here
    else
      MessageDlg('It''s not a match for TSomeCustomRecord!', mtInformation, [mbOK], 0);
  finally
    Obj.Free;
  end;
  // these assignments keep the optimizer from dropping the variables prematurely
  GUIDTypeInfo := nil;
  PropCnt := 0;
  PropList := nil;
  ObjTypeInfo := nil;
end;

How about a 'DefineProperties' kind of solution, like the one used in TComponent streaming? That kind of solution also provides a means to add your own custom data types with their custom readers and writers, and testing for "empty/default" values.
I guess using RTTI you might check on the type name, and simply use 'TGUID' as the identifier. Or maybe compare pointers for the PTypeInfo of TGUID?
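For instance (an untested sketch; the function name is mine, and the two checks are exactly the options just mentioned):

function PropIsGUID(aPropInfo: PPropInfo): boolean;
begin
  // name-based check: cheap and compiler-version independent
  result := (aPropInfo^.PropType^.Kind=tkRecord) and
            (aPropInfo^.PropType^.Name='TGUID');
  // ...or, where the RTL allows it, compare the PTypeInfo pointers directly:
  // result := aPropInfo^.PropType^=TypeInfo(TGUID);
end;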
I have been thinking about it a little, and for tackling similar future issues maybe you could "open up" the type-to-field mapping using e.g. virtual functions, or by providing some means to "register" custom type mappings:
RegisterTypeToFieldType(aTypeInfo: PTypeInfo; aFieldType: TFieldType);

This of course also implies that some data handling needs to be added. So why not also add pointers to data handling routines:

TFieldToPropProc = procedure(aField: TField; aInstance: TSQLObject; aProperty: PPropInfo) of object;
TPropToFieldProc = procedure(aField: TField; aInstance: TSQLObject; aProperty: PPropInfo) of object;
procedure RegisterTypeToFieldTypeMapping(aType: PTypeInfo; aFieldType: TFieldType; aReader: TFieldToPropProc; aWriter: TPropToFieldProc);

BTW it's just a suggestion; I am quite sure you can think of better prototypes than the ones I scribbled here.
Hi Ab
I want to add a TGUID-typed field to my TSQLModel class, but unfortunately it is mapped to sftUnknown, and not even a binary or string representation is created. Debugging TSQLRecordClassToExternalFields reveals the TGUID property getting mapped to sftUnknown, which causes no type to be generated for the field.
Would you be so kind as to:
1) Add TGUID support to the native field types
2) Generate some identifier for the unknown type, e.g. "TypeUnknown", or maybe raise an exception stating which field (= property) causes the error.
Thx - Hans
Yes, but still: for my model, 'limitless' memory (as in 2^64 bytes) makes for a far more straightforward way to program a simulation. I know there are hardware limitations, but we can go disk swapping until such vast amounts of RAM are within budget.
I love this one - great job.
What we are creating is a Simulation Server. The simulation server should be able to address huge amounts of memory since we want to have a "live" model of our objects (50M+) that interact with each other.
By changing properties of our "living objects" and then "harvesting" the results we are interested in, we want to make a more or less real-time simulation. The # of objects could easily take up more than 3 GB of RAM. Next to that we will require some cache for our NexusDB server and other stuff, so yes, we are really impatient to go 64-bit.
We decided not to create a "fat client" because in the end we might want to browse the harvested data using a tablet, a phone, or the internet. Apart from that, a fat client would require sending all the object data over the network. This is definitely a no-no for us.
The fact that it's a little slower in 64-bit will be well compensated by the fact that our objects are all "alive" during a simulation session. Yes, it will require some serious server hardware, but hey, the server does not have to be portable.
(FYI: our previous version of the app fetched and processed each object from the DB one-by-one / batched, causing way too much processing time when simulating changing properties. Hence the intention to have a "live" model.)
Hans
Hi Ab
We (Bascy and I) have noticed that your testing code is in the same units as the production code. Is this a rational choice, is it like this because it is convenient for you, or just because it "grew into it" over time?
I would advise separating the production code from the testing code (no surprise here), and providing the "testing framework" completely separately, preferably in a separate UnitTest subfolder.
Which does not take away my respect for the way you maintain and write your code. I am still very impressed.
Regards
Hans
Hi
Since there is no 64-bit C(++) compiler available for XE3 (yet), I'd like to move to 64-bit, but is there another way to get a 64-bit sqlite3.obj?
Or maybe there is another way we can use SQLite3 in Delphi 64-bit somehow? E.g. using DLLs or something like that?
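The DLL route itself would be straightforward (plain Windows API; a minimal sketch assuming a 64-bit sqlite3.dll sits next to the exe, with one imported function as an example):

uses Windows, SysUtils;

var
  SQLite3Lib: HMODULE;
  sqlite3_libversion: function: PAnsiChar; cdecl;
begin
  SQLite3Lib := LoadLibrary('sqlite3.dll');
  if SQLite3Lib=0 then
    RaiseLastOSError;
  @sqlite3_libversion := GetProcAddress(SQLite3Lib, 'sqlite3_libversion');
  if Assigned(sqlite3_libversion) then
    Writeln('SQLite ', AnsiString(sqlite3_libversion())); // e.g. 'SQLite 3.7.14'
end.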
Hans
I have an identical unit test performing a Create-Read-Update-Delete sequence (CRUD)
This tries
* Direct mode
* Batched Mode
* Transaction Mode
* Batched+Transaction Mode
And it works great in NamedPipe and MessagingMode.
But... in HTTP mode, and only there, the Transaction Mode test fails. The other three work perfectly, including Batched+Transaction Mode!
It fails on this line:
CheckTrue(SQLWorkbaseRestClient.Update(Rec),'Update record failed');
after some kind of timeout (UpdateRecord takes over 5 sec). Any idea where to look? The most obvious suspect is our NexusDB driver... I have not yet had the chance to test this with e.g. the native SQLite3 or Oracle driver.
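For reference, the Transaction Mode pass boils down to this (simplified sketch; TSQLMyRec, SomeProp and aID are placeholders, and TransactionBegin/Commit/RollBack are the mORMot client calls as I understand them):

if SQLWorkbaseRestClient.TransactionBegin(TSQLMyRec) then
try
  aID := SQLWorkbaseRestClient.Add(Rec, true); // Create
  CheckTrue(SQLWorkbaseRestClient.Retrieve(aID, Rec), 'Read record failed');
  Rec.SomeProp := 'changed';
  CheckTrue(SQLWorkbaseRestClient.Update(Rec), 'Update record failed'); // <- times out here over HTTP
  CheckTrue(SQLWorkbaseRestClient.Delete(TSQLMyRec, aID), 'Delete record failed');
  SQLWorkbaseRestClient.Commit;
except
  SQLWorkbaseRestClient.RollBack;
  raise;
end;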
Hans
Yes, some of them are quite subjective
In my world, ERangeError is only raised by compiler range checking, so I decided it to be "fatal".
The EConvertError is indeed very open for discussion.
EChainedException is a special exception I have created myself to allow "automatic" RaiseOuterException behaviour. Thus, when an EChainedException is created/raised while another exception is already 'active', that one gets "owned" by the EChainedException, so I can add "high level" error messages without losing the low-level exception information. The sub-exception object is destroyed together with the chained exception object. (I had this already in D6, way before Emb. decided to implement RaiseOuterException.)
EChainedException itself should NOT be raised directly, but always as a derived class. Therefore the (underived) EChainedException itself is fatal.
That's exactly what I did - I've added URL validation to my alias handler, which ensures this problem cannot happen anymore.
Ah, I found the culprit. Indeed I forgot to 'kill' one interface in my UnitTest Teardown routine.
In my Unit tests I insist on actively terminating client sessions, just to make sure.
The nice thing about IsFatalException is that it also checks the sub-exception (if any), so an AV caught and then re-raised with RaiseOuterException, even as an EAbort, would still be a fatal exception.
try
  try
    ProduceAccessViolation;
  except
    on E: Exception do
      E.RaiseOuterException(EAbort.Create('Aborted'));
  end;
except
  if IsFatalException(ExceptObject) then // will be true, because EAbort was raised because of an AV
    ....
end;

Yes, that's a cool way to fix it too.
But... unfortunately my LCS server also supports NamedPipe and Messaging (and later DLL mode), which would probably break my code.
It's also a nice feature to have a more or less readable alias in requests, and generally it's not a bad idea to restrict an alias a bit, like regular identifiers.
In my case, the URI part after the LCSServer/ is actually intended to be a kind of database alias. I don't mind at all that aliases are subject to some validation checks. So I already implemented a validation check allowing only valid URL characters and no special characters.
http://www.blooberry.com/indexdot/html/ … coding.htm
Hans
In function TServiceFactoryClient.Invoke, an AV is raised when my interface for a dynamically created REST server is destroyed (client side). This AV is caught by the "catch all" except block in TInterfacedObjectFake.Destroy, shown further below.
I have added this check to get around the internal AV:
// compute URI according to current routing scheme
if Assigned(fClient) and Assigned(fClient.Model) then
  uri := fClient.Model.Root+'/'
else
  Exit;

Anyway, I feel any try..except block should at least re-raise low-level system errors like AVs and the like, because if you don't, these can cause very bad unexpected effects on other parts of the app.
destructor TInterfacedObjectFake.Destroy;
begin
if (fFactory<>nil) and (fFactory.InstanceCreation=sicClientDriven) and
(fClientDrivenID<>0) then // fClientDrivenID=0 if no instance used
try // release server instance
fFactory.NotifyInstanceDestroyed(fClientDrivenID);
except
; // ignore any exception here
end;
inherited;
end;

Usually these kinds of "catch all" except blocks look like this in my app:
try
dosomethingdangerous;
except
if IsFatalException(ExceptObject) then
raise;
end;

The IsFatalException routine looks something like this:
const
//List of standard Delphi exceptions that will raise a FatalChainedException
cDefaultFatalExceptions : Array [0..16] of TClass =
(EOutOfMemory, ERangeError, EIntOverflow, EInvalidOp,
EOverflow, EUnderflow, EInvalidPointer, EInvalidCast,
EConvertError, EAccessViolation, EPrivilege,
{$WARN SYMBOL_PLATFORM OFF}
EAbstractError,
{$WARN SYMBOL_PLATFORM ON}
{$WARN SYMBOL_DEPRECATED OFF}
EStackOverflow, EWin32Error,
{$WARN SYMBOL_DEPRECATED ON}
EIntfCastError, ESafecallException, EInvalidArgument
);
{ Determine if an Object is a Fatal exception
An object can be qualified as fatal in several different ways:
- a ChainedException or one of its InnerExceptions is a FatalException
- The object-class is listed in vFatalExceptions
- one of the methods in vFatalExceptionDetectors qualifies the object as fatal
}
function IsFatalException(E: TObject): Boolean;
var
i: Integer;
Addr:pointer;
begin
result := false;
if Assigned(E) then
begin
if not (E is TObject) then
Result:=True // if the exception is not derived from TObject, it's a fatal exception
else if not (E is Exception) then
Result:=True // if the exception is not derived from Exception, it's a fatal exception
else if (E.ClassType=Exception) then
Result:=True // if the exception object is exactly the Exception class rather than a derived one, it's a fatal exception
else if E is EChainedException then
Result:=EChainedException(E).Fatal // this will actually recurse
else
begin // an exception derived from exception
while not Result and Assigned(E) do
begin
// pass by list of fatal exception classes
for i := Low(vFatalExceptions) to High(vFatalExceptions) do
begin
if (E is vFatalExceptions[i]) then
begin
Result := true;
break;
end;
end;
// pass by list of fatal exception detection handlers
for i:=low(vFatalExceptionDetectors) to high(vFatalExceptionDetectors) do
begin
vFatalExceptionDetectors[i](E,Result);
if Result then
Break;
end;
if not Result then
GetInnerExceptionInfo(E,E,Addr);
end;
end;
end;
end;
initialization
RegisterFatalExceptions(cDefaultFatalExceptions);

My unit test uses GUID strings for sub-URLs, so my SQLModel.Root looks like
LCSServer/{..........}
Unfortunately, the HTTP layer translates the { into %7B, causing the URIMatch to fail.
Obviously, I can change my sources to use a different system for random aliases, but:
1) I guess there should be some check in there validating the URI, complaining about invalid characters (like {, ? and other reserved characters; see the sketch after this list),
2) OR make the URIMatch a bit smarter, so it uses the same encoding when matching the URI,
3) OR make the HTTPServer translate the HTML-encoded URI back into a regular URI?
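A sketch of what I mean for option 1 (the validation rule below is my own suggestion, nothing from the mORMot sources; for option 3, SynCommons' UrlDecode could probably be reused on the incoming URI before matching):

// only accept unreserved URI characters (RFC 3986) in an alias
function AliasIsValid(const aAlias: RawUTF8): boolean;
var i: integer;
begin
  result := false;
  if aAlias='' then
    exit;
  for i := 1 to length(aAlias) do
    if not (aAlias[i] in ['A'..'Z','a'..'z','0'..'9','-','.','_','~']) then
      exit;
  result := true;
end;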
Regards - Hans
Yes, each has its own model.
It seems like the SetDB call jumps into the SQLite3 engine, which causes the destructor to be called (maybe the wrong one?), which clears the FServer properties.
No AV encountered regarding this matter up to now.
Hans
I'm using a dynamically created TSQLRestServer, and for this I have to register my model with a certain server.
Unfortunately, the TSQLRestServer calls (through the SQLite3 engine) the destructor TSQLVirtualTableModuleServerDB.Destroy, which in turn calls CleanRegisteredVirtualTableModule.
This causes my TVirtualTableModules.FServer to be set to nil, and my CreateMissingTables call to fail.
lModel := TLCSWorkbaseModel.Create(lURI);
VirtualTableExternalRegisterAll(lModel, FConnectionProperties);
lRestServerDB := TLCSServerSQLRestServerDB.Create(lModel, lSqLite3FileName, False); // indirectly calls TSQLVirtualTableModuleServerDB.Destroy, causing FServer to become nil
lModel.Owner := lRestServerDB; // make the RestServer destroy the model
lRestServerDB.CreateMissingTables; // exception FServer=nil raised here

So for now I disabled the CleanRegisteredVirtualTableModule call from within TSQLVirtualTableModuleServerDB.Destroy, and my dynamic table creation works again. I don't get why the destructor needs this call:
destructor TSQLVirtualTableModuleServerDB.Destroy;
begin
// HH:Disabled
// if fServer<>nil then
// (fServer as TSQLRestServerDB).CleanRegisteredVirtualTableModule;
inherited Destroy;
end;

Remember I'm creating a 2nd RestServer on the fly here, while one (my "root" server) is already running.
Here's the call stack to the destructor call when I create my secondary rest server.
SQLite3.TSQLVirtualTableModuleServerDB.Destroy
:0040949b TObject.Free + $B
SynSQLite3.sqlite3_free_table($3909DD8)
:00673be1 sqlite3_free_table + $278D
SynSQLite3.sqlite3_create_module_v2(59809240,'External',$38BF43C,$38BF410,SynSQLite3.sqlite3InternalFreeObject)
:00673c60 sqlite3_create_module_v2 + $1C
:00695D34 Sqlite3::TSQLVirtualTableModuleSQLite3::SetDB(Self=:038BF410, aDB=:039777B8)
:00695DE4 Sqlite3::TSQLVirtualTableModuleServerDB::TSQLVirtualTableModuleServerDB(Self=:038BF410, aClass=:0062A310, aServer=:038B7C00, ...)
:00693898 Sqlite3::TSQLRestServerDB::InitializeEngine(Self=:038B7C00)
:00693758 Sqlite3::TSQLRestServerDB::TSQLRestServerDB(Self=:038B7C00, aModel=:0398E0E0, aDB=:039777B8, aHandleUserAuthentication=true, ...)
:0069392A Sqlite3::TSQLRestServerDB::TSQLRestServerDB(Self=:038B7C00, aModel=:0398E0E0, aDBFileName={ L"C:\\Users\\Hans\\AppData\\Roaming\\SN" }, aHandleUserAuthentication=true, aPassword={ NULL }, ...)
:00A1BAE3 Ulcs_server_main::TLCSRestServer::TLCSRestServer(Self=:038B7C00)
:00A1C5EA Ulcs_server_main::LCSRestServer()
:00A1C379 Ulcs_server_main::TLCSHttpServer::TLCSHttpServer(Self=:03953A60)
:00A1C4DA Ulcs_server_main::LCSHttpServer()
:00A3AC96 Lcs_server::initialization()
:75b4339a kernel32.BaseThreadInitThunk + 0x12
:776a9ef2 ntdll.RtlInitializeExceptionChain + 0x63
:776a9ec5 ntdll.RtlInitializeExceptionChain + 0x36

You're welcome
I'd rather suggest keeping the variable-length record, as this best reflects the (union) struct in the M$ http.h definitions. But it's up to you of course. Thanks for the quick fix!
It would make me very happy if you added the necessary CheckOSError() calls to the Http.SendHttpResponse() calls as well. And to any other place where it would make sense.
Regards
Hans
I have managed to slaughter the beast. After googling a lot, I found out there were some issues with the translation of the HTTP_DATA_CHUNK structure because of alignment differences between MS C++ and Delphi. Obviously, in XE3 these alignment differences have changed, as applying the following patch makes everything work again. Ab, I guess you will take this up a bit more cleanly using the $Include file and its defines. I have verified this is working on both XE3 and XE2 now.
Regards - Hans
{$ifdef VER240} // XE3
HTTP_DATA_CHUNK = record
case DataChunkType: THttpChunkType of
hctFromMemory: (
FromMemory: record
pBuffer: pointer;
BufferLength: ULONG;
Reserved1: ULONG;
Reserved2: ULONG;
Reserved3: ULONG;
end; );
hctFromFileHandle: (
FromFileHandle: record
ByteRange: HTTP_BYTE_RANGE;
FileHandle: THandle;
end; );
hctFromFragmentCache: (
FromFragmentCache: record
FragmentNameLength: word; // in bytes not including the #0
pFragmentName: PWideChar;
end; );
end;
{$else} // not XE3
HTTP_DATA_CHUNK = record
case DataChunkType: THttpChunkType of
hctFromMemory: (
FromMemory: record
Reserved1: ULONG;
pBuffer: pointer;
BufferLength: ULONG;
Reserved2: ULONG;
Reserved3: ULONG;
end; );
hctFromFileHandle: (
FromFileHandle: record
ByteRange: HTTP_BYTE_RANGE;
FileHandle: THandle;
end; );
hctFromFragmentCache: (
FromFragmentCache: record
FragmentNameLength: word; // in bytes not including the #0
pFragmentName: PWideChar;
end; );
end;
{$endif}
PHTTP_DATA_CHUNK = ^HTTP_DATA_CHUNK;

Ab, would you like to debug on my XE3 using TeamViewer?
Yes, in 64-bit mode it surely breaks. But for code readability I'd strongly suggest using <>nil, or even better Length(), when checking (dynamic) arrays.
Back to the XE3 problem
I cannot find a (significant) difference between the Resp^ structures in XE2 and XE3. Also, the FRequestID and Req^.RequestID seem to be fine.
Which leaves me frowning... WTF is missing here? Do you know what to look for (apart from the obvious), with the SendHttpResponse parameters being reported as invalid parameters (error 87)?
I have found one error in SynCrtSock: the pHttpResponse should NOT be a VAR parameter but rather a CONST parameter:
SendHttpResponse: function(ReqQueueHandle: THandle; RequestId: HTTP_REQUEST_ID;
Flags: integer; {var} const pHttpResponse: HTTP_RESPONSE; pReserved1: pointer;
var pBytesSent: cardinal; pReserved2: pointer=nil; Reserved3: ULONG=0;
pOverlapped: pointer=nil; pReserved4: pointer=nil): HRESULT; stdcall;

And something strange is going on here too, around line 3780 in SynCrtSock:
if integer(fCompress)<>0 then

fCompress is a THttpSocketCompressRecDynArray, so nothing like an ordinal. Maybe you mean Length(fCompress)<>0, or maybe you mean a completely different field of some object?
Adding a FALSE AND makes no difference in the XE2 compiled app (still works fine) and on the XE3 app (still fails).
Great!
I have added a CheckOSError() routine that verifies the result of Http.SendHttpResponse, and it tells me the parameter is invalid. So I guess something is messed up at the server side and not the client side. I'll investigate it some more.
procedure CheckOSError(aResult: HResult); inline;
begin
  if aResult<>NO_ERROR then
    RaiseLastOSError(aResult);
end;

* Update: there's a CheckOSError routine in System.SysUtils, but it's not inline.
I fear some regression tests regarding HTTP fail:
Synopse mORMot Framework Automated tests
------------------------------------------
1. Synopse libraries
1.1. Low level common:
- System copy record: 22 assertions passed 161us
- TDynArray: 519,427 assertions passed 151.46ms
- TDynArrayHashed: 1,200,629 assertions passed 156.41ms
- Fast string compare: 7 assertions passed 113us
- IdemPropName: 10 assertions passed 104us
- Url encoding: 105 assertions passed 1.00ms
- IsMatch: 599 assertions passed 200us
- Soundex: 35 assertions passed 89us
- Numerical conversions: 785,498 assertions passed 133.56ms
- Curr64: 20,039 assertions passed 1.46ms
- CamelCase: 5 assertions passed 78us
- Bits: 4,614 assertions passed 104us
- Ini files: 7,004 assertions passed 23.94ms
- Unicode - Utf8: 61,082 assertions passed 1.29s
- Iso8601 date and time: 24,000 assertions passed 4.70ms
- Url decoding: 1,100 assertions passed 278us
- TSynTable: 873 assertions passed 4.87ms
- TSynCache: 404 assertions passed 1.00ms
- TSynFilter: 1,005 assertions passed 2.73ms
- TSynValidate: 677 assertions passed 758us
- TSynLogFile: 42 assertions passed 646us
Total failed: 0 / 2,627,177 - Low level common PASSED 1.78s
1.2. Low level types:
- RTTI: 34 assertions passed 437us
- Url encoding: 200 assertions passed 662us
- Encode decode JSON: 250,233 assertions passed 167.71ms
Total failed: 0 / 250,467 - Low level types PASSED 171.83ms
1.3. Big table:
- TSynBigTable: 19,232 assertions passed 67.76ms
- TSynBigTableString: 16,054 assertions passed 31.64ms
- TSynBigTableMetaData: 384,060 assertions passed 1.59s
- TSynBigTableRecord: 452,185 assertions passed 4.02s
Total failed: 0 / 871,531 - Big table PASSED 5.71s
1.4. Cryptographic routines:
- Adler32: 1 assertion passed 351us
- MD5: 1 assertion passed 352us
- SHA1: 5 assertions passed 353us
- SHA256: 5 assertions passed 349us
- AES256: 6,372 assertions passed 72.21ms
- Base64: 11,994 assertions passed 119.05ms
Total failed: 0 / 18,378 - Cryptographic routines PASSED 194.34ms
1.5. Compression:
- In memory compression: 12 assertions passed 221.80ms
- Gzip format: 19 assertions passed 408.56ms
- Zip format: 36 assertions passed 757.35ms
- SynLZO: 3,006 assertions passed 84.67ms
- SynLZ: 13,016 assertions passed 282.07ms
Total failed: 0 / 16,089 - Compression PASSED 1.75s
1.6. Synopse PDF:
- TPdfDocument: 4 assertions passed 6.42ms
- TPdfDocumentGDI: 6 assertions passed 14.27ms
Total failed: 0 / 10 - Synopse PDF PASSED 21.50ms
2. mORMot
2.1. Basic classes:
- TSQLRecord: 47 assertions passed 356us
- TSQLRecordSigned: 200 assertions passed 4.90ms
- TSQLModel: 3 assertions passed 380us
Total failed: 0 / 250 - Basic classes PASSED 6.41ms
2.2. File based:
- Database direct access: 10,138 assertions passed 220.81ms
- Virtual table direct access: 12 assertions passed 3.17ms
- TSQLTableJSON: 19,030 assertions passed 47.97ms
- TSQLRestClientDB: 599,030 assertions passed 3.51s
Total failed: 0 / 628,210 - File based PASSED 3.78s
2.3. File based WAL:
- Database direct access: 10,138 assertions passed 224.36ms
- Virtual table direct access: 12 assertions passed 1.42ms
- TSQLTableJSON: 19,030 assertions passed 43.01ms
- TSQLRestClientDB: 599,030 assertions passed 3.44s
Total failed: 0 / 628,210 - File based WAL PASSED 3.71s
2.4. Memory based:
- Database direct access: 10,136 assertions passed 197.53ms
- Virtual table direct access: 12 assertions passed 1.30ms
- TSQLTableJSON: 19,030 assertions passed 37.69ms
- TSQLRestClientDB: 667,323 assertions passed 4.05s
Total failed: 0 / 696,501 - Memory based PASSED 4.29s
2.5. Client server access:
- TSQLite3HttpServer: 21 assertions passed 8.01ms
using THttpApiServer
! - TSQLite3HttpClient: 1 / 1 FAILED 26.68ms
! - Http client keep alive: 48 / 84 FAILED 23.16ms
first in 22.18ms,
! - Http client multi connect: 48 / 84 FAILED 14.61ms
first in 13.65ms,
- Named pipe access: 3,085 assertions passed 600.30ms
first in 60.84ms, done in 139.89ms i.e. 7148/s, aver. 139us, 33.4 MB/s
- Local window messages: 3,084 assertions passed 65.89ms
first in 1.34ms, done in 61.64ms i.e. 16221/s, aver. 61us, 75.8 MB/s
- Direct in process access: 3,052 assertions passed 59.11ms
first in 816us, done in 58.13ms i.e. 17201/s, aver. 58us, 80.4 MB/s
Total failed: 97 / 9,411 - Client server access FAILED 802.38ms
2.6. Service oriented architecture:
- Weak interfaces: 56 assertions passed 382us
- Service initialization: 127 assertions passed 2.21ms
- Direct call: 596,163 assertions passed 38.47ms
- Server side: 596,173 assertions passed 38.40ms
- Client side REST: 596,175 assertions passed 546.46ms
- Client side JSONRPC: 596,173 assertions passed 621.30ms
- Client side synchronized REST: 596,173 assertions passed 1.27s
- Security: 135 assertions passed 1.11ms
- Custom record layout: 596,173 assertions passed 567.17ms
Total failed: 0 / 3,577,348 - Service oriented architecture PASSED 3.09s
2.7. External database:
- External records: 1 assertion passed 612us
- Auto adapt SQL: 168 assertions passed 10.60ms
- Crypted database: 253,272 assertions passed 265.75ms
- External via REST: 243,436 assertions passed 908.07ms
- External via virtual table: 243,436 assertions passed 1.65s
Total failed: 0 / 740,313 - External database PASSED 2.84s
Synopse framework used: 1.17
SQlite3 engine used: 3.7.12.1
Generated with: Delphi XE3 compiler
Time elapsed for all tests: 28.19s
Tests performed at 4-10-2012 16:12:32
Total assertions failed for all test suits: 97 / 10,063,895
! Some tests FAILED: please correct the code.
Done - Press ENTER to Exit

I'd be more than happy to fix it for you in XE3. What things should I consider looking for?
BTW I can confirm no errors are reported in XE2
Hans
It turns out that when destroying a TSQLRestServerDB, the registered TSQLVirtualTableModules may already have been destroyed...
This causes a very ugly AV in the destructor
destructor TSQLRestServerDB.Destroy;
var i: integer;
begin
try
for i := 0 to high(fRegisteredVirtualTableModule) do
if fRegisteredVirtualTableModule[i]<>nil then
// to avoid GPF in TSQLVirtualTable.Destroy e.g.
fRegisteredVirtualTableModule[i].fServer := nil; // MAY VERY WELL ACCESS A DESTROYED COMPONENT!!!
inherited Destroy;
finally
try
fStatementCache.ReleaseAllDBStatements;
finally
fOwnedDB.Free;
end;
end;
end;

Therefore I added an empty virtual "unregister" method to TSQLRestServer, and an overridden one to TSQLRestServerDB:
// for de-registering a virtual table module that gets destroyed for some reason
procedure TSQLRestServerDB.UnregisterVirtualTableModule(aVirtualTableObject: TSQLVirtualTableModule);
var i: integer;
begin
  for i := low(fRegisteredVirtualTableModule) to high(fRegisteredVirtualTableModule) do
    if fRegisteredVirtualTableModule[i]=aVirtualTableObject then
      fRegisteredVirtualTableModule[i] := nil;
end;

And added a destructor to TSQLVirtualTableModule:
destructor TSQLVirtualTableModule.Destroy;
begin
  if Assigned(fServer) then
    fServer.UnregisterVirtualTableModule(self);
  inherited; // for debugging purposes
end;

And now, my ugly AV has disappeared!
My suggestion would be to use a TObjectList or TList for maintaining the list of objects, rather than the fRegisteredVirtualTableModule array of objects, combined with a register/unregister kind of way to manage the list of virtual tables in a TSQLRestServerDB. Hell, why not add an indexed property VirtualTableModule[idx: integer] and a VirtualTableModuleCount for that matter...
Or maybe a (protected/public) VirtualTableModuleList:TObjectList ...
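Roughly like this (a sketch only; the field and method names are mine, not taken from the mORMot sources):

procedure TSQLRestServerDB.RegisterVirtualTableModule(aModule: TSQLVirtualTableModule);
begin
  if fVirtualTableModules=nil then
    fVirtualTableModules := TList.Create; // holds references only, no ownership
  fVirtualTableModules.Add(aModule);
end;

procedure TSQLRestServerDB.UnregisterVirtualTableModule(aModule: TSQLVirtualTableModule);
begin
  if fVirtualTableModules<>nil then
    fVirtualTableModules.Remove(aModule); // no dangling reference left behind
end;

function TSQLRestServerDB.GetVirtualTableModuleCount: integer;
begin
  if fVirtualTableModules=nil then
    result := 0
  else
    result := fVirtualTableModules.Count;
end;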
Regards - Hans
Using RAD Studio XE3 (not knowing if this is the cause of the issue):
InternalSendRequest raises an EOSError Code 12030 on the WinHttpReceiveResponse call.
I have turned off my Firewall, and verified my server is running.
Even better: I am quite sure the server receives the requests, as entries are added to the log:
C:\Users\Hans\Sources\LCSS\bin\D17\Win32\LCS_Server.exe 0.0.0.0 (2012-10-04 11:26:46)
Host=OBELIX User=Hans CPU=8*9-21-258 OS=13.1=6.1.7601 Wow64=1 Freq=4042841
TSQLLog 1.17 2012-10-04T12:55:48
20121004 12554824 srvr TLCSRestServer(7EE747F0) GET LCSServer/TimeStamp -> 200
20121004 12555255 srvr TLCSRestServer(7EE747F0) GET LCSServer/auth?UserName=Admin -> 200
20121004 12555357 srvr TLCSRestServer(7EE747F0) GET LCSServer/auth?UserName=Admin -> 200
20121004 12555750 srvr TLCSRestServer(7EE747F0) GET LCSServer/TimeStamp -> 200
20121004 12555949 srvr TLCSRestServer(7EE747F0) GET LCSServer/TimeStamp -> 200
20121004 12560028 srvr TLCSRestServer(7EE747F0) GET LCSServer/auth?UserName=Admin -> 200
20121004 12560104 srvr TLCSRestServer(7EE747F0) GET LCSServer/auth?UserName=Admin -> 200

Nevertheless, the client appears to be unable to receive the responses to requests which are actually marked on the server as successful (200). In the above I'm just trying to make an authorized connection.
I don't get what I'm doing wrong here. Maybe it's something with the PrepServer code? This routine runs in administrator mode without any trouble:
procedure PrepLCSServer;
VAR R:string;
begin
R:=THttpApiServer.AddUrlAuthorize(cLCSServerName, cLCSDefaultNetworkPort, false);
if R>'' then
raise Exception.CreateFmt('PrepLCSServer failed. Error [%s]',[R]);
end;

Could it have something to do with your recent changes to allow sub-URLs?
FYI, the same test routines work without a problem using the named pipe and message clients; they use the same REST server instance, so I think the REST server is not the real problem, but something in the HTTP communication wrapper is.
Help! I'm losing it
Hans
OK, I missed that. - Thx
Just rethinking this: shouldn't there be (or is it already there) some kind of timeout on the message send/wait on the client side? Now my client "freezes", and no error is returned until I shut down my server.
I wouldn't want all my clients to freeze without an error message if my server hangs...
OK. Thx
I can confirm this is fixed.
My server app is a console app, waiting for a (console) readln to finish. This made the client app wait endlessly for a response.
I had to add a "MessageDlg" call to my server main routine, obviously to ensure there is a message handling loop. Now it works fine.
I must say I actually expected the TSQLRestServer to implement the message loop...? So: is mORMot missing a message handling loop, or is it working as designed and did I miss something?
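For the record, this is all the server main routine really needed (a standard Win32 message pump, which the MessageDlg call happens to provide internally; nothing mORMot-specific in here):

uses Windows;

var Msg: TMsg;
begin
  // ... create and start the TSQLRestServer here ...
  while GetMessage(Msg, 0, 0, 0) do
  begin
    TranslateMessage(Msg);
    DispatchMessage(Msg);
  end;
end.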
Yay - that sounds great! (waiting impatiently)
So no priority here
Maybe we could define a standard "interface query" interface which allows querying the # of interfaces, an enumerator returning the hashes of the available interfaces, and next an interface query which returns the interface name and GUID, as well as (named) methods and properties. None of this would have to be SOAP; it could be provided through the regular JSON/mORMot interfaces.
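A possible shape for such an interface (pure speculation on my side; nothing like this exists in mORMot today, and the GUID below is a random placeholder):

type
  IInterfaceCatalog = interface(IInvokable)
    ['{8A2A0F3C-5B1D-4C0E-9C77-3F2D1A6B9E01}']
    function InterfaceCount: integer;
    function InterfaceName(aIndex: integer): RawUTF8;
    function InterfaceGUID(aIndex: integer): TGUID;
    // method and property names of the aIndex-th registered interface
    function MethodNames(aIndex: integer): TRawUTF8DynArray;
  end;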
Ah that's the cause.
Thx for the quick solution.
Considering the fact that it is required to have the interfaces registered at both the client and server side, it seems very hard to create a generic mORMot ORM browser.
Is it possible to create some kind of mORMot enterprise manager, or mORMot explorer, that allows browsing and investigating the interfaces and (virtual) tables provided by an arbitrary mORMot server?
I think it would be a very welcome addition to the mORMot framework to have this kind of "enterprise manager".
Hans
InitRestClient is a virtual method that allows different communication modes to be tested with the same base TTestCase. As a matter of fact, it's the only routine that is overridden in my derived test case classes.
procedure TLCSServerTestBase.SetUp;
begin
  inherited;
  FModel := TLCSModel.Create(cLCSServerName);
  InitRestClient(FSQLRestClient); // virtual method call
  CheckTrue(FSQLRestClient.SetUser('Admin', 'synopse'), 'Authentication failed');
  CheckTrue(FSQLRestClient.ServiceRegister([TypeInfo(ILCSServer), TypeInfo(ILCSAliasManager)], sicShared), 'ServiceRegister failed for ILCSServer');
  CheckTrue(FSQLRestClient.Services.Info(TypeInfo(ILCSServer)).Get(FLCSServer), 'Get(FLCSServer) failed');
  CheckNotNull(LCSServer, 'LCS Server Interface not retrieved from server');
  CheckTrue(FSQLRestClient.Services.Info(TypeInfo(ILCSAliasManager)).Get(FLCSAliasManager), 'Get(FLCSAliasManager) failed');
  CheckNotNull(LCSAliasManager, 'LCS AliasManager Interface not retrieved from server');
end;

What might help is the actual error message:
No "LCSServer" window available - server may be downThe server is defenitely not down as far as I know, as the other tests (HTTP and Name pipe) run OK.
Could it be something stupid like an invalid window name? I found in the mORMot sources that it is actually not the window name that is set, but the window class name. Also, the client seems to look for the window by class name and not the actual window name, so that looks OK.