Consider these two classes:

  TSQLMyBaseClass = class(TSQLRecord)
  ...
  published
    property MyBaseProp: integer read FMyBaseProp write FMyBaseProp;
  end;

  TSQLMySpecialClass = class(TSQLMyBaseClass)
  ...
  published
    property MySpecialProp: integer read FMySpecialProp write FMySpecialProp;
  end;
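For completeness, this is roughly how the model is declared (sketch only): with a plain declaration each class gets its own table, which is exactly what I want to avoid.

  var
    Model: TSQLModel;
  begin
    // a plain declaration like this maps TSQLMyBaseClass and TSQLMySpecialClass
    // to two separate tables, one per class
    Model := TSQLModel.Create([TSQLMyBaseClass, TSQLMySpecialClass]);
  end;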
I have registered them both using the same table name on the Server side. Unfortunately, I have been unable to get it working: e.g. adding records failed because the client used a different SQL table name than the server.
So I tried setting the RecordProps.SQLTableName from within the model, but now I get an AV.
What am I missing here?
I definitely want my derived class and base class to appear in the same table, so I can iterate through a whole collection of objects of different derived types sharing the same base type. I believe this is one of the major requirements for an ORM... :s
Regards - Hans
Offline
Yes, the default table name mapping is fixed and read-only.
What is possible is to use an external database: there you can specify a custom table name at the model level.
The main DB could even be a pure SQLite3 in-memory table, only used to address the external tables.
Note that performance may be less efficient in some cases (e.g. with JOINed requests), and TSQLRecordMany features are not fully tested with external DBs.
But in this case, you can set up your external table name.
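For instance, something like this (a minimal sketch, assuming a SynDB connection properties instance named Props already exists; unit and function names may differ slightly depending on your revision):

  Model := TSQLModel.Create([TSQLMyBaseClass]);
  // map the class to a custom table name in the external database
  VirtualTableExternalRegister(Model, TSQLMyBaseClass, Props, 'MY_CUSTOM_TABLE');
  // the main DB is a pure in-memory SQLite3, only used to address the external table
  Server := TSQLRestServerDB.Create(Model, ':memory:');
  Server.CreateMissingTables(0);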
If you need the same table to be accessed via several inherited classes, this is a new feature, not implemented yet.
But I suspect that if you need such a feature, you are putting too much business logic in your persistence layer.
Offline
It's not the table name mapping that I am worried about. To me it makes perfect sense to have a bunch of objects in one container (let's call it the table) that are not all of exactly the same class. The REST client/server should definitely be able to store and retrieve these objects, and also to instantiate them correctly, so that when I go through a collection, the object class "changes" accordingly.
I fear this requirement will imply quite a bit of re-thinking of mORMot if it is not supported at all.
As stated before: to me, saving and loading objects of different derived classes is a very elementary requirement for an ORM.
The first thing I can think of is that SomeSQLRecord.FillOne would have to be changed into Rec := SomeSQLRecordCollection.GetNext, since it is impossible to keep the same object instance when the object type changes...
I fear this is a major setback for us, as I really counted on this being in mORMot.
Hans
Offline
Ab,
Is it possible to have a construction like this:

  TMyBaseObject = class(TPersistent);
  TMySpecialObject = class(TMyBaseObject);

  TMySQLRecord = class(TSQLRecord)
  published
    property MyObject: TMyBaseObject ...
  end;

where TMyBaseObject actually is a TMySpecialObject, which is restored to the same type it had when it was saved, when I call TMySQLRecord.FillOne? This would also meet my demands, even though it complicates things a bit.
Offline
I would not follow this path.
Here are some general remarks:
Why do you need inheritance here?
If it is because in some cases you need fewer columns of data from the database, you could rather use the parent class.
There is an option to specify which fields are to be retrieved from the DB in the FillPrepare() method.
So you can easily mimic "inheritance" with respect to properties, i.e. in the extent of columns retrieved.
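Something along these lines (a sketch only; Client stands for your TSQLRestClient instance, and the exact FillPrepare() overload with a custom fields CSV may differ between revisions):

  var
    Rec: TSQLMyBaseClass;
  begin
    Rec := TSQLMyBaseClass.Create;
    try
      // retrieve only the columns of the base level, even if some rows
      // were written by a class with additional published properties
      if Rec.FillPrepare(Client, 'MyBaseProp>=0', 'MyBaseProp') then
        while Rec.FillOne do
          UseBaseProperties(Rec); // hypothetical consumer
    finally
      Rec.Free;
    end;
  end;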
What we can implement easily in mORMot is the following.
If you create a TSQLRecord instance which is not part of the database model, but which is a sub-class of an existing class of the model, any CRUD operation, including FillPrepare(), could:
1) accept to work with this sub-class, and use the main class declared in the model for its ORM mapping;
2) by default only get the fields defined in the sub-class, ignoring the main class.
That is, the sub-class won't be part of the DB model, which still fits the DB layout, but you would be able to easily work with sub-classes of a main class.
I suppose this would be enough for your need.
I've added the point to the roadmap.
I'll still reiterate my previous remark: IMHO you are going too low-level in your model, starting from the DB and not from the domain model.
See http://devlicio.us/blogs/casey/archive/ … abase.aspx
From my experience, there should be a layer (at least a logical one) between the repository and the business model.
That is what I like with Domain-Driven Design, in contrast with Database-Driven Design: the so-called "Persistence Ignorance" is a good pattern.
And, with mORMot, it is not in contradiction with execution speed and development ROI.
Offline
Why do I, you, we all need inheritance? Because I, you and the rest of us are programming using object orientation!
My program will consist e.g. of simulation parameters, but the main application is only aware of the fact that there are simulation parameters, not of what the specifics look like. Using a plugin system I force my modules to use the simulation parameters base class, but what is defined in the inherited class is not restricted (sure, there are limits). Still, I want ONE collection of all my simulation parameters, and not a separate collection for each derived class. I believe this fits the DDD model quite well...
If I want to separate the persistence layer from my app implementation layer, then I would want to add my parameter class as a property of the TSQLRecord. The store/retrieve routine should then be smart enough to store the entire parameter class and its properties, even if it is one derived from the base storage class. This is what I hope the new property mapper will bring...
In this case it would work similarly to a TCollection + TCollectionItem (equivalent to table and TSQLRecord) with a TPersistent-derived property in the TCollectionItem. I imagine something like this (untested):
type
  // parameter classes
  TMyParametersBase = class(TPersistent)
  published
    property SomeBaseProperty: integer ...
  end;

  TMySpecialParameters = class(TMyParametersBase)
  published
    property MySpecialProperty: string ...
  end;

  // parameter persistency through mORMot
  TMyParameterRecord = class(TSQLRecord)
  published
    property Params: TMyParametersBase read FParams write SetParams;
  end;
I can imagine some trouble when you have to create the TMyParameterRecord instance and the actual Params class is still unknown (it could be TMyParametersBase as well as TMySpecialParameters, even when the property is declared as TMyParametersBase). So... a very possible solution would be something like this:
  TMyParameterRecord = class(TSQLRecord)
  protected
    procedure WriteParams(aSQLColumn: ???);
    procedure ReadParams(aSQLColumn: ???);
    procedure DefineSQLProperties(aSQLInterface: ???);
  public // NOT published!
    property Params: TMyParametersBase read FParams write SetParams;
  end;

procedure TMyParameterRecord.DefineSQLProperties(aSQLInterface: ???);
begin
  aSQLInterface.AddProperty('Params', ftBlob, ReadParams, WriteParams, Params.HasData);
end;
In the example above my first idea is to store it as an object resource in a blob stream. Otherwise, maybe I would store my custom object as a JSON stream in the blob. This would make things a bit more standardized, though creating objects from the stream will be a bit hard, I guess.
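A rough sketch of what I mean by the JSON variant (untested; I assume the ObjectToJSON/JSONToObject helpers and the standard RegisterClasses/FindClass mechanism can be combined like this, with the class name stored next to the JSON so the original type can be re-created):

  function ParamsToBlob(Params: TMyParametersBase): TSQLRawBlob;
  begin
    // store the actual class name together with the published properties as JSON
    Result := FormatUTF8('%'#1'%', [Params.ClassName, ObjectToJSON(Params)]);
  end;

  function BlobToParams(const aBlob: TSQLRawBlob): TMyParametersBase;
  var P: PUTF8Char;
      StoredClass: RawUTF8;
      Valid: boolean;
  begin
    P := pointer(aBlob);
    StoredClass := GetNextItem(P, #1); // class name written by ParamsToBlob
    // the class (and all its descendants) must have been registered via RegisterClasses
    Result := TMyParametersBase(FindClass(string(StoredClass)).Create);
    JSONToObject(Result, P, Valid);    // restore the published properties
  end;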
I currently use a similar solution to store my params and calculated results in a database. My params are all derived from TComponent, and registered using RegisterClasses. The NotifyComponent procedure is overridden to put the loaded params component into the correct property/field. My TCollection is a TDataSet, and my TCollectionItem is the "current" record.
BUT... in a perfect world:
Being able to store classes (and derived ones) in the same table would be swell. Well, from my POV we are not actually talking about a table, but much more about a named collection of objects, which could be an object itself. This collection approach would produce far cleaner code on "my side" of the ORM, as all the handling of property mapping, loading and saving is done by the ORM. In this case I can truly forget I am using an SQL DB, and just dump my collections of objects (of different derived classes) into the ORM layer. Whether the ORM requires one or ten tables to store my objects should not be my concern.
In the latter case, having a "true" ORM, it would of course be quite important that "loading" an object not only restores (base) property values into the "existing" object, but that it is also able to restore the entire object, including its actual class type and its special properties, thus not just the base type and base properties.
That it is required to register my classes (including the derived ones) in the ORM makes perfect sense; otherwise the ORM would have no way to generate metadata or SQL statements. As a matter of fact, the VCL and FMX use the same approach with the RegisterClasses calls. When reading a stream of objects, you never know what the next class type will be unless you have read the class header. That's probably why it is in the header.
As stated before: how the ORM actually handles the base and derived classes in the model in order to store and retrieve them should be the task of the ORM, and be transparent to the base and derived classes that require storing and retrieval.
Last edited by h.hasenack (2012-10-21 20:40:22)
Offline
... I took a look at Eric Evans' 'Domain Driven Design Quickly', and I think that storing more or less arbitrary objects (including their class type, inheritance path, and base and derived properties) is definitely within the domain of what an ORM should do.
So: what these classes and properties are intended for is very likely not in the ORM's domain, but storing and retrieving them is.
Offline
You are right about the current limitation of mORMot regarding inheritance.
There is only one mapping scheme for now, which is one table per class.
That is, there is a direct mapping between classes and tables.
You can inherit classes, but each should be stored in its own table. You cannot persist a collection of mixed classes.
Note that there is a nice feature of mORMot named TRecordReference, demonstrated e.g. in the SynFile main demo, which lets you create one-to-one or one-to-many relationships from one table to several tables.
See for instance in FileTables.pas how TSQLAuditTrail.AssociatedRecord() is defined as a TRecordReference, and is then able to point to either a TSQLMemo, a TSQLData or a TSQLSafeMemo/TSQLSafeData. That is, you can have one-to-one or one-to-many relationships between one table and several tables (here Memo / Data / SafeMemo / SafeData), with retrieval of the corresponding class on the fly.
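Retrieval is then a matter of one call; here is a minimal sketch (Audit stands for any record instance owning such a published TRecordReference field, Client for any TSQLRest, and it assumes the Retrieve(Reference) overload mentioned in the interface documentation):

  var
    Rec: TSQLRecord;
  begin
    // Retrieve() returns an instance of whatever class the reference points to, or nil
    Rec := Client.Retrieve(Audit.AssociatedRecord);
    if Rec <> nil then
    try
      // Rec.ClassType is now TSQLMemo, TSQLData, TSQLSafeMemo or TSQLSafeData
      ProcessRecord(Rec); // hypothetical consumer
    finally
      Rec.Free;
    end;
  end;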
I do not like the "BLOB" or "JSON" serialization trick very much.
But it can be done right now, with the current mORMot features, to implement your pattern.
Your parent class may have a RawUTF8 or TSQLRawBlob field, and each inherited class will then have its own properties, with getters and setters, to retrieve and set the information within the parent's storage field. Everything needed for binary or JSON serialization is available.
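A rough sketch of that shape (class and property names are only illustrative; a real implementation would serialize all the sub-class properties, e.g. as JSON, into the parent field rather than a single naive value):

  type
    TSQLParamsBase = class(TSQLRecord)
    protected
      fStorage: RawUTF8;
    published
      // the only column actually known to the ORM / table layout
      property Storage: RawUTF8 read fStorage write fStorage;
    end;

    TSQLSpecialParams = class(TSQLParamsBase)
    protected
      function GetSpecialValue: integer;
      procedure SetSpecialValue(const Value: integer);
    public
      // NOT published: stored inside the parent Storage column
      property SpecialValue: integer read GetSpecialValue write SetSpecialValue;
    end;

  function TSQLSpecialParams.GetSpecialValue: integer;
  begin
    Result := GetInteger(pointer(fStorage)); // naive: Storage holds just this value
  end;

  procedure TSQLSpecialParams.SetSpecialValue(const Value: integer);
  begin
    fStorage := Int32ToUtf8(Value);
  end;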
Not very efficient, and not SQL/RDBMS ready, unless you create your own SQL function to search within the content. And even in this case, you will lose the indexing features of the DB.
If you compare e.g. with Hibernate, you'll see that there are several ways of implementing inheritance persistence, with correct SQL mapping.
See http://docs.jboss.org/hibernate/orm/3.6 … tance.html
We would need to implement "Table per class hierarchy" in mORMot.
(added in the project RoadMap)
The current TSQLPropInfo* refactoring won't help much here: this is a new design of ORM mapping.
I suspect the TSQLModel shall be enhanced to handle such cases, i.e. create a global table with a "discriminator" column, which will reflect the class type stored. A TSQLModelRecordProperties sub-class could be defined to handle such information.
I've moved SQLFromSelectWhere() from a global function to a TSQLModel method, ready to handle the "discriminator" column.
In fact, this is not very efficient from the RDBMS point of view (adding the "discriminator" column is not optimal, even if it could be fast in practice when an index is defined), but it is much better than the BLOB/JSON approach.
The "Table per subclass" pattern would need a JOIN between several tables, one per sub-class.
I do not think we will implement it for now.
Neither the "Map one table per concrete class with union-subclass mapping" pattern, which is far from the KISS principle, relying heavily on SQL UNIONs.
Offline
I never said it would be easy. Still, I am confident a solution will turn up.
Anyway, whichever solution is picked (one table, multiple joins for derived classes, or anything else), it should be "hidden" from my named collection. I talk to my named collection, CRUD-ing my objects. How the collection gets and fetches data is the ORM's task.
I don't like the blob solution that much either. But in the end, just like any other programmer, I've got to have a working solution, and I actually counted on the ORM taking this 'dirty work' out of my hands...
Meanwhile I'll take a deeper look into the TSQLRecordReference. Potentially it's a good alternative to the blob solution.
Anyway, I'm sure the TSQLPropInfo redesign you are currently refactoring is still very valuable, and can mainly be regarded as independent of the inheritance problem.
I certainly hope you (or we) can find a way to provide these 'named collections of objects', however it is going to be implemented.
Highest regards
Hans
Offline
TRecordReference - risky business?
I noticed this in SQLite3Commons.pas:
/// a reference to another record in any table in the database Model
// - stored as an 32 bits unsigned integer (i.e. a pointer=TObject)
// - type cast any value of TRecordReference with the RecordRef object below
// for easy access to its content
// - use TSQLRest.Retrieve(Reference) to get a record value
// - don't change associated TSQLModel tables order, since TRecordReference
// depends on it to store the Table type in its highest bits
TRecordReference = type PtrUInt;
So this means that when another table is added to my model (which might be inserted before or after the referred table, depending on its name or the loaded plugins), my TRecordReference fields become invalid, simply because during downtime my table order may have changed. Using part of the integer as a table index is kind of awkward IMO.
Anyway, this is pretty bad when building a plugin-based data model, as I cannot (and do not want to) guarantee that my tables are registered in the same order, or for that matter that they will all still be there once the app has been shut down and restarted. Plugins may be (un)installed during downtime, causing the data model to change between sessions. Anyway, this should definitely NOT break my data model. (Well, at least not when a table that is not "used" by any record reference is removed/added.)
Wouldn't it be much more sensible to have a "combined" link, similar to TMethod (ObjectInstancePointer, ProcPointer), where the first pointer points to the targeted class (or table) and the second pointer points to the actual target record, or contains the record ID? Wouldn't it be a lot safer if it were something like this:
  TSQLRecordReference = record
    case Resolved: boolean of
      false: (TargetRecordID: TSQLRecordID;
              TargetTableClassName: ShortString); // class name instead of a table index
              // (a managed string type is not allowed in a variant part)
      true:  (TargetRecord: TSQLRecord);
  end;
It is probably not as efficient as an int32, but it is surely a lot safer.
(PS: I didn't check the syntax of the record, but I'm pretty sure you understand what I mean.)
(PPS: I failed to create a new topic on TRecordReference, I got some SMTP timeout, so I'm trying to post it here. Preview worked just fine.)
Offline
This is the error I got when trying to post the new topic:
Error: Could not connect to smtp host "smtp.ictuswin.com" (0) (php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution).
Offline
IMHO about inheritance:
I agree 100% with AB that the "JSON" serialization trick is not a good solution. I also don't like the "Table per subclass" pattern, because of the JOIN (OUTER JOIN!) operations.
But "Table per class hierarchy" is not very good either. This pattern creates 2 BIG problems in big databases:
1) a "sparse table": for a "regular employee" 3 fields are always NULL (hourlyRate, perDiem, ContractPeriod), and for a "contract employee" 2 fields are always NULL (salary and bonus). In a real project we have many more fields, so many more NULLs in the table. Creating an index on such fields becomes ineffective, and the tablespace is used ineffectively too.
2) many RDBMSs have a limitation of 255 columns per table, so if we have many different Employee descendants, each with a unique set of fields, we can easily reach this limitation. It is a real problem, for example, in IBM FileNet.
For inheritance emulation in my project I use "table per class" as here,
but add a second table
EMP(id, name, location, class_name)
and in every CRUD operation on RegularEmployee and ContractEmployee I modify (insert/update/delete) the corresponding record in the EMP table:
EMP.ID = REG_EMP.ID, EMP.name = REG_EMP.name, EMP.class_name = 'RegularEmployee'
(I have one ID generator per database, not per table, so I can easily keep EMP.ID = REG_EMP.ID). It could be done by database triggers on the *Employee tables in the current mORMot project (but I rewrote TSQLRecord).
I define a read-only Employee = TSQLRecord that points to EMP, so I can get all my employees, and reference an employee from another class.
I can easily select all RegularEmployee records without a JOIN.
It's not a normalized database form: some fields are duplicated, and I limit inheritance to one level (it is not possible to define ContractEmployee descendants), but this schema gives very good performance on big databases.
To apply this pattern to mORMot we need only 2 things:
1) One ID generator per database
2) Something like
  TEmployee = class(TSQLRecordUnity) // we need to make it read-only
  published
    name
    location
    class_name // something like the "discriminator" in Hibernate
  end;

  TRegularEmployee = class(TEmployee)
  published
    salary
    bonus
  end;

  TContractEmployee = class(TEmployee)
  published
    hourlyRate
    perDiem
    contractPeriod
  end;
Then, in the TSQLRecord CRUD generation, check the actual descendant and generate 2 inserts/updates/deletes: one for TEmployee and one for our class.
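On the read side, the dispatch could look like this (a rough sketch using the class names from above; Client is any TSQLRest instance, and both classes are supposed to be part of the model):

  var
    Emp: TEmployee;
    Reg: TRegularEmployee;
  begin
    // first read the summary row, which carries the discriminator
    Emp := TEmployee.Create(Client, SomeID);
    try
      if Emp.class_name = 'RegularEmployee' then begin
        // then read the concrete row from the per-class table, using the same ID
        Reg := TRegularEmployee.Create(Client, Emp.ID);
        try
          UseRegularEmployee(Reg); // hypothetical consumer
        finally
          Reg.Free;
        end;
      end;
    finally
      Emp.Free;
    end;
  end;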
Sorry for my English.
Offline
I also don't like the JSON/BLOB solution of storing the objects in a stream.
I like the TSQLRecordReference solution a bit better (which boils down to the same thing: storing the "special" class data somewhere else than in the "main table"), apart from the fact that it is currently not very robust when the data model changes.
About the solution with one ID generator: this means every TSQLRecord has its own ID (IMO not bad at all), and somehow resides in a (global) collection (not that bad either). Nevertheless, you will probably want to be able to "quick-select" all objects of a certain class:
When picking a base class, the derived objects should also be included. When picking a derived class, the base class objects should NOT be included. This seems doable, provided there is some smart table memorizing the class hierarchy. If you want a named collection, one can add a TSQLRecordCollection that holds the IDs of all objects belonging to that collection. In this case, the global collection simulates the "heap" whilst the named collection simulates a TList. It also implies some serious trouble when a TSQLRecord object is deleted from the DB, as it must then be removed from all lists referring to it as well.
About the multi table or single table solution for inheritance:
Well, first of all I don't care "that much" about how the ORM stores its data. (Of course I am interested in the solution, but it should be of no concern to me. My app interfaces with the ORM, not with the underlying DB.)
I do not specifically want/need a table per inheritance tree. As stated before, what I do want is: multiple (named) collections of objects (in this case TSQLRecord). And this TSQLNamedRecordCollection should be able to perform the CRUD actions. It seems logical to me, however, to have the collection define a base class for storing the objects. But I can also imagine the TSQLNamedRecordCollection being able to CRUD any TSQLRecord-derived class. That would be more than perfect. In this case the TSQLNamedRecordCollection behaves like a TObjectList, and "owns" the TSQLRecord objects in it.
Offline