Hi,
As the title says, I can't find out how to use PostgreSQL database schemas in addition to the default 'public' one. Schemas are a very neat feature of Postgres and one of the reasons we chose this database: the plan is to have a multi-tenant database for an application provided under the SaaS model.
Has anyone used SCHEMAS with mORMot and PG?
Offline
Not yet on our side.
But you may be able to use the TSQLDBConnectionProperties.ForcedSchemaName property for this exact purpose.
I've just enhanced the documentation to make it more explicit.
See http://synopse.info/fossil/info/ed3fc0d … 40ecba1a81
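For instance, something like this (a quick untested sketch - props/model are assumed to be declared elsewhere, and the connection settings and 'tenant1' schema name are illustrative only):
  props := TSQLDBZEOSConnectionProperties.Create(
    'postgresql://localhost:5432', 'mydb', 'myuser', 'mypass');
  props.ForcedSchemaName := 'tenant1'; // generated SQL will address the tenant1 schema
  VirtualTableExternalRegisterAll(model, props);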
Offline
My suggestion is not to use ForcedSchemaName - from my POV this is really useful only for MSSQL.
For PostgreSQL we do the following:
1) Create a ROLE.
2) Create a SCHEMA with authorization for this role.
3) TSQLDBZEOSConnectionProperties.Create('postgresql://localhost:5432', 'postgre', 'ourRole', 'ourRolePwd')
Since the role has access only to this schema (and, with PostgreSQL's default search_path of "$user", public, unqualified names resolve to the schema named after the role first), there is no need to use a schema prefix in the SQL statements, so mORMot will work fine.
Below is a schema creation script example (a Windows batch file):
if "%DB_SERVER%"=="" SET DB_SERVER=localhost:5432
if "%DATABASE%"=="" SET DATABASE=postgres
if "%DBA%"=="" SET DBA=postgres
if "%DBA_PWD%"=="" SET DBA_PWD=%DBA%
if "%NEW_ROLE%"=="" SET NEW_ROLE=myRole
if "%NEW_ROLE_PWD%"=="" SET NEW_ROLE_PWD=%NEW_ROLE%
SET PG_MASTER=postgresql://%DBA%:%DBA_PWD%@%DB_SERVER%/%DATABASE%
SET NEW_SCHEMA=postgresql://%NEW_ROLE%:%NEW_ROLE_PWD%@%DB_SERVER%/%DATABASE%
@echo Create schema %NEW_SCHEMA%
@psql -q -U %DBA% -c "CREATE ROLE %NEW_ROLE% LOGIN PASSWORD '%NEW_ROLE_PWD%' VALID UNTIL 'infinity'; CREATE SCHEMA %NEW_ROLE% AUTHORIZATION %NEW_ROLE%;" %PG_MASTER%
@if errorlevel 1 goto err
@echo Create ID generators
@psql -q -U %NEW_ROLE% -c "create table my_table (ID BIGINT not null);" %NEW_SCHEMA%
@if errorlevel 1 goto err
@echo ****** Schema %NEW_SCHEMA% created successfully
@goto end
:err
@echo !!!!!!!!!!!!!! Schema %NEW_SCHEMA% creation failed !!!!!!!!!!!!!!!!
EXIT /B 1
:end
Offline
Thanks Arnaud and MPV. I can see both approaches work at the connection level, so if I have a shared TSQLRestServerDB instance on the server I won't be able to change the schema based on the client who made the request (which is my intention).
I have successfully used mORMot on a server which acts as a proxy between a Delphi middleware and a Titanium Appcelerator mobile application, and now I'm starting with the ORM part and enjoying the tremendous work of AB so far. I think I can solve the issue with an array of TSQLRestServerDB instances and access them using interface-based services. What do you think?
Offline
Each mORMot ORM table has a single database.
You can't just switch from one DB to another based on the client.
The external DB connection is assigned per ORM table.
So IMHO you won't be able to use the CRUD operations directly with a multi-DB backend.
An array of TSQLRestServerDB with a "repository" pattern implementing the high-level operations could definitely help.
Your service can return RawJSON content directly; this RawJSON could then be consumed on the client side via a TSQLTableJSON instance, which could be injected into regular TSQLRecord methods.
Take a look for instance at constructor TSQLRecord.CreateAndFillPrepare(const aJSON: RawUTF8).
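For instance, a rough sketch (untested - the TCustomerService class, GetRestForTenant() repository lookup, TSQLCustomer table and DoSomethingWith() are all hypothetical names):
// server side: a service method returning the tenant's rows as a JSON array
function TCustomerService.GetCustomers(const tenant: RawUTF8): RawJSON;
begin
  // GetRestForTenant() would pick the right TSQLRestServerDB from your repository
  result := GetRestForTenant(tenant).RetrieveListJSON(TSQLCustomer, '', []);
end;

// client side: feed the returned JSON into a TSQLRecord iterator
procedure ConsumeCustomers(const jsonFromService: RawUTF8);
var cust: TSQLCustomer;
begin
  cust := TSQLCustomer.CreateAndFillPrepare(jsonFromService);
  try
    while cust.FillOne do
      DoSomethingWith(cust); // process each row
  finally
    cust.Free;
  end;
end;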
It sounds like Celso has the very same request at the same time - see http://synopse.info/forum/viewtopic.php?id=1946 - but is a little less sure than you about how to implement it.
Perhaps you may collaborate, and create a simple sample project showing the SOA repository pattern consuming several databases? We may then add it to our official source code repository.
Or we may introduce ORM routing between mORMot nodes, to implement your needs directly at framework level...
See http://synopse.info/forum/viewtopic.php … 907#p11907 and feature request http://synopse.info/fossil/info/3453f314d
Offline
Thank you Arnaud,
I can see both requests could be useful for my purposes, so I'll keep following the progress of those threads. In the meantime I'm going to try to make a proof of concept and post the code here; maybe others will find it useful or, of course, point out the flaws.
Offline
This is what I have so far:
type
  TlxServerSpec = record
    ServerType: TSQLDBDefinition;
    Host,
    Port,
    DBName,
    DBSchema,
    User,
    Pass: RawUTF8;
  end;

  TlxServerList = class
  private
    FDBServers: TStringList;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Add(aSpec: TlxServerSpec; aModel: TSQLModel);
    function GetServer(schemaName: RawUTF8): TSQLRestServerDB;
  end;
implementation

const
  SCHEMA_SEPARATOR = ':';

{ TlxServerList }
procedure TlxServerList.Add(aSpec: TlxServerSpec; aModel: TSQLModel);
var
  props: TSQLDBZEOSConnectionProperties;
  DBServer: TSQLRestServerDB;
  FullSchemaName,
  portStr: RawUTF8;
begin
  // own convention: the DB name and the schema are separated with ':'
  FullSchemaName := aSpec.DBName + SCHEMA_SEPARATOR + aSpec.DBSchema;
  if FDBServers.IndexOf(FullSchemaName) >= 0 then
    Exit; // a server for the schema already exists
  if aSpec.Port <> '' then
    portStr := ':' + aSpec.Port;
  props := TSQLDBZEOSConnectionProperties.Create(
    TSQLDBZEOSConnectionProperties.URI(aSpec.ServerType, aSpec.Host + portStr),
    aSpec.DBName, aSpec.User, aSpec.Pass);
  props.ForcedSchemaName := aSpec.DBSchema; // <--- not recommended by MPV
  VirtualTableExternalRegisterAll(aModel, props);
  try
    DBServer := TSQLRestServerDB.Create(aModel, ':memory:', false);
    DBServer.CreateMissingTables(0);
    FDBServers.AddObject(aSpec.DBSchema, DBServer);
  finally
    props.Free;
  end;
end;
constructor TlxServerList.Create;
begin
  FDBServers := TStringList.Create;
end;

destructor TlxServerList.Destroy;
var
  i: Integer;
begin
  for i := 0 to FDBServers.Count - 1 do
    TSQLRestServerDB(FDBServers.Objects[i]).Free;
  FDBServers.Free;
  inherited;
end;

function TlxServerList.GetServer(schemaName: RawUTF8): TSQLRestServerDB;
begin
  Result := TSQLRestServerDB(FDBServers.Objects[FDBServers.IndexOf(schemaName)]);
end;
I can create a global instance of the previous class when starting the server, like this:
var
  Model: TSQLModel;
  sSpec: TlxServerSpec;
begin
  Model := TSQLModel.Create([TSQLPais, TSQLCliente]);
  try
    FDBServerList := TlxServerList.Create;
    // populate the list of physical DBs/schemas, maybe from a SQLite DB - still not decided
    sSpec.ServerType := dPostgreSQL;
    sSpec.Host := 'localhost';
    sSpec.Port := '5432';
    sSpec.DBName := 'mormot';
    sSpec.DBSchema := 'public';
    sSpec.User := 'postgres';
    sSpec.Pass := 'mypass';
    FDBServerList.Add(sSpec, Model);
    sSpec.ServerType := dPostgreSQL;
    sSpec.DBName := 'test';
    FDBServerList.Add(sSpec, Model);
  finally
    Model.Free;
  end;
end;
Before I go further I would like to hear why MPV doesn't recommend using ForcedSchemaName. I'm planning to follow his advice and create roles for the schemas for security reasons, but I would also like the schema to use to be stated explicitly on each connection.
What do you think?
Offline
You are searching FDBServers.IndexOf( FullSchemaName ) but you do later on FDBServers.AddObject(aSpec.DBSchema, DBServer).
It should be FDBServers.AddObject(FullSchemaName, DBServer) IMHO, to be consistent.
There is one big issue with your code: you are using a single TSQLModel instance for all your servers.
This is not correct: each DB server should have its own TSQLModel.
You may use TSQLModel.Create(aModel) to clone the original model, then set newModel.Owner := DBServer to let the dedicated TSQLModel instance be freed by the DBServer.
Just a detail: you may use TRawUTF8List instead of TStringList, to avoid any string conversion issue.
And you would benefit from the Create(aOwnObjects=true) constructor, which will free your Objects[] at list destruction.
Note that since you defined the same "root" value for all DB servers, you would NOT be able to add the DB servers to the main HTTP server instance.
There would be a conflict of duplicated "root".
But if you want to use the DB servers from services, sounds fine to me.
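Putting these remarks together, your Add method could become something like this (an untested sketch, assuming FDBServers is now a TRawUTF8List created with Create(true)):
procedure TlxServerList.Add(aSpec: TlxServerSpec; aModel: TSQLModel);
var
  props: TSQLDBZEOSConnectionProperties;
  DBServer: TSQLRestServerDB;
  ownModel: TSQLModel;
  FullSchemaName, portStr: RawUTF8;
begin
  FullSchemaName := aSpec.DBName + SCHEMA_SEPARATOR + aSpec.DBSchema;
  if FDBServers.IndexOf(FullSchemaName) >= 0 then
    Exit; // a server for this schema already exists
  if aSpec.Port <> '' then
    portStr := ':' + aSpec.Port;
  props := TSQLDBZEOSConnectionProperties.Create(
    TSQLDBZEOSConnectionProperties.URI(aSpec.ServerType, aSpec.Host + portStr),
    aSpec.DBName, aSpec.User, aSpec.Pass);
  ownModel := TSQLModel.Create(aModel);           // each server gets its own model
  VirtualTableExternalRegisterAll(ownModel, props);
  DBServer := TSQLRestServerDB.Create(ownModel, ':memory:', false);
  ownModel.Owner := DBServer;                     // the model is freed with the server
  DBServer.CreateMissingTables(0);
  FDBServers.AddObject(FullSchemaName, DBServer); // same key as IndexOf() above
  // note: props is not freed here - the server uses it for its whole lifetime
end;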
Offline
You are searching FDBServers.IndexOf( FullSchemaName ) but you do later on FDBServers.AddObject(aSpec.DBSchema, DBServer).
It should be FDBServers.AddObject(FullSchemaName, DBServer) IMHO, to be consistent.
Good catch!
There is one big issue with your code: you are using a single TSQLModel instance for all your servers.
This is not correct: each DB server should have its own TSQLModel.
That's the kind of thing I need to be aware of, thanks for the advice.
Just a detail: you may use TRawUTF8List instead of TStringList, to avoid any string conversion issue.
Good to know you have a dedicated class :-D I'll change it.
Note that since you defined the same "root" value for all DB servers, you would NOT be able to add the DB servers to the main HTTP server instance.
There would be a conflict of duplicated "root".
But if you want to use the DB servers from services, sounds fine to me.
I was originally thinking of retrieving data only from services, but you just gave me another idea which I'm going to work on.
THANK YOU !
Offline
I have to say this: your framework is AWESOME!
I had to tweak my classes because I realized I shouldn't be freeing the Props and Model instances. I also used TRawUTF8List as suggested and, as you said, I don't need to free the owned objects by hand - no memory leaks so far - and now I can retrieve data using URIs which include the database + schema.
I'm enjoying working with mORMot: at first it looks complicated, but once you get used to it, it's really easy.
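In case it helps others, the change was basically to give each TSQLRestServerDB its own model root, so that all of them can be registered on a single TSQLHttpServer - a rough sketch of the idea (not the exact code, names are illustrative):
// each server gets its own model with a distinct root built from DB name and schema
ownModel := TSQLModel.Create([TSQLPais, TSQLCliente],
  'lxapp.' + aSpec.DBName + '.' + aSpec.DBSchema);

// so every rest server can be exposed by one single HTTP server instance,
// reachable e.g. as /lxapp.mormot.public/pais/1
HttpServer := TSQLHttpServer.Create('8080',
  [FDBServerList.GetServer('mormot:public'),
   FDBServerList.GetServer('test:public')]);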
Offline
One question about the database connections: I made a test with 3 TSQLRestServerDB instances. After creating them I can see 3 new postgres.exe processes in the Windows Task Manager, which I think is fine, but each request (e.g. http://localhost:8080/lxapp.mormot.public/pais/1) is creating a new postgres.exe process. I don't know if this is correct or if there is some setting I am missing.
Any thoughts?
Offline
I'll need to investigate more. I'm creating 3 TSQLRestServerDB instances in a demo application, and one postgres process per TSQLRestServerDB is also being created by mORMot, which seems correct to me. I was expecting each TSQLRestServerDB instance to route the incoming requests through one of these 3 already created postgres processes, something like:
TSQLRestServerDB Instance 1 <---> TSQLDBZEOSConnection <---> postgres process 1
TSQLRestServerDB Instance 2 <---> TSQLDBZEOSConnection <---> postgres process 2
TSQLRestServerDB Instance 3 <---> TSQLDBZEOSConnection <---> postgres process 3
So when I make a request in the browser and the URL is routed to one of the TSQLRestServerDB instances, it would reuse the DB connection already created (a kind of DB connection pool). But what I'm seeing is that a new DB connection (TSQLDBZEOSConnection) is created for each page refresh in the browser (yes, using the very same URL every time).
I can see in SynDBZeos.pas that the following code gets executed:
function TSQLDBConnectionPropertiesThreadSafe.ThreadSafeConnection: TSQLDBConnection;
...
  result := NewConnection;

and NewConnection is as follows:

function TSQLDBZEOSConnectionProperties.NewConnection: TSQLDBConnection;
begin
  result := TSQLDBZEOSConnection.Create(self);
end;
Thus what is happening is that a new connection is being created each time:
TSQLDBZEOSConnection <---> postgres process 4
TSQLDBZEOSConnection <---> postgres process 5
TSQLDBZEOSConnection <---> postgres process 6
...
Maybe I'm wrong in my understanding of who the "client" is; this is what I thought was happening:
Client  <------> TSQLHttpServer instance <----> | TSQLRestServerDB Instance 1 <------> postgres process 1
(browser)                                       | TSQLRestServerDB Instance 2 <------> postgres process 2
                                                | TSQLRestServerDB Instance 3 <------> postgres process 3
I'm considering the browser the "client" of the TSQLHttpServer instance, the latter the "client" of the TSQLRestServerDB instances, which in turn are "clients" of the already created postgres processes, but it looks like no connection created at first is being "reused".
I don't know if my English is good enough to make my case; I hope you can understand what I mean :-)
Offline
Ohh! I can see it clearly now: there is a pool of 32 threads by default (correct me if I'm wrong), and once I have reached that number of connections they are being reused! Tested and checked - you are, as always, right.
Thank you for the clarification.
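For reference, a small sketch of where that number comes from (my assumption: it is the ServerThreadPoolCount parameter of TSQLHttpServer.Create, which defaults to 32; RestServer is just a placeholder):
// each worker thread of the HTTP server pool gets its own TSQLDBZEOSConnection
// on first use, then reuses it for subsequent requests handled by that thread
HttpServer := TSQLHttpServer.Create('8080', [RestServer], '+',
  useHttpApiRegisteringURI, 32);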
Offline
Enabling the logs with per-thread identification is very informative.
It is always a good idea to find out how it works.
The LogView tool is very useful, since it allows you to filter the activity per thread.
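For instance, a minimal sketch (the exact levels to enable depend on your needs):
// enable verbose logging, writing the thread identifier on each line
with TSQLLog.Family do
begin
  Level := LOG_VERBOSE;
  PerThreadLog := ptIdentifiedInOnFile;
end;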
Offline