Make it explicit using the VariantToUTF8() function.
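For anyone finding this later, the fix looked something like this (a sketch; userData and its Password property are from my own code, not the framework):

```pascal
// Convert the TDocVariant property explicitly to RawUTF8 before the call,
// instead of relying on an implicit Variant -> RawUTF8 conversion:
hashedPassword := TSQLAuthUser.ComputeHashedPassword(VariantToUTF8(userData.Password));
```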
Thanks!
-David
Do I need to do anything about this warning? I get it when I'm passing a TDocVariant's "property" into a mORMot function that expects RawUTF8 parameters (e.g., TSQLAuthUser.ComputeHashedPassword).
Thanks.
-David
When the reUserCanChangeOwnPassword is set, can the user update *any* column for their row in TSqlAuthUser? That's what it looks like in the code, but I wanted to ask to be certain.
I want it to work that way, actually. But the name makes me wonder if that's considered kosher.
Thanks.
-David
OK, after digging around, I see the TSqlAuthUser 'Data' column isn't passed to the client. It's just on the server.
-David
SetUser() is returning true, but the SessionUser.Data isn't there (it's empty). The Data column in the database is *not* empty.
procedure TJournalVolumeClient.LoginUser(const userLogin, userPassword: string);
var
  hashedPassword: RawUTF8;
  ...
  userData: Variant;
begin
  ...
  if FDatabase.SetUser(UnicodeStringToUTF8(userLogin), hashedPassword, true) then
  begin
    TDocVariant.New(userData);
    userData := VariantLoadJSON(FDatabase.SessionUser.Data);
    ...
  end;
Am I missing a step?
-David
The issue, it seems, is me.
I should have paid more attention to the "authentication failure" part of the exception. I wasn't properly setting the admin user the first time through.
Thanks for your help, though.
-David
Now I'm getting that same error when I don't even use an HTTP server. I'm just creating the REST server in a stand-alone EXE.
Again, everything else works, just not the interface access.
Edited to add: And it works the same way. That is, on initial run with no database (so the database has to be created), there is an error attempting to access the interface. On subsequent runs, it works as expected.
-David
Unfortunately, that didn't work. No amount of sleeping seems to matter. I've put in hard waits for up to 30 seconds. I've even had the client disconnect from the server and reconnect, and I still get that "routing not supported" exception when the running client is the process that launched the server process.
Everything else seems to work, BTW. The client can set the user, even the admin user, and can retrieve data from the server. It just can't register that interface.
Does it matter that the interface is set to only be accessed by the admin user group? And/or that it is the only interface? I'll add another, less-restricted interface to the server and see if that makes a difference.
-David
I'm experimenting with a local/LAN client that detects the server not running and launches it in the background (just doing ShellExecute for now).
This works fine except in one case: The first time the server is run. (That is, when the server has to create databases and tables and what not.)
In that case, the server launches normally and seems to work. But when the client calls ServiceRegister, the server throws it back with a "routing not supported" exception. I can then close both client and server, and restart the client ... and the server launches and works fine. No exceptions when registering/using the interface.
I've tried forcing the client to wait after launching the server to give it time, but no length of time seems to make a difference.
So I'm curious if there is a flag I can check to know the server is ready for connections, and especially using interfaces.
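The closest thing I've found to a readiness probe is just polling the server with a cheap REST call. A sketch of what I mean (the function name, timeout, and retry interval are mine; ServerTimeStampSynchronize is the real mORMot client method, which returns true once the REST server answers):

```pascal
// Poll the freshly launched server until it responds, or give up.
function WaitForServerReady(Client: TSQLHttpClient; TimeoutMS: cardinal): boolean;
var
  endTix: Int64;
begin
  endTix := GetTickCount64 + TimeoutMS;
  repeat
    // a successful timestamp sync proves the REST endpoint is up
    result := Client.ServerTimeStampSynchronize;
    if result then
      exit;
    Sleep(200); // back off briefly before retrying
  until GetTickCount64 > endTix;
end;
```

This tells you the REST root is reachable, though apparently not that interface registration will succeed.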
Any thoughts?
-David
Each client will have its own TSQLModel, with its own root.
You have something wrong with your client-side models...
Unique roots turned out not to be the issue. Instead, I had roots with spaces in them. I took the spaces out and everything worked as expected.
Is that specified somewhere? That roots don't like spaces? Or should I have known that based on how roots are used as URIs? Or both?
-David
So I've implemented something along these lines. There is a single HTTP server that manages a collection of models, each with its own root.
My question now is: How do I connect from the client side?
My first stab at it uses a TSQLHttpClient per database. One for the primary database on the server, and one for each of the sub-databases. That's not working for the sub-databases. I can create the TSQLHttpClient object, but when I try ServerTimeStampSynchronize, it returns False. Also, attempts to Retrieve from the TSQLHttpClient object don't generate an error, but they also don't work.
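For context, my per-database client setup looks roughly like this (a sketch; TSQLJournalEntry and the 'journal1' root are placeholders for my real classes and roots):

```pascal
// One client per sub-database; the client model's root must match the
// server-side root for that database exactly.
SubModel := TSQLModel.Create([TSQLJournalEntry], 'journal1');
SubClient := TSQLHttpClient.Create('localhost', '8080', SubModel);
if not SubClient.ServerTimeStampSynchronize then
  raise Exception.Create('sub-database root "journal1" is not reachable');
```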
Any hints?
Thanks!
-David
Ah, OK. Thanks.
Try using a distinct "root" for each database:
1) create the main server and database
2) create a TSQLModel with its own "root"
3) create a TSQLRestServer with the created TSQLModel
4) create a TSQLHttpServer with the created TSQLRestServer
5) repeat steps 2-4 for each database, changing the "root"
All the HTTP servers can share the same HTTP configuration (port, etc.); the REST servers are distinct because the "root" changes.
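The steps above could be sketched like this (table classes and file names are placeholders; TSQLHttpServer accepts several REST servers sharing one port, each distinguished by its root):

```pascal
var
  MainModel, SubModel: TSQLModel;
  MainRest, SubRest: TSQLRestServerDB;
  Http: TSQLHttpServer;
begin
  // one model + REST server per database, each with its own root
  MainModel := TSQLModel.Create([TSQLCatalogEntry], 'main');
  SubModel  := TSQLModel.Create([TSQLJournalEntry], 'journal1');
  MainRest := TSQLRestServerDB.Create(MainModel, 'main.db', true);
  SubRest  := TSQLRestServerDB.Create(SubModel, 'journal1.db', true);
  MainRest.CreateMissingTables;
  SubRest.CreateMissingTables;
  // a single HTTP server publishes both REST servers on the same port;
  // requests are dispatched by the URI root ('main' vs 'journal1')
  Http := TSQLHttpServer.Create('8080', [MainRest, SubRest]);
  ...
end;
```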
I think this can work.
Best regards.
Esteban
Since each non-root database will have its own URI, can they have their own authentication, as well? Since each sub-database will have its own users/passwords.
I'm guessing this would involve a separate set of authentication tokens passed back and forth?
Part of the complication is caused by legacy. As many complications are...
Thanks.
-David
OK. I think I understand. I'll experiment and see how it goes.
Thank you, Esteban and Arnaud.
-David
What I'm after isn't truly multitenancy. My apologies for the digression.
I will attempt to describe what I'm trying to do better, since even after reading (and rereading) the SAD, and looking at the various samples, I'm still not sure where to start or if what I want to do is even feasible.
The entry point is a central database that has some global settings but is mostly a list of databases.
The physical location of these databases may vary (on the hard drive, or on the LAN, or whatever), but each of them has the same schema. In other words, they would have the same ORM model definitions and relationships (so the same tables with the same names).
The client software connects to the central database and retrieves the list of available databases.
The client may open one or more of these databases simultaneously.
Objects retrieved from a database need to know which database they came from (for updating purposes).
Now, from what I've read, a particular TSqlRecord class is assigned to a single Model, which is assigned to a single Database/Server. That appears to me to be a limiting factor. Is that correct? Or can a TSqlRecord be associated with multiple models? Or can a single model be associated with multiple databases?
In the SAD there is an example (in 7.3.6) of creating specific TSqlRecord subclasses for a specific database, to differentiate them between multiple databases. As I understand it, this doesn't work for me.
Do I need separate models and associated servers? That is, one model+server for the central database, and another model that can be associated with the other databases? If so, do those have to be separate exe's? Or can I just create servers as necessary?
Thank you for your help.
-David
Thanks, erick! I'll look for that in the book post-Xmas.
-David
Is there an example of how this would be implemented?
From reading the SAD (that's the long AF web page of documentation, right?) and searching the forums, I'm not seeing a way to register a new user via REST. Or, for that matter, any of the other connection methods. Did I just miss it? For web-based products, this would seem a common requirement.
The idea is that a new user would register (created by the server with "user" privileges). For all other REST access, normal user+password authentication would be used. So the REST API has only one gaping security hole in it. (Which gaping hole is a good argument for only allowing the server to create new users...)
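To make the idea concrete, here is roughly what I imagine the server-side registration looking like (a sketch only: the service class, method name, fRestServer field, and the group ID of 3 are all my assumptions, not framework API; TSQLAuthUser, PasswordPlain, and Add are real mORMot members):

```pascal
// Runs on the server, so only the server ever writes to TSQLAuthUser.
function TRegistrationService.RegisterUser(const aLogon, aPassword: RawUTF8): boolean;
var
  user: TSQLAuthUser;
begin
  user := TSQLAuthUser.Create;
  try
    user.LogonName := aLogon;
    user.DisplayName := aLogon;
    user.PasswordPlain := aPassword;      // setter stores the hash, never the plain text
    user.GroupRights := TSQLAuthGroup(3); // assumed ID of the standard "User" group
    result := fRestServer.Add(user, true) <> 0;
  finally
    user.Free;
  end;
end;
```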
I've ordered Erick Engelke's new book, but it won't get here until next week, and I have no idea if it covers this. And I'm impatient. So I'm asking here.
Thanks!
-David
It's one of the variations on multitenancy. The separate-database-per-tenant approach is usually more challenging, and so not the typical choice, because commercial databases tend to take a lot of disk space and/or other overhead. Sqlite helps to eliminate those issues, I would think.
I was researching this a few months ago. That's when I learned the term.
So you already have an example doing this?
-David
A single database may be tempting, but it is often chosen because setting up a database is a heavy task for a classical RDBMS (like Oracle or MSSQL).
But if you use mORMot and SQLite3 storage (or MongoDB), you can easily create separate databases, one for each client.
It will come from how you define your classes. Keep in mind that with mORMot, it is very easy to make a clear distinction between logical and physical views, and do proper OOP.
Having separated databases has several advantages:
- easier to backup or purge
- easier to change your hosting (you can change one client location from one server to another)
- more agile for scaling (cloud-like hosting without the cloud)
- you can replicate each database in several nodes (using mORMot replication features) to implement a real-time backup or offline work
- safer design (data of several clients would never mix)
- may be mandatory for regulatory purposes
- is a very good selling point: your client will have its own database!
What you're describing is called a multitenant architecture. I'd love to see an example of that using mORMot, especially using Sqlite.
-David
Yes, you can override the Setter and Getter for any attribute.
For example, the standard mORMot database userids store a hash of the password. You can set the password, and it is hashed before being stored, so you cannot read it back as plain text.
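A generic sketch of that pattern (the class and field names are illustrative, and the plain SHA256 call is a simplification; mORMot's real TSQLAuthUser uses a salted scheme):

```pascal
type
  TMyUser = class(TSQLRecord)
  private
    fPasswordHashHexa: RawUTF8;
    procedure SetPasswordPlain(const aValue: RawUTF8);
  public
    // write-only: you can set the password, but never read it back as plain text
    property PasswordPlain: RawUTF8 write SetPasswordPlain;
  published
    // only the digest is persisted by the ORM
    property PasswordHashHexa: RawUTF8 read fPasswordHashHexa write fPasswordHashHexa;
  end;

procedure TMyUser.SetPasswordPlain(const aValue: RawUTF8);
begin
  fPasswordHashHexa := SHA256(aValue); // store only the hash
end;
```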
Erick
I'll check that out. What source file is that in?
Update: Never mind. I found it. That's normal property getter/setter behavior. I was asking about how to get between the model and the database.
-David
Looking through the Sqlite3\DDD files, I can see one way it could (maybe) happen: hooking into the repository interface implementation.
Maybe creating an intermediary class? A DTO, maybe?
For example, using the TUser class from the Sqlite3\DDD\dom folder, instead of having a TSQLRecordUser class that's a field-for-field match of TUser, have a TUserDTO and a TSQLRecordUserDTO. TInfraRepoUser methods are passed a TUser object. The methods convert the TUser to a TUserDTO (consolidating fields, compressing, whatever), then call the ORM* commands.
What would that break? Offhand, I can see it might screw with the automatic table naming.
How far into left field have I wandered?
-David
Peculiarities of Sqlite aside: Is it possible to override how a field is loaded/saved on the server?
I can see a case for creating a separation in the data layer, and the new layer provides the encryption/decryption.
-David
Supposing I have an object that has a text field with a JSON object containing properties. Defining that for ORM is pretty simple. I like how simple it is, in fact.
But what if I want to override how it's actually saved in the database on the server? For example, if I want to compress the text, then encrypt it. And, since I still want the database to use a text field, encode it with base64.
Is it possible to override the load/save on the server to accomplish that?
The idea is that the client never knows. The client just says "Gimme" and the object appears with the field in its normal (decoded, decrypted, and decompressed) JSON state. NOTE: This idea assumes secure communication between the client and the server, providing "security in flight".
Since Sqlite doesn't really *care* what you stuff into a TEXT field, I suppose it might be possible to skip the base64 encoding and just load/save as binary.
So, ignoring that this might be considered "odd", is it possible?
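Just to show the transform I have in mind, here is a sketch of the encode half using SynCommons/SynCrypto helpers (key handling is deliberately simplified; whether mORMot lets me hook this per-field on the server is exactly my open question):

```pascal
function EncodeField(const plainJSON: RawUTF8; const key: THash256): RawUTF8;
var
  aes: TAESCFB;
  data: RawByteString;
begin
  data := AlgoSynLZ.Compress(plainJSON);  // 1. compress
  aes := TAESCFB.Create(key, 256);
  try
    data := aes.EncryptPKCS7(data, true); // 2. encrypt (random IV prepended)
  finally
    aes.Free;
  end;
  result := BinToBase64(data);            // 3. base64 so the TEXT column stays valid
end;
```

Decoding would just run the three steps in reverse.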
Thanks!
-David
Thank you for the replies! (And I have the book in my cart on Amazon.)
I will ponder and dig around in the docs and samples some more. Then probably come back with more questions.
-David
Update: I think I'm probably clinging too tightly to "how it was done before". A single database is probably a (much) cleaner design.
See the "Multiple databases in one Mormot server" thread for a similar discussion.
That didn't answer my questions. Also, my case is nowhere near that extreme. At most a dozen external databases, and that's on the long side.
Of course, I'm not even sure if I'm asking the right question.
I'm looking at multiple collections of object models, each tied to a separate database on the server. That is, one set of models tied to the main server (the collection of databases), and a set per open database.
Is that doable in a single client connected to a single server? And can I still use the ORM features?
Thanks.
-David
I will. Thanks!
-David
I'm very new to mORMot, so I'm still trying to wrap my head around some of it. And that includes trying to envision how I might build my next project using the framework.
What I envision is a server, with a central database of mounted databases. Though separate, each database uses the same model. That includes the same table names, schema, etc.
A client gets the list of databases from the server (which seems a simple enough model), picks one (or more), and links to it (them).
So there's a central model, which is the available databases (and whatever user/auth or additional information seems useful).
But there's also a model that encompasses what's in a mounted database. And possibly more than one of these active at a time. And, if possible, it might be fun to perform queries across all of them.
Is it possible to do that within a single server? And using the ORM features?
Forgive me if this is vague. I'm trying to keep from getting too bogged down in the details.
Thanks!
-David