If I may add my 2 cents...
@ab is 100% correct that this is a marketing issue, not a code issue. Splitting mORMot into several repositories would be undesirable and would create more difficulties than it solves. Imho the refactoring of the code into more logical folders and files with the move to v2 already achieved what a repository split would do. So please do not split the wonderful mORMot into different projects.
Maybe restating the issue as one of discoverability can help guide the way forward. People finding out about the project need a way to quickly discover what can be done using the project, and exactly where to find out how.
There are currently four ways to find out: mORMot has wonderful documentation and even a getting-started guide on GitHub, plus example projects and the blog. Those are real gems that very few open-source projects have. I think what @zen010101 touched on is to have one place that lists a few of the major things that can be done using this project, with direct links to more information. The main GitHub project page for mORMot already does that. People just need to know what is possible before needing to find out how. And after they know what can be done, they need a direct way to get to the parts of the documentation that are relevant - for example, going directly to the documentation on how to use IDocDict without first learning how TDocVariantData is implemented.
My suggestion would be to make a very simple main/index page for this website that lists a few of the most commonly used parts of the project with links directly to the relevant part of the documentation or to the blog.
Example (note how it deliberately does not require knowledge of the project code or terms - that can be discovered later):
JSON
Quickly convert json into objects, or convert objects into json.
Here is a short five-line code sample. [...]
Here are the links to the relevant documentation on (un)serialization
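To illustrate, here is a rough sketch of what such a short sample could look like. This is only my guess at the shape of it - the helper names (ObjectToJson, ObjectLoadJson from mormot.core.json, TSynPersistent from mormot.core.base) should be verified against the current source:

```pascal
program JsonDemo;

uses
  mormot.core.base,
  mormot.core.json;

type
  TPerson = class(TSynPersistent)
  private
    fName: RawUtf8;
    fAge: integer;
  published // published properties are (de)serialized automatically
    property Name: RawUtf8 read fName write fName;
    property Age: integer read fAge write fAge;
  end;

var
  p: TPerson;
begin
  p := TPerson.Create;
  try
    ObjectLoadJson(p, '{"Name":"Jo","Age":30}'); // JSON -> object
    writeln(ObjectToJson(p));                    // object -> JSON
  finally
    p.Free;
  end;
end.
```

Something this small, right on an index page, shows a newcomer what the framework does before they need to learn any of its internal terms.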
This project already has everything; all it needs is a summary page up front.
It's easy to check whether this is a db or code issue: just run the query directly in your preferred db tool and see how long it runs without any code or services involved. My guess is that you should be able to reduce the query runtime by correctly indexing the tables.
I've seen how correct indexes can dramatically reduce query times, but also the reverse. Adding and combining indexes that are not used will increase insert times, and increase the chances of the db engine using the wrong indexes - slowing down search times.
ab's suggestion of aggregation is very valuable and makes a big difference. It is something you should seriously consider. Grouping data in separate summary/reporting tables and populating/updating them at fixed intervals provides a lot of stability and speed. It means only a small number of records are queried by reports, and also that running those reports does not impact the real datatables that are used by operations.
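As an illustration of the summary-table idea, here is a minimal sketch. The table and column names are made up, and I am assuming the SynDB ExecuteNoResult helper for statements without a result set - check it against your version:

```pascal
// Periodic job: rebuild today's row in a reporting table, so that
// reports read a handful of pre-aggregated records instead of
// scanning the live operational table used by day-to-day operations.
// (sales / sales_summary and their columns are hypothetical names)
procedure RefreshDailySalesSummary(Props: TSQLDBConnectionProperties);
begin
  Props.ExecuteNoResult(
    'delete from sales_summary where day = current_date', []);
  Props.ExecuteNoResult(
    'insert into sales_summary(day, total_amount, order_count) ' +
    'select current_date, sum(amount), count(*) ' +
    'from sales where created_at >= current_date', []);
end;
```

Scheduling something like this every few minutes (or hours, depending on how fresh the reports must be) keeps report queries off the operational tables entirely.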
A full thread-safe descendant of IList won't be really thread-safe.
This seems a bit contradictory - is there something in the way the locks are implemented that makes it impossible to make IList thread-safe?
Would something like this be safe, where each function takes the lock, calls the underlying function on IList, and then unlocks?
TSafeList<T> = class
private
  IItems: IList<T>;
public
  ...
  function Add(const value: T; wasadded: PBoolean = nil): PtrInt;
  procedure AddFrom(const Another: IList<T>; Offset: PtrInt = 0; Limit: PtrInt = -1);
  function Count: Integer;
  ...
end;
function TSafeList<T>.Add(const value: T; wasadded: PBoolean): PtrInt;
begin
  IItems.Safe.Lock;
  try
    Result := IItems.Add(value, wasadded);
  finally
    IItems.Safe.UnLock; // finally block guarantees release even on exception
  end;
end;
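For completeness, the same exception-safe pattern can also be applied directly at the call site without a wrapper class. A sketch, assuming the Safe lock API shown above and that IndexOf exists on IList<T>:

```pascal
// Guard a check-then-act sequence with a single lock, so no other
// thread can modify the list between the IndexOf and the Add.
list.Safe.Lock;
try
  if list.IndexOf(value) < 0 then
    list.Add(value);
finally
  list.Safe.UnLock;
end;
```

Note that a wrapper which only locks around each individual call could not make this two-step sequence atomic - which may be part of why a "fully thread-safe" descendant is not as useful as it sounds.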
Thanks @ab. I've done the whole lock/unlock thing every time any of my code touches the IList, which obviously leads directly to copy/paste issues and unreadable code. I guess the only sensible way of using it then would be to define a class which has the IList as a property and custom add/delete/count/index/pop functions that lock/unlock as needed.
Should the .Safe lock be used before/after all Add/Delete/IndexOf/Pop methods to make it thread-safe, or is it already used by some of these functions? Oh no, do I need to add read locks around all the .Count calls too?
Would it be possible to make a fully thread-safe descendant of IList?
OK, this makes a lot of sense, and it looks like this is what is done in the restws_chatserver sample project.
Just use a TRestHttpServer and define WEBSOCKETS_DEFAULT_MODE (or useBidirAsync or useBidirSocket)
But this is where I start to feel like a preschool child learning the alphabet:
Then define the callbacks as interface parameters in your SOA definition
From the sample project, it looks as if the interfaces have to be defined twice: one that does the work and always contains a callback parameter in all its functions, and one that is used to send the response to the request. When a response needs to be sent, the working function needs to step through all the connections manually, find the appropriate one, trigger the callback, and then delete it from the list of callbacks? Since there is no implementation of the callback interface's functions, I guess only one standard one needs to be defined?
Thanks, not sure yet how to fit that workflow in with a javascript client, but will try.
There are now so many server types that it's becoming a bit difficult to determine when and how to use each of them. Is TWebSocketAsyncServerRest supposed to be used together with a TRestServer, similar to how it's used with TRestHttpServer, or how should the different interface functions be triggered?
Thanks @ab. Then it may be a good idea for me to wait a few days.
I need bidirectional communication between my current REST server and web clients. WebSockets seem to be the obvious solution for this, but I am not sure how to handle the authentication side of things. Currently the standard mORMot auth is used for the REST calls and that works very well. But when upgrading the connection to WebSockets, there won't really be a URL to sign when sending messages. Switching to JWT seems like a bad idea, because then we'd be open to replay attacks.
Has anybody implemented an authentication/session scheme over websockets? I would really appreciate some tips and guidance from someone who has gone through this.
Thank you so much for this @ab!
Besides being very very useful, it also makes the code more readable and understandable, especially when someone unfamiliar with mORMot needs to work on/review our code.
High-level wrappers like these make it possible for the power of mORMot to be fully used without needing to know the technical details of everything that is possible, but undiscoverable, in this great framework.
Thanks @ab. Looks like it is working now.
If there is a way to get OnBeforeURI to be triggered in this scenario, then I can do a redirect or handle it in many ways.
If it is possible, that will be wonderful! Thanks @ab!
Thanks to @htits2008 for pointing me in the right direction. For anybody else affected by the same issue, here is a procedure to fix it. Just remember to add all new methods where you call it, otherwise they will still be affected by it. Maybe there is a way to get a list of all the method based functions (not the interface functions) and step through them at create time so that this procedure can be called without parameters? Will check later when more time is available:
procedure FixURIRoutingMess(const HttpServer: TRestHttpServer; const methods: array of string);
var
  I, R: Integer;
  J: TUriRouterMethod;
  tmpURI: string;
begin
  for I := 0 to Length(methods) - 1 do
    for R := 0 to HttpServer.RestServerCount - 1 do
    begin
      tmpURI := '/' + HttpServer.RestServer[R].Model.Root + '/' + methods[I];
      for J := urmGet to urmHead do
        HttpServer.Route.Rewrite(J, tmpURI + '/', J, tmpURI);
    end;
end;
It can be called something like this (based on the paste.ee sample above):
FixURIRoutingMess(aHttpsServer, ['method1']);
Unfortunately this doesn't only affect my root function, but all my method functions. It is breaking bookmarked links at clients and 3rd-party integrators, to the extent that upgrading to mORMot 2 is now blocked.
Is there any way to get OnBeforeURI to trigger in this scenario in mORMot 2 like it does in 1?
If it is possible to get OnBeforeURI to trigger when the URI has a trailing /, then I can do a redirect or handle it somehow.
I'm using Delphi 10.3.
example-03:
in u_SharedTypes, the following line causes [dcc32 Error] u_SharedTypes.pas(75): E2034 Too many actual parameters
pmCheckedFileName^ := TPath.Combine(dirName, fileName, False)
Removing the ', False' parameter allows it to compile.
I hope there is a way to restore the mORMot1 uri parsing that allows the trailing /, since I am porting projects to mORMot 2 that depends on it being consistent.
The example-03 does not compile, but I've created my own tiny example dpr here which shows the difference between mORMot 1 and 2 and can be used to highlight/test the issue:
https://paste.ee/p/tKeqf
I notice a difference between how urls are parsed between mORMot 1 and 2 and was wondering if there is a setting to get the original behaviour back. Sample code:
TMyRestServer = class(TSQLRestServerFullMemory)
published
  procedure Method1(Ctxt: TSQLRestServerURIContext);
end;
...
procedure TMyRestServer.Method1(Ctxt: TSQLRestServerURIContext);
begin
  Ctxt.Returns('{"value":"hello"}');
end;
When using mORMot1, accessing all these urls will give the same response:
https://127.0.0.1:8084/test/Method1
https://127.0.0.1:8084/test/Method1/
https://127.0.0.1:8084/test/Method1/someval
{"value":"hello"}
But with mORMot 2, all of the above work except for https://127.0.0.1:8084/test/Method1/ which returns:
{"errorCode":400,"errorText":"Invalid URI"}
Is this the expected behaviour?
Thanks ab.
I did look at the long-work sample before posting, and it did look promising. But it looks like that would require clients to redevelop their existing solutions to use WebSockets.
It is always difficult to justify breaking backwards compatibility to 3rd parties. I'm also not too sure how to modify almost 200 REST interfaces and their clients to fit into that workflow.
Increasing the thread pool size is the quick solution, yes. But that misses the point. The point I was trying to make, is that there are different uses for rest calls on the same system. Some are quick and high priority, such as logging in and editing records. It is reasonable to expect them to complete in a few ms, and not consume 32 threads. However, there are other types of functionality that takes longer, but are lower priority, such as running complicated reports.
The idea is to be able to put "slow" type tasks in their own queue and not allow them to bring the whole rest server to a halt. If the thread pool size is simply increased to 32, then running 32 slow reports (or 1 slow report being used by 40 users) will effectively prevent other users from completing higher priority tasks such as logging in and editing records until those slow tasks have been completed.
If there were functionality to create "tasks" for those requests, they could be placed in a queue or something to manage them, releasing them from the REST server's thread pool until they complete and perform a callback to instruct the REST server to finish the request.
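To make the idea concrete, here is a minimal sketch of such a queue using plain Delphi RTL classes - nothing mORMot-specific, and all names are hypothetical. It only shows the queuing side; completing the pending HTTP request from the worker afterwards would still need support from the server:

```pascal
uses
  System.SysUtils, System.Classes, System.SyncObjs,
  System.Generics.Collections;

type
  TReportJob = reference to procedure;

  // A single worker thread drains slow jobs one at a time, so the
  // REST worker threads only enqueue work and return immediately.
  TSlowJobWorker = class(TThread)
  private
    fQueue: TThreadedQueue<TReportJob>;
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Enqueue(const job: TReportJob);
  end;

constructor TSlowJobWorker.Create;
begin
  // depth of 100 queued jobs; default infinite push/pop timeouts
  fQueue := TThreadedQueue<TReportJob>.Create(100);
  inherited Create(False);
end;

destructor TSlowJobWorker.Destroy;
begin
  fQueue.DoShutDown; // wakes up PopItem so Execute can exit
  inherited Destroy;
  fQueue.Free;
end;

procedure TSlowJobWorker.Enqueue(const job: TReportJob);
begin
  fQueue.PushItem(job);
end;

procedure TSlowJobWorker.Execute;
var
  job: TReportJob;
begin
  while fQueue.PopItem(job) = wrSignaled do
    job(); // the slow report runs here, outside the REST thread pool
end;
```

With something like this, 40 users running the same slow report would queue 40 jobs without ever occupying more than one thread, leaving the pool free for logins and record edits.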
I have interface based services running on TRestHttpServer, which provide reports to third parties. One of the reports can be very slow and can take 15 or 20 seconds to complete. Is it possible to offload such a rest request to its own task without blocking the other worker threads or the http server?
Not sure I'm explaining it very clearly. To test for options, I've changed this sample \mORMot2\ex\ThirdPartyDemos\martin-doyle\04-InterfacedBasedServices and added the following function to the TExampleService rest interface:
function TExampleService.Time(var WaitSeconds: Integer): RawJson;
var
  StartTime: TDateTime;
begin
  StartTime := Now;
  Sleep(WaitSeconds * 1000);
  Result := '{"startTime": "' + DateToISO8601(StartTime) + '", "endTime": "' + DateToISO8601(Now) + '"}';
end;
Then reduced the threadpoolcount from 4 to 1. The effect is, as expected, that several requests to this function are serialized, so running:
curl http://localhost:11111/root/example/Time?WaitSeconds=10
at the same time in different windows produce the following results:
{"result":[10,{"startTime": "2023-09-28T12:33:55.415Z", "endTime": "2023-09-28T12:34:05.420Z"}]}
{"result":[10,{"startTime": "2023-09-28T12:34:05.421Z", "endTime": "2023-09-28T12:34:15.430Z"}]}
This has the effect that the requests are executed one after the other, blocking new requests. My initial thought was that mORMot's new async http server would be ideal for this. But since these are rest requests already used by 3rd parties, I can not change that to websockets at this stage.
Is there something similar to the CurrentServiceContext from mORMot 1 in mORMot2?
Hi @ab
Not sure where to look, has OAuth 2.0 support been added to mormot 2?
Has anybody had any success implementing OAuth 2.0 in their project?
@mpv thanks that is a good suggestion. I'm a huge fan of postgresql. Unfortunately Mysql is mandated for this project.
Off topic: I really miss PgAdmin3. The web-based tools always feel like swimming in syrup. I know there are lots of db tools for pg that work great, but typing in pgadmin3 always brought a smile to my face.
Thanks @ab. I'll take a look.
Re the parameter binding, is this what you are referring to? https://dev.mysql.com/doc/c-api/5.6/en/ … param.html
I'm planning to start migrating one of my larger projects to mORMot 2 later this year. Since that will be a fairly big change anyway, the plan is to put the project on a diet and reduce the number of third party libraries it uses.
While looking at the tfb benchmarks for mORMot (seriously seriously impressive - thanks guys!!!), I was wondering if it would be possible to do something similar to mormot.db.raw.postgres and create mormot.db.raw.mysql to directly use libmysql.dll? That would remove the need for using firedac (and its quirks) in order to use libmysql.
Does anybody with more knowledge of libmysql know how do-able this would be?
Is it possible to get the number of rows with ISQLDBRows in mORMot 1? I need to know if no rows were returned. While it is possible to set a local boolean variable first to false and then to true during each ISQLDBRows.Step, it will add to an already complex code base.
Is there something similar to TSQLDBStatement.TotalRowsRetrieved that can be used after the code has stepped through the data to determine if an empty set has been returned?
Something along the lines of:
var
  data: ISQLDBRows;
begin
  data := Connection.Execute('select * from table', []);
  while data.Step do
    ...
  if data.TotalRowsRetrieved = 0 then
    HandleNoData;
EDIT: Please ignore my idiocy, I completely forgot about being able to just do: data.Instance.TotalRowsRetrieved
Please allow me an uneducated question (this is far outside my expertise):
Would building these same benchmarks on both Fpc and Delphi compilers and running them on the same hardware provide any meaningful comparison between the compilers as well or would the current tests not provide relevant data?
Will the InitArrayFrom still provide the same structure / result when it is not an array? I thought that would only apply to dynamic array types.
It doesn't look as if the InitArray* functions do the same.
Thanks, TOrmTableWritable works perfectly. I've been using Windows grep for most of the day searching for the replacements - not sure how I missed this one. Looks like my list is now starting to become useful. I should be able to start migrating as soon as I find out about TDocVariantData.InitFromTypeInfo.
What is the mormot 2 class for TSQLTableWritable called?
Is there a document available which lists which units different declarations have moved to? For example TDocVariantData moved from SynCommons to mormot.core.variants etc?
I am trying to find the location for TDocVariantData.InitFromTypeInfo and also busy making a list of common mormot types that I use a lot and where they are now to make migrating existing projects easier.
Thank you so much!
Does anybody still have copies of these gists available? I need a mormot server with oauth authentication and was hoping this would be something to build on. But the gists seem to be all gone already.
Our thoughts and prayers are with you and the Ukrainian people. This war should never have started. We hope it ends quickly.
Hi @ab
In your blog post about mORMot 2 performance, you mention that "The official release of mORMot 2 is around the edge". We are very curious (and excited) for that day. Is there a specific date or feature parity that must be reached or are we getting close?
Thanks Arnaud. This is very exciting and must have taken you a lot of time!
So much appreciated. It is a very useful expansion of the framework.
Can't wait to get my hands on the examples and start playing with this.
Yes, that is probably it. Maybe it's a browser issue. I won't worry too much about that now.
32 threads
Maybe this is a browser issue and not a framework issue. It happens when I test using two tabs of the same browser, but not when using two different browsers at the same time, such as chrome and firefox.
I have done something similar to the documentation, but it looks as if all requests are then executed sequentially. I created a method that executes the functions like this:
if Services['Calculator'].Get(svr) then
  if SameText(req.FunctionName, 'Add') then
    Ctxt.Returns(svr.Add(1, 2))
To come to that conclusion, I did something like this in Add:
...
StartTime := Now;
Sleep(5000);
StopTime := Now;
Result := '{"function": "Add", "StartTime": "' + DateTimeToStr(StartTime) + '", "StopTime": "' + DateTimeToStr(StopTime) + '"}';
...
When I open 2 browsers and call the above method at almost the same time, the results indicate that they are executed one after the other:
browser1: {"function": "Add", "StartTime": "2022/05/12 11:26:56", "StopTime": "2022/05/12 11:27:01"}
browser2: {"function": "Add", "StartTime": "2022/05/12 11:27:01", "StopTime": "2022/05/12 11:27:06"}
Is this the expected behaviour, or am I doing something wrong? The interface was declared with sicShared, but the same effect is seen when using sicPerThread or sicSingle.
I need to temporarily put something in place of a Datasnap server that third parties integrated to. So what is important, is that I maintain exactly the same uri formats. Unfortunately that means that third party clients over which I have no control, need to be able to make POST requests such as /root/Calculator/Add/1/2 which I need to translate to the registered interface functions, ie to ICalculator.Add(1, 2) and execute it.
How would I then find and execute the correct function on that interface with the correct parameters? I assume I'll have to search for the methodindex from the interfacefactory and then step through and assign the declared parameters?
Has someone done something like this before?
Since ExecuteCommand is private, how would the function be called after the parameters are assigned?
Is it currently possible to do for example
GET 'customers/{CustomerId}/invoices/{InvoiceNumber}'
or
POST /root/Calculator/Add/1/2
using interface based services?
TSQLRestServerFullMemory
It looks as if it always thinks the binary data is JSON, so maybe that is used as a fallback when it can't determine the data type? Since these are partial files, not full files (which would be too big - up to 1 GB), I assume determining the content type will not succeed. How can I provide / override / force the content type when serving binary data?
Providing the full path of the original file does not resolve the issue - it still sends the content-type as JSON.
I am attempting to correctly set the content-type when sending parts of binary files using code like this:
AddToCSV('Accept-Ranges: bytes', Ctxt.Call.OutHead, #13#10);
AddToCSV('Content-Range: bytes ' + IntToStr(RangeStart) + '-' + IntToStr(RangeEnd) + '/' + IntToStr(SrcFileSize), Ctxt.Call.OutHead, #13#10);
AddToCSV(GetMimeContentTypeHeader('', SrcFileName), Ctxt.Call.OutHead, #13#10);
Ctxt.ReturnBlob(blob, HTTP_PARTIALCONTENT);
In the procedure, the content-type is correctly set as video/H264 in the Ctxt.Call.OutHead, but when the header reaches the client, it is content-type: application/json; charset=UTF-8
Any suggestions where this might be overridden? I have tried stepping through ReturnBlob, but when it returns to my proc, the header still looks correct, so I am not sure where it is happening.
Using v1.18
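One thing that might be worth trying - an untested guess, not a confirmed fix: Returns() on the context accepts a custom-header argument, so passing the whole header block there, instead of patching Call.OutHead before ReturnBlob, may survive whatever resets the content type later. The hard cast of the blob assumes the body bytes are sent unmodified; verify both points against the v1.18 source:

```pascal
// Untested sketch for v1.18: send the partial content with an explicit
// header block via Returns() instead of ReturnBlob().
Ctxt.Returns(RawUTF8(blob), HTTP_PARTIALCONTENT,
  'Accept-Ranges: bytes'#13#10 +
  'Content-Range: bytes ' + IntToStr(RangeStart) + '-' +
    IntToStr(RangeEnd) + '/' + IntToStr(SrcFileSize) + #13#10 +
  GetMimeContentTypeHeader('', SrcFileName));
```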