Just wanted to share how I am securing commands that get executed in a concurrently opened transaction. Of course this is no real solution, but at least it's preventing the worst. ;-)
1) Implemented a new procedure TransactionCheck
2) Calling it from RemoteProcessMessage for commandExecs
(syndb.pas)
procedure TSQLDBProxyConnectionProtocol.TransactionCheck(AConnection: TSQLDBConnection; ASessionId: Integer);
begin
  // prevent executing a command in someone else's transaction
  if (ASessionID<>0) and (fTransactionSessionID<>0) and (ASessionID<>fTransactionSessionID) and
     (AConnection.Properties.InheritsFrom(TSQLDBConnectionPropertiesThreadSafe)) and
     (TSQLDBConnectionPropertiesThreadSafe(AConnection.Properties).ThreadingMode=tmMainConnection) then
  begin
    raise ESQLDBRemote.Create('Remote transaction active');
  end;
end;
procedure TSQLDBConnection.RemoteProcessMessage(const Input: RawByteString;
out Output: RawByteString; Protocol: TSQLDBProxyConnectionProtocol);
...
cExecute, cExecuteToBinary, cExecuteToJSON, cExecuteToExpandedJSON: begin
  RecordLoad(InputExecute,O,TypeInfo(TSQLDBProxyConnectionCommandExecute));
  ExecuteWithResults := header.Command<>cExecute;
  >>>> Protocol.TransactionCheck(self,header.SessionID);
  Stmt := NewStatementPrepared(InputExecute.SQL,ExecuteWithResults,true);
  if fBlobAsNull in InputExecute.Force then
    Stmt.ForceBlobAsNull := true;
@ab can I provide you more details?
thanks for your effort! I can confirm that the date 9999-12-31 is now working with array insert!
Sorry for my delay (I was out of office last week).
Thanks,
Matt
Hi,
in our legacy application we're using SynDbOracle. I realized it is not possible to save years > 4096 using array binding. In our application the date 9999-12-31 is used as infinity. With "normal" parameter binding there is no problem saving those values.
The problem is the conversion from ISO string to TOracleDate in TSQLDBOracleStatement.ExecutePrepared
....
    ftUTF8,ftDate: begin
      L := length(VArray[j])-2; // -2 since quotes will be removed
      if VType=ftDate then
        if L<=0 then
          oDataDAT^[j].From(0) else
>>>>>>    oDataDAT^[j].From(PUTF8Char(pointer(VArray[j]))+1,L)
....
The problem can also easily be reproduced using a TOracleDate variable.
1) do a From('9999-01-01T15:00:00')
2) do a ToIso8601
You'll see that the resulting value is 1807-01-01T15:00:00, which can also be found in an inserted record.
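For reference, here is a minimal repro sketch of what I mean (assuming the TOracleDate record from SynDBOracle and its From/ToIso8601 overloads; the exact overload signatures may differ):
// needs SynCommons and SynDBOracle in the uses clause
procedure ReproduceYearOverflow;
var
  d: TOracleDate;
  s: RawByteString;
begin
  d.From('9999-01-01T15:00:00'); // year 9999, like our "infinity" date
  d.ToIso8601(s);
  writeln(s); // prints 1807-01-01T15:00:00 instead of 9999-01-01T15:00:00
end;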
Is this behaviour known?
best regards,
Matthias
I am using TSQLDBServerHttpApi on the server side, where it is handled in SynDb.TSQLDBConnection.RemoteProcessMessage. The database connection on the server is an Oracle connection (SynDbOracle). As far as I understood, the problem is in RemoteProcessMessage. On cTryStartTransaction it correctly checks whether a concurrent transaction exists or not. On cExecute it just executes the statement on the connection, regardless of whether a transaction from another session is active or not.
Hi,
using the http api server in single-threaded mode (tmMainConnection), it allows the use of transactions. If a transaction of one client is going on, it correctly rejects transaction starts of other clients (after some internal retries).
But if another client does an insert or update outside of any transaction, it is executed within the currently active transaction of the other client.
Is this a known or the intended behaviour?
Best regards,
Matthias
Great work, thanks Del! Your report just saved me!
Hi,
I just wanted to share a thought. In SetInternalProperties of the SQLDBProxyConnectionProperties, called from the constructor, the initial connection is made to the SQLDBServerHttpApi. After getting the token, the session is established and the session id is retrieved.
SynDB.pas
procedure TSQLDBProxyConnectionPropertiesAbstract.SetInternalProperties;
...
begin
  ...
  Process(cGetToken,self,token);
  ...
  CurrentSession := Process(cGetDBMS,InputCredential,fDBMS);
end;
If anything goes wrong here (causing an exception), the destructor is called, since the exception happened in the constructor. In the destructor Process(cQuit) is called, trying to end the session on the server.
destructor TSQLDBProxyConnectionPropertiesAbstract.Destroy;
begin
  try
    inherited Destroy;
    Process(cQuit,self,self);
  finally
    fProtocol.Free;
  end;
end;
If the connection has never been established (due to a connection error or wrong password), it will raise another exception in the destructor, and in case of a timeout-related connection error there is another wait for the timeout to happen. Wouldn't it make more sense to suppress the call to quit in such scenarios?
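Something along these lines is what I have in mind (fHandshakeDone is a hypothetical flag that the constructor would set once cGetDBMS succeeded; it does not exist in SynDB today):
destructor TSQLDBProxyConnectionPropertiesAbstract.Destroy;
begin
  try
    inherited Destroy;
    if fHandshakeDone then // skip cQuit if the session was never established
      Process(cQuit,self,self);
  finally
    fProtocol.Free;
  end;
end;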
Thanks for your opinion,
Matthias
Hi pvn0,
thanks for updating the ticket with the resolution! I think that at least the resolution in your case was something different. In my scenario the debugger is not involved.
I just have the problem that some single requests are delayed under load; everything else is working perfectly.
I guess mpv is referring to ab's comment about winhttp vs socket client. I am still researching this matter.
thanks,
Matthias
Hi mpv,
thanks for your link. I didn't find it in the forum while investigating.
thanks,
Matthias
Hi,
we are using the TSQLDBServerHttpApi for our legacy application to proxy the sql access using http.sys and WinHttp client classes.
In load situations I noticed a delay of 500-1000 ms in the processing of some http requests (also for the cGetToken and cGetDBMS commands), sometimes even more.
I found out that the delay already takes place in THttpApiServer.Execute, as BeforeRequest is already delayed.
It seems the request is read with a delay, even though client and server are on the same virtual Windows 2016 Server machine and I am using tmThreadPool, ensuring that there are enough threads available (tried with 10 to 100 server instances).
But other requests (placed later by other client processes) are processed in the meantime.
In my last load simulation we are talking about 900 requests (commands) within 1 minute. The statements are simple inserts into a small table and take about 30 ms or less.
I've done a logman httptrace and it appears that the TimeCreated already contains the delay.
Any ideas what could be responsible for the delay or where I could dig into for further investigations?
Thanks,
Matthias
ok, granted. Didn't know that yet.
But nevertheless the exception in the constructor seems to cause a memory leak of 504 bytes each time.
I used the following code to measure the memory consumption before and after the object creation. I instantiated a TSQLDBWinHTTPConnectionProperties with an unused server port.
function GetMemoryUsed: int64;
var
  ManagerState: TMemoryManagerState;
  BlockTypeState: TSmallBlockTypeState;
begin
  {$WARN SYMBOL_PLATFORM OFF}
  GetMemoryManagerState(ManagerState);
  {$WARN SYMBOL_PLATFORM ON}
  result := ManagerState.TotalAllocatedMediumBlockSize + ManagerState.TotalAllocatedLargeBlockSize;
  for BlockTypeState in ManagerState.SmallBlockTypeStates do
    result := result + BlockTypeState.UseableBlockSize * BlockTypeState.AllocatedBlockCount;
end;
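This is roughly how I called it (the server address and credentials are placeholders pointing at an unused port, so the constructor is expected to fail):
var
  before, after: int64;
begin
  before := GetMemoryUsed;
  try
    // nothing is listening on this port, so Create raises an exception
    TSQLDBWinHTTPConnectionProperties.Create('localhost:8899','db','user','pass');
  except
    // swallow the connection error, we only want the measurement
  end;
  after := GetMemoryUsed;
  writeln(after-before); // shows the ~504 leaked bytes
end;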
Hi,
I noticed that the construction of a TSQLDBWinHTTPConnectionProperties instance fails with an exception if the server is not available or the login information is incorrect.
As the exception happens in the constructor, it leaks 504 bytes each time.
Overriding the constructor and putting the inherited call in a try..except does not work either: access violations happen when destroying the not fully created instance.
constructor TMyWinHTTPRemoteConnectionProperties.Create(const aServerName, aDatabaseName, aUserID, aPassWord: RawUTF8);
begin
  try
    inherited;
  except
    self.Free;
    raise;
  end;
end;
Also, destroying a TSQLDBWinHTTPConnectionProperties instance after the server has become unavailable results in an exception. I assume that the memory of this instance won't be released either.
Is this the expected behaviour?
thanks,
Matthias
Hi,
as we succeeded in integrating this, I wanted to share our solution. Maybe someday it's useful to someone else.
The key was that all requests are handled by the SynDb.TSQLDbConnection.RemoteProcessMessage method. Instead of just letting the connection create the new prepared statement, we call a newly introduced template method: we derived from the connection class and implemented our template method there. In this template method we decide whether the request can be handled locally or whether we inject our own new statement instance, which is a remote statement again. Nothing more to change, and it really works like a charm (see the sketch below).
Only transactions aren't duplicated in this scenario, but we don't need them in this particular case. If anyone has an idea of how to easily add this functionality, it's welcome ;-)
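A rough sketch of the idea (TOurProxyConnection, NewStatementForRequest, ShouldRedirect and fUplinkProperties are our own illustrative names, not mORMot API; we derive from the Oracle connection because that is what our server uses):
// needs SynDB and SynDBOracle in the uses clause
type
  TOurProxyConnection = class(TSQLDBOracleConnection)
  protected
    fUplinkProperties: TSQLDBConnectionProperties; // the central system
    function ShouldRedirect(const aSQL: RawUTF8): boolean; virtual;
  public
    // template method called from our patched RemoteProcessMessage
    // instead of the plain NewStatementPrepared call
    function NewStatementForRequest(const aSQL: RawUTF8;
      ExpectResults: boolean): ISQLDBStatement; virtual;
  end;

function TOurProxyConnection.NewStatementForRequest(const aSQL: RawUTF8;
  ExpectResults: boolean): ISQLDBStatement;
begin
  if ShouldRedirect(aSQL) then
    // inject a remote statement again, executed on the central system
    result := fUplinkProperties.NewThreadSafeStatementPrepared(aSQL,ExpectResults,true)
  else
    // handle it locally as before
    result := NewStatementPrepared(aSQL,ExpectResults,true);
end;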
thanks,
Matt
Testing continued: unlike the previous query, which just truncated the data, this query results in an ORA-01406 exception.
select bound from (
Select :bound bound from dual connect By Rownum <= 100001
union all
Select :bound bound from dual connect By Rownum <= 100001
) order by 1
If only 10001 rows are used instead, it works. It also works if executed a second time (from the cache).
In TOAD 12.9.0.71 both variants work.
thanks,
Matthias
Of course I understand that this is the expected behaviour.
But just to let you know: tested in TOAD 12.9.0.71, it just works, without ever having executed this query on the instance before.
Also, as mentioned before, it just works when using SynDBFireDAC.
thanks, Matt
Thanks for your suggestions!
Swapping the columns did not help, but casting the data does the trick.
But why is this expected? If I execute the same query using SynDBFireDAC, no errors occur. Also, using TOAD does not cause an error!
thanks,
Matthias
Hi,
if we use the following query
select bound from (Select :bound bound from dual union all Select :bound2 bound from dual) order by 1
the column named "bound" returns ''.
SynLog reveals that the column has been truncated.
20170621 09311207 SQL TOurOracleStatement(04242EE0) select bound from (Select 'Testinhalt' bound from dual union all Select 'Testinhalt2' bound from dual) order by 1
20170621 09311207 DB Truncated column BOUND
If the query is run again, it works due to the query cache. Disabling the query cache always generates this error.
The logging is done in the TSQLDBOracleStatement.GetCol method.
Can anyone confirm this or have an idea of how to solve it?
thanks for your response,
Matthias
Hi,
We are porting our legacy application to mORMot, using the database layer but not the ORM part.
We want to implement the http connection using our own SynDB HTTP server. As a special requirement we want to redirect/proxy SQL commands affecting certain tables to a central system.
We figured out that TSQLDBServerHttpApi should be a good point to start.
What would be the right place for implementing an inspection of the SQL statement or HTTP request to decide whether it has to be redirected or not?
How could we do a redirection and properly hand over the THttpServerRequest to the uplink server?
thanks,
Matt
Hi,
we are migrating the database layer of our existing legacy application to mORMot, mainly using the compatibility class TQuery.
For accessing the database we use FireDAC as we also utilize the oracle events feature (DBMS_ALERT).
Running our own test cases, we experienced a problem using query parameters when null values are involved: FireDAC complains that the data type is not defined.
Especially when the query is cached, it can happen that a parameter is null the first time and has a proper value the next time the query is executed.
Finally we fixed it on our side by calling a template method in procedure TQuery.Execute(ExpectResults: Boolean); before the actual parameter binding takes place. In this template method we set the actual data type of the original FireDAC parameter instance. We know the parameter types anyway, as in our existing code the parameters had been declared explicitly.
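A rough sketch of what that hook looks like (TOurQuery, BeforeParameterBinding and LookupDeclaredType are our own illustrative names, and how the underlying TFDQuery is obtained is left out; only TFDParam members like IsNull and DataType are actual FireDAC API):
// needs Data.DB, FireDAC.Stan.Param and FireDAC.Comp.Client in the uses clause
type
  TOurQuery = class(TQuery)
  protected
    // called from our patched TQuery.Execute right before parameter binding
    procedure BeforeParameterBinding(aFDQuery: TFDQuery); virtual;
  end;

procedure TOurQuery.BeforeParameterBinding(aFDQuery: TFDQuery);
var
  i: integer;
begin
  // the legacy code declared every parameter explicitly, so the intended
  // type is known; assign it whenever the value is null and the type
  // would otherwise stay undefined
  for i := 0 to aFDQuery.Params.Count-1 do
    if aFDQuery.Params[i].IsNull and (aFDQuery.Params[i].DataType=ftUnknown) then
      aFDQuery.Params[i].DataType := LookupDeclaredType(aFDQuery.Params[i].Name);
end;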
Maybe there is also another possible approach to fix it.
Best regards,
Matt