Hello, sadly it's me again,
but I'm currently completely stuck on a simple task and hope to find the answer through this post.
In a function I build a query (for example):
SELECT a.* FROM Item AS a JOIN (SELECT DISTINCT ItemID FROM ItemFolderEntry WHERE (ItemFolderID = 1) OR (ItemFolderID = 3)) AS aa ON (aa.ItemID = a.ID)
EXCEPT
SELECT a.* FROM Item AS a JOIN (SELECT DISTINCT ItemID FROM ItemFolderEntry WHERE (ItemFolderID = 2)) AS aa ON (aa.ItemID = a.ID)
and execute it like this:
...
JSONResult := TSQLAccess.RestServer.DB.ExecuteJSON(CompoundSQL);
if JSONResult = '' then
  Exit;
JSONTable := TSQLTableJSON.Create([TSQLRecordItem], CompoundSQL, JSONResult);
Item := TSQLRecordItem.Create;
try
  Item.FillPrepare(JSONTable);
  while Item.FillOne do
    TargetList.Add(ConvertItem(Item));
finally
  Item.Free;
  JSONTable.Free;
end;
...
I retrieve results, but not what I expect. The compound part of the query is not taken into account; there can be a UNION, INTERSECT or EXCEPT, but it has no effect, it is always ignored.
When I use the same query directly in SQLite (CLI) on the same tables (copies without the collations SYSTEMNOCASE or ISO8601), I get the result as intended.
Is there any code in the framework (I have searched and found none) which manipulates such compound queries?
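For reference, the intended result of the compound query can be reproduced in a few lines outside the framework. Here is a standalone sketch using Python's built-in sqlite3 module, with simplified, assumed table definitions (not the framework's schema):

```python
import sqlite3

# Minimal standalone sketch of the expected behaviour: items 1 and 2 are in
# folders 1 and 2, item 3 only in folder 3, so EXCEPT should leave item 3.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Item (ID INTEGER PRIMARY KEY, Title TEXT);
    CREATE TABLE ItemFolderEntry (ItemFolderID INTEGER, ItemID INTEGER);
    INSERT INTO Item VALUES (1, 'First Item'), (2, 'Second Item'), (3, 'Third Item');
    INSERT INTO ItemFolderEntry VALUES (1, 1), (2, 1), (1, 2), (2, 2), (3, 3);
""")
rows = con.execute("""
    SELECT a.* FROM Item AS a JOIN (SELECT DISTINCT ItemID FROM ItemFolderEntry
      WHERE (ItemFolderID = 1) OR (ItemFolderID = 3)) AS aa ON (aa.ItemID = a.ID)
    EXCEPT
    SELECT a.* FROM Item AS a JOIN (SELECT DISTINCT ItemID FROM ItemFolderEntry
      WHERE (ItemFolderID = 2)) AS aa ON (aa.ItemID = a.ID)
""").fetchall()
print(rows)  # [(3, 'Third Item')]
```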
Last edited by WladiD (2010-11-16 10:44:30)
I look and look and can't find anything.
The only difference is the custom collate functions. Is it possible that one of them doesn't work correctly, so that SQLite is unable to detect identical rows? The documentation says:
For the purposes of determining duplicate rows for the results of compound SELECT operators, NULL values are considered equal to other NULL values and distinct from all non-NULL values. The collation sequence used to compare two text values is determined as if the columns of the left and right-hand SELECT statements were the left and right-hand operands of the equals (=) operator, except that greater precedence is not assigned to a collation sequence specified with the postfix COLLATE operator. No affinity transformations are applied to any values when comparing rows as part of a compound SELECT.
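The quoted rule can be seen in action with a small standalone sketch (Python's sqlite3 module, hypothetical tables L and R): the collation declared on the left-hand column decides which rows count as duplicates for the compound operator.

```python
import sqlite3

# With COLLATE NOCASE on the left-hand column, EXCEPT treats 'Abc' and 'ABC'
# as equal, so the declared collation decides which rows survive the operator.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE L (T TEXT COLLATE NOCASE);
    CREATE TABLE R (T TEXT);
    INSERT INTO L VALUES ('Abc'), ('Def');
    INSERT INTO R VALUES ('ABC');
""")
rows = con.execute("SELECT T FROM L EXCEPT SELECT T FROM R").fetchall()
print(rows)  # [('Def',)] -- 'Abc' is removed, NOCASE makes it equal to 'ABC'
```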
In the "Item" table I have both framework collations, ISO8601 and SYSTEMNOCASE. Maybe this is the issue.
IMHO collations have nothing to do with this. You are just comparing integer values (ItemFolderID or such), which don't use collation at all.
To be sure, put a debugger breakpoint in the collation routines, or comment them in TSQLDataBase.DBOpen (SQLite3.pas).
So I suspect your CompoundSQL statement is not transmitted to the SQLite engine.
Step through the TSQLRequest.Execute method with the debugger.
Is the retrieved JSON content correct?
Which SQLite3 cli version are you using?
What is the purpose of your ConvertItem function?
IMHO collations have nothing to do with this. You are just comparing integer values (ItemFolderID or such), which don't use collation at all.
To be sure, put a debugger breakpoint in the collation routines, or comment them in TSQLDataBase.DBOpen (SQLite3.pas).
You are partially right. The subselect for the right part of the join (...FROM Item AS a JOIN (SELECT DISTINCT)...) doesn't need a collation, because it only compares/selects integers. But the compound operators (EXCEPT, UNION, ...) do require the collate functions if the result contains text fields which have a default collate, see the SQLite doc (section Compound Select Statements). Yes, I set a breakpoint in Utf8SQLCompNoCase, and while executing the posted query it stops on each text column with a collate, for every row.
So I suspect your CompoundSQL statement is not transmitted to the SQLite engine.
Step through the TSQLRequest.Execute method with the debugger.
Is the retrieved JSON content correct?
The posted query executes without any errors; the JSON content is well formed, but filled with unexpected items.
Which SQLite3 cli version are you using?
What is the purpose of your ConvertItem function?
The SQLite CLI is 3.7.2.
ConvertItem only converts from TSQLRecordItem to another TObject descendant; there are no high-level actions involved, so I can definitely say that nothing is wrong there.
I'm currently writing a test project to reproduce the behaviour and will post it here later.
Good news, I found and fixed the bug!
If a field with your custom collate ISO8601 is empty '' (not NULL), SQLite calls the registered collate function with a length of 0 for s1Len or s2Len, but the pointers s1 or s2 still point to the string from the previous call. So no errors or access violations occur, but the results were wrong. I extended the ISO8601 collate function as follows, and now it works as it should.
function Utf8SQLDateTime(CollateParam: pointer; s1Len: integer; s1: pointer;
  s2Len: integer; s2: pointer): integer; {$ifdef USEC}cdecl;{$endif}
var
  V1, V2: Int64; // faster than Iso8601ToDateTimePChar: uses integer math
begin
  // begin patch
  if s1Len = 0 then
    s1 := PUTF8Char('');
  if s2Len = 0 then
    s2 := PUTF8Char('');
  // end patch
  if s1 = s2 then begin
    result := 0;
    exit;
  end;
  V1 := Iso8601ToSecondsPUTF8Char(s1, s1Len);
  V2 := Iso8601ToSecondsPUTF8Char(s2, s2Len);
  if (V1 = 0) or (V2 = 0) then // any invalid date -> compare as UTF-8 strings
    result := UTF8ILComp(s1, s2, s1Len, s2Len)
  else if V1 < V2 then
    result := -1
  else if V1 = V2 then
    result := 0
  else
    result := +1;
end;
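A standalone sketch with Python's sqlite3 module cannot reproduce the raw pointer/length bug (Python receives collation arguments as plain strings), but it does show that an empty value '' (not NULL) really is passed to a registered collation, which is exactly the case the patch has to handle. The comparator below is a hypothetical stand-in, not the framework's:

```python
import sqlite3

calls = []
def iso8601(a, b):                  # hypothetical stand-in comparator
    calls.append((a, b))
    return (a > b) - (a < b)        # plain string compare

con = sqlite3.connect(":memory:")
con.create_collation("ISO8601", iso8601)
con.executescript("""
    CREATE TABLE T (D TEXT COLLATE ISO8601);
    INSERT INTO T VALUES ('2010-11-16'), (''), ('2010-11-17');
""")
rows = con.execute("SELECT D FROM T ORDER BY D").fetchall()
# the empty string reaches the custom collation during the sort
assert any('' in pair for pair in calls)
print(rows)  # [('',), ('2010-11-16',), ('2010-11-17',)]
```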
Maybe the same issue exists in the SQLite3.Utf8SQLCompNoCase function, because there are no length checks there either.
It's time for the next commit.
P.S. I can post the announced test project, if you are interested...?
Great finding!
About Utf8SQLCompNoCase, the called UTF8ILComp low level function properly handles s?Len=0 cases. I just added the case when both s1Len and s2Len are equal to 0.
See http://synopse.info/fossil/info/8d52dda6a5
I'm still waiting for the official 3.7.4 release of the engine, which should not be late now.
About the test project, it's very interesting!
The more samples we have, the better!
I'm not sure whether this simple test project belongs with the samples, but here it is, as promised:
program CollateFunctionIssue;

{$APPTYPE CONSOLE}

uses
  SysUtils,
  SynCommons,
  SQLite3Commons,
  SQLite3,
  DateUtils;

type
  TSQLRecordItem = class(TSQLRecord)
  private
    FItemID: RawUTF8;
    FTitle: RawUTF8;
    FStartTime: TDateTime;
    FEndTime: TDateTime;
  published
    property OriginItemID: RawUTF8 read FItemID write FItemID stored FALSE;
    property StartTime: TDateTime read FStartTime write FStartTime;
    property EndTime: TDateTime read FEndTime write FEndTime;
    property Title: RawUTF8 read FTitle write FTitle;
  end;

  TSQLRecordItemFolder = class(TSQLRecord); // Isn't really needed for this example, just for ref

  TSQLRecordItemFolderEntry = class(TSQLRecord)
  private
    FItemFolderID: TSQLRecordItemFolder;
    FItemID: TSQLRecordItem;
  published
    property ItemFolderID: TSQLRecordItemFolder read FItemFolderID write FItemFolderID;
    property ItemID: TSQLRecordItem read FItemID write FItemID;
  end;

var
  RestServer: TSQLRestServerDB;

function AddItem(OriginItemID, Title: String; StartTime: TDateTime): Integer;
var
  Item: TSQLRecordItem;
begin
  Item := TSQLRecordItem.Create;
  try
    Item.OriginItemID := StringToUTF8(OriginItemID);
    Item.Title := StringToUTF8(Title);
    Item.StartTime := StartTime;
    Result := RestServer.Add(Item, TRUE);
  finally
    Item.Free;
  end;
end;

procedure AddItemToFolder(ItemID, FolderID: Integer);
var
  FolderEntry: TSQLRecordItemFolderEntry;
begin
  FolderEntry := TSQLRecordItemFolderEntry.Create;
  try
    FolderEntry.ItemFolderID := Pointer(FolderID);
    FolderEntry.ItemID := Pointer(ItemID);
    RestServer.Add(FolderEntry, TRUE);
  finally
    FolderEntry.Free;
  end;
end;

procedure AddTestData;
var
  ItemID1, ItemID2, ItemID3: Integer;
begin
  Writeln('Begin AddTestData');
  ItemID1 := AddItem('123', 'First Item', Now);
  ItemID2 := AddItem('456', 'Second Item', IncDay(Now, 1));
  ItemID3 := AddItem('789', 'Third Item', IncDay(Now, 2));
  AddItemToFolder(ItemID1, 1);
  AddItemToFolder(ItemID1, 2);
  AddItemToFolder(ItemID2, 1);
  AddItemToFolder(ItemID2, 2);
  AddItemToFolder(ItemID3, 3);
  Writeln('End AddTestData');
end;

const
  DBFileName = 'Test.sqlite';

var
  DBExists: Boolean;
  CompoundSQL: RawUTF8;
  JSONResult: RawUTF8;
  JSONTable: TSQLTableJSON;
  Item: TSQLRecordItem;
begin
  DBExists := FileExists(DBFileName);
  RestServer := TSQLRestServerDB.Create(TSQLModel.Create([TSQLRecordItem,
    TSQLRecordItemFolder, TSQLRecordItemFolderEntry]), DBFileName);
  if not DBExists then
  begin
    RestServer.CreateMissingTables(0);
    AddTestData;
  end
  else
    Writeln('DB already exists.');

  {**
   * Folder   | 1 | 2 | 3 |
   * ----------------------
   * Item   1 | X | X |   |
   *        2 | X | X |   |
   *        3 |   |   | X |
   *
   * @see AddTestData
   *
   * This query should return only the Item #3!
   *}
  CompoundSQL := 'SELECT a.* FROM Item AS a JOIN (SELECT DISTINCT ItemID FROM ItemFolderEntry WHERE (ItemFolderID = 1) OR (ItemFolderID = 3)) AS aa ON (aa.ItemID = a.ID)' + #13#10 +
    'EXCEPT ' + #13#10 +
    'SELECT a.* FROM Item AS a JOIN (SELECT DISTINCT ItemID FROM ItemFolderEntry WHERE (ItemFolderID = 2)) AS aa ON (aa.ItemID = a.ID)';
  Writeln; Writeln(CompoundSQL); Writeln;

  JSONResult := RestServer.DB.ExecuteJSON(CompoundSQL, TRUE);
  if JSONResult = '' then
    Exit;
  JSONTable := TSQLTableJSON.Create([TSQLRecordItem], CompoundSQL, JSONResult);
  Item := TSQLRecordItem.Create;
  try
    Item.FillPrepare(JSONTable);
    while Item.FillOne do
      Writeln(Format('Item #%d, "%s"', [Item.ID, UTF8ToString(Item.Title)]));
  finally
    Item.Free;
    JSONTable.Free;
  end;
  Readln;
  RestServer.Free;
end.
Thanks, the TSQLTableJSON and FillPrepare/FillOne code could be instructive.
It shows how you can map a SQL result onto an existing class.
The resulting code is easy to follow, thanks to the ORM.
Your AddItem() function gave me an idea for a new generic method.
Stay tuned!
Here we are.
I've added a new
function TSQLRest.Add(aTable: TSQLRecordClass; const aSimpleFields: array of const): integer
method to add a record from a supplied list of const parameters, following the declared order of the published properties of the supplied table.
Therefore, you could have used the following code instead of a hand-made AddItem() function:
ItemID1 := RestServer.Add(TSQLRecordItem, ['123', 'First Item', Now, 0]);
ItemID2 := RestServer.Add(TSQLRecordItem, ['456', 'Second Item', IncDay(Now, 1), 0]);
ItemID3 := RestServer.Add(TSQLRecordItem, ['789', 'Third Item', IncDay(Now, 2), 0]);
Nice, isn't it?
I've also fixed some problems with the sftMany/TSQLRecordMany field type being handled as a not-simple field.
I've added two methods:
function TSQLRest.Update(aTable: TSQLRecordClass; aID: integer;
const aSimpleFields: array of const): boolean;
function TSQLRecord.SimplePropertiesFill(const aSimpleFields: array of const): boolean;
so that simple field content can be updated in one line, in the same way.
Very cool methods which allow fast one-liners, but they can also be evil, because they rely on the order and count of the published properties.
Imagine you write part of an application today with this construct in several places, and in a couple of months you extend your TSQLRecord descendant: the code still compiles, but fails silently (without an exception) at runtime. Even worse would be changing the order of the properties (possibly by mistake); in that case it happily inserts wrong data. This can be very frustrating.
These are just my worries.
But why not replace "aSimpleFields: array of const" with "aFieldsJSON: RawUTF8"?
An example call would then look like this:
ItemID1 := RestServer.Add(TSQLRecordItem, '{"OriginID":"123","Title":"FirstItem"}');
Advantages:
* No need to list all values
* Independent from the order
* The field names can be validated and an exception thrown if anything is wrong
Disadvantages:
* Still no checks by the compiler
* Is just a string
In Delphi I see no perfect solution for this kind of concept.
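The trade-off discussed above resembles positional versus named SQL parameters. A small sketch with Python's sqlite3 module and a hypothetical table shows the same risk: positional values silently depend on column order, while named parameters fail loudly when a name does not match.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Item (OriginItemID TEXT, Title TEXT)")

# positional: the two values are swapped here, yet the insert succeeds
con.execute("INSERT INTO Item VALUES (?, ?)", ("First Item", "123"))
row = con.execute("SELECT * FROM Item").fetchone()
print(row)  # ('First Item', '123') -- wrong data, no error raised

# named: a mismatched parameter name raises instead of inserting bad data
failed = False
try:
    con.execute("INSERT INTO Item VALUES (:OriginItemID, :Title)",
                {"OriginID": "123", "Title": "First Item"})
except sqlite3.ProgrammingError:
    failed = True
print(failed)  # True
```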
Your syntax with JSON is more difficult to write.
The current implementation with array of const is similar to your AddItem() function. If you swap two parameters of the same type in the AddItem() function's parameter order, you'll get another silent error...
About silent mistakes, I know they could happen, but:
- you add or remove properties/fields in a table rather than changing their order (what would be the point of changing the order?);
- the number of parameters is checked when the Add() method is called: if there is a missing parameter (a new one), it will fail.
So, as you say, there is no perfect solution for this kind of concept:
- if you want to rely 100% on Delphi's strong typing, use a TSQLRecord instance, then its properties;
- if you want to type fast, prototype and have less code to read, the new Add() method can help you, for example to populate a database with const values; in all cases it's easier to code than a SQL statement;
- if you want to reduce the risk of regressions, don't forget to put a comment in the TSQLRecord definition (on the same line as the published keyword, for instance) saying that the published properties order must stay the same, since the Add(array of const) method is used in the code.
It sounds to me as if you are offended by my post?
Many apologies if that is so!
I hope for further good collaboration.
It sounds to me as if you are offended by my post?
Oh no, not at all!!!
I just don't see any better alternative to this Add() method.
I agree 100% about your post and your point of view.
I was perhaps a bit rude in my answer; I answered straight away and didn't have a lot of time when typing...
Sorry for the misunderstanding.