If the BLOB data is larger than 64 bytes, the RetrieveBlob function returns garbled data (part of the data + null bytes + another part of the data + more nulls, and so on).
The bug seems to be in TODBCStatement.ColumnBlob (SynDBODBC.pas). I commented out the buggy usage of ColumnDataSize and added one line to fix it:
function TODBCStatement.ColumnBlob(Col: integer): RawByteString;
var res: TSQLDBStatementGetCol;
    offs: integer;
begin
  res := GetCol(Col,ftBlob);
  case res of
    colNull: result := '';
    colWrongType: ColumnToTypedValue(Col,ftBlob,result);
  else
    with fColumns[Col] do begin
      result := copy(fColData[Col],1,ColumnDataSize);
      offs := 0;
      while res=colDataTruncated do begin
        //inc(offs,ColumnDataSize);
        res := GetColNextChunk(Col);
        if ColumnDataSize<=0 then
          break;
        //SetLength(result,offs+ColumnDataSize);
        //move(pointer(fColData[Col])^,PByteArray(result)^[offs],ColumnDataSize);
        result := result + copy(fColData[Col],1,ColumnDataSize); // < bug fix >
      end;
    end;
  end;
end;
The bug was reproduced on MySQL 5.5 and PostgreSQL 9.3 via their ODBC drivers.
I've committed http://synopse.info/fossil/info/58f2ed96c3
But I still don't see what was wrong with the previous implementation, which was a bit more complex but avoided an unneeded temporary memory allocation.
IMHO both logics are equivalent.
Thanks for the feedback!
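For reference, a minimal side-by-side of the two append strategies being compared, taken from the added and commented-out lines in the snippet above; BytesInBuffer is just a placeholder name for whatever byte count the driver actually wrote into fColData[Col]:

// concat-based append (the quick fix): copy() builds a temporary string
// which is then concatenated onto result - one extra allocation per chunk
result := result + copy(fColData[Col], 1, BytesInBuffer);

// move-based append (the previous logic): grow result in place and copy
// straight out of the fetch buffer, with no intermediate string
offs := length(result);
SetLength(result, offs + BytesInBuffer);
move(pointer(fColData[Col])^, PByteArray(result)^[offs], BytesInBuffer);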
ColumnDataSize indicates the number of bytes that remain to be read when the data is truncated, but the previous logic was based on the assumption that it indicates the actual byte count in the received buffer fColData.
My fix is not optimized; I suggested it as a quick fix. Actually, it only works because the Copy function clamps the size parameter when it is greater than the string length.
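For illustration, a tiny sketch of the RTL behaviour being relied on here: Copy clamps the requested count to what is actually available, so asking for more characters than Length(s) is harmless:

procedure CopyClampDemo;
var s: RawByteString;
begin
  s := 'abc';
  // Count (100) is greater than Length(s) = 3: Copy clamps the count to
  // the characters actually available, so no error is raised
  s := copy(s, 1, 100);
  assert(s = 'abc');
end;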
From here: https://msdn.microsoft.com/en-us/library/ms715441.aspx
SQLSTATE: 01004
Error: String data, right truncated
Description: Not all of the data for the specified column, Col_or_Param_Num, could be retrieved in a single call to the function. SQL_NO_TOTAL or the length of the data remaining in the specified column prior to the current call to SQLGetData is returned in *StrLen_or_IndPtr. (Function returns SQL_SUCCESS_WITH_INFO.)
So, after the first GetData call, if we detect that the data is truncated, we can use ColumnDataSize to calculate the total buffer size for the next GetData call and to set the result string size. But I don't know what we should do if the function returns SQL_NO_TOTAL.
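A minimal sketch of that allocation strategy, not the actual SynDBODBC code: ReadBlobColumn, TFetchChunk and the 64 KB chunk size are hypothetical, the indicator semantics are taken from the MSDN description above, and SysUtils is assumed in the uses clause for PByteArray:

type
  // hypothetical callback: performs one SQLGetData call into Buf/BufLen,
  // returns the number of bytes written to Buf, sets Indicator to the
  // driver's StrLen_or_IndPtr value and Truncated to true when the driver
  // reported SQLSTATE 01004 (string data, right truncated)
  TFetchChunk = function(var Buf; BufLen: integer; out Indicator: integer;
    out Truncated: boolean): integer;

const
  SQL_NO_TOTAL = -4; // standard ODBC constant

function ReadBlobColumn(FetchChunk: TFetchChunk): RawByteString;
var chunk: array[0..65535] of byte;
    written, indicator, offs: integer;
    truncated: boolean;
begin
  result := '';
  offs := 0;
  repeat
    written := FetchChunk(chunk, SizeOf(chunk), indicator, truncated);
    if written <= 0 then
      break;
    if truncated and (indicator <> SQL_NO_TOTAL) and
       (length(result) < offs + indicator) then
      // indicator = bytes remaining before this call, so the final size is
      // known: allocate the whole result once, up front
      SetLength(result, offs + indicator)
    else if length(result) < offs + written then
      // SQL_NO_TOTAL (or the final chunk): total unknown, grow per chunk
      SetLength(result, offs + written);
    move(chunk, PByteArray(result)^[offs], written);
    inc(offs, written);
  until not truncated;
  if length(result) > offs then
    SetLength(result, offs); // defensive trim if the driver over-reported
end;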
Also, I propose increasing the default buffer size for big BLOBs in TODBCStatement.BindColumns:
if ColumnSize>65535 then
  ColumnSize := 0; // avoid out of memory error for BLOBs
After ColumnSize is set to zero, the buffer size is set to 64 bytes, so the first GetData call returns only a very small part of the data and produces an info message about the buffer size in the logs:
000000000184E389 ! info [01004] The buffer was too small for the GetData. (-2)
0000000001B122A8 ! info [01004] The buffer was too small for the GetData. (-2)
I propose setting it to at least 64 KB (MINIMUM_CHUNK_SIZE), or much greater, e.g. 10*64 KB ("640k will be enough" (c)).
Although this will not matter much if, after the first GetData call, the buffer is grown to the actual BLOB size (currently it is only grown up to 64 KB).
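A minimal sketch of what that proposal could look like in TODBCStatement.BindColumns; ODBC_BLOB_BIND_SIZE is a hypothetical name for the 640 KB figure suggested above, not a constant defined in SynDBODBC:

const
  ODBC_BLOB_BIND_SIZE = 10 * 65536; // "640k"; MINIMUM_CHUNK_SIZE (64 KB) would be the lower bound

// instead of "ColumnSize := 0" (which currently ends up as a 64-byte buffer),
// cap oversized BLOB columns at a still-useful initial buffer size:
if ColumnSize > ODBC_BLOB_BIND_SIZE then
  ColumnSize := ODBC_BLOB_BIND_SIZE;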