Well, deterministic approaches are quite common these days, and the literature leans heavily their way.
The traditional approach of encrypting a growing database of private keys is being abandoned in fields where security is paramount and data loss is not acceptable.
Deterministic schemes have proved to be just as resilient to attacks (provided the seed is truly random), while being more resilient to data loss and corruption.
For instance you can have a look at BIP 0032 for a now battle-tested use case of deterministic key generation, and there are schemes for post-quantum resilience as well, should that materialize (f.i. https://dl.acm.org/doi/10.1145/3372297.3423361)
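To make the determinism concrete, here is a minimal Python sketch in the spirit of BIP 32 (heavily simplified: the key label and helper name are mine, and real BIP 32 splits the HMAC output into key + chain code and applies elliptic-curve arithmetic on top):

```python
import hashlib
import hmac

def derive_child_key(seed: bytes, index: int) -> bytes:
    """Deterministically derive a child key from a master seed.
    Simplified BIP 32-style sketch: HMAC-SHA512 of the seed gives the
    master key material, then each child is HMAC of (key || index)."""
    master = hmac.new(b"Demo seed", seed, hashlib.sha512).digest()
    key, chain_code = master[:32], master[32:]
    child = hmac.new(chain_code, key + index.to_bytes(4, "big"),
                     hashlib.sha512).digest()
    return child[:32]

# The whole point: the same seed always regenerates the same keys,
# so a backup of the seed phrase is a backup of every key, past and future.
k1 = derive_child_key(b"correct horse battery staple", 0)
k2 = derive_child_key(b"correct horse battery staple", 0)
assert k1 == k2
assert k1 != derive_child_key(b"correct horse battery staple", 1)
```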
Usage purpose is for sharing of secure notes, but without a centralized server controlling access to the notes (and thus having access to everything), so something closer to a password vault or PGP.
> If you change your password, you also change your key pair
In the scheme, the password/seed phrase defines everything: the "login" is public and untrustable (and amenable to DB admin abuse), so tying password changes to pubkeys is a feature.
If a password is suspected to be compromised, then the keys should be revoked ASAP, as they could be compromised and they are what matters.
Merely changing a weak password without revoking its keys would be pointless.
> Good password hashing requires a salt, which is not a secret value,
The "login" would be something universally unique and human-friendly, like an email address, and the salt would be that unique login plus a random per-user salt provided by the server.
The client app can also use a pepper, stored on the user's hardware and backed up by the user wherever they want.
At setup (or reset) of an "account", you would register the pubkey(s) on the server, which other users could use as a "phonebook" in the first steps of sharing notes.
Any changes in pubkeys for a given login would be tracked / detectable, and serve as a first line of defense against impostors.
For an actual password change, the user would post a notification of the new pubkey(s) with the old key(s), then it's up to interlocutors to accept the change (or not, leaving the possibility to confirm through other channels).
If a basic user forgets a password completely, then recovery would be through the notes they shared, which other users could share back again.
In a corporate setting, automatic sharing to a recovery account could be used (with the recovery seed kept offline or on a secure element).
> The resulting public key would, by construction, be usable in an offline dictionary attack.
Yes, but unlike in a password-protected access, there is neither a clear-text database nor a server-side decryption key to be leaked (or attacked).
This is a tradeoff: the cost of an offline attack would have to be way higher than more classic attacks against the server, backups or plain old social engineering.
The cost comes from the dictionary attack needing to be run against each user's keys, with access to the per-user random salt (and pepper), so attacking one user's data will not help attacking another user's data.
On the password hashing side, I am currently looking at Argon2id and BalloonHash (memory-hard all the way: logic gets ever faster and more parallel, but memory access keeps lagging behind).
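As a sketch of the idea (Python, with hashlib.scrypt standing in for Argon2id/BalloonHash since it is the memory-hard KDF that ships with CPython; all names, parameters and example values here are illustrative, not a spec):

```python
import hashlib

def derive_seed(password: str, login: str, server_salt: bytes,
                pepper: bytes) -> bytes:
    """Memory-hard password stretching sketch: the salt mixes the unique
    login, a random per-user salt from the server, and a client-side pepper."""
    salt = hashlib.sha256(login.encode() + server_salt + pepper).digest()
    # n=2**14, r=8 -> about 16 MiB of memory per derivation
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

pepper = b"stored-on-user-hardware"
seed = derive_seed("hunter2", "alice@example.com", b"per-user-random-salt", pepper)
# Deterministic: same inputs, same seed (from which the key pair is derived)
assert seed == derive_seed("hunter2", "alice@example.com",
                           b"per-user-random-salt", pepper)
# Different user => unrelated derivation, so a dictionary attack
# has to be repeated per user
assert seed != derive_seed("hunter2", "bob@example.com",
                           b"per-user-random-salt", pepper)
```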
Thanks, good to know it is safe. The context I am foreseeing would not be very sensitive as far as key generation goes (basically on user demand rather than automated).
I disagree on deterministic key generation: with a secure seed phrase, the security is comparable to or higher than classic password-protected storage (as demonstrated by HD wallets in the crypto world), and it makes backups and recovery more foolproof (both against data loss and against attacks on backups). Vulnerability to a weak CSPRNG (not so rare in VMs) is also eliminated.
Hi,
I have been looking at tweaks that would be required to make SynECC support deterministic key generation and derivation based on a seed phrase.
That could be achieved by feeding getRandomNumber a CSPRNG based on the seed. However, getRandomNumber calls TAESPRNG.Fill, where the Main CSPRNG can be specified, but it can only be specified globally, and it is potentially called from all over the place. So it is not deterministic without involving global locks and/or single-threading.
Alternatively it would be possible by adapting ecc_make_key_pas, which would be trivial, but looking at the comments in the code raises two questions:
Is the ecc_make_key_pas implementation safe in Delphi Win32? There is a comment about needing "proper UInt64 support at compiler level", and I am not sure what it applies to exactly.
How safe / resilient to side-channel or timing attacks is the Pascal implementation vs the C one? Is it more of a stopgap solution for ARM, or are the issues just performance ones?
Thanks!
Found the issue: TPdfCanvas.TextRect always does a BeginText/EndText, so SetTextMatrix is completely ignored
Hello,
I am trying to render rotated text, but SetTextMatrix does not seem to have any effect.
I have been trying variations of the following code, with SetTextMatrix inside or outside BeginText, with no effect... Is there anything obvious I missed?
var a := rotation*(PI/180);
var c := Cos(a);
var s := Sin(a);
pdf.Canvas.SetTextMatrix(c, s, -s, c, 0, 0);
pdf.Canvas.BeginText;
if centered then begin
   X := X + pdf.DefaultPageWidth div 2;
   Y := Y + pdf.DefaultPageHeight div 2;
end;
pdf.Canvas.TextRect(PdfRect(X, Y, X+1, Y+1), text, paCenter, False);
// pdf.Canvas.TextOut(X, Y, text); // tried this as well, no better
pdf.Canvas.EndText;
When trying to "modify" an existing .ods file (actually a zip) by recreating using AddFromZip, I bumped on a few issues:
* AddFromZip does not use RetrieveFileInfo, so it ends up with entries of size zero (this is easy to fix)
* TZipRead.Create ignores folders
* TZipWrite does not support adding folders
The last two issues are the most problematic: in that particular .ods, if the empty folders are missing, it is detected as a corrupt .ods.
To create a test .ods, just create a new spreadsheet in LibreOffice and hit "save".
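For reference, Python's zipfile shows the convention at stake: a folder is stored as a zero-byte entry whose name ends with a slash, and it is exactly these entries that need to survive a read/re-write round-trip (the folder name below is just an example of what LibreOffice emits):

```python
import io
import zipfile

# Build a zip containing an explicit (empty) folder entry plus a file,
# the way an .ods stores its empty folders.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("Configurations2/", b"")      # folder: trailing slash, zero bytes
    z.writestr("content.xml", b"<content/>")

with zipfile.ZipFile(buf) as z:
    names = z.namelist()
assert "Configurations2/" in names   # the folder entry is preserved
assert "content.xml" in names
```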
About one year later, MS AV is still present, with varying degrees of occurrence depending on the binary...
I got my executable whitelisted, but it's not enough, apparently the only surefire way is to add an exclusion.
It is not a SynZip issue per se, but it does not happen (as badly) with the various unzip.exe I tested.
Delphi RTL must be doing something that triggers MS AV in a bad way...
No difference between Win32 and Win64...
I simplified the code to just this, to try variations of the locking and access modes (with no effect):
var buf := z.UnZip(i);
var h := CreateFile(PChar(fileName), GENERIC_READ or GENERIC_WRITE,
0, nil, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
WriteFile(h, buf[1], Length(buf), nb, nil);
Assert(nb = Length(buf));
CloseHandle(h); // <-------- this is the slow part, and only it
Another clue possibly:
Just tried with an old unzip.exe (http://gnuwin32.sourceforge.net/packages/unzip.htm), it shows a very high MsMpEng.exe activity, but the extraction is overall quite fast (5-6 seconds, vs 40 seconds with SynZip in a Delphi binary)
For reference, 7zip takes 1-2 seconds with minimal MsMpEng activity (comparable to the activity when scanning the extracted files).
No digital signature on the unzipper, but running a full scan on the extracted binaries is very fast, barely registering in CPU usage, and AFAICT it happens for 7zip as well.
(but the unzipped binaries are signed)
Apparently some other zip extractors run into the same issue (https://thomasmullaly.codes/2017/11/19/ … hocolatey/)
I have tried renaming the file, no effect.
I also tried writing the first 16 kB header with zeroes, and then the rest of the file with content, this seems enough to prevent the slowdown. But when opening the file again to write the missing 16 kB headers, the MsMpEng slowdown kicks in again.
I also investigated when the slowdown occurs, it's on the FileClose, writing the data itself is fast.
When delaying the close, by not freeing the TFileStream and not doing the FileClose, then it is possible to unzip all the files at high speed.
The solution might be to defer all the FileClose calls to a background thread or an asynchronous process of some sort...
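A quick sketch of that idea (in Python for brevity; a Delphi version would submit the FileClose calls to a worker thread the same way):

```python
import concurrent.futures
import os
import tempfile

def extract_all(entries, out_dir):
    """Write every entry, but hand the (slow, AV-scanned) close off to a
    small thread pool, so extraction is not serialized on the close call."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        pending = []
        for name, data in entries:
            f = open(os.path.join(out_dir, name), "wb")
            f.write(data)
            pending.append(pool.submit(f.close))   # defer only the close
        concurrent.futures.wait(pending)           # join before reporting success

with tempfile.TemporaryDirectory() as d:
    extract_all([("a.bin", b"\x00" * 1024), ("b.bin", b"\x01" * 1024)], d)
    sizes = [os.path.getsize(os.path.join(d, n)) for n in ("a.bin", "b.bin")]
assert sizes == [1024, 1024]
```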
Hi,
I am experiencing very slow unzip times when unzipping executables (.exe, .dll) with SynZip, while unzipping the same archive with 7Zip is very fast.
Looking at the Windows task manager, all the CPU time is spent in the Microsoft Antivirus scanner, apparently every time SynZip writes a block, the AV does a scan.
By comparison, 7Zip only shows antivirus activity when the unzip is complete.
I have tried setting exclusive mode for the TFileStream in the .Unzip() method, but that showed no improvement...
I also tried to unzip in memory, then write all at once, no improvement either...
Any other ideas ?
Someone made a patch for System.Zip to add ZIP64 format support
https://github.com/ccy/delphi-zip
Diff'ing with Delphi's System.Zip will pinpoint the changes in headers.
I see the issue derives from when the hash leads to src-o <= 2: the compressor becomes dead in the water, as it then just copies bytes from src to dest.
IIRC some LZ variations avoid that by "adopting" the not yet scanned bytes, which results in a form of RLE.
If I understood correctly the "src-o = 1" situation, it is basically a case where the dictionary gets "refreshed" at every step for that particular hash?
Doing some basic tests, I have found it's possible to tailor the compressed stream so that it can be made to work, however the issue is in the way the decoder looks up offsets, the RLE can only happen after the offset has been used once, which would mean the compression phase would need to maintain an image of the decompressor offset state.
For instance, in an output with always the same bytes, you can enter the block guarded by src-o>2 the second time (f.i. when CWBit > 2); if you enter it the first time, then the "move(offset(h)^,dst^,t)" ([ replaced by ( to escape BBCode) in the decompressor will fail, as offset(h) is still empty.
At the moment I am not sure I fully understand the CWBit logic though, so I may be overlooking something simple.
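For what it's worth, the "adoption" trick can be shown with a toy decoder: when a back-reference is copied byte by byte, a length greater than the offset overlaps the bytes just written, which is how LZ77-family codecs get RLE for free (this illustrates the generic LZ mechanism, not SynLZ's actual code):

```python
def copy_match(dst: bytearray, offset: int, length: int) -> None:
    """Byte-by-byte LZ back-reference copy. When length > offset, the copy
    reads bytes it has just written, so offset 1 + long length == RLE."""
    start = len(dst) - offset
    for i in range(length):
        dst.append(dst[start + i])

run = bytearray(b"a")
copy_match(run, offset=1, length=7)   # "adopts" the not-yet-scanned bytes
assert run == bytearray(b"aaaaaaaa")

alt = bytearray(b"ab")
copy_match(alt, offset=2, length=6)
assert alt == bytearray(b"abababab")
```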
I only noticed it here when investigating a storage oddity for one sensor which got disconnected (so it was sending only zeroes) and became the #1 user of storage space.
Is there any description of how SynLZ works, beyond reading the code of SynLZcompress1pas?
Thanks!
Any chance for SynLZ issue? (https://synopse.info/forum/viewtopic.php?pid=27504)
The function is useful under 64-bit, when you have many SQLite databases open at the same time: the cache-size setting is per database (connection), while this limit is global to the process, so it allows limiting the total cache size.
Relevant snippets below
/// sets and/or queries the soft limit on the amount of heap memory
/// that may be allocated by SQLite.
soft_heap_limit64: function(N: Int64): Int64; cdecl;
'backup_pagecount','serialize','deserialize','soft_heap_limit64',
'config','db_config');
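The per-connection vs per-process distinction can be seen from any binding; here with Python's sqlite3 module (which exposes the PRAGMA but not soft_heap_limit64, so only the per-connection half is shown):

```python
import sqlite3

# cache_size is per connection: each connection holds its own page cache,
# so with many open databases the total can grow unbounded unless a global
# cap such as soft_heap_limit64 is set for the whole process.
c1 = sqlite3.connect(":memory:")
c2 = sqlite3.connect(":memory:")
c1.execute("PRAGMA cache_size = -2048")   # negative value = size in KiB (2 MiB)
c2.execute("PRAGMA cache_size = -8192")   # this connection gets 8 MiB

assert c1.execute("PRAGMA cache_size").fetchone()[0] == -2048
assert c2.execute("PRAGMA cache_size").fetchone()[0] == -8192
```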
When testing compression, I bumped on an odd issue with SynLZ: when the input data is a long sequence of the same byte, then compression fails.
The issue still happens if you introduce a non-repeated sub-sequence (for instance, if you give any byte in the sequence another value).
Another example is an 8-bit input string of 'hello' followed by 1000 space characters followed by 'world', which will not compress at all.
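For comparison, a general-purpose LZ codec handles such runs trivially; the same pathological input through zlib (Python here, just to show the expected behavior):

```python
import zlib

data = b"hello" + b" " * 1000 + b"world"
packed = zlib.compress(data)
# The 1000-byte run collapses into a couple of back-references
assert len(packed) < 64
assert zlib.decompress(packed) == data
```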
I don't understand why InitSocketInterface() should be called twice?
It is executed in the initialization section of SynCrtSock, and that's all... Anyway, I've patched the function as https://synopse.info/fossil/info/5924b5bc17
Thanks!
The reason it's called twice is that I have some units and projects using only SynWinSock, so they have their own call to InitSocketInterface, and when they are used in projects that also use SynCrtSock, InitSocketInterface gets called twice.
If it's not meant to be called more than once, I guess the SynSockCount mechanism could be removed, and the initialization performed directly by SynWinSock?
Also about the Windows version dependency, from the doc, it appears the function is available since WinXP, but only got standardized in Windows 8.1 / Windows Server 2012 R2.
For a patch, if you want to restrict support to Windows 8.1 and Windows Server 2012 R2, the cleanest option would simply be to remove SockEnhancedApi and all accompanying code (either for all versions, or just for Win64).
If you still want to support the legacy API, I guess the "SockEnhancedApi := False;" could be moved within the "if SynSockCount = 0 then begin" (and same with SockSChannelApi)
function InitSocketInterface(const Stack: TFileName = ''): Boolean;
begin
  result := False;
  EnterCriticalSection(SynSockCS);
  try
    if SynSockCount = 0 then begin
      SockEnhancedApi := False;
      SockSChannelApi := False;
      SockWship6Api := False;
      if Stack = '' then
        LibHandle := LoadLibrary(DLLStackName)
      else
But since these were explicitly left outside of the "if SynSockCount = 0", I guess there may be side effects?
Hi,
I have encountered an odd issue with InitSocketInterface when used under Win64: it can combine with a Windows bug in ResolveNameToIP, the same one as in this StackOverflow post (https://stackoverflow.com/questions/215 … it-windows), where you can end up with "DEADF00D" in the result under Win64 because of a Windows bug in the deprecated GetHostByName function.
The issue is that when InitSocketInterface is called twice, while the LoadLibrary logic is protected by SynSockCount, the SockEnhancedApi variable is not: it is reset to False, so instead of the GetAddrInfo API, it is GetHostByName that gets called, which is sometimes buggy under Win64.
Would it be possible to add a standard mechanism to provide a custom getRandomNumber ?
Or more generically, a way so that TAESPRNG would not directly be referenced everywhere in SynCrypto, but only through an indirection, so that a custom PRNG can be used instead?
Currently it can be hacked in by modifying the source, but that is a little bit "dirty".
Use cases would be to map it straight to an OS-provided CSPRNG (in cases where that is a requirement), and to xor with other source(s) of randomness in other cases.
Thanks!
A few years later...
After running some SQLite backup benchmarks against 3.14.2, using the backup API, I have observed that the DLL is about 15 to 25% faster than the static .obj.
However, a backup is essentially an I/O-bound task, so the quality of the C compiler should not be playing a very significant role... which makes me wonder if the performance difference between dll & obj could come from using different I/O functions, or different I/O options?
Interestingly the performance delta is in the same ballpark as back in the 3.7.15.1 days... so it might have been the same I/O problem.
Also there seems to be an issue with the Firebird driver, as it allocates a second connection during the course of executing a statement on an existing connection, minimal test case below
program Test;
{$APPTYPE CONSOLE}
uses
SysUtils, SynDB, SynDBFirebird;
var
props : TSQLDBFirebirdConnectionProperties;
conn : TSQLDBConnection;
stmt : TSQLDBStatement;
begin
props := TSQLDBFirebirdConnectionProperties.Create('yourserver:yourdatabase.fdb', '', 'user', 'password');
conn := props.NewConnection;
stmt := conn.NewStatement;
stmt.Prepare('select field from sometable', True); // fails with -502 here
stmt.ExecutePrepared;
while stmt.Step do
Writeln(stmt.ColumnString(0));
end.
When tracing the code, there is a second connection initiated from within inherited Connect... or is there something wrong with my snippet above (besides not releasing stuff)?
Two suggestions:
- under Windows, shouldn't it be looking for fbclient.dll (the newer dll) before gds32.dll (legacy)?
- if you are not using the embedded dll, TSQLDBFirebirdConnectionProperties.Create raises an exception which is then gobbled up. The exception is annoying when debugging, as it cannot be masked individually, so the suggestion would be to either use a dedicated exception class (which could then be masked in the IDE) or avoid throwing an exception in the first place.
Well, it was worth a try
This would make SynSQLite3 directly usable with Delphi strings, without having to use casts and temporary strings:
bind_text16
result_text16
prepare_v2_16
value_text16
column_decltype16
column_name16
I have no idea, I have not been able to reproduce it predictably. The only safe assumption would be "any of them", and maybe even "asynchronously" (i.e. not within an API call).
I am not sure there is a single place where it would be appropriate... I would expect the exception to be raised in the context of the thread that made use of THttpRequest (rather than the callback's), and it may need to be checked/raised in the various internals so that they can gracefully abort when the security error is encountered.
However, it is quite possible that without a Delphi exception being raised in the callback, the API functions would naturally fail and cleanup properly, in which case checking the security error and raising the exception somewhere high-level (like at the end of THttpRequest.Request or InternalREST) could be enough.
Also any chance of having TWinHttpProgress be declared as "of object"?
Very nice!
Got a few more issues
While digging around the SynCrtSock internals, I noticed THttpRequest.Create takes an aHttps boolean parameter, which is hard-coded (at least for WinHTTP) to WINHTTP_FLAG_SECURE_PROTOCOL_MODERN, which is actually not very modern (it allows SSL3 and all TLS versions).
Ideally, this should be changed to an enumeration, so that you get the option to allow all https protocols (when you do not care about security, so even SSL2), use only moderately secure https protocols (TLS but not SSL3), or only the secure one (TLS 1.2 right now).
Also I think that I have found the source of my occasional WinHTTP Error 12175 (http://synopse.info/forum/viewtopic.php?id=2709), it is the WinHttpSetStatusCallback which goes directly to WinHTTPSecurityErrorCallback, which raises an exception. Apparently the callback mechanism and Delphi exceptions do not play nice: the thread that handles the callback becomes incapable of issuing further WinHTTP queries, and subsequently errors with 12175. The issue is very infrequent, probably related to network issues during handshake or something else that is transient, I have been unable to reproduce it at will.
Rather than raising an exception, the callback should probably just set an error state on the TWinHTTP instance, which could then later be handled in the proper context.
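The "record, don't raise" pattern looks like this (a Python sketch with invented names, just to illustrate the shape; the real fix would live in TWinHTTP and its WinHttpSetStatusCallback handler):

```python
class HttpSession:
    """The callback only records the error; the caller re-raises it in its
    own context. Raising across a foreign (native) callback boundary is
    what leaves the thread unable to issue further queries."""
    def __init__(self):
        self.security_error = None

    def _status_callback(self, error_code):
        # Invoked from "native" code: never raise here, only record.
        self.security_error = error_code

    def request(self, url):
        self._status_callback(12175)   # simulate a transient security failure
        if self.security_error is not None:
            raise OSError(f"WinHTTP security error {self.security_error} for {url}")
        return b"ok"

s = HttpSession()
try:
    s.request("https://example.com")
    raised = False
except OSError:
    raised = True
assert raised and s.security_error == 12175
```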
Works fine, thanks!
Could you consider a couple of other changes for TWinHttpAPI (and other HTTP queries)?
The first should be simple and innocuous enough: have InternalRetrieveAnswer use instance fields for content length & bytes read rather than local variables. The idea is that when you are downloading something big and the request is in a thread, it allows providing some form of download progress feedback.
The second is a bit more structural: allow the downloaded data to be passed to a callback/event rather than accumulated by InternalRetrieveAnswer. The idea here is also for largish downloads that may not fit in RAM (or are just undesirable in RAM) to be written directly to a file, parsed directly by a linear parser, streamed to something else, etc. When such an "OnPartialDataReceived" event is defined, the event would be fully responsible for the data, and the Request method's OutData parameter would be left empty.
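Both suggestions together might look like this (a Python sketch, all names invented, mirroring the proposed instance fields and the OnPartialDataReceived behavior):

```python
import io

class Downloader:
    """Progress exposed as instance fields, plus an optional chunk callback
    that takes ownership of the data (nothing accumulated when it is set)."""
    def __init__(self, on_chunk=None):
        self.on_chunk = on_chunk
        self.content_length = 0
        self.bytes_read = 0          # another thread can poll these fields

    def retrieve(self, src: io.BufferedIOBase, length: int) -> bytes:
        self.content_length = length
        parts = []
        while chunk := src.read(4096):
            self.bytes_read += len(chunk)
            if self.on_chunk:
                self.on_chunk(chunk)   # the callback owns the data...
            else:
                parts.append(chunk)    # ...otherwise accumulate as today
        return b"".join(parts)

received = []
d = Downloader(on_chunk=received.append)
out = d.retrieve(io.BytesIO(b"x" * 10000), 10000)
assert out == b""                      # callback mode: OutData left empty
assert b"".join(received) == b"x" * 10000
assert d.bytes_read == 10000
```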
The default User-Agent is defined in function DefaultUserAgent (SynCrtSock)
It affects THttpClient, THttpRequest and all descendants that do not explicitly override it (in my case, it is a TWinHTTP descendant)
Currently the User-Agent for HTTP queries is like
Mozilla/4.0 (compatible; MSIE 5.5; Windows; Synopse mORMot 1.18.2634 TdwsWinHTTP)
it should probably be changed to something like
Mozilla/5.0 (Windows; Synopse mORMot 1.18.2634 TdwsWinHTTP)
The reason is that I am using it to fetch RSS feeds on https://www.beginend.net, and I have noticed that at least "Mozilla/4.0" and "MSIE 5.5" are getting filtered, so you end up with 403 errors.
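For illustration, this is the kind of request shape that passes such filters (Python, with a placeholder URL; the point is only the Mozilla/5.0 prefix and the absence of the MSIE token):

```python
import urllib.request

ua = "Mozilla/5.0 (Windows; Synopse mORMot 1.18.2634 TdwsWinHTTP)"
req = urllib.request.Request("https://example.com/feed.rss",   # placeholder URL
                             headers={"User-Agent": ua})
# urllib normalizes header names to "User-agent" internally
assert req.get_header("User-agent") == ua
assert "MSIE" not in req.get_header("User-agent")
```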
Thanks!
When compiling with Delphi XE:
* in SynCommon, SearchRecToDateTime there is a warning about F.Time being deprecated, can be fixed by changing the ifdef to "{$ifdef ISDELPHIXE}" (rather than XE2)
Minor windows service issues:
* mORMotService redefines types from WinSvc, notably TServiceStatus, but does not expose the corresponding API calls, so it conflicts with it. The solution is either to move the API calls from implementation to interface (so the unit can be used as an alternative to WinSvc), or to move all structure declarations from interface to implementation (so they will not conflict).
* after a TServiceController.Delete, a call to .State returns ssErrorRetrievingState instead of ssNotInstalled
This improves things for insufficient rights, but it seemed to fail to report when the service really was not installed.
I have gone the extra mile here by introducing a TServiceStateEx with an ssAccessDenied element, and checking the GetLastError after calling CreateOpenService.
When TServiceController.CreateOpenService succeeds at opening the SC Manager, but fails at OpenService, it ends up in a state with FSCHandle <> 0 and FHandle = 0
However TServiceController.GetState then proceeds to report that as ssNotInstalled, when it should be ssErrorRetrievingState, or some more detailed information should be preserved.
https://msdn.microsoft.com/en-us/librar … s.85).aspx
Lists the cases when OpenService can return a null handle, among which ERROR_ACCESS_DENIED and ERROR_INVALID_NAME do not correspond to a service not installed.
I have a .zip file of about 600 MB that unzips to a file of 3.2 GB
Using the TZipRead.UnZip file overload, unzipping takes about 77 seconds, while unzipping from the Windows Shell takes 45 seconds or less.
Also checked with 7zip, which does it in 54 seconds.
In all three cases, disk usage is at 100%, so the problem probably originates from using TFileStream, which does not buffer anything and does not allow setting OS buffering flags.
One minimal solution would be to provide an UnZip overload where you could pass a TStream rather than a filename. This way we could pass a buffered stream, and it could have other uses as well.
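The TStream idea, sketched with Python's zipfile (where an entry is already exposed as a stream), shows what the overload buys: the caller picks both the destination and the copy-buffer size:

```python
import io
import shutil
import zipfile

# Build a small archive in memory, then extract one entry by streaming it
# into an arbitrary writable object with a large copy buffer.
src = io.BytesIO()
with zipfile.ZipFile(src, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("big.bin", b"\x00" * 100_000)

dest = io.BytesIO()                    # could just as well be a buffered file
with zipfile.ZipFile(src) as z, z.open("big.bin") as entry:
    shutil.copyfileobj(entry, dest, length=1 << 20)   # 1 MiB copy buffer

assert dest.getvalue() == b"\x00" * 100_000
```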
However I am using neither WCF nor IIS
Opened question on stackoverflow http://stackoverflow.com/questions/3124 … of-queries
Which version of Windows are you using?
Did you try several?
Windows 2012 R2 for the production server.
In other tests (Win 2008 R2) it never occurred, but the test machine is not able to handle the same amount of load or run for days, so it's not an apples-to-apples comparison. The backup server did not exhibit it either, but it's mostly inactive compared to the production one, so no surprise.
The load is high enough in terms of I/O that a non-virtualized OS is required for production, which makes full-load testing a bit more involved (attempts by different specialists on Hyper-V, VMware & KVM all failed miserably; under virtualization, performance ended up around 10% of bare metal, as it's a very peculiar workload with relatively little CPU usage but loads of small disk & network I/O).
Using the WinHTTP client, I am sometimes getting a WinHTTP 12175 error, usually after several days of operation and tens of thousands of queries.
When that happens, the only way to "fix" the errors is to restart the service, and occasionally the errors will not go away until Windows (the server) is restarted.
Interestingly enough, this error appears "gradually": first the service gets it for some https queries, then after some time for 100% of https queries, and later on the errors pop up even for regular http queries (the service makes lots of http & https queries, on the order of dozens per second, to many servers).
AFAICT there is no memory leak or corruption, outside the 12175 errors, the rest of the service is operating just fine, no AV, responds to queries, etc.
Anyone else has seen this behavior? Any workarounds?
I have a suspicion this could be related to Windows Update doing OS certificate updates, but I could never positively confirm it.
TZipRead.UnZip is declared as returning a string, but it seems to be a relic of pre-Unicode Delphi; shouldn't it be an AnsiString or a byte array? It's in PasZip, which apparently is a relic.
In SynZip, TZipRead.Create raises an exception when an entry has zero bytes, but a zero-size file in a zip is valid.
Thanks!
While you're at it, could you integrate service description support?
It's the ChangeServiceConfig2 stuff in my dwsWindowsService, it's just an import, a field & a call, but it makes a service look more "normal" in the service manager.
The function is supported since XP & Windows 2003, so there is probably no need to bother with an OS version check.
TService.GetControlHandler raises an exception when fControlHandler is nil, however this exception kills the service immediately and cannot be trapped in a try/except or try/finally (I am not sure why exactly), so the exception message is never visible.
It took some time to pinpoint it with manual logs in the source.
The solution could be to check the ControlHandler assignment in ServicesRun, before calling StartServiceCtrlDispatcher, as the exception can be properly trapped at that point, maybe by wrapping the check in a "ValidateService" method.
Would it be possible to add credentials support in TWinHTTP?
I hacked it by overriding InternalSendRequest, but it could probably be standardized.
Below are the relevant snippets to be adapted (could not test NTLM, and PASSPORT did not seem to work, but I did not try very hard).
For Windows servers, the strongest one is NEGOTIATE (secure, domain-level authentication)
type
TdwsWinHTTP = class (TWinHTTP)
FAuthScheme : TWebRequestAuthentication;
FUserName : String;
FPassword : String;
procedure InternalSendRequest(const aData: RawByteString); override;
end;
function WinHttpSetCredentials(hRequest: HINTERNET;
AuthTargets: DWORD; AuthScheme: DWORD;
pwszUserName: PWideChar; pwszPassword: PWideChar;
pAuthParams: Pointer) : BOOL; stdcall; external 'winhttp.dll';
const
WINHTTP_AUTH_TARGET_SERVER = 0;
WINHTTP_AUTH_TARGET_PROXY = 1;
WINHTTP_AUTH_SCHEME_BASIC = $00000001;
WINHTTP_AUTH_SCHEME_NTLM = $00000002;
WINHTTP_AUTH_SCHEME_PASSPORT = $00000004;
WINHTTP_AUTH_SCHEME_DIGEST = $00000008;
WINHTTP_AUTH_SCHEME_NEGOTIATE = $00000010;
procedure TdwsWinHTTP.InternalSendRequest(const aData: RawByteString);
var
winAuth : DWORD;
begin
if FAuthScheme<>wraNone then begin
case FAuthScheme of
wraBasic : winAuth := WINHTTP_AUTH_SCHEME_BASIC;
wraDigest : winAuth := WINHTTP_AUTH_SCHEME_DIGEST;
wraNegotiate : winAuth := WINHTTP_AUTH_SCHEME_NEGOTIATE;
else
raise EWinHTTP.CreateFmt('Unsupported request authentication scheme (%d)',
[Ord(FAuthScheme)]);
end;
if not WinHttpSetCredentials(fRequest, WINHTTP_AUTH_TARGET_SERVER,
winAuth, PChar(FUserName), PChar(FPassword), nil) then
RaiseLastOSError;
end;
inherited InternalSendRequest(aData);
end;
Well done, this is much more readable than PDF on any screen AFAICT, even for the first part.
PDF is good for displaying on dried-dead-tree-paste, but annoying for any other use
Multi-page HTML would be even better, at least at the top levels. It would be friendlier on mobile and simpler to navigate, and you would also get better search-engine indexing & lookups. Ideally, a permanent page/url structure would be even better (i.e. not something based on fragile section or chapter numbers), as it could then be linked externally and support 3rd-party annotations/comments (using any of the tools that already exist for that).
But don't let the above distract from the fact that it's a *huge* step forward IMHO, there was a lot of doc, but being PDF, it was kind of a "ghetto doc" if you take my meaning. This now looks "pro"