RodgerRabbit said:tl;dr on Kiwifarm?
DeepthroatSlayer said:As long as mrz continues to doom us to stalking by farmers, I put my hope in him to end the stalker culture. Either he'll give up and/or die soon or he'll actually take them down in an awesome blaze of glory.
Couldntthinkofaname said:After more than 3k shitposts, not counting your alts. Congratulations.
mrz said:DeepthroatSlayer said:As long as mrz continues to doom us to stalking by farmers, I put my hope in him to end the stalker culture. Either he'll give up and/or die soon or he'll actually take them down in an awesome blaze of glory.
Still implementing the timing measurement code, etc. It's starting to shape up decently: I'm already determining ranges of timing measurements for each character in a character set and brute-forcing out a buffer based on them. Against C code itself I've recovered (in the best case; the worst case is still no matches, but the best case happens every couple of iterations) up to all but the last four characters of a 32-byte buffer so far. It's still very noisy and not precise enough yet, and I have a lot left to do. My last improvement was to switch from a nanotime function to the RDTSC instruction in assembly, so timings come from the CPU directly; now I'm implementing a filtering algorithm that tries to remove noise from the gathered timings.
We define the \(i\)th percentile (or quantile) \(q_i\) of a distribution \(R\) to be the smallest real number \(q_i\) such that \(P[R \le q_i] = i/100\). The 50th percentile is the familiar median and the 0th percentile is the minimum response time.

The true quantile \(q_i\) is unknown because it depends on the true distribution of \(R\). We can compute an estimator \(\hat{q}_i\), which is an empirical estimate of \(q_i\) derived from our measurements, by first ranking a set of measurements into \(r_{(1)} \le r_{(2)} \le \dots \le r_{(N)}\) and setting \(\hat{q}_i = r_{(\lfloor iN/100 \rfloor)}\). The empirical quantiles \(\hat{q}_i\) are well known to be weakly consistent under mild conditions, meaning that they converge to the true quantile \(q_i\) as the sample size increases.
At first I tried to do it entirely myself, but now I'm basing it largely off of "Opportunities and Limits of Remote Timing Attacks" by Scott A. Crosby, Dan S. Wallach and Rudolf H. Riedi.
I dunno if it will be precise enough to work against their cookie string though. The system can get timing differences of 20 microseconds over the internet, but I dunno if the PHP === will have that much timing difference from short-circuiting. On a LAN it can get timing differences of 100 nanoseconds though, so it would definitely work on a LAN.
We have shown that, even though the Internet induces significant timing jitter, we can reliably distinguish remote timing differences as low as 20μs. A LAN environment has lower timing jitter, allowing us to reliably distinguish remote timing differences as small as 100ns (possibly even smaller). These precise timing differences can be distinguished with only hundreds or possibly thousands of measurements.
Even if it doesn't work, it will be nice to have this implemented in any case. My impression is that it should work in practice though, because, as is mentioned here:
In short, a timing attack uses statistical analysis of how long it takes your application to do something in order to learn something about the data it’s operating on. For HMACs, this means using the amount of time your application takes to compare a given value with a calculated value to learn information about the calculated value.
Take the recent Keyczar vulnerability that Nate Lawson found. He was able to exploit the fact that Keyczar used a simple break-on-inequality algorithm to compare a candidate HMAC digest with the calculated digest.
This is the offending code in Python:
return self.Sign(msg) == sig_bytes
This is the same vulnerable pattern that's used to check the session cookie string in the database against the user-provided value.
and in Java:
return Arrays.equals(hmac.doFinal(), sigBytes);
A value which shares no bytes in common with the secret digest will return immediately; a value which shares the first 15 bytes will return 15 compares later. That’s a difference of perhaps microseconds, but given enough attempts—which is usually easy to arrange in web applications (how many of you throttle requests with bad session cookies?)—the random noise becomes a very predictable, normally distributed skew, leaving only the signal.
Right about now, most people are thinking about the improbability of measuring the difference between two array comparisons in a web application, given all the routers, parsers, servers, proxies, etc. in the way.
So. Almost microsecond resolution with hundreds to thousands of measurements. Who wants to bet that network jitter compensation models get worse instead of better?
A worst-case scenario for guessing an HMAC would require \(20 \times 256 \times n\) measurements, where \(n\) is the number of measurements required to pin down a single byte. So—around 5,000,000 requests. You could do that in less than a week at a barely-perceptible 10 req/s.
It’s an attack which takes some planning and analysis, but it’s viable.
Load on the server from other users can be conceptualized as a random delay, but it's already known that introducing random delays doesn't prevent this class of attack, so I don't think that will make a difference (though it may require more samples to be taken).
This paper doesn't seem to mention disabling the CPU cache on the client machine, but the CPU cache is certainly introducing a lot of noise. I think they may be filtering it out with that filtering algorithm, but I will additionally write a kernel module in assembly to disable the CPU cache entirely.
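For reference, a very rough, untested sketch of the idea on x86 (assumptions on my part: ring-0 Linux kernel module context, Intel's documented sequence of setting CR0.CD, clearing CR0.NW, then WBINVD; per-CPU application and SMP coordination are ignored, and this will slow the whole machine down drastically):

```c
/* Untested sketch only: disable caching on the current CPU by setting
   the CD bit (bit 30) of CR0, clearing NW (bit 29), and flushing the
   caches with WBINVD. Must run in ring 0 (e.g. a kernel module init). */
static void disable_cpu_cache(void)
{
    __asm__ __volatile__(
        "mov %%cr0, %%rax\n\t"
        "or  $(1 << 30), %%rax\n\t"    /* set CR0.CD: cache disable   */
        "and $~(1 << 29), %%rax\n\t"   /* clear CR0.NW: not write-through */
        "mov %%rax, %%cr0\n\t"
        "wbinvd\n\t"                   /* write back and invalidate caches */
        ::: "rax", "memory");
}
```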