During some research for a project of mine I stumbled upon this:
https://www.theregister.com/2022/03/21/new_linux_kernel_has_improved/
https://unix.stackexchange.com/questions/704737/kernel-5-10-119-caused-the-values-of-proc-sys-kernel-random-entropy-avail-and-p
Due to some kernel patches in recent years, /dev/random (and getrandom(0)) now behaves exactly like /dev/urandom, generating an unlimited amount of pseudorandom data regardless of how little entropy is in the pool. The patches' author wrote about it here:
https://www.zx2c4.com/projects/linux-rng-5.17-5.18/
Sadly he does not explain why he decided to make /dev/random non-blocking.
But he does say
>That means tinfoil hatters who are concerned about ridiculous hypothetical CPU backdoors have one less concern to worry about
Phew, I sure am glad that is solved by this very trustworthy person. He's also a SystemD developer. So awesome.
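The new behavior is easy to observe from userspace. A minimal sketch (Python; os.getrandom is Linux-only, so it falls back to os.urandom elsewhere): getrandom with flags=0 blocks only until the kernel's generator is initialized once at boot, and after the 5.17/5.18 changes /dev/random does the same — so huge reads return instantly instead of stalling on an "entropy estimate":

```python
import os

def read_random(n: int) -> bytes:
    """Read n bytes from the kernel CSPRNG.

    On Linux, os.getrandom(n, 0) is the getrandom(0) call discussed above;
    it blocks only until the pool is initialized once at boot, never again,
    and after the 5.17/5.18 patches /dev/random behaves the same way.
    """
    if hasattr(os, "getrandom"):
        return os.getrandom(n, 0)   # flags=0: the formerly "blocking" path
    return os.urandom(n)            # portable fallback for non-Linux systems

# Requesting far more data than any plausible entropy count returns
# immediately: the output is stretched from a seeded generator, not
# doled out byte-for-byte from a depletable pool.
blob = read_random(1 << 20)  # 1 MiB
print(len(blob))
```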
Here's a post I made elsewhere about why I believe that the traditional blocking-on-low-entropy /dev/random should be preferred to /dev/urandom:
First of all, the fact that whenever anyone mentions /dev/random, someone comes out of nowhere and tells you to use /dev/urandom instead, should be a huge red flag to anyone paying attention.
But let's look at their "arguments".
Allegedly, this is how /dev/random doesn't work:
https://www.2uo.de/myths-about-urandom-structure-no.png
This picture is so stupid it gives me cognitive dissonance.
Of course this isn't how /dev/random works and nobody ever claimed it was.
If random and urandom both fed off the same pool, applications using /dev/urandom would deplete the pool and /dev/random would block forever.
NSA please take your strawman arguments elsewhere.
And this is how it allegedly does work:
https://www.2uo.de/myths-about-urandom-structure-yes.png
This is the same picture with fewer lines; it still makes no sense.
But if you bother to read the "By the way" text, you find out that those pictures are not "pretty rough simplifications" but straight-out lies:
>In fact, there isn't just one, but three pools filled with entropy. One primary pool, and one for /dev/random and /dev/urandom each, feeding off the primary pool
Now this makes sense.
This is how it used to work until this horrible kernel patch, and this is how it SHOULD work.
Secure pseudorandom numbers through /dev/random, which on standard desktop and server systems should always have enough entropy for whatever you need (when was the last time one of your applications froze because /dev/random was depleted?), and fast but not-quite-as-secure pseudorandom numbers through /dev/urandom for the rare edge cases where you need randomness in a low-entropy environment and don't care that it isn't quite as strong.
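That pre-5.18 three-pool design can be sketched as a toy model. This is Python pseudocode of the *idea* only — SHA-256 standing in for the kernel's mixing function, per-pool entropy counters standing in for the kernel's accounting, none of it the actual kernel data structures:

```python
import hashlib
import secrets

class Pool:
    """Toy entropy pool: a hashed state plus an entropy estimate (in bits)."""
    def __init__(self):
        self.state = b""
        self.entropy_bits = 0

    def mix_in(self, data: bytes, bits: int):
        self.state = hashlib.sha256(self.state + data).digest()
        self.entropy_bits += bits

    def extract(self, n_bits: int) -> bytes:
        out = hashlib.sha256(b"out" + self.state).digest()
        self.state = hashlib.sha256(b"fwd" + self.state).digest()
        self.entropy_bits = max(0, self.entropy_bits - n_bits)
        return out

input_pool = Pool()    # primary pool, fed by interrupts, disk timings, ...
random_pool = Pool()   # backs /dev/random
urandom_pool = Pool()  # backs /dev/urandom

def dev_random(nbytes: int) -> bytes:
    """Blocking interface: refuses to run the entropy estimate below zero."""
    while random_pool.entropy_bits < nbytes * 8:
        if input_pool.entropy_bits == 0:
            raise BlockingIOError("would block: input pool depleted")
        random_pool.mix_in(input_pool.extract(64), 64)
    return random_pool.extract(nbytes * 8)[:nbytes]

def dev_urandom(nbytes: int) -> bytes:
    """Non-blocking interface: keeps generating even at estimate zero."""
    if input_pool.entropy_bits > 0:
        urandom_pool.mix_in(input_pool.extract(64), 64)
    return urandom_pool.extract(nbytes * 8)[:nbytes]

# Simulate some hardware events landing in the primary pool.
for _ in range(4):
    input_pool.mix_in(secrets.token_bytes(8), 64)

print(len(dev_random(16)))    # fine while the estimate holds up
print(len(dev_urandom(16)))   # always succeeds, estimate or not
```

The point of the sketch: both interfaces draw from the same kind of state, and the only structural difference is whether the entropy counter is allowed to gate reads.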
Free choice for free people. No big deal.
Nobody is, sorry, nobody was forced to use one or the other.
So why does the 2uo.de guy go out of his way to lie to us in order to convince us that there never was a difference and we all should be using urandom?
Who knows. Let's ignore the NSA shill and talk about the djb article to which the 2uo guy refers:
>djb remarked that more entropy actually can hurt.
>http://blog.cr.yp.to/20140205-entropy.html
That's not what djb wrote at all, so yet another lie.
But djb does mention /dev/urandom so let's look at his concerns.
tl;dr:
> Anon buys an external USB entropy generator that is malicious
> The USB device uses side channels to learn about the other entropy sources
> The USB device bruteforces which value it needs to generate to make the PRNG output a specific value after combining all the entropy sources
> Now Anon's ECDSA key is broken :(
djb lost me at "Anon buys an external USB entropy generator" but I get the point and I agree that this is something worth considering.
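The attack is concrete enough to sketch. A toy model (Python, SHA-256 standing in for the kernel's mixing function, and an artificially tiny target — forcing one output byte — standing in for "an output the attacker can predict"): a malicious last-in-line entropy source that has observed the other inputs brute-forces its own contribution until the combined seed has the structure it wants.

```python
import hashlib
import secrets

def prng_seed(sources: list) -> bytes:
    """Toy stand-in for mixing all entropy sources into one seed."""
    h = hashlib.sha256()
    for s in sources:
        h.update(s)
    return h.digest()

# Honest entropy sources the malicious device observed via side channels.
honest = [secrets.token_bytes(16), secrets.token_bytes(16)]

# The malicious USB device searches for a contribution that forces the
# first seed byte to 0x00. Expected work: ~256 tries per fixed byte,
# doubling with every extra target bit -- trivial for a small target,
# which is djb's point: the attacker only needs SOME predictable
# structure in the output, not the whole seed.
evil = None
for i in range(1 << 16):
    candidate = i.to_bytes(2, "big")
    if prng_seed(honest + [candidate])[0] == 0x00:
        evil = candidate
        break

assert evil is not None
print("attacker forced first seed byte to 0x00 with contribution", evil.hex())
```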
Now let's see how this attack applies to /dev/random and urandom respectively.
As we learned in the "By the way" box, they used to have separate pools which both got fed from a master pool.
/dev/random:
> Anon buys an external USB entropy generator that is malicious
> The USB device uses side channels to learn about the other entropy sources
> The USB device predicts which bytes go into the /dev/random pool and which go into urandom
> The USB device predicts which bytes will be read from the /dev/random pool at some point in the future
> The USB device bruteforces which value it needs to generate to make the PRNG output a specific value after pulling bytes at random from the /dev/random pool (it's a pool, not a FIFO queue)
> Now Anon's ECDSA key is broken :(
/dev/urandom:
> Anon buys an external USB entropy generator that is malicious
> The USB device uses side channels to learn about the other entropy sources
> The USB device predicts which bytes go into the /dev/random pool and which go into urandom
> The USB device bruteforces which value it needs to generate to make the PRNG output a specific value after pulling the freshly inserted bytes from the depleted /dev/urandom pool (which if depleted acts like a FIFO queue)
> Now Anon's ECDSA key is broken :(
I'm no expert in this area, so please correct me if I made wrong assumptions here, but it seems to me that /dev/random complicates this attack: the attacker also has to predict which bytes get read from the /dev/random pool the next time an application reads from it, and a larger pool makes that harder.
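A back-of-envelope way to see the "larger pool" point (my own illustration, with made-up pool sizes, ignoring ordering and everything else about the real mixing): if the attacker must guess which k bytes of an n-byte pool the next read consumes, the search space alone is C(n, k), which grows steeply with n.

```python
from math import comb, log2

k = 32  # bytes consumed by one hypothetical read
for n in (64, 512, 4096):  # hypothetical pool sizes in bytes
    # C(n, k) ways to pick which k of the n pool bytes get read --
    # a crude lower bound on the attacker's extra uncertainty.
    print(f"n={n:5d}: ~2^{log2(comb(n, k)):.0f} possibilities")
```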
djb argues that a system should only collect entropy at boot and then never again:
>before crypto: the whole system collecting enough entropy;
>after: the system using purely deterministic cryptography, never adding any more entropy.
And his reasoning is the following:
>how can anyone simultaneously believe that
>we can't figure out how to deterministically expand one 256-bit secret into an endless stream of unpredictable keys (this is what we need from urandom), but
>we can figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.)?
Well… I don't believe that. Nobody does.
That's why people use ephemeral keys whenever possible. OTR with PFS and similar protocols.
Nobody expects their keys to be safe forever. So, where circumstances allow, applications use long-term signing keys to exchange trustworthy throwaway public keys; that way, an attacker who cracks the encryption keys, or even steals both parties' signing keys, can only intercept the current or, at best, future communication, not retroactively decrypt every conversation the parties ever had.
Same goes for PRNGs - if an attacker gets hold of your seed, he can predict all future random numbers until you change your seed.
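Both halves of this are easy to demonstrate with a toy generator (SHA-256 in counter mode — illustration only, the real kernel uses a ChaCha20-based generator with periodic reseeding): expanding one 256-bit secret into an endless stream is routine, and anyone who learns the seed once reproduces that entire stream.

```python
import hashlib

def expand(seed: bytes, nbytes: int) -> bytes:
    """Toy DRBG: deterministically stretch a 256-bit seed in counter mode."""
    assert len(seed) == 32
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

seed = hashlib.sha256(b"one 256-bit secret").digest()
stream = expand(seed, 1024)   # as much output as you like

# The flip side: leak the seed once, and an attacker regenerates every
# byte of the stream, past and future, until the seed is changed.
assert expand(seed, 1024) == stream
print(len(stream))
```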
If new entropy is such a big risk, why not just generate one single number and use that for the rest of your lifetime as basis for all your cryptography? Store it on your HDD so next time you boot you don't have to generate new entropy because that's dangerous?
What? Stop looking at me like that! What could go wrong???
Well…
>We show that elliptic-curve cryptography implementations on mobile devices are vulnerable to electromagnetic and power side-channel attacks. We demonstrate full extraction of ECDSA secret signing keys from OpenSSL and CoreBitcoin running on iOS devices, and partial key leakage from OpenSSL running on Android and from iOS's CommonCrypto. These non-intrusive attacks use a simple magnetic probe placed in proximity to the device, or a power probe on the phone's USB cable. They use a bandwidth of merely a few hundred kHz, and can be performed cheaply using an audio card and an improvised magnetic probe.
https://dl.acm.org/doi/10.1145/2976749.2978353
https://en.wikipedia.org/wiki/Electromagnetic_attack
Okay, so maybe I don't want someone who manages to get my PRNG state one single time to be able to sniff on me for all eternity.
Thoughts?