Cryptography Services
Cryptography Services is a dedicated team of consultants from iSEC Partners, Matasano, Intrepidus Group, and NCC Group focused on cryptographic security assessments, protocol and design reviews, and tracking impactful developments in the space of academia and industry.
https://cryptoservices.github.io/
Wed, 26 Jul 2017 00:55:22 +0000
Confidential Transactions from Basic Principles<p>During my time at NCC Group this summer, I had the opportunity to dig into all sorts of
cryptocurrency software to see how they work and what kind of math they rely on. One sought-after
property that some cryptocurrencies (ZCash, Monero, and CryptoNote-based coins in general) support is
confidential transactions. To explain what this means, we’ll first look at what Bitcoin transactions
do.</p>
<p>At its core, a Bitcoin transaction is just a tuple \((\{a_i\}, \{b_i\}, \{v_i\})\) where
\(\{a_i\}\) are the input addresses, \(\{b_i\}\) are the output addresses, and
\(\{v_i\}\) are the amounts that go to each output. We’ll ignore the proof-of-work aspect, since
it isn’t quite relevant to where we’re going with this. Each transaction appears unencrypted for the
whole world to see in the public ledger. This is all well and good, but it makes transactions
<a href="https://bitcoin.org/en/protect-your-privacy">easy</a> to
<a href="https://www.sciencemag.org/news/2016/03/why-criminals-cant-hide-behind-bitcoin">trace</a>, even when
the coins go through multiple owners. One way to make this harder is to use a tumbler, which
essentially takes in Bitcoin from many sources, mixes them around, and hands back some fresh
uncorrelated coins (you might be familiar with this concept under the term “money laundering”).</p>
<p>The goal of confidential transactions is to let <em>just</em> the participants of a transaction see the
\(v_i\) values, and otherwise hide them from the rest of the world. But at the same time, we want
non-participants to be able to tell when a transaction is bogus. In particular, we don’t want a user
to be able to print money by spending more than they actually have. This property was easily
achieved in the Bitcoin scheme, since the number of Bitcoin in each address \(a_i\) is publicly
known. So a verifier need only check that the sum of the outputs doesn’t exceed the sum of account
contents of the input addresses. But how do we do this when the account contents and the output
values are all secret? To show how, we’ll need a primer in some core cryptographic constructions.
There is a lot of machinery necessary to make this work, so bear with me.</p>
<h2 id="schnorr-signatures">Schnorr Signatures</h2>
<p>The purpose of a signature is to prove to someone who knows your public information that you have
seen a particular value. In the case of Schnorr Signatures, I am working in an abelian group
\(\mathbb{G}\) of prime order \(q\) with generator \(G\) (more generally, I guess this is a
vector space that’s also a group) and I have a public key \(P = xG\) where \(x \in
\mathbb{Z}_q\) is my secret key.</p>
<p>First, we’ll start off with Schnorr proof of knowledge. I would like to prove to a verifier that I
know the value of \(x\) without actually revealing it. Here’s how I do it:</p>
<ol>
<li>First, I pick a random \(\alpha \leftarrow \mathbb{Z}_q\) and send \(Q = \alpha G\) to the
verifier.</li>
<li>The verifier picks \(e \leftarrow \mathbb{Z}_q\) and sends it to me.</li>
<li>I calculate \(s = \alpha - ex\) and send \(s\) to the verifier.</li>
<li>
<p>Lastly, the verifier checks that \(sG + eP = Q\). Note that if all the other steps were
performed correctly, then indeed</p>
<script type="math/tex; mode=display">sG + eP = (\alpha - ex)G + exG = \alpha G - exG + exG = \alpha G = Q</script>
</li>
</ol>
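<p>As a toy illustration (my own sketch, not anything from the post’s sources), here is the interactive protocol run in Python. I use the order-1019 subgroup of \(\mathbb{Z}_{2039}^*\), written multiplicatively, so the check \(sG + eP = Q\) becomes \(G^s P^e \equiv Q \pmod p\). These parameters are assumptions for illustration only and far too small for real use.</p>

```python
# Toy run of the interactive Schnorr proof of knowledge.
# Group: the order-q subgroup of Z_p^* with p = 2039 = 2*1019 + 1, q = 1019,
# generated by G = 4. These sizes are illustrative assumptions only.
import secrets

p, q, G = 2039, 1019, 4

x = secrets.randbelow(q)          # my secret key
P = pow(G, x, p)                  # my public key P = xG (here: G^x mod p)

# 1. I commit to a random alpha.
alpha = secrets.randbelow(q)
Q = pow(G, alpha, p)

# 2. The verifier picks a random challenge e.
e = secrets.randbelow(q)

# 3. I respond with s = alpha - e*x (mod q).
s = (alpha - e * x) % q

# 4. The verifier checks sG + eP == Q, i.e. G^s * P^e == Q (mod p).
assert (pow(G, s, p) * pow(P, e, p)) % p == Q
```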
<p>We can quickly prove that this scheme is <em>sound</em> in the sense that being able to consistently pass
verification implies knowledge of the secret \(x\). To prove this, it suffices to show that an
adversary with access to such a prover \(P\) and the ability to rewind \(P\) can derive \(x\)
efficiently. Suppose I have such a \(P\). Running it the first time, I give it any value \(e
\leftarrow \mathbb{Z}_q\). \(P\) will return its proof \(s\). Now I rewind \(P\) to just
before I sent \(e\). I send a different value \(e’ \neq e\) and receive its proof \(s’\). With
these two values, I can easily compute</p>
<script type="math/tex; mode=display">\frac{s - s'}{e' - e} = \frac{\alpha - ex - \alpha + e'x}{e' - e} = \frac{x(e' - e)}{e' - e} = x</script>
<p>and, voilà, the private key is exposed.</p>
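<p>The rewinding extractor itself is short enough to run. In this sketch (toy parameters \(p = 2039\), \(q = 1019\), \(G = 4\), an illustrative assumption of mine), “rewinding” simply means reusing the same \(\alpha\) for two different challenges:</p>

```python
# Extracting x from a prover that answers two challenges with the same alpha.
import secrets

p, q, G = 2039, 1019, 4           # toy Schnorr group (illustration only)
x = secrets.randbelow(q)          # the prover's secret

alpha = secrets.randbelow(q)      # fixed across the rewind

def respond(e):                   # the prover's step 3
    return (alpha - e * x) % q

e1, e2 = 3, 7                     # two distinct challenges
s1, s2 = respond(e1), respond(e2)

# x = (s - s') / (e' - e) mod q
recovered = (s1 - s2) * pow(e2 - e1, -1, q) % q
assert recovered == x
```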
<p>Ok that was pretty irrelevant for where I’m going, but I thought it was a nice quick proof. So how
can we use proof of knowledge to construct a signature? Well we can tweak the above protocol in
order to “bind” our proofs of knowledge to a particular message \(M \in \{0,1\}^* \). The trick
is to use \(M\) in the computation of \(e\). This also makes the interactivity of this protocol
unnecessary. That is, since I am computing \(e\) myself, I don’t need a challenger to give it to
me. But be careful! If we are able to pick \(e\) without any restrictions in our
proof-of-knowledge algorithm, then we can “prove” we know the private key to any public key \(P\)
by first picking random \(e\) and \(s\) and then retroactively letting \(Q = sG + eP\). So in
order to prevent forgery, \(e\) must be difficult to compute before \(Q\) is determined, while
also being linked somehow to \(M\). For this, we make use of a hash function \(H: \{0,1\}^* \to
\mathbb{Z}_q\). Here’s how the algorithm to sign \(M \in \{0,1\}^* \) goes. Note that because
this is no longer interactive, there is no verifier giving me a challenge:</p>
<ol>
<li>I pick a random \(\alpha \leftarrow \mathbb{Z}_q\) and let \(Q = \alpha G\).</li>
<li>I compute \(e = H(Q \,||\, M)\)</li>
<li>I compute \(s = \alpha - ex\)</li>
<li>I return the signature, which is the tuple \(\sigma = (s, e)\)</li>
</ol>
<p>Observe that because hash functions are difficult to invert, this algorithm essentially guarantees
that \(e\) is determined after \(Q\). To verify a signature \((s, e)\) of the message \(M\),
do the following:</p>
<ol>
<li>Let \(Q = sG + eP\)</li>
<li>Check that \(e = H(Q \,||\, M)\)</li>
</ol>
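<p>Putting the signing and verification algorithms side by side in code (same toy group as before, with \(H\) instantiated as SHA-256 reduced mod \(q\) — both are illustrative assumptions):</p>

```python
# Sketch of non-interactive (Fiat-Shamir) Schnorr signatures.
import hashlib
import secrets

p, q, G = 2039, 1019, 4           # toy Schnorr group (illustration only)

def H(Q, M):                      # e = H(Q || M), reduced into Z_q
    data = Q.to_bytes(4, "big") + M
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, M):
    alpha = secrets.randbelow(q)
    Q = pow(G, alpha, p)
    e = H(Q, M)
    return ((alpha - e * x) % q, e)       # sigma = (s, e)

def verify(P, M, sigma):
    s, e = sigma
    Q = pow(G, s, p) * pow(P, e, p) % p   # Q = sG + eP
    return e == H(Q, M)

x = secrets.randbelow(q)
P = pow(G, x, p)
assert verify(P, b"hello", sign(x, b"hello"))
```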
<p>Fantastic! We’re now a fraction of the way to confidential transactions! The next step is to extend
this type of proof to a context with multiple public keys.</p>
<p>(Extra credit: prove that Schnorr is sound in the Random Oracle Model. That is, assume an adversary
has the ability to run and rewind the prover \(P\) as before, but now also has the ability to
intercept queries to \(H\) and return its own responses, as long as those responses are <em>random</em>
and <em>consistent</em> with responses on the same query input.)</p>
<h2 id="aos-ring-signatures">AOS Ring Signatures</h2>
<p>The signatures that end up being used in confidential transactions are called ring signatures. It’s
the same idea as a regular signature, except less specific: a ring signature of the message \(M\)
over the public keys \(\{P_1, P_2, \ldots, P_n\}\) proves that someone with knowledge of <em>one of
the private keys</em> \(\{x_1, x_2, \ldots, x_n\}\) has seen the message \(M\). So this is a
strict generalization of the signatures above, since regular signatures are just ring signatures
where \(n=1\). Furthermore, it is generally desired that a ring signature not reveal which private
key it was that performed the signature. This property is called <em>signer ambiguity</em>.</p>
<p>The <a href="https://www.iacr.org/cryptodb/archive/2002/ASIACRYPT/50/50.pdf">Abe, Okhubo, Suzuki</a> ring
signature scheme is a generalization of Schnorr Signatures. The core idea of the scheme is that, for each
public key, we compute an \(e\) value that depends on the <em>previous</em> \(Q\) value, and all the
\(s\) values are random except for the one that’s required to “close” the ring. That “closure” is
performed on the \(e\) value whose corresponding public key and private key belong to us.</p>
<p>I’ll outline the algorithm in general and then give a concrete example. Denote the public keys by
\(\{P_0, \ldots, P_{n-1}\}\) and let \(x_j\) be the private key corresponding to \(P_j\). An AOS
signature of \(M \in \{0,1\}^* \) is computed as follows:</p>
<ol>
<li>Pick \(\alpha \leftarrow \mathbb{Z}_q\), let \(Q = \alpha G\), and let <script type="math/tex">e_{j+1} = H(Q \| M)</script>.</li>
<li>Starting at \(i = j+1\) and wrapping around modulo \(n\), for each \(i \neq j\), pick
\(s_i \leftarrow \mathbb{Z}_q \) and let <script type="math/tex">e_{i+1} = H(s_iG + e_iP_i \,\|\, M)</script></li>
<li>Let \(s_j = \alpha - e_jx_j\)</li>
<li>Output the signature \(\sigma = (e_0, s_0, s_1, \ldots, s_{n-1})\).</li>
</ol>
<p>That’s very opaque, so here’s an example where there are the public keys \(\{P_0, P_1,
P_2\}\) and I know the value of \(x_1\) such that \(P_1 = x_1G\):</p>
<ol>
<li>I start making the ring at index 2: \(\alpha \leftarrow \mathbb{Z}_q \).
\(e_2 = H(\alpha G \,||\, M)\).</li>
<li>I continue making the ring. \(s_2 \leftarrow \mathbb{Z}_q \).
\(e_0 = H(s_2 G + e_2 P_2 \,||\, M) \).</li>
<li>I continue making the ring. \(s_0 \leftarrow \mathbb{Z}_q \).
\(e_1 = H(s_0 G + e_0 P_0 \,||\, M) \).</li>
<li>Now notice that \(e_2\) has been determined in two ways: from before, \(e_2 = H(\alpha
G\,||\, M)\), and also from the property which must hold for every \(e\) value: \(e_2 =
H(s_1 G + e_1 P_1\,||\, M)\). The only \(s_1\) that satisfies these constraints is \(s_1 =
\alpha - e_1x_1\), which I can easily compute, since I know \(x_1\).</li>
<li>Finally, my signature is \(\sigma = (e_0, s_0, s_1, s_2)\).</li>
</ol>
<p>The way to verify this signature is to just step all the way through the ring until we loop back
around, and then check that the final \(e\) value matches the initial one. Here are steps for the
above example; the general process should be easy to see:</p>
<ol>
<li>Let \(e_1 = H(s_0 G + e_0 P_0 \,||\, M) \).</li>
<li>Let \(e_2 = H(s_1 G + e_1 P_1 \,||\, M) \).</li>
<li>Let \(e’_0 = H(s_2 G + e_2 P_2 \,||\, M) \).</li>
<li>Check that \(e_0 = e’_0\).</li>
</ol>
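<p>Here is the whole AOS scheme as a runnable sketch, with the \(e\)-stepping function shared between signer and verifier. The toy group and the SHA-256-mod-\(q\) hash are my own assumptions, not the paper’s parameters:</p>

```python
# Sketch of AOS ring signatures over a toy Schnorr group.
import hashlib
import secrets

p, q, G = 2039, 1019, 4           # toy group (illustration only)

def H(point, M):
    data = point.to_bytes(4, "big") + M
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def step(s, e, P, M):             # e_{i+1} = H(s_i G + e_i P_i || M)
    return H(pow(G, s, p) * pow(P, e, p) % p, M)

def ring_sign(pubs, j, xj, M):    # we know x_j, the key behind pubs[j]
    n = len(pubs)
    e = [None] * n
    s = [None] * n
    alpha = secrets.randbelow(q)
    e[(j + 1) % n] = H(pow(G, alpha, p), M)
    i = (j + 1) % n
    while i != j:                 # walk the ring with random s values
        s[i] = secrets.randbelow(q)
        e[(i + 1) % n] = step(s[i], e[i], pubs[i], M)
        i = (i + 1) % n
    s[j] = (alpha - e[j] * xj) % q    # close the ring
    return (e[0], s)

def ring_verify(pubs, M, sigma):
    e0, s = sigma
    e = e0
    for i in range(len(pubs)):
        e = step(s[i], e, pubs[i], M)
    return e == e0                # did we loop back around?

keys = [secrets.randbelow(q) for _ in range(3)]
pubs = [pow(xi, 1, p) and pow(G, xi, p) for xi in keys]
sigma = ring_sign(pubs, 1, keys[1], b"msg")   # knowing only x_1
assert ring_verify(pubs, b"msg", sigma)
```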
<p>The verification process checks that <em>some</em> \(s\) value was calculated <em>after</em> all the \(e\)
values were determined, which implies that some secret key is known. Which \(s\) that is, however, is
well-hidden. Notice that all the \(s\) values but the last one are random. And also notice
that the final \(s\) value has \(\alpha\) as an offset. But that \(\alpha\) was chosen
randomly and was never revealed. So this final \(s\) value is completely indistinguishable from
randomness, and is thus indistinguishable from the truly random \(s\) values. Pretty cool, huh?</p>
<p>There’s one tweak we can make to this that’ll slightly improve efficiency and make notation easier.
Including \(M\) at every step really isn’t necessary. It just has to get mixed in at <em>some</em> point
in the process. A natural place to put it is in <script type="math/tex">e_0 = H(s_{n-1} G + e_{n-1} P_{n-1} \,\|\, M)</script>
and calculate the other \(e\) values without the \(M\), like <script type="math/tex">e_{i+1} = H(s_iG + e_iP_i)</script>.</p>
<h2 id="borromean-ring-signatures">Borromean Ring Signatures</h2>
<p>If you thought we were done generalizing, you’re dead wrong. We’ve got one more step to go. Consider
the following situation (and withhold your cries for practical application for just a wee bit
longer): there are multiple sets of public keys \(\mathcal{A}_1, \mathcal{A}_2, \mathcal{A}_3 \).
I, having one private key in each \(\mathcal{A}_i \), would like to sign a message \(M\) in each
of these rings. In doing so, I am proving “<em>Some</em> key in \(\mathcal{A}_1 \) signed \(M\) <em>AND</em>
<em>some</em> key in \(\mathcal{A}_2 \) signed \(M\) <em>AND</em> <em>some</em> key in \(\mathcal{A}_3 \) signed
\(M\).” The naïve approach is to make a separate AOS signature for each set of public keys, giving
us a final signature of \(\sigma = (\sigma_1, \sigma_2, \sigma_3)\). But it turns out that there
is an (admittedly small) optimization that can make the final signature smaller.</p>
<p>Gregory Maxwell’s <a href="https://github.com/ElementsProject/borromean-signatures-writeup">Borromean ring
signature scheme</a><sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> makes the
optimization of pinning \(e_0\) as a shared \(e\) value for all rings \(\mathcal{A}_i\). More
specifically, the paper defines</p>
<script type="math/tex; mode=display">e_0 = H(R_0 \| R_1 \| \ldots \| R_{n-1} \| M)</script>
<p>where each \(R_i = s_{i, m_i-1} G + e_{i, m_i-1} P_{i, m_i-1}\) when \(j_i \neq m_i-1\), and
\(R_i = \alpha_i G\) otherwise, and \(m_i\) denotes the number of public keys in the
\(i^\textrm{th}\) ring, and \(j_i\) denotes the index of the known private key in the
\(i^\textrm{th}\) ring. The whole \(R\) thing is a technicality. The gist is that the last
\(e\) and \(s\) values of every ring (whether or not they correspond to the known private key) are
incorporated into \(e_0\). Here’s a pretty picture from the Maxwell paper to aid your geometric
intuition (if one believes in such silly things):</p>
<p class="center"><img src="/images/sigs/borromean.png" alt="hey this isn't a borromean ring" /></p>
<p>The signature itself looks like</p>
<script type="math/tex; mode=display">\sigma = (e_0, (s_{0,0}, s_{0,1}, \ldots, s_{0,m_0-1}), \ldots,
(s_{n-1,0}, \ldots, s_{n-1,m_{n-1}-1}))</script>
<p>where \(s_{i,j}\) is the \(j^\textrm{th}\) \(s\) value in the \(i^\textrm{th}\) ring.</p>
<p>For clarity, I did slightly modify some details from this paper, but I don’t believe that the
modifications impact the security of the construction whatsoever. There is also the important detail
of mixing the ring number and position in the ring into at least one \(e\) value per-ring so that
rings cannot be moved around without breaking the signature. The mixing is done by simply hashing
the values into some \(e\).</p>
<p>Anyway, the end result of this construction is a method of constructing \(n\) separate ring
signatures using \(\sum m_i + 1\) values (the \(s\) values plus the one \(e_0\)) instead of
the naïve way, in which we would have to include \(e_{0,0}, e_{1,0}, \ldots, e_{n-1,0}\). This
saves us \(n-1\) integers in the signature.</p>
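<p>The bookkeeping is easy to sanity-check. Counting signature elements of \(\mathbb{Z}_q\) for \(n\) rings of sizes \(m_0, \ldots, m_{n-1}\):</p>

```python
# Element counts for n ring signatures: separate AOS vs. Borromean.
def aos_size(ms):                 # one e value per ring, plus all s values
    return sum(ms) + len(ms)

def borromean_size(ms):           # a single shared e_0, plus all s values
    return sum(ms) + 1

ms = [2] * 64                     # 64 rings of size 2, as in a rangeproof
assert aos_size(ms) - borromean_size(ms) == len(ms) - 1   # saves n - 1
```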
<p>You might be wondering how large \(n\) is that such savings are worth a brand-new signature
scheme. If you are wondering that, stop reading, because you won’t get an answer. Onwards towards
more theory!</p>
<h2 id="pedersen-commitments">Pedersen Commitments</h2>
<p>Alright, we have all the signature technology we need. Now let’s turn that fear of math into fear of
commitment(s). A commitment is a value that is published prior to the revealing of some information.
The commitment proves that you knew that information before it was revealed. Suppose I wanted to
prove to someone that I know the winner of tomorrow’s horse race, but I don’t want to tell them
because they might make a massive bet and raise suspicion. I could tweet out the SHA256 hash</p>
<p class="center"><code class="highlighter-rouge">1c5d6a56ec257e5fe6f733e7e81f6f2571475d44c09faa9ecdaa2ff1c4a49ecd</code></p>
<p>Once the race is over, I tweet again, revealing that the preimage of the hash was “Cloud Computing”.
Since finding the preimage of a hash function is capital-D-Difficult, I have effectively proven that
I knew ahead of time that Cloud Computing would win (note: the set of possible winners is so small
that someone can easily just try all the names and see what matches. In this case, I would pick a
random number and commit to “Cloud Computing.ba9fd6d66f9bd53d” and then reveal <em>that</em> later.)</p>
<p>Pedersen commitments are a type of commitment scheme with some nice properties that the hashing
technique above doesn’t have. A Pedersen commitment in an abelian group \(\mathbb{G}\) of prime
order \(q\) requires two public and unrelated generators, \(G\) and \(H\) (by unrelated, I
mean nobody should know a scalar \(a\) such that \(aG = H\)). If I want to commit to the value \(v \in
\mathbb{Z}_q \) I do as follows:</p>
<ol>
<li>Pick a random “blinding factor” \(\alpha \leftarrow \mathbb{Z}_q \).</li>
<li>Return \(Q = \alpha G + v H\) as my commitment.</li>
</ol>
<p>That’s it. The way I reveal my commitment is simply by revealing my initial values \((\alpha,
v)\). It’s worth quickly checking that the scheme is <em>binding</em>; that is, if I make a commitment
to \((\alpha, v)\), it’s hard to come up with different values \((\alpha’, v’)\) that result in
the same commitment. For suppose I were able to do such a thing, then</p>
<script type="math/tex; mode=display">\alpha G + v H = \alpha' G + v' H \implies (\alpha - \alpha')G = (v' - v)H
\implies G = \frac{v' - v}{\alpha - \alpha'}H</script>
<p>and we’ve found the discrete logarithm of \(H\) with respect to \(G\), which we assumed earlier was
hard. Another cool property (which is totally unrelated to anything) is <em>perfect hiding</em>. That is,
for any commitment \(Q\) and any value \(v\), there is a blinding factor \(\alpha\) such that
\(Q\) is a valid commitment to \((\alpha, v)\). This is just by virtue of the fact that, since
\(G\) is a generator, there must be an \(\alpha\) such that \(\alpha G = Q - vH\) (and since
\(H\) is also a generator, this works if you instead fix \(Q\) and \(\alpha\) and derive
\(v\)). Perfect hiding means that, when \(\alpha\) is truly random, you cannot learn anything
about \(v\), given just \(Q\).</p>
<p>Lastly, and very importantly, Pedersen commitments are additively homomorphic. That means that if
\(Q\) commits to \((\alpha, v)\) and \(Q’\) commits to \((\alpha’, v’)\), then</p>
<script type="math/tex; mode=display">Q + Q' = \alpha G + vH + \alpha' G + v'H = (\alpha + \alpha')G + (v + v')H</script>
<p>So the commitment \(Q + Q’\) commits to \((\alpha + \alpha’, v + v’)\). We’ll use this property
in just a second.</p>
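<p>Commitments and the homomorphic property fit in a few lines of Python. In this sketch the group and the second generator \(H = 9\) are hand-picked toy assumptions; in a real system \(H\) must be generated so that nobody knows its discrete log with respect to \(G\):</p>

```python
# Pedersen commitments, written multiplicatively: Q = G^alpha * H^v mod p.
import secrets

p, q = 2039, 1019                 # toy group (illustration only)
G, H = 4, 9                       # two subgroup generators; log_G(H) assumed unknown

def commit(alpha, v):
    return pow(G, alpha, p) * pow(H, v, p) % p

a1, v1 = secrets.randbelow(q), 5
a2, v2 = secrets.randbelow(q), 7
Q1, Q2 = commit(a1, v1), commit(a2, v2)

# Additive homomorphism: Q1 + Q2 commits to (a1 + a2, v1 + v2).
assert Q1 * Q2 % p == commit((a1 + a2) % q, v1 + v2)
```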
<h2 id="hiding-transaction-amounts">Hiding Transaction Amounts</h2>
<p>Ok so back to the problem statement. We’ll simplify it a little bit. A transaction has an input
amount \(a\), an output amount \(b\), and a transaction fee \(f\), all in \(\mathbb{Z}_q\).
To maintain consistency, every transaction should satisfy the property \(a = b + f\), i.e., total
input equals total output, so no money appears out of thin air and no money disappears into
nothingness. We can actually already prove that this equation is satisfied without revealing any of
the values by using Pedersen commitments. Pick random \(\alpha_a \leftarrow \mathbb{Z}_q,\,
\alpha_b \leftarrow \mathbb{Z}_q\), and let \(\alpha_f = \alpha_a - \alpha_b\). Now make the
Pedersen commitments</p>
<script type="math/tex; mode=display">P = \alpha_a G + aH \quad Q = \alpha_b G + bH \quad R = \alpha_f G + fH</script>
<p>and publish \((P,Q,R)\) as your transaction. Then a verifier won’t be able to determine any of the
values of \(a\), \(b\), or \(f\), but will still be able to verify that</p>
<script type="math/tex; mode=display">P - Q - R = (\alpha_a - \alpha_b - \alpha_f) G + (a - b - f)H = 0G + 0H = \mathcal{O}</script>
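<p>A verifier’s side of this check is tiny. In this sketch (with the same hand-picked toy Pedersen parameters, which are assumptions of mine), \(P - Q - R = \mathcal{O}\) becomes \(P \cdot (QR)^{-1} \equiv 1 \pmod p\):</p>

```python
# Balance check on a confidential transaction with a = b + f.
import secrets

p, q, G, H = 2039, 1019, 4, 9     # toy Pedersen parameters (illustration only)

def commit(alpha, v):
    return pow(G, alpha, p) * pow(H, v, p) % p

a, b, f = 10, 7, 3                # input = output + fee
alpha_a = secrets.randbelow(q)
alpha_b = secrets.randbelow(q)
alpha_f = (alpha_a - alpha_b) % q

P = commit(alpha_a, a)
Q = commit(alpha_b, b)
R = commit(alpha_f, f)

# The verifier sees only (P, Q, R) and checks P - Q - R == identity.
assert P * pow(Q * R % p, -1, p) % p == 1
```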
<p>Remember, if someone tries to cheat and picks values so \(a - b - f \neq 0\), then they’ll have to
find an \(\alpha\) such that \(-\alpha G = (a - b - f) H\) which is Hard. So we’re done, right?
Problem solved! Well not quite yet. What we actually have here is a proof that \(a - b - f \equiv
0\, (\textrm{mod } q)\). See the distinction? For example, let \(q\) be a large prime, say, 13.
I’ll have the input to my transaction be 1🔥TC (Litcoin; ICO is next week, check it out). I’d like
to print some money, so I set my output to be 9🔥TC. I’ll be generous and give the miner 5🔥TC as my
fee. Then anyone can check via the generated Pedersen commitments that
<script type="math/tex; mode=display">a - b - f = 1 - 9 - 5 = -13 \equiv 0\, (\textrm{mod } 13)</script>
<p>So this transaction passes the correctness test. What happened? I overflowed and ended up wrapping
around the modulus. Since all our arithmetic is done modulo \(q\), none of the above algorithms
can tell the difference! So how can we prevent the above situation from happening? How do I prove
that my inputs don’t wrap around the modulus and come back to zero? One word:</p>
<h2 id="rangeproofs">Rangeproofs</h2>
<p>To prove that our arithmetic doesn’t wrap around the modulus, it suffices to prove that the values
\(a,b,f\) are small enough that their sum does not exceed \(q\). To avoid thinking about
negative numbers, we’ll check that \(a = b + f\) instead of \(a - b - f = 0\); these are
equivalent equations, but the first one will be a bit easier to reason about. To show that \(b + f <
q\), we will actually show that \(b\) and \(f\) can be represented in binary with \(k\) bits,
where \(2^{k+1} < q\) (this ensures that overflow can’t happen since \(b,f < 2^k\) and \(2^k +
2^k = 2^{k+1} < q\)). In particular, for both \(b\) and \(f\), we will make \(k\) Pedersen
commitments, where each \(v\) value is provably 0 or a power of two, and the sum of the
commitments equals the commitment of \(b\) or \(f\), respectively. Let’s do it step by step.</p>
<ol>
<li>I start with a value \(v\) that I want to prove is representable with \(k\) bits. First, pick
a random \(\alpha \leftarrow \mathbb{Z}_q\) and make a Pedersen commitment \(P = \alpha G + v
H\)</li>
<li>Break \(v\) down into its binary representation: \(v = b_0 + 2b_1 + \ldots + 2^{k-1}b_{k-1}
\).</li>
<li>
<p>For each summand, make a Pedersen commitment, making sure that the sum of the commitments is
\(P\). That is,</p>
<script type="math/tex; mode=display">% <![CDATA[
\forall 0 \leq i < k-1 : \textrm{pick } \alpha_i \leftarrow \mathbb{Z}_q, \quad
\textrm{let } \alpha_{k-1} = \alpha - \sum_{i=0}^{k-2} \alpha_i %]]></script>
<p>Then for all \(i\), commit</p>
<script type="math/tex; mode=display">P_i = \alpha_i G + 2^ib_i H</script>
<p>This ensures that \(P = P_0 + P_1 + \ldots + P_{k-1}\). The verifier will be checking this
property later.</p>
</li>
</ol>
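<p>These steps translate directly into code. With the toy Pedersen parameters I’ve been assuming, \(q = 1019\) lets us use \(k = 8\) bits, since \(2^{k+1} = 512 < q\):</p>

```python
# Committing to a value bit by bit, with blinding factors that sum to alpha.
import secrets

p, q, G, H = 2039, 1019, 4, 9     # toy Pedersen parameters (illustration only)
k = 8                             # 2^(k+1) = 512 < q, so sums can't wrap

def commit(alpha, v):
    return pow(G, alpha, p) * pow(H, v, p) % p

v = 0b10110101                    # 181, representable in k = 8 bits
alpha = secrets.randbelow(q)
P = commit(alpha, v)

bits = [(v >> i) & 1 for i in range(k)]
alphas = [secrets.randbelow(q) for _ in range(k - 1)]
alphas.append((alpha - sum(alphas)) % q)   # force the blindings to sum to alpha

Ps = [commit(alphas[i], (1 << i) * bits[i]) for i in range(k)]

# The verifier's check: P == P_0 + P_1 + ... + P_{k-1}.
prod = 1
for Pi in Ps:
    prod = prod * Pi % p
assert prod == P
```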
<p>Great. So far we’ve provably broken down a single number into \(k\) constituents, while hiding all
the bits. But how does a verifier know that all the \(b\) values are bits? What’s preventing me
from picking \(b_0 = 3^{200}\), for example? This is where we will use ring signatures! For each
commitment, we’ll make the set \(\mathcal{A}_i = \{P_i, P_i - 2^iH\}\) and treat that as a set
of public keys for a ring signature. Note that, because we know the binary expansion of \(v\), we
know the private key to exactly one of the public keys in \(\mathcal{A}_i\). This is because</p>
<script type="math/tex; mode=display">b_i = 0 \implies P_i = \alpha_i G + 0H = \alpha_i G</script>
<script type="math/tex; mode=display">b_i = 1 \implies P_i - 2^iH = \alpha_i G + 2^iH - 2^iH = \alpha_i G</script>
<p>So to prove that \(b_i = 0 \textrm{ or } 1\), we construct a ring signature over
\(\mathcal{A}_i\). Since the ring signature is signer-ambiguous, a verifier can’t determine which
key did the signing. This means we get to hide all the bits, while simultaneously proving that they
are indeed bits! We get some space savings by using Borromean signatures here, since we’ll have
\(k\) total signatures of size 2 each. The final rangeproof of the value \(v\) is thus</p>
<script type="math/tex; mode=display">R_v = (P_0, \ldots, P_{k-1}, e_0, s_0, \overline{s_0}, s_1, \overline{s_1}, \ldots, s_{k-1}, \overline{s_{k-1}})</script>
<p>where \(s_i\) and \(\overline{s_i}\) are the \(s\) values of the \(i^\textrm{th}\) ring
signature. Obviously, the choice of binary representation as opposed to, say, base-16 representation
is arbitrary, since you can make rings as big as you want, where each public key corresponds to
a digit in that representation. But note that the space savings that Borromean ring signatures give
us come from the number of rings, not their size. So it appears to be a good strategy to make the
rings as small as possible and let the center \(e_0\) value take the place of as many \(e\)
values as possible.</p>
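<p>The two identities behind the per-bit rings are worth checking by hand: whichever value a bit takes, the prover knows the discrete log of exactly one element of \(\mathcal{A}_i = \{P_i, P_i - 2^iH\}\), and can therefore produce the ring signature. A quick sketch, again under my toy-parameter assumptions:</p>

```python
# For each bit value, exactly one of {P_i, P_i - 2^i H} equals alpha_i * G.
import secrets

p, q, G, H = 2039, 1019, 4, 9     # toy Pedersen parameters (illustration only)

def commit(alpha, v):
    return pow(G, alpha, p) * pow(H, v, p) % p

i = 3
for bit in (0, 1):
    alpha_i = secrets.randbelow(q)
    P_i = commit(alpha_i, (1 << i) * bit)
    # "P_i - 2^i H" in the post's additive notation is P_i * H^(-2^i) here.
    shifted = P_i * pow(pow(H, 1 << i, p), -1, p) % p
    if bit == 0:
        assert P_i == pow(G, alpha_i, p)      # key for P_i is alpha_i
    else:
        assert shifted == pow(G, alpha_i, p)  # key for P_i - 2^i H is alpha_i
```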
<h2 id="putting-it-all-together">Putting It All Together</h2>
<p>So to recap, we have picked transaction input \(a\), output \(b\), and fee \(f\), and hidden
them with Pedersen commitments \(P_a\), \(P_b\), and \(P_f\). This gives verifiers the ability
to check correctness of the transaction up to modulus-wrapping. Then we constructed the commitments’
corresponding rangeproofs \(R_a\), \(R_b\), and \(R_f\) so that a verifier gets the last piece
of assurance that the transaction is correct <em>and</em> there is no overflow. So, in total, a
confidential transaction is the tuple</p>
<script type="math/tex; mode=display">(P_a, P_b, P_f, R_a, R_b, R_f)</script>
<p>And that’s how confidential transactions work! If I want to send 🔥TC to someone, I can construct a
confidential transaction that I make public, and then privately reveal the openings of
\(P_a\), \(P_b\), and \(P_f\) so that they can be sure that I actually sent what I claim.
Because the commitments are binding, they can be certain that I can’t claim to someone else that I
sent different \(a\), \(b\) or \(f\) values.</p>
<p>There’s plenty more detail in how transactions are constructed that I didn’t cover, but I hope I was
able to explain the core of confidential transactions, and hopefully interest you in cryptography a
little bit more. There’s a lot of cool stuff out there, and cryptocurrencies are a massive playing
field for novel constructions.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Sorry, you’re gonna have to compile the \(\LaTeX\) yourself. Every PDF on the internet is
either outdated or erroneous. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
Fri, 21 Jul 2017 02:53:07 +0000
https://cryptoservices.github.io/cryptography/2017/07/21/Sigs.html
New Practical Attacks on 64-bit Block Ciphers (3DES, Blowfish)<p><img src="/images/64bit/sweet32.png" alt="sweet32" /></p>
<p>A pair of researchers from INRIA have identified a new technique called <a href="https://sweet32.info/">Sweet32</a>. This attack exploits known block-cipher vulnerabilities (collision/birthday attacks) against 64-bit block ciphers like <strong>3DES</strong> and <strong>Blowfish</strong>. It affects any protocol that uses these “light” block ciphers in CBC mode for a long period of time without re-keying. While cryptographers have long known that combining 64-bit block ciphers with long-lived connections has these security implications, it is relatively easy for product maintainers or users of various software to create vulnerable conditions.</p>
<center><iframe width="640" height="360" src="https://www.youtube.com/embed/xNDSv3eJJHI" frameborder="0" allowfullscreen=""></iframe></center>
<p>In the case of TLS, the attack can be performed actively instead of passively observing large amounts of communication. Attackers can utilize the same kinds of JavaScript-based techniques used in <strong>BEAST</strong> to exploit Sweet32.</p>
<p><img src="/images/64bit/beast1.png" alt="beast evil" /></p>
<p>The first technique used in BEAST (diagrammed above) is to make the victim visit a malicious website, which will execute some JavaScript in the victim’s browser. The script then sends many HTTPS requests to the targeted website. Meanwhile, the attacker adopts a man-in-the-middle position to observe, or even tamper with, the connections being made by the script.</p>
<p><img src="/images/64bit/beast2.png" alt="beast http" /></p>
<p>A second, and more efficient, way of doing this is for the attacker to man-in-the-middle every connection made by the victim and wait for the victim to visit an HTTP server not using SSL/TLS, as shown above. After that, the attacker can easily tamper with the responses of the HTTP server to inject snippets of JavaScript. The attack then unfolds in the same way it did in the previous BEAST-style technique.</p>
<p>For the Sweet32 attack to work, the server needs to support 3DES or Blowfish. The victim’s browser also needs to be old enough to prefer these ciphers. By observing many messages (785 GB worth), the attacker will be able to detect a collision in CBC mode and extract the secret (in the researchers’ experiment, a 16-byte session cookie). This took 38 hours on average in their setting, which can be deemed impractical since web servers have the ability to limit long-lived TLS connections. It is important to note that browsers do not have this capability.</p>
<p>This attack was also tested on OpenVPN, where it was far more devastating: with the same amount of data, extracting an authorization token took only 19 hours.</p>
<p>Mitigating these problems appears to be more complex than just deprecating 64-bit ciphers (which TLS 1.3 already does). Lightweight ciphers are making their way into embedded devices, notably helped by <a href="http://www.nist.gov/itl/csd/ct/lwc-workshop2016.cfm">NIST, which has been pushing for a first draft</a>.</p>
<p>In the context of TLS, re-negotiating a new set of keys (re-keying) stops the attack. But it needs to be done sooner than previously thought! While most standards recommend re-keying around the birthday bound (\(2^{32}\)), this new research shows that this is insufficient: the probability of a collision occurring before the re-keying happens is still too high to consider it a safe countermeasure. The optimal strategy outlined in the paper is to re-key after \(2^{21}\) encrypted messages.</p>
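<p>The birthday-bound arithmetic behind these numbers is easy to reproduce. The approximation below, for the probability of at least one collision among \(n\) 64-bit ciphertext blocks, is the standard one; the concrete thresholds are my own illustration of the point, not figures from the paper:</p>

```python
# Approximate probability of a block collision among n CBC ciphertext blocks
# for a 64-bit block cipher: 1 - exp(-(number of pairs) / 2^64).
import math

def collision_probability(n_blocks, block_bits=64):
    pairs = n_blocks * (n_blocks - 1) / 2
    return 1 - math.exp(-pairs / 2 ** block_bits)

# Re-keying only at the birthday bound (2^32 blocks) still leaves roughly a
# 39% chance of a collision; re-keying at 2^21 blocks makes it negligible.
assert collision_probability(2 ** 32) > 0.3
assert collision_probability(2 ** 21) < 1e-6
```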
Sat, 03 Sep 2016 02:53:07 +0000
https://cryptoservices.github.io/cryptographic/attacks/2016/09/03/Sweet32.html
What are State-sized adversaries doing to spy on us? Or how to backdoor Diffie-Hellman<p>In the history of American cryptography, companies wanting to export
their products abroad had to comply with a few official laws
called the <em>U.S. Export rules</em>. These stated that no strong
cryptographic algorithms could be shipped outside of the country, unless
weakened down to brute-forceable sizes (for the government). Some
exceptions were made, notably in the <a href="http://www.cypherspace.org/adam/hacks/lotus-nsa-key.html">Lotus
Notes</a>
software, where an asymmetric backdoor had to be implemented in exchange
for the right to use stronger cryptography.</p>
<p>Many years have passed, and the US has now lost its computational
advantage: China is ranked first on the top 500 super computers in the
world with the <a href="http://www.top500.org/lists/2015/11/">Tianhe-2</a> machine.
The U.S. Export rules have now outstayed their welcome and have been gradually
relaxed, although they are still the source of many troubles, including
the recent critical attacks on TLS: <a href="https://freakattack.com/">FREAK</a>
and <a href="https://weakdh.org/">LOGJAM</a>. Backdoors seem to be the new hot area
of research for the NSA, GCHQ and probably other governmental secret
agencies.</p>
<p>In this work we’ll talk a bit more about the recent history of these
backdoors: from the <em>Dual EC</em> PRNG standardized by the NIST organization
to the recent <a href="http://kb.juniper.net/InfoCenter/index?page=content&id=JSA10713&actp=search">Juniper
Networks</a>
and
<a href="http://www.dest-unreach.org/socat/contrib/socat-secadv7.html">socat</a>
cryptographic vulnerabilities. We’ll also explain how we figured out a
way to subtly backdoor one of the oldest in-use and
still-considered-secure asymmetric cryptographic constructions:
<strong>Diffie-Hellman</strong>.</p>
<p>The paper is available on <a href="http://eprint.iacr.org/2016/644">ePrint</a> as well as on <a href="https://www.nccgroup.trust/us/our-research/how-to-backdoor-diffie-hellman/?research=Whitepapers">NCC Group</a>.</p>
Mon, 27 Jun 2016 10:40:07 +0000
https://cryptoservices.github.io/event/2016/06/27/how-to-backdoor-diffie-hellman.html
https://cryptoservices.github.io/event/2016/06/27/how-to-backdoor-diffie-hellman.htmleventReal World Crypto 2017<p><strong>Real World Crypto</strong> is <em>THE</em> conference that anyone interested in cryptography – but usually put off by the field’s overly theoretical venues – should attend. It seeks to bridge the worlds of applied cryptography and academia. The speakers and the audience come from both industry and universities, mingling for a few days in what is, in my eyes, <strong>the most amazing conference about cryptography</strong>.</p>
<p><a href="http://www.realworldcrypto.com/rwc2016">Last year’s program</a> went through the new cool-kids protocols like <em>TLS 1.3</em> and <em>QUIC</em>. Stories of all sorts were shared, from the attacker’s point of view (with supporting papers released during the talks) as well as from the defender’s, with companies like Google and cryptographers like Adam Langley. Current privacy tools like Tor and i2p were discussed, and post-quantum algorithms were mentioned. Password hashing algorithms were introduced with Argon2. New technologies like the blockchain, Intel’s SGX and Property-Preserving Encryption were in the mix. Different stories of little cryptographic flaws found in the wild, as well as more serious ones like BREACH and Lucky 13, concluded the show. And I’m leaving out a lot.</p>
<p><strong>Dan Boneh</strong> gave us wine and sent us home, just after awarding <strong>Phillip Rogaway</strong> and the international <strong>miTLS</strong> team the Levchin Prize, the conference’s $10,000 prize given to the greatest contributors of the year in cryptography. In the large room on the Stanford campus, you could see most of the great living cryptographers clapping by your side.</p>
<p>That is why we are filled with enthusiasm for next year’s edition of the conference: <a href="http://www.realworldcrypto.com/rwc2017">Real World Crypto 2017</a> and we are proud to announce that <strong>we will be sponsoring the event</strong>. So we hope to see you next year =)</p>
Thu, 23 Jun 2016 15:48:07 +0000
https://cryptoservices.github.io/event/2016/06/23/real-world-crypto-2017.html
https://cryptoservices.github.io/event/2016/06/23/real-world-crypto-2017.htmleventThe Noise Protocol Framework<p>WhatsApp just announced their integration of the Signal protocol (formerly known as the Axolotl protocol). An interesting aspect of it is the use of a TLS-like protocol called <strong>Noise Pipes</strong>, a protocol based on the Noise protocol framework: a one-man effort led by <strong>Trevor Perrin</strong>, with only a few implementations and <a href="http://noiseprotocol.org/">a moderately long specification available here</a>. I thought it would be interesting to understand how protocols are made from this framework, and to condense it into a 25-minute video. Here it is.</p>
<iframe width="853" height="480" src="https://www.youtube.com/embed/ceGTgqypwnQ" frameborder="0" allowfullscreen=""></iframe>
Wed, 27 Apr 2016 18:53:07 +0000
https://cryptoservices.github.io/cryptography/protocols/2016/04/27/noise-protocol.html
https://cryptoservices.github.io/cryptography/protocols/2016/04/27/noise-protocol.htmlcryptographyprotocolsBeyond the BEAST Returns to Black Hat USA<p>Last year we premiered a new training course we developed as a back-to-back sold-out offering at Black Hat in Las Vegas. This year <a href="https://www.blackhat.com/us-16/training/beyond-the-beast-a-broad-survey-of-crypto-vulnerabilities.html">we’re offering it again at Black Hat</a>. Since debuting last year, we’ve offered the course more than a half-dozen times, and gotten outstanding feedback that has helped us improve it with each successive offering. We’ve updated the course significantly since last year - improving the layout, content, and exercises. We’ve taken a few existing topics and added a few more to create the new Subverting Signatures module, retooled our coverage of Randomness to include more analysis of PRNGs in the abstract and more exploitation of specific broken PRNGs, and included more information about ECC - both background and attacks.</p>
<p>The Cryptography Services practice at NCC Group spends our days researching and assessing cryptographic implementations and protocols. We kept seeing the same types of flaws being demonstrated again and again - sometimes verbatim but sometimes in a slightly new incarnation. We took all of those flaws, grouped them up a bit, and turned it into a training course that will help you design and implement secure cryptographic systems - or identify weaknesses in existing ones.</p>
<blockquote>I think, the training was awesome. The exercises were helpful and you guys were around to help out with the dumbest of questions. I have been looking for cryptanalysis training for a while, and this was exactly what I wanted. - Attendee</blockquote>
<p>We’ll talk about which past attacks took advantage of these flaws, how algorithms and protocols have evolved over time to address these concerns, and what they look like now that they’re at the heart of the most popular bugs today. The other major areas we cover are cryptographic exploitation primitives, such as chosen block boundaries, and more protocol-related topics, such as how to understand and trace authentication in complex protocols.</p>
<ul>
<li>
<p>Module One focuses on what the right and wrong questions are when you’re talking about cryptography with people - why focusing on matching keylengths isn’t going to find you something exploitable and what will.</p>
</li>
<li>
<p>Module Two focuses on randomness, unpredictability, uniqueness. It covers the requisite info on spotting Random vs SecureRandom, but quickly dives deeper and talks about why randomness, uniqueness, and unpredictability are so important for constructions like GCM and stream ciphers (as well as CBC and key generation).</p>
</li>
<li>
<p>Module three focuses on integrity, and covers unauthenticated modes like ECB/CBC/CTR, AEAD modes, encrypt-then-mac, and how to take advantage of this topic in spaces like disk encryption.</p>
</li>
<li>
<p>Module four is about complicated protocols and systems deployed at scale, and how to trace through them, following how trust is granted, what its scope is, how it can be impersonated, and how the system falls apart when anything is slightly off.</p>
</li>
<li>
<p>Module five is all about signatures. We talk about signature reuse, reinterpretation, and more - including one of our favorite flaws: the SSL 3 omission that persisted and was exploited in new ways for a full 19 years before finally being fixed.</p>
</li>
<li>
<p>Module six is Math. There’s just no getting around it - but it also leads to some of the most impressive attacks. We look at several standards, many provably secure, and show how the slightest missing sanity check allows for an often-devastating adaptive chosen-ciphertext attack on RSA, DSA, ECC, and unauthenticated block cipher modes.</p>
</li>
<li>
<p>Module seven tackles side channels, going in depth on the two aspects of cryptographic oracles: how the oracle is exposed and how to take advantage of what it tells you. We cover timing, error, and the CPU cache, starting off showing how to apply the attacks you’ve just learned, and then moving on to show how to extract key bits from hand-optimized algorithm implementations.</p>
</li>
</ul>
<p>We wrap up by talking about the cryptographic community. We lay out what news sources we read to keep up on the latest happenings and do a whirlwind tour of some interesting topics coming up in the future - things like wide-block constructions and hash-based digital signatures.</p>
<blockquote>I found great value in the presentation and knowledge transferred. The course is spot on. - Attendee</blockquote>
<p>Course requirements are minimal. We’ve targeted it at students who have a strong interest in cryptography and some measure of cryptographic understanding (such as the difference between symmetric and asymmetric crypto). The ideal student has investigated one or more recent cryptographic attacks deeply enough to be able to explain it, but has not sat down and read PKCS or NIST standards describing algorithm implementation. No explicit understanding of statistics or high-level math is required, as the focus is on the underlying causes of the vulnerabilities. We cover a wide breadth of topics in the course, and provide printed slide decks.</p>
Fri, 12 Feb 2016 12:30:07 +0000
https://cryptoservices.github.io/cryptography/training/2016/02/12/crypto-course-comes-back-to-black-hat.html
https://cryptoservices.github.io/cryptography/training/2016/02/12/crypto-course-comes-back-to-black-hat.htmlcryptographytrainingHash-Based Signatures Part IV: XMSS and SPHINCS<p>This post is the final part of a series of blogposts on hash-based signatures. You can find <a href="/quantum/2015/12/04/one-time-signatures.html">part I here</a>.</p>
<p>So now we’re getting into the interesting part, the real signatures schemes.</p>
<p><strong>PQCrypto</strong> released an <a href="http://pqcrypto.eu.org/docs/initial-recommendations.pdf">initial recommendations</a> document a few months ago. The two post-quantum signature algorithms recommended there were <strong>XMSS</strong> and <strong>SPHINCS</strong>:</p>
<p><img src="/images/hash-based-signatures/Screen_Shot_2015-12-03_at_3.17_.34_PM_.png" alt="pqcrypto" /></p>
<p>This blogpost will first present XMSS, a stateful signature scheme, and then SPHINCS, the first stateless signature scheme!</p>
<h2 id="xmss">XMSS</h2>
<p>The <strong>eXtended Merkle Signature Scheme</strong> (XMSS) was <a href="https://eprint.iacr.org/2011/484.pdf">introduced in 2011</a> and became an <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-xmss-hash-based-signatures/">internet-draft in 2015</a>.</p>
<p>The main construction looks like a Merkle tree, except for a few things. In the XMSS tree, a <strong>mask</strong> is XORed into the child nodes before they are hashed into their parent node, with a different mask for every node:</p>
<p><img src="/images/hash-based-signatures/xmss_tree.png" alt="xmsstree" /></p>
<p>The second particularity is that a leaf of the XMSS tree is not the hash of a one-time signature public key, but the root of another tree called an L-tree.</p>
<p>An L-tree applies the same idea of masks to its node hashes; these masks are different from the ones used in the main XMSS tree, but common to all the L-trees.</p>
<p>Inside the leaves of any L-tree are stored the elements of a WOTS+ public key. This scheme is explained at the end of <a href="/quantum/2015/12/04/one-time-signatures.html">the first article of this series</a>.</p>
<p>If like me you’re wondering why they store a WOTS+ public key in a tree, here’s what Huelsing has to say about it:</p>
<blockquote>
<p>The tree is not used to store a WOTS public key but to hash it in a way that we can prove that a second-preimage resistant hash function suffices (instead of a collision resistant one).</p>
</blockquote>
<p>Finally, the main public key is composed of the root node of the XMSS tree, as well as the bitmasks used in the XMSS tree and the L-trees.</p>
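<p>As a rough Python sketch of that masked node hashing (SHA-256 and the function name here are illustrative stand-ins, not the actual XMSS parameters):</p>

```python
import hashlib

def masked_parent(left: bytes, right: bytes, mask_l: bytes, mask_r: bytes) -> bytes:
    """Hash two child nodes into their parent, first XORing a
    per-node mask into each child (as the XMSS tree does)."""
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    return hashlib.sha256(xor(left, mask_l) + xor(right, mask_r)).digest()
```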
<h2 id="sphincs">SPHINCS</h2>
<p>SPHINCS is the more recent scheme, combining a good number of advances in the field and more: it brings the statelessness we were all waiting for.</p>
<p>Yup, this means that you don’t have to keep state anymore. But before explaining how they did that, let’s see how SPHINCS works.</p>
<p>First, SPHINCS is made out of many trees.</p>
<p>Let’s look at the first tree:</p>
<p><img src="/images/hash-based-signatures/first_tree1.jpg" alt="sphincs layer" /></p>
<ul>
<li>Each node is the hash of the concatenation of its child nodes, each XORed with a level bitmask.</li>
<li>The public key is the root hash along with the bitmasks.</li>
<li>The leaves of the tree are the compressed public keys of WOTS+ L-trees.</li>
</ul>
<p>Think of the WOTS+ L-trees as the same L-trees we previously explained for XMSS, except that the bitmask part works more like a SPHINCS hash tree (a unique mask per level).</p>
<p>Each leaf, containing one Winternitz one-time key pair, allows us to sign another tree. So we now have a second layer of 4 SPHINCS trees, themselves containing WOTS+ public keys at their leaves.</p>
<p>This goes on and on, according to your initial parameters. Finally, when you reach layer 0, the WOTS+ key pairs won’t sign other SPHINCS trees but HORS trees.</p>
<p><img src="/images/hash-based-signatures/second_tree.jpg" alt="sphincs structure" /></p>
<p>A HORST, or HORS tree, is the same as an L-tree, but this time containing a HORS few-time key pair instead of a Winternitz one-time key pair. We use these to sign our messages, and this increases the security of the scheme: if we happen to sign two messages with the same HORS key, it won’t be a disaster.</p>
<p>Here’s a diagram taken from the SPHINCS paper, abstracting away the WOTS+ L-trees (displaying them as signatures of the next SPHINCS tree) and showing only one path to a message.</p>
<p><img src="/images/hash-based-signatures/sphincs.png" alt="sphincs" /></p>
<p>When signing a message M, you first create a “randomized” hash of M and a “random” index. I put random in quotes because everything in SPHINCS is deterministically computed with a PRF. The index tells you which HORST to pick to sign the randomized hash of M. This is how you get rid of the state: by picking an index deterministically according to the message. Signing the same message again will use the same HORST; signing two different messages will, with good probability, use two different HORSTs.</p>
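<p>That deterministic “randomization” can be sketched like this in Python (HMAC-SHA256 stands in for SPHINCS’s actual PRF, and the domain-separation labels and parameter <code class="highlighter-rouge">h</code> are made up for illustration):</p>

```python
import hashlib
import hmac

def select_horst(sk_prf: bytes, message: bytes, h: int = 60):
    """Deterministically derive a randomized digest and a HORST index
    from the message; h is the total hyper-tree height."""
    r = hmac.new(sk_prf, b"rand|" + message, hashlib.sha256).digest()
    digest = hashlib.sha256(r + message).digest()
    index = int.from_bytes(r, "big") % (1 << h)  # which HORST to sign with
    return digest, index
```

<p>Signing the same message twice derives the same index, so the same HORST is reused and no state has to be remembered.</p>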
<p>And this is how this series ends!</p>
<p>EDIT: here’s another diagram from <a href="https://eprint.iacr.org/2015/1042.pdf">Armed SPHINCS</a>, which I find pretty nice!</p>
<p><img src="/images/hash-based-signatures/Screen_Shot_2015-12-08_at_2.11_.44_PM_.png" alt="sphincs" /></p>
Tue, 08 Dec 2015 16:13:37 +0000
https://cryptoservices.github.io/quantum/2015/12/08/XMSS-and-SPHINCS.html
https://cryptoservices.github.io/quantum/2015/12/08/XMSS-and-SPHINCS.htmlquantumHash-Based Signatures Part III: Many-times Signatures<p>We previously saw what one-time signatures (OTS) are, then what few-time signatures (FTS) are. Now it’s time to see how to build practical signature schemes based on hash functions: signature schemes that you can use many times, ideally as many times as you want.</p>
<p>If you haven’t read <a href="/quantum/2015/12/04/one-time-signatures.html">Part I</a> and <a href="/quantum/2015/12/07/few-times-signatures.html">Part II</a>, that’s not necessarily a problem, since we will abstract those away. Just think of an OTS as a public key/private key pair that you can only use once to sign a message.</p>
<h2 id="dumb-trees">Dumb trees</h2>
<p>The first idea that comes to mind could be to use a bunch of one-time signatures (use your OTS scheme of preference). The first time you would want to sign something you would use the first OTS keypair, and then never use it again. The second time you would want to sign something, you would use the second OTS keypair, and then never use it again. This can get repetitive and I’m sure you know where I’m going with this. This would also be pretty bad because your public key would consist of all the OTS public keys (and if you want to be able to use your signature scheme a lot, you will have a lot of OTS public keys).</p>
<p>One way of reducing storage on the secret-key side is to derive all the secret keys from a single seed with a pseudo-random number generator. This way you don’t need to store any secret keys, only the seed.</p>
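<p>A minimal sketch of that idea (SHA-256 is used here as a stand-in for a proper PRF or KDF, and the function name is mine):</p>

```python
import hashlib

def ots_secret(seed: bytes, index: int) -> bytes:
    """Derive the index-th OTS secret from a single seed,
    so only the seed needs to be stored."""
    return hashlib.sha256(seed + index.to_bytes(8, "big")).digest()
```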
<p>But still, the public key is way too large to be practical.</p>
<h2 id="merkle-trees">Merkle trees</h2>
<p>To link all of these OTS public keys to one main public key, there is one very easy way: use a <strong>Merkle tree</strong>. A solution invented by Merkle in 1979, but <a href="http://discovery.csc.ncsu.edu/Courses/csc774-F11/reading-assignments/Merkle-Tree.pdf">published</a> a decade later because of some uninteresting editorial problems.</p>
<p>Here’s a very simple definition: a Merkle tree is a basic binary tree where every node is a hash of its children, the root is our public key, and the leaves are the hashes of our OTS public keys. Here’s a drawing, because one picture is clearer than a thousand words:</p>
<p><img src="/images/hash-based-signatures/merkle.jpg" alt="merkle tree" /></p>
<p>So the first time you use this tree to sign something, you use the first OTS key pair (A), and then never use it again. Then you use the B key pair, then the C one, and finally the D one. So you can sign 4 messages in total with our example tree. A bigger tree would allow you to sign more messages.</p>
<p>The attractive idea here is that your public key only consists of the root of the tree, and every time you sign something your signature consists of only a few hashes: <strong>the authentication path</strong>.</p>
<p>In our example, a signature with the first OTS key (A) would be: <code class="highlighter-rouge">(1, signature, public key A, authentication path)</code></p>
<ul>
<li>
<p>1 is the <em>index</em> of the signing leaf. You have to keep that in mind: you can’t re-use that leaf’s OTS. This makes our scheme a <strong>stateful</strong> scheme.</p>
</li>
<li>
<p>The <em>signature</em> is the OTS’s published secret-key elements (see the previous parts of this series of articles).</p>
</li>
<li>
<p>The <em>public key</em> is our OTS public key, to verify the signature.</p>
</li>
<li>
<p>The <em>authentication path</em> is a list of nodes (so a list of hashes) that allows us to recompute the root (our main public key).</p>
</li>
</ul>
<p>Let’s understand the authentication path. Here’s the previous example with the authentication path highlighted after signing something with the first OTS (A).</p>
<p><img src="/images/hash-based-signatures/authpath.jpg" alt="authpath" /></p>
<p>We can see that with our OTS public key and our two hashes (the sibling nodes of all the nodes on the path from our signing leaf to the root), we can recompute the main public key. And thus we can verify that this was indeed a signature that originated from that main public key.</p>
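<p>Recomputing the root from a leaf and its authentication path takes only a few lines of Python (SHA-256 and the left/right ordering by index parity are illustrative choices, not a particular standard):</p>

```python
import hashlib

def merkle_root(leaf: bytes, index: int, auth_path: list) -> bytes:
    """Recompute the Merkle root from a leaf hash, its index,
    and the sibling hashes along the path to the root."""
    node = leaf
    for sibling in auth_path:
        if index % 2 == 0:   # we are the left child at this level
            node = hashlib.sha256(node + sibling).digest()
        else:                # we are the right child at this level
            node = hashlib.sha256(sibling + node).digest()
        index //= 2
    return node
```

<p>The verifier compares the returned value with the main public key; if they match, the leaf (and thus the OTS public key) belongs to the tree.</p>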
<p>Thanks to this technique we don’t need to know all of the OTS public keys to verify against the main public key. This saves space and computation.</p>
<p>And that’s it, that’s the simple concept behind Merkle’s signature scheme: a many-times signature scheme based on hashes.</p>
<p><a href="/quantum/2015/12/08/XMSS-and-SPHINCS.html">…part IV is here</a></p>
Mon, 07 Dec 2015 17:13:37 +0000
https://cryptoservices.github.io/quantum/2015/12/07/many-times-signatures.html
https://cryptoservices.github.io/quantum/2015/12/07/many-times-signatures.htmlquantumHash-Based Signatures Part II: Few-Times Signatures<p>If you missed the <a href="/quantum/2015/12/04/one-time-signatures.html">previous blogpost on OTS</a>, go check it out first. This one is about a slightly more useful construction that allows signing more than one message with the same small public key/private key pair. The final goal of this series is to see how hash-based signature schemes are built. But those are not the only applications of one-time signatures (OTS) and few-times signatures (FTS).</p>
<p>For completeness, here’s a quote from a paper about other applications that have been researched:</p>
<blockquote>
<p>One-time signatures have found applications in constructions of ordinary signature schemes [Mer87, Mer89], forward-secure signature schemes [AR00], on-line/off-line signature schemes [EGM96], and stream/multicast authentication [Roh99], among others
[…]
BiBa broadcast authentication scheme of [Per01]</p>
</blockquote>
<p>But let’s not waste time on these, today’s topic is HORS!</p>
<h2 id="hors">HORS</h2>
<p>HORS comes from an update of BiBa (for “Bins and Balls”), published in 2002 by the Reyzin father and son in a paper called <a href="https://www.cs.bu.edu/~reyzin/papers/one-time-sigs.pdf">Better than BiBa: Short One-time Signatures with Fast Signing and Verifying</a>.</p>
<p>The first construction, based on one-way functions, starts very similarly to OTS: generate a list of integers that will be your private key, then hash each of these integers and you will obtain your public key.</p>
<p>But this time, to sign you will also need a <strong>selection function</strong> \(S\) that will give you a list of indexes according to your message \(m\). For the moment we will treat it as a black box.</p>
<p>In the following example, I chose the parameters \(t = 5\) and \(k = 2\). That means that I can sign messages \(m\) whose decimal value (if interpreted as an integer) is smaller than \( \binom{t}{k} = 10 \). It also tells me that my private key (and thus my public key) will have length \( 5 \), while my signatures will have length \( 2 \) (the selection function \(S\) will output 2 indexes).</p>
<p><img src="/images/hash-based-signatures/hors1.jpg" alt="hors1" /></p>
<p>Using a good selection function S (a bijective function), it is <strong>impossible</strong> to sign two messages with the same elements from the private key. But still, after two signatures it should be pretty easy to forge new ones.</p>
<p>The second construction is what we call the HORS signature scheme. It is based on “subset-resilient” functions instead of one-way functions. The selection function \(S\) is also replaced by another function \(H\) that makes it infeasible to find two messages \(m_1\) and \(m_2\) such that \(H(m_2) \subseteq H(m_1)\).</p>
<p>More than that, if we want the scheme to be a few-times signature scheme, then even if the signer provides \(r\) signatures, it should be infeasible to find a message \(m’\) such that \(H(m’) \subseteq H(m_1) \cup \dots \cup H(m_r) \). This is actually the definition of “subset-resilient”: our selection function \(H\) is r-subset-resilient if no attacker can find (even with small probability), in polynomial time, a set of \(r+1\) messages satisfying the previous formula. From the paper, this is the exact definition (but it basically means what I just said):</p>
<p><img src="/images/hash-based-signatures/definition.png" alt="definition" /></p>
<p>So imagine the same scheme as before:</p>
<p><img src="/images/hash-based-signatures/hors1.jpg" alt="hors1" /></p>
<p>But here the selection function is not a bijection anymore, so it’s hard to invert. Knowing the signatures of a previous set of messages, it’s hard to know which messages would use those indexes.</p>
<p>This is done in theory by using a <strong>random oracle</strong>, in practice by using a hash function. This is why our scheme is called HORS for <strong>Hash to Obtain Random Subset</strong>.</p>
<p>If you’re really curious, here’s our new selection function:</p>
<p>To sign a message \(m\):</p>
<ol>
<li>
<p>\(h = Hash(m)\)</p>
</li>
<li>
<p>Split \(h\) into \(h_1, \dots, h_k\)</p>
</li>
<li>
<p>Interpret each \(h_j\) as an integer \(i_j\)</p>
</li>
<li>
<p>The signature is \( sk_{i_1}, \dots, sk_{i_k} \)</p>
</li>
</ol>
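<p>The four steps above fit in a short Python sketch (here I pick \(t = 256\), so each index is simply one byte of the digest, and \(k = 16\); these parameters, the hash choice, and the function names are illustrative, not the paper’s):</p>

```python
import hashlib
import os

T, K = 256, 16  # t secret elements in the key, k indexes per signature

def hors_keygen():
    sk = [os.urandom(32) for _ in range(T)]
    pk = [hashlib.sha256(s).digest() for s in sk]  # one-way images of sk
    return sk, pk

def hors_indexes(message: bytes):
    # Steps 1-3: hash the message, split the digest into k chunks,
    # and interpret each chunk as an index (one byte each since t = 256).
    return list(hashlib.sha256(message).digest()[:K])

def hors_sign(sk, message: bytes):
    # Step 4: reveal the secret elements at the selected indexes.
    return [sk[i] for i in hors_indexes(message)]

def hors_verify(pk, message: bytes, sig):
    return all(hashlib.sha256(s).digest() == pk[i]
               for s, i in zip(sig, hors_indexes(message)))
```

<p>Verification simply rehashes each revealed element and checks it against the public key at the index the message selects.</p>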
<p>And since people seem to like my drawings:</p>
<p><img src="/images/hash-based-signatures/drawing.jpg" alt="drawing" /></p>
<p>…<a href="/quantum/2015/12/07/many-times-signatures.html">Part III is here</a></p>
Mon, 07 Dec 2015 16:13:37 +0000
https://cryptoservices.github.io/quantum/2015/12/07/few-times-signatures.html
https://cryptoservices.github.io/quantum/2015/12/07/few-times-signatures.htmlquantumHash-Based Signatures Part I: One-Time Signatures (OTS)<h2 id="lamport">Lamport</h2>
<p>On October 18th 1979, Leslie Lamport <a href="http://research.microsoft.com/en-us/um/people/lamport/pubs/dig-sig.pdf">published</a> his concept of <strong>One Time Signatures</strong>.</p>
<p>Most signature schemes rely in part on one-way functions, typically hash functions, for their security proofs. The beauty of Lamport’s scheme was that its security relied only on the security of these one-way functions.</p>
<p><img src="/images/hash-based-signatures/lamport.jpg" alt="lamport" /></p>
<p>Here you have a very simple scheme, where \(x\) and \(y\) are both random integers, and to sign a single bit:</p>
<ul>
<li>
<p>if it’s \(0\), publish \(x\)</p>
</li>
<li>
<p>if it’s \(1\), publish \(y\)</p>
</li>
</ul>
<p>Pretty simple, right? Obviously, don’t use it to sign twice.</p>
<p>Now what happens if you want to sign multiple bits? What you could do is hash the message you want to sign first (so that it has a predictable output length), for example with SHA-256.</p>
<p>Now you need 256 private key pairs:</p>
<p><img src="/images/hash-based-signatures/lamport-full.jpg" alt="lamport-full" /></p>
<p>and if you want to sign \(100110_2 \dots\),</p>
<p>you would publish \((y_0,x_1,x_2,y_3,y_4,x_5,\dots)\)</p>
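<p>The whole Lamport flow over a SHA-256 digest can be sketched like this (the function names and 32-byte secrets are my choices for illustration):</p>

```python
import hashlib
import os

def lamport_keygen():
    """256 pairs (x_i, y_i) of random secrets; the public key is their hashes."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(hashlib.sha256(x).digest(), hashlib.sha256(y).digest())
          for x, y in sk]
    return sk, pk

def _bits(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def lamport_sign(sk, message: bytes):
    # For each digest bit, reveal x_i if the bit is 0, y_i if it is 1.
    return [sk[i][b] for i, b in enumerate(_bits(message))]

def lamport_verify(pk, message: bytes, sig):
    return all(hashlib.sha256(s).digest() == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, _bits(message))))
```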
<h2 id="winternitz-ots-wots">Winternitz OTS (WOTS)</h2>
<p>A few months after Lamport’s publication, Robert Winternitz of the Stanford Mathematics Department proposed publishing \(h^w(x)\) instead of \(h(x)|h(y)\).</p>
<p><img src="/images/hash-based-signatures/wots.jpg" alt="wots" /></p>
<p>For example you could choose \(w=16\) and publish \(h^{16}(x)\) as your public key, and \(x\) would still be your secret key. Now imagine you want to sign the binary \(1001_2\) (\(9_{10}\)): just publish \(h^9(x)\).</p>
<p>Another problem now is that a malicious person could see this signature and hash it to retrieve \(h^{10}(x)\), and thus forge a valid signature for \(1010_2\) (\(10_{10}\)).</p>
<p>This can be circumvented by adding a short checksum after the message (which you would have to sign as well).</p>
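<p>The hash-chain mechanics, and the forgery that makes the checksum necessary, can be demonstrated in a few lines (SHA-256 and \(w=16\) are illustrative choices):</p>

```python
import hashlib
import os

def chain(x: bytes, n: int) -> bytes:
    """Apply the hash function n times: h^n(x)."""
    for _ in range(n):
        x = hashlib.sha256(x).digest()
    return x

w = 16
sk = os.urandom(32)
pk = chain(sk, w)           # public key: h^16(sk)

m = 9                       # the 4-bit message 1001_2
sig = chain(sk, m)          # signature: h^9(sk)

# The verifier hashes the signature the remaining w - m times
# and compares the result with the public key.
assert chain(sig, w - m) == pk

# The forgery: anyone can hash the signature once more to get
# h^10(sk), a valid-looking signature for the message 10.
forged = hashlib.sha256(sig).digest()
assert chain(forged, w - (m + 1)) == pk
```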
<h2 id="variant-of-winternitz-ots">Variant of Winternitz OTS</h2>
<p>A long time later, in 2011, Buchmann et al. <a href="https://eprint.iacr.org/2011/191.pdf">published an update</a> to Winternitz OTS, introducing a new variant using families of functions parameterized by a key. Think of a MAC.</p>
<p>Now your private key is a list of keys that will be used in the MAC, and the message dictates how many times we iterate the MAC. It’s a particular kind of iteration, because the previous output replaces the key, while we always use the same public input. Let’s see an example:</p>
<p><img src="/images/hash-based-signatures/wots-variant.jpg" alt="wots variant" /></p>
<p>We have a message \(M = 1011_2 (= 11_{10})\) and let’s say our variant of W-OTS works for messages in base 3 (in reality it can work for any base \(w\)). So we’ll say \(M = (M_0, M_1, M_2) = (1, 0, 2)\) represents \(102_3\).</p>
<p>To sign this we will publish \((f_{sk_1}(x), sk_2, f^2_{sk_3}(x) = f_{f_{sk_3}(x)}(x))\)</p>
<p>Note that I don’t talk about it here, but there is still a checksum applied to our message, which has to be signed as well. This is why it doesn’t matter if the signature of \(M_2 = 2\) is already known from the public key.</p>
<p>Intuition tells me that a public key with another iteration would provide better security:</p>
<p><img src="/images/hash-based-signatures/notes.jpg" alt="note" /></p>
<p>Here’s Andreas Hulsing’s answer, after he pointed me to <a href="https://www.youtube.com/watch?v=MecexfUT4OQ">his talk on the subject</a>:</p>
<blockquote>
<p>Why? For the 1 bit example: The checksum would be 0. Hence, to sign that message one needs to know a preimage of a public key element. That has to be exponentially hard in the security parameter for the scheme to be secure. Requiring an attacker to be able to invert the hash function on two values or twice on the same value only adds a factor 2 to the attack complexity. That’s not making the scheme significantly more secure. In terms of bit security you might gain 1 bit (At the cost of ~doubling the runtime).</p>
</blockquote>
<h2 id="winternitz-ots-wots-1">Winternitz OTS+ (WOTS+)</h2>
<p>There’s not much to say about the W-OTS+ scheme. Two years after the variant, Hulsing alone published an upgrade that shortens the signature size and increases the security of the previous scheme. It uses a chaining function in addition to the family of keyed functions. This time the key is always the same, and it’s the input that is fed the previous output. Also, a random value (or mask) is XORed in before the one-way function is applied.</p>
<p><img src="/images/hash-based-signatures/wots_plus.jpg" alt="wots+" /></p>
<p>Some clarifications from Hulsing about the shorter signature size:</p>
<blockquote>
<p>WOTS+ reduces the signature size because you can use a hash function with shorter outputs than in the other WOTS variants <em>at the same level of security</em> or longer hash chains. Put differently, using the same hash function with the same output length and the same Winternitz parameter w for all variants of WOTS, WOTS+ achieves higher security than the other schemes. This is important for example if you want to use a 128 bit hash function (remember that the original WOTS requires the hash function to be collision resistant, but our 2011 proposal as well as WOTS+ only require a PRF / a second-preimage resistant hash function, respectively). In this case the original WOTS only achieves 64 bits of security which is considered insecure. Our 2011 proposal and WOTS+ achieve 128 - f(m,w) bits of security. Now the difference between WOTS-2011 and WOTS+ is that f(m,w) for WOTS-2011 is linear in w and for WOTS+ it is logarithmic in w.</p>
</blockquote>
<h2 id="other-ots">Other OTS</h2>
<p>Here ends today’s blogpost! There are many more one-time signature schemes; if you are interested, here’s a list. Some of them are more than one-time signatures, because they can be used a few times, so we can call them few-times signature schemes (FTS):</p>
<ul>
<li>1994, <a href="ftp://ftp.inf.ethz.ch/pub/crypto/publications/BleMau94.pdf">The Bleichenbacher-Maurer OTS</a></li>
<li>2001, <a href="http://www.netsec.ethz.ch/publications/papers/biba.pdf">The BiBa OTS</a></li>
<li>2002, <a href="https://www.cs.bu.edu/~reyzin/papers/one-time-sigs.pdf">HORS</a></li>
<li>2014, <a href="https://cryptojedi.org/papers/sphincs-20141001.pdf">HORST</a> (HORS with Trees)</li>
</ul>
<p>So far their main application seems to be as the basis of hash-based signatures, which are the currently advised signature schemes for post-quantum usage. See the <a href="http://pqcrypto.eu.org/docs/initial-recommendations.pdf">PQCrypto initial recommendations</a> released a few months ago.</p>
<p>PS: Thanks to <a href="https://huelsing.wordpress.com/">Andreas Hulsing for his comments</a></p>
<p><a href="/quantum/2015/12/07/few-times-signatures.html">Part II of this series is here</a></p>
Fri, 04 Dec 2015 16:13:37 +0000
https://cryptoservices.github.io/quantum/2015/12/04/one-time-signatures.html
https://cryptoservices.github.io/quantum/2015/12/04/one-time-signatures.htmlquantum