bumbledraven a day ago

I noticed that Victor Shoup is an author. I recognize the name from citations in papers by cryptographer D. J. Bernstein (known for creating ChaCha20, Ed25519, qmail, and more). Shoup is referenced in over 100 pages on Bernstein's cr.yp.to domain (https://www.google.com/search?q=shoup+site%3Acr.yp.to), so my first impression is to regard this paper as potentially legit and important.

That said, my very uneducated guess is that the problem being solved here is not important for many users of distributed consensus algorithms. If you have a bunch of nodes that need to agree on something, you generally don't mind sharing a cryptographic secret among them as part of the set-up.

monocultured 5 days ago

This sounds interesting but the description goes way over my head. Anyone care to explain in layman's terms the concept and the non-obvious benefits?

  • eterm 5 days ago

    It's hard to answer that without you saying what you think the obvious benefits are.

    Consensus algorithms are important for all kinds of distributed computing problems. A simple example would be failover. If you have a leader database that replicates to 4 others, and you want another node to take over if the leader DB fails, then you need a consensus algorithm to prevent a situation where 2 different machines both think they're the new leader in a netsplit.
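
    A toy illustration of that failover point (my own sketch, nothing from the paper): with five nodes, any two strict majorities must overlap, so at most one side of a netsplit can collect enough votes to become leader.

      # Minimal sketch: a strict-majority quorum prevents two simultaneous leaders.
      def can_become_leader(votes: set, cluster: set) -> bool:
          return len(votes) > len(cluster) / 2

      cluster = {"A", "B", "C", "D", "E"}
      # Netsplit: {A, B} on one side, {C, D, E} on the other.
      print(can_become_leader({"A", "B"}, cluster))       # False
      print(can_become_leader({"C", "D", "E"}, cluster))  # True, and only one side can win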

    There are many other equivalent problems in distributed computing, from atomic transactions to "exactly once" messaging systems.

    Asynchronous consensus is a model where you cannot assume message delays are bounded, whereas in a synchronous model you can assume everything is bounded.

    Byzantine fault tolerance is important for security under Byzantine faults, that is to say, malicious actors deliberately acting against what the protocol specifies they should do.

Rhapso 5 days ago

So the protocol seems to boil down to:

1) Already have a leader "Dealer"

2) The leader builds a k-of-n set of shared secret keys.

3) They publish a mapping of each participant (participant_i->hash(secret_i))

4) The leader transmits each key to each participant

5) Participants exchange secrets pairwise, armed with the upfront mapping of participants->secrets

6) Select k and a k-of-n secret-sharing scheme such that a majority of participants now have a shared key (toy sketch below)
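
A toy sketch of how I picture those steps fitting together (my own illustration with made-up parameters, not the paper's actual protocol): the dealer Shamir-shares a key, publishes hash commitments to the shares, hands each participant its share, and any k participants can later pool their shares to rebuild the key.

  import hashlib, random

  P = 2**127 - 1  # toy prime field, not a production parameter choice

  def deal(secret: int, k: int, n: int):
      """Dealer (steps 2-4): Shamir-share `secret` so any k of n participants can rebuild it."""
      coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
      shares = {i: sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P for i in range(1, n + 1)}
      # Step 3: publish hash(share_i) per participant so shares can be checked later.
      commits = {i: hashlib.sha256(str(s).encode()).hexdigest() for i, s in shares.items()}
      return shares, commits

  def reconstruct(pooled: dict) -> int:
      """Step 6: any k participants pool shares and interpolate the polynomial at x = 0."""
      total = 0
      for i, s in pooled.items():
          num = den = 1
          for j in pooled:
              if j != i:
                  num = num * (-j) % P
                  den = den * (i - j) % P
          total = (total + s * num * pow(den, -1, P)) % P
      return total

  shares, commits = deal(secret=123456789, k=3, n=5)
  pooled = {i: shares[i] for i in (1, 3, 5)}   # step 5: shares exchanged pairwise
  ok = all(hashlib.sha256(str(s).encode()).hexdigest() == commits[i] for i, s in pooled.items())
  print(ok, reconstruct(pooled))               # True 123456789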

Lots of the claims aren't meaningful:

- "post quantum" for example isn't a special value in this situation.

- "minimal use of cryptography" isn't relevant to practicality

- The "experimental" component doesn't meaningful contribute to the conclusion.

- No public-key-encryption really means "outsource sender identification to the network layer"

- They pretend using a system of equations to solve for a shared key isn't "cryptography".

In general the contribution of the paper reads as "obfuscated". The lack of "public key cryptography" sets them up for a novel problem to solve, but it is an arbitrary handicap that doesn't provide utility.

This is academic "make up a novel and nontrivial problem and then solve it"; it's of utility to the process of producing grad students and publication counts, but not something we need to get excited about. Read it as a survey paper of the space, a role it serves well.

  • c0742e9366 4 days ago

    I disagree with this uncharitable view of the paper.

    First, an important missing point is that the protocol does not require trusted setup. In contrast, most prior works require that parties hold threshold secret keys (necessitating a trusted third-party or expensive setup procedure).

    Second, a lot of effort is currently being poured into the transition to post-quantum. So having a post-quantum secure protocol is evidently valuable to a lot of people.

    Third, Byzantine agreement protocols almost always assume pairwise private and authenticated communication channels. It makes sense that protocols should not need to concern themselves with the communication layer, and such channels can be realized from standard cryptographic building blocks anyways. Here the paper not using any PK-cryptography is especially nice because the protocol can be layered on top without a lot of fuss, no matter what the channels are based on.

    Last, this problem is far from "made up". It was an obvious (and seemingly hard to solve) open problem to people working in this area. Also, Byzantine agreement is a practically important problem, and this is an elegant solution to it.

  • xhkkffbf 5 days ago

    I'm not as cynical. I think quests like this are important for understanding what is and isn't essential in protocols. Quests like post-quantum algorithms matter to me not because I believe there's much chance of a quantum machine coming along; it's because we learn so much. The foundation of many public key systems is pretty sketchy. We trust them because no one has publicly described how to break them. But that doesn't mean that the truth isn't out there.

    • Rhapso 5 days ago

      Right but "Avoiding PKI" and "Secure Against Quantum Computers" barely even correlate. We have methods we consider post-quantum in heavy use. "PKI is generally sketchy and we would like to explore alternatives" is actually a much better rationalization than "post-quantum buzzword dropping"

      • ogisan 4 days ago

        I disagree. Avoiding PKI and post-quantum security correlate very much. Even restricting to plausibly post-quantum assumptions, we only have a couple from which we can build public key encryption. In contrast, here they avoid all use of public key cryptography, which makes it provably post-quantum secure. It’s not using a buzzword for the sole sake of selling the paper. In general, using “minimal cryptography” (like random oracles / one-way functions) translates to real-world efficiency because you can instantiate these from a plethora of different concrete candidates.

        • ilya_m 4 days ago

          > Avoiding PKI and post-quantum security correlate very much. Even under plausibly post-quantum assumptions we only have a couple of assumptions from which we can build public key encryption.

          These statements presuppose an overly expansive definition of PKI, i.e., distribution of keys for public-key encryption. A more conservative definition is PKI = availability of trustworthy publicly verifiable signatures (i.e., public-key certificates). Post-quantum signatures can be based on target collision-resistant hash functions, like XMSS.

          The paper assumes pairwise private and authenticated channels. While in practice this is not necessarily a good substitute for PKI, in theory it is a strictly weaker setting.

      • kreetx 4 days ago

        I'm only somewhat into cryptography, but is PKI considered sketchy in some way?

        • Rhapso 4 days ago

          No.

          The security proofs for most public/private key systems rest on weaker assumptions than "you can't reverse this hash function".

          They are still robust in the face of computers that may physically exist in the foreseeable future. Elliptic curve encryption was adopted for being post-quantum twenty years ago, plus more work since then that I haven't followed.

          The person I am arguing with is imagining a future pessimistic beyond what most would consider reasonable.

          • nmadden 4 days ago

            Elliptic curve crypto is not post-quantum. (Indeed it’s likely to be broken before RSA if cryptographically relevant quantum computers occur).

  • treyd 5 days ago

    > - No public-key-encryption really means "outsource sender identification to the network layer"

    Yeah, I didn't read the full paper, but from the abstract my intuition was telling me they just assumed away, as part of the environment, a bunch of things that are fairly necessary when actually implementing a BFT consensus.

simonpure 4 days ago

One of the original authors also published a follow-up with some additional details and analysis that may be of interest (still reading through it myself) [0]

[0] https://eprint.iacr.org/2024/696

api 5 days ago

Haven’t had a chance to go through this but my instant #1 question is security against malicious nodes.

All such protocols, even Bitcoin and friends, break under a sufficiently costly Sybil attack. The trick with cryptocurrency is to make the attack so expensive that it requires a highly economically irrational actor.

What are the thresholds here?

  • chc4 5 days ago

    This is a Byzantine agreement protocol. By definition it is a construction taking into account malicious nodes: that is what Byzantine agreement means. Table 1 says that this algorithm, like all the rest except one in the table, has fault tolerance 1/3 which is optimal. The second page also says "We consider the presence of a static adversary A that can corrupt up to t out of the n ≥ 3t + 1 parties. "
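
    The 1/3 bound falls out of a simple counting argument; here's a back-of-envelope version (my own arithmetic, not quoted from the paper):

      # Why n >= 3t + 1 works with quorums of 2t + 1 votes:
      t = 1                      # corrupted parties tolerated
      n = 3 * t + 1              # total parties
      quorum = 2 * t + 1         # votes a party waits for before deciding
      overlap = 2 * quorum - n   # minimum overlap between any two quorums
      print(overlap)             # t + 1, so any two quorums share at least one honest party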

    • repelsteeltje 5 days ago

      > ... static adversary A that can corrupt ...

      Doesn't that mean that an adversary running multiple identities could end up controlling more than t of the parties? And therefore, some means of limiting the number of identities (through public keys or prior trust) would still be desirable?

      What am I missing? Is this mitigated through staking?

      • mike_hearn 5 days ago

        You're not missing anything. Systems like this assume there is a pre-created set of honest and independent nodes, which might later get hacked. They don't apply in the more realistic setting the PoW blockchain algorithm solves, where nodes can enter and leave the consensus at will and may be malicious or non-independent from the start.

        • chc4 5 days ago

          You just do rounds with fixed sets of parties, like Ethereum proof of stake does. The set of nodes in each round then needs to be 2/3rds honest. Each round has Sybil resistance because identities aren't free: you need a stake in order to be selected for a round.
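
          A toy sketch of that per-round selection (my own illustration, not Ethereum's actual mechanism): sample a fixed committee for the round weighted by stake, then require a >2/3 quorum within it.

            import random

            def committee_for_round(stakes: dict, size: int, seed: int) -> list:
                """Toy stake-weighted selection: more stake, more chance of a committee seat."""
                rng = random.Random(seed)   # in practice the seed would come from the chain itself
                pool = dict(stakes)
                chosen = []
                while len(chosen) < size and pool:
                    pick = rng.choices(list(pool), weights=list(pool.values()), k=1)[0]
                    chosen.append(pick)
                    del pool[pick]          # identities aren't free: one seat per staked identity
                return chosen

            committee = committee_for_round({"a": 40, "b": 25, "c": 20, "d": 10, "e": 5}, size=4, seed=7)
            needed = 2 * len(committee) // 3 + 1   # the round needs more than 2/3 of these honest
            print(committee, "quorum:", needed)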

          • HeatrayEnjoyer 4 days ago

            >Is it not still a simple matter of cost? An attacker who can 51% Blockchain has enough money to overcome resource scarcity walls of any construction.

  • Quiark 4 days ago

    It's important to realise the difference between consensus and Sybil prevention protocols. Consensus such as this one assumes participants are already selected and known to some extent. Then it exchanges network messages to agree on a value, so that everyone ends up with the same value and it won't be reverted.

    Sybil prevention is something I wasn't taught at school; it concerns itself with creating the prerequisites for a consensus algorithm. Given the world population, how do we establish the participant set for consensus while minimizing the chance of attack? Well, maybe by requiring participants to waste record-breaking amounts of stupid compute.

  • alfiedotwtf 5 days ago

    I don’t think a Sybil attack will work on a staking network, since the leader is known and a MITM can’t broadcast a block without the leader’s key. The only thing they can do is take the leader offline to prevent a solved block from getting sent to the network, but that problem exists without taking Sybil attacks into account.

  • kfrzcode 4 days ago

    Hedera Hashgraph uses Proof-of-Stake consensus, stake weighting, aBFT, and a minimum stake threshold with node staking delegation. Malicious nodes can lose delegators. If a node doesn't participate in consensus it won't receive rewards. I'm not aware of any consensus mechanism invulnerable to Sybil attacks, but I am a simple pleb.

    https://ieeexplore.ieee.org/abstract/document/9191430

bloopernova 5 days ago

I kind of wish Git had a feature where multiple people could sign a PR or commit.

Is there a multiple signature method that isn't "just" signing other people's signatures?

  • leni536 5 days ago

    I guess you could sign a git note[1] on an existing commit. As a bonus you could add metadata about what you are actually signing off, like "approved" or even "rejected".

    Just adding an additional signature on top of an existing commit wouldn't carry too much info. What are you actually signing off on? Is it approval? Is it acknowledgement? And then it can be hard to sign off on something negative, like a code review that rejects.

    [1] https://git-scm.com/docs/git-notes

  • remram 4 days ago

    You can sign tags, and you can make as many tags as you want. So for example you could have 'dev1-1.0' signed by dev1, 'dev2-1.0' signed by dev2, etc to have that revision signed by multiple developers.

  • k__ 4 days ago

    Yes.

    Threshold cryptography.

    Each participant generates one part of a key, and can only create one part of the signature.

    You can do stuff like requiring 3 of 5 signature parts to be valid, etc.

  • jazzyjackson 5 days ago

    If you have a trusted central party, you could reconstruct shards of a signing key. This is known as threshold cryptography, e.g. Shamir's Secret Sharing.

    • treyd 5 days ago

      You don't need a trusted central party. You can sign using just Shamir shares with a multi-round MPC protocol, without ever reconstructing the real key. There are round-optimized schemes like MuSig2 and FROST specifically for Schnorr-based signatures, which GPG doesn't support but which are used in other places. I don't know what the effort to integrate that into git would look like.

      • kreetx 4 days ago

        If you're just signing, is there any role for MPC in there?

        • treyd 3 days ago

          To do the signing without having to reconstruct the privkey in any single location.

glitchc 5 days ago

Please note the Cryptology ePrint Archive is not a peer-reviewed source. The description of the paper may indicate another (peer-reviewed) publication, but this is entirely optional and not required for a submission.

Edit: Adding this as a PSA in case folks start debating the veracity assuming this has undergone review by experts.

  • ilya_m 4 days ago

    To be fair, peer-reviewed publications are not what they used to be. For example, the conference reviewing process is not designed to validate the correctness of submissions. (Obviously wrong papers are filtered out most of the time, but this is a welcome side effect, not - apparently - their main objective.) See, for instance, Carlini's recent experience of reporting a demonstrably flawed paper to the chairs of a major conference: https://nicholas.carlini.com/writing/2024/yet-another-broken...

    • HideousKojima 4 days ago

      >To be fair, peer-reviewed publications are not what they used to be.

      They were never what most people thought they were. At their best, they amounted to "a few relevant experts in the same field read the paper and didn't find any blatantly obvious methodological errors."

    • Rhapso 4 days ago

      Yeah, sadly this paper is a reasonable example of the status quo. There is a contribution in there under all the cruft (if we assume a trusted leader and can authenticate message origin, we don't need public/private key encryption to build a shared secret key; hash functions and k-of-n secret sharing are enough), it is just very narrow, and they feel like they have to "dress it up" a lot.

      If they actually wrote their contribution in clear terms, they couldn't get it published because it sounds too simple. I think they should be able to get it published without inflating its complexity like this. They are just reacting to a broken system.

      • c0742e9366 4 days ago

        It’s unfair to equate this paper to obviously flawed ones since all their claims seem to be properly substantiated. Also, the protocol does not assume a trusted leader (otherwise agreement would be trivially solved).

        In general, I am also not fond of this writing style. However, if one reads more papers published in this community/area, then one notices that many of them are written similarly. Since the primary audience of these papers are other researchers in the same area, they are presumably able to read past the cruft efficiently.

        I also agree that academia incentivizes overselling results. In this case, however, this is a nice result and it is not oversold by the authors (speaking as someone somewhat knowledgeable in this field).

steelframe 5 days ago

Their protocol only uses cryptographic hash functions, which means that it's post-quantum secure. One reason this is significant is that existing post-quantum public key algorithms such as SPHINCS+ use much larger keys than classic public key algorithms such as RSA or ECDH.

*Edit: As others have pointed out, for SPHINCS+ it's the signature size and not the key size that's significantly larger.

  • westurner 5 days ago

    Only PQ hash functions are PQ FWIU. Other cryptographic hash functions are not NIST PQ standardization finalists; I don't think any were even submitted with "just double the key/hash size for the foreseeable future" as a parameter as is suggested here. https://news.ycombinator.com/item?id=25009925

    NIST Post-Quantum Cryptography Standardization > Round 3 > Selected Algorithms 2022 > Hash based > SPHINCS+: https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...

    SPHINCS+ is a hash-based, post-quantum (quantum-resistant) cryptographic signature algorithm.

    SPHINCS+: https://github.com/sphincs/sphincsplus

  • EGreg 5 days ago

    For those here who don’t know

    SPHINCS is essentially Lamport Signatures and SPHINCS+ is essentially removing the need to store “state” by using chains of hashes and a random oracle

    https://crypto.stackexchange.com/questions/54304/difference-...

    I prefer them to lattice-based methods because it seems to me that cryptographic hashes (and other one-way functions) are more likely to be quantum-resistant than lattices (for which an algorithm like Shor’s merely hasn’t been found yet).
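
    For anyone wondering what "essentially Lamport signatures" means, here is a toy one-time Lamport signature (my own sketch; real schemes like SPHINCS+ build large trees of related one-time and few-time keys on top of this idea):

      import hashlib, secrets

      H = lambda b: hashlib.sha256(b).digest()

      def keygen(bits=256):
          """One-time key: two random preimages per message-digest bit."""
          sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
          pk = [(H(a), H(b)) for a, b in sk]
          return sk, pk

      def bits_of(msg: bytes):
          d = H(msg)
          return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

      def sign(sk, msg: bytes):
          """Reveal one preimage per bit of H(msg); the key must never be reused."""
          return [sk[i][b] for i, b in enumerate(bits_of(msg))]

      def verify(pk, msg: bytes, sig) -> bool:
          return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits_of(msg)))

      sk, pk = keygen()
      sig = sign(sk, b"hello")
      print(verify(pk, b"hello", sig), verify(pk, b"hullo", sig))  # True False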

    • chc4 5 days ago

      In case you haven't seen, there was actually a quantum algorithm posted as a preprint recently for solving some classes of learning-with-errors problems in polynomial time. https://eprint.iacr.org/2024/555

      • fdupress 5 days ago

        And since you apparently haven't seen, the abstract now includes the following note.

        > Note: Update on April 18: Step 9 of the algorithm contains a bug, which I don’t know how to fix. See Section 3.5.9 (Page 37) for details. I sincerely thank Hongxun Wu and (independently) Thomas Vidick for finding the bug today. Now the claim of showing a polynomial time quantum algorithm for solving LWE with polynomial modulus-noise ratios does not hold. I leave the rest of the paper as it is (added a clarification of an operation in Step 8) as a hope that ideas like Complex Gaussian and windowed QFT may find other applications in quantum computation, or tackle LWE in other ways.

        • chc4 5 days ago

          Oops, I did not, thank you!

  • jythonscript 5 days ago

    Minor note: I'm not an expert in this field, but I believe SPHINCS+ uses smaller key sizes than RSA while producing much larger signatures than RSA or ECDSA.

  • lagniappe 5 days ago

    > Their protocol only uses cryptographic hash functions, which means that it's post-quantum secure

    Can you elaborate on this?

    • woodruffw 5 days ago

      Inverting a general-purpose cryptographic hash function in the quantum setting is (roughly) as hard as it is in the classical setting.

      (Roughly because of Grover’s algorithm, but there are algorithms that perform similarly or better on classical machines, which is why modern hash functions have relatively large margins anyway.)
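
      Back-of-envelope numbers for that (my own arithmetic): Grover turns a 2^n preimage search into roughly 2^(n/2) quantum queries, so a 256-bit hash still leaves about 128 bits of quantum security.

        import math

        def grover_security_bits(output_bits: int) -> float:
            """Grover needs ~sqrt(2^n) queries, i.e. roughly n/2 bits of security remain."""
            return math.log2(math.sqrt(2.0 ** output_bits))

        print(grover_security_bits(256))  # 128.0 bits of quantum preimage security remain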