* Re: [bitcoin-dev] Proof-of-Stake Bitcoin Sidechains
@ 2019-02-01 9:19 99% ` ZmnSCPxj
0 siblings, 0 replies; 58+ results
From: ZmnSCPxj @ 2019-02-01 9:19 UTC (permalink / raw)
To: Matt Bell, Bitcoin Protocol Discussion
Good morning Matt Bell,
Thinking of this further, I observe that there are limits on the number of operations in a SCRIPT (I believe 201 non-push operations, and maybe a smaller number of CHECKSIG operations?).
This implies that the number of signatories of the sidechain funds in the mainchain are limited.
This is an important point of caution.
I am uncertain what is the best way to solve this issue.
---
In any case, I now present a design for a proof-of-mainstake sidechain, now without any modifications to Bitcoin mainchain.
---
I observe that a blockchain is, stripped to its barest minimum, nothing more than a Merklized singly-linked list.
Each block header is a node in a singly-linked list.
It commits to the previous block header, and also commits to the block data (traditionally a Merkle binary tree).
Via such block headers, a chain of blocks --- a blockchain --- is formed.
---
I observe that a (non-coinbase) transaction in Bitcoin refers to at least one existing transaction output.
A representation of that transaction output must be committed to in the transaction.
If we create single-input single-output transactions, a chain of transactions is formed.
---
Thus the idea: the sidechain *is* the transaction chain.
I observe that the mainchain *must* contain some UTXO(s) that are purportedly controlled by the sidechain rules.
It is also possible for the sidechain funds to be a single UTXO, with deposits and withdrawals requiring that this single UTXO be spent, in order to maintain the invariant that the sidechain funds are handled completely in a single UTXO.
In addition, it is possible for a transaction to commit to some data arbitrarily, either via `OP_RETURN`, or via some technique such as pay-to-contract (which reduces space utilization on the mainchain compared to `OP_RETURN`).
When we use the above technique (i.e. the sidechain only "owns" a single mainchain UTXO at each block of the mainchain):
1. Each transaction commits to the previous transaction (via spending the output of the previous transaction).
2. Each transaction may commit to some other data (via `OP_RETURN` or other technique).
I observe also that under a blockchain:
1. Each block header commits to the previous block header.
2. Each block header commits to the block data.
From this, the analogy is simple and obvious.
The sidechain "blockchain" *is* the transaction chain.
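The analogy can be sketched with minimal data structures (the field names here are hypothetical, and hashes are abbreviated to plain strings):

```python
from dataclasses import dataclass

# A blockchain is a Merklized singly-linked list of headers.
@dataclass
class BlockHeader:
    prev_header_hash: str  # commits to the previous block header
    block_data_hash: str   # commits to the block data (e.g. a Merkle root)

# A single-input single-output transaction forms the same structure.
@dataclass
class SidechainManagingTx:
    spent_outpoint: str    # commits to the previous transaction (by spending it)
    data_commitment: str   # commits to other data (OP_RETURN / pay-to-contract)

# Both chains link each node to its predecessor, so the transaction chain
# can serve directly as the sidechain "blockchain".
genesis = SidechainManagingTx(spent_outpoint="<initial sidechain UTXO>",
                              data_commitment="sidechain block 0 data")
```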
---
Under certain forms of proof-of-stake, the block must be signed by some set of signatories of the stakers.
Under transaction rules, the transaction must be signed according to the SCRIPT, and the SCRIPT may indicate that some set of signatories must sign.
Thus, it becomes possible to simply embed the sidechain block headers on the mainchain directly, by spending the output of the previous transaction (== sidechain block header).
This spend requires that the transaction be signed in order to authorize the spend.
However, these same signatures are in fact also the signatures that, under proof-of-stake, prove that a sufficient signatory set of the stakers has authorized a particular block of the proof-of-stake blockchain.
The magic here is that, on the mainchain, a transaction output may only be spent once.
Thus, nothing-at-stake and stake-grinding problems disappear.
---
Now let us introduce some details.
We have two mainchain-to-sidechain requests:
1. An indication that a mainchain coin owner wants to stake coins to the sidechain.
2. An indication that a mainchain coin owner wants to transfer coins to the sidechain.
From within the sidechain, sidechain-to-mainchain withdrawals must somehow be signalled, but that is up to the sidechain to define.
We shall ignore it here.
When a sidechain receives a request to add stake, then the current stakers create a mainchain transaction, spending the sidechain UTXO, plus the staked coins, and outputting the next sidechain UTXO (with the signatory set modified appropriately), plus a stake UTXO that locks the coins.
When a sidechain receives a request to transfer coins from mainchain to sidechain, then the current stakers create a mainchain transaction, spending the sidechain UTXO, plus the transferred coins, and outputting the next sidechain UTXO (with the same signatory set).
Multiple such requests can be processed for each transaction (i.e. sidechain block).
This simply consumes the sidechain UTXO, any stake or transfer coins, and creates the next sidechain UTXO and any stake UTXOs.
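A rough sketch of how the stakers might assemble the next sidechain managing transaction, per the steps above (the structures, field names, and fee-less accounting are illustrative assumptions, not a real Bitcoin transaction builder):

```python
def build_managing_tx(sidechain_utxo, stake_requests, transfer_requests,
                      signatory_set):
    """Consume the current sidechain UTXO plus any stake/transfer coins,
    and produce the next sidechain UTXO plus stake UTXOs."""
    inputs = [sidechain_utxo]
    inputs += [r["utxo"] for r in stake_requests]
    inputs += [r["utxo"] for r in transfer_requests]

    # The new signatory set includes the accepted incoming stakers.
    new_signatories = set(signatory_set) | {r["staker"] for r in stake_requests}

    total_in = sum(u["value"] for u in inputs)
    # Staked coins are locked in separate stake UTXOs (long CSV + staker sig).
    stake_outputs = [{"type": "stake", "staker": r["staker"],
                      "value": r["utxo"]["value"]} for r in stake_requests]
    staked = sum(o["value"] for o in stake_outputs)

    # Transferred coins are absorbed into the next sidechain UTXO.
    outputs = [{"type": "sidechain", "signatories": new_signatories,
                "value": total_in - staked}] + stake_outputs
    return {"inputs": inputs, "outputs": outputs}
```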
Now, the indication to stake is a UTXO with a special script.
It has two branches:
1. A signature from the current signatory set.
2. Or, `2 OP_CSV` and the staker signature.
The intent of the latter branch is to ensure that, if the current signatories ignore the incoming staker, the incoming staker can still recover its funds.
If the current set of stakers accepts the incoming staker (which requires both that they change the signatory set, and that they put the staked coins into a stake UTXO, which is simply a long CSV plus the staker signature), then the first branch is taken and the coin becomes an input of the sidechain block header (== sidechain managing transaction).
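The two-branch script for a staking request might look roughly as follows (the timeout value and exact opcode layout are illustrative assumptions, not from the original mail):

```
IF
    <current signatory set multisig check>   # branch 1: the stakers accept
                                             # the incoming staker
ELSE
    <timeout> CHECKSEQUENCEVERIFY DROP       # branch 2: after a CSV delay...
    <staker pubkey> CHECKSIG                 # ...the staker recovers its funds
ENDIF
```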
A similar technique is used for mainchain-to-sidechain transfers.
If the mainchain-to-sidechain transfer is ignored (either deliberately, or by an accident of disrupted communication from the mainchain transferrer to the sidechain network), the mainchain transferrer can recover its money and try again.
---
Ideally, every mainchain block would have a sidechain managing transaction (== sidechain block header).
Of course, we must consider fees.
Obviously the sidechain itself must charge fees within the sidechain.
Some fraction of those fees will be spent in order for the sidechain managing transaction to be confirmed on the mainchain.
Now it may happen that the sidechain managing transaction is not confirmed immediately on the mainchain.
The sidechain stakers (the signatory set) may elect to sacrifice even more of their fees to increase the fees of the sidechain managing transaction via RBF.
---
Now perhaps the sidechain may wish to have a faster (or more regular) block rate than the mainchain.
In such a case, it may maintain a "real" blockchain (i.e. headers that commit to a single previous header and to the block data).
Then each sidechain managing transaction would commit to the latest sidechain block header agreed upon by the stakers.
These sidechain blocks need not be signed; they only become "real" if they are committed to, directly or indirectly, in a sidechain managing transaction.
At each sidechain block, the signatory set (the stakers) create a sidechain managing transaction.
If it is immediately put into a mainchain block, then the next sidechain managing transaction spends it.
Otherwise, if it is not put into a mainchain block, then the stakers just recreate the sidechain managing transaction with RBF.
Regards,
ZmnSCPxj
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
@ 2019-02-01 9:36 99% ` ZmnSCPxj
0 siblings, 0 replies; 58+ results
From: ZmnSCPxj @ 2019-02-01 9:36 UTC (permalink / raw)
To: Anthony Towns; +Cc: Bitcoin Protocol Discussion
Good morning aj,
I certainly agree.
I hope that PSBT support becomes much, much, much more widespread.
Regards,
ZmnSCPxj
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, January 31, 2019 2:04 PM, Anthony Towns <aj@erisian•com.au> wrote:
> On Mon, Dec 24, 2018 at 11:47:38AM +0000, ZmnSCPxj via bitcoin-dev wrote:
>
> > A boutique protocol would reduce the number of existing onchain wallets that could be integrated in such UI.
>
> Seems like PSBT would be a sufficient protocol:
>
> 0) lightning node generates a PSBT for a new channel,
> with no inputs and a single output of the 2-of-2 address
>
> 1. wallet funds the PSBT but doesn't sign it, adding a change address
> if necessary, and could combine with other tx's bustapay style
>
> 2. lightning determines txid from PSBT, and creates update/settlement
> tx's for funding tx so funds can be recovered
>
> 3. wallet signs and publishes the PSBT
> 4. lightning sees tx on chain and channel is open
>
> That's a bit more convoluted than "(0) lightning generates an address and
> value, and creates NOINPUT update/settlement tx's for that address/value;
> (1) wallet funds address to exactly that value; (2) lightning monitors
> blockchain for payment to that address" of course.
>
> But it avoids letting users get into the habit of passing NOINPUT
> addresses around, or the risk of a user typo'ing the value and losing
> money immediately, and it has the benefit that the wallet can tweak the
> value if (eg) that avoids a change address or enhances privacy (iirc,
> c-lightning tweaks payment values for that reason). If the channel's
> closed cooperatively, it also avoids ever needing to publish a NOINPUT
> sig (or NOINPUT tagged output).
>
> Does that seem a fair trade off?
>
> Cheers,
> aj
>
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Predicate Tree in ZkVM: a variant of Taproot/G'root
@ 2019-02-01 17:56 99% ` Oleg Andreev
0 siblings, 0 replies; 58+ results
From: Oleg Andreev @ 2019-02-01 17:56 UTC (permalink / raw)
To: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 7595 bytes --]
A follow-up comment: I sent this email right before Pieter's talk on miniscript at Stanford yesterday. I want to express my appreciation of the thinking about scripts/contracts that Pieter, Andy, and Greg have been promoting for a long time. These ideas greatly influenced the design decisions in ZkVM: "blockchain as a court", very limited functionality and clarity of scripts, and, as Pieter laid out yesterday, composition of policies. These are the same values I'm trying to reflect in ZkVM, which is why I think it might be interesting to this mailing list.
Also, Neha Narula asked a question this morning:
> Isn't this a DoS vector, in that an attacker could generate a lot of expensive code to execute in the VM which would then be rejected once the checks get executed? If so, how critical is this deferred execution of point operations to your design?
The answer: hopefully it's not a DoS vector; we are working on this right now. Programs for `call` and `delegate` have to be statically built into the transaction bytecode string and cannot be constructed within the VM (so it's very similar to P2SH). ZkVM is similar to Bitcoin Script in that the execution cost is proportional to the program length: one cannot make a short program that uses loops or recursion over dynamically constructed programs to exhibit arbitrary validation cost. For those familiar with TxVM released last year: we are removing loops and dynamic program construction from ZkVM, and the gas-like "runlimit" along with them.
Another feature is inspired by an old proposal by Pieter (IIRC) to treat checksig as all-or-nothing. ZkVM does not do dynamic branching based on the outcomes of expensive operations: signature checks and predicate tree traversal all have to unconditionally succeed.
1. This makes the program execution (w/o ECC ops) very fast and proportional to the length of the program.
2. Then, all the collected ECC ops give precise metric of how expensive the rest of the validation would be.
3. Plus, the constraint system proof blob (that comes with the transaction) by itself gives an exact measurement of the bulletproofs validation cost.
The upstream protocol ("blockchain rules") can have soft- or hard-caps on both program length and the number of ECC operations (similar to the limit on sig checks per block in Bitcoin). That said, we haven't drilled into the specifics of what these caps should be and how they should be enforced; that's still in the works.
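As an illustration only, the cost structure described above (a fast linear pass plus precisely counted deferred work) could be modeled like this; all weights and names are invented for the sketch:

```python
def estimate_validation_cost(program, ecc_ops_collected, r1cs_proof_len,
                             per_byte=1, per_ecc_op=50, per_proof_byte=2):
    """Toy cost model: a cheap pass proportional to program length, plus
    exact counts of the deferred expensive work (batched ECC ops and the
    bulletproofs R1CS proof, whose size is known from the proof blob)."""
    fast_pass = per_byte * len(program)            # 1. linear in program length
    ecc_cost = per_ecc_op * ecc_ops_collected      # 2. known after the fast pass
    proof_cost = per_proof_byte * r1cs_proof_len   # 3. known from the blob itself
    return fast_pass + ecc_cost + proof_cost
```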
> On Jan 31, 2019, at 15:44 , Oleg Andreev <oleganza@gmail•com> wrote:
>
> Hi,
>
> We've been working for a thing called ZkVM [1] for the last few weeks. It is a "blockchain virtual machine" in the spirit of Bitcoin, with multi-asset transfers and zero-knowledge programmable constraints.
>
> As a part of its design, there is a "Predicate Tree" — a variant of Taproot by Greg Maxwell [2] and G'root by Anthony Towns [3] that I would like to present here. Hopefully it is useful to the discussion, and I would appreciate any feedback.
>
> ## Background
>
> In ZkVM there are linear types Contract and Value (in addition to plain data types), where Contract also implements "object capabilities" pattern: Contract "locks" a collection of data and Values under a "predicate" which is represented by a single group element ("point" in ECC terms). The predicate can be "satisfied" in a number of allowed ways which makes the contract unlock its contents, e.g. release the stored Value which can then be locked in a new unspent output.
>
> ## Predicate Tree
>
> Predicate is a point that represents one of three things, which allows composing conditions in an arbitrary tree:
>
> 1. Public key
> 2. Program
> 3. Disjunction of two other predicates
>
> Public key allows representing N-of-N signing conditions (and M-of-N with proper threshold key setup, although small combinations like 2-of-3 can be non-interactively represented as a tree of 3 combinations of 2-of-2 conditions):
>
> P = x*B (x is a secret, B is a primary generator point)
>
> Program commitment is a P2SH-like commitment:
>
> P = hash2scalar(program)*B2 (B2 is orthogonal to B, so one cannot sign for P, but must reveal the program)
>
> Disjunction (asymmetric to allow happy-path signing with the left predicate):
>
> P = L + hash2scalar(L,R)*B
>
>
> ## VM instructions
>
> To use the predicate trees, ZkVM provides 4 instructions:
>
> 1. `signtx` to verify the signature over the transaction ID treating the predicate as a pubkey.
> 2. `call` to reveal the committed program and execute it.
> 3. `left`/`right` to replace the contract's predicate with one of the sub-predicates in a disjunction.
> 4. `delegate` to check a signature over a program and execute that program (pay-to-signed-program pattern).
>
> More details are in the ZkVM spec: https://github.com/interstellar/zkvm/blob/main/spec/ZkVM.md#signtx
>
> `call` and `delegate` differ in that `call` reveals and runs a pre-arranged program (like in P2SH), while `delegate` allows choosing the program later which can be signed with a pre-arranged public key. `delegate` also enables use cases for SIGHASH: if a specific output or outputs or constraints must be signed, they can be represented by such program snippet. Likewise, a "revocation token" for the payment channel (LN) can be implemented with `delegate` instruction.
>
>
> ## Performance
>
> For performance, the following rules are built into ZkVM:
>
> 1. All point operations are deferred. Signature checks, disjunction proofs, program commitment proofs - are not executed right away, but deferred and verified in a batch after the VM execution is complete. This enables significant savings, especially since half or third of the terms reuse static points B and B2.
> 2. `signtx` does not accept individual signatures, but uses a single aggregated signature for the whole transaction. All the pubkeys are remembered in a separate set and combined via MuSig-style [4] protocol to check the single 64-byte signature over txid in the end of the VM execution. In other words, signature aggregation is not optional for `signtx` (signatures over txid). Note: the `delegate` instruction permits arbitrary programs, so it uses one signature per program.
>
>
> ## What is different from Taproot/G'root
>
> (1) No pure hash-based MAST: each time one peels off a layer of a tree, there's an ECC check which is more expensive than pure-hash merkle path check, but makes the design simpler and all ECC ops are batched alongside bulletproofs R1CS verification statement, making the performance difference unimportant.
>
> (2) There is no designated blinding factor or a pubkey with the program commitment like in G'root. This is not something i'm particularly sure about, but will lay out the current rationale:
> 1. The happy-path one-time-key normally acts as a sufficient blinding factor for the program.
> 2. If the program needs a blinding factor, it can be embedded as a `<garbage> drop`.
> 3. The combo of "sign + programmatic constraints" is done by having instructions inside the program that wrap the value(s) in a transient contract with the required pubkey and leaving it on the stack.
>
>
> ## References
>
> [1] https://github.com/interstellar/zkvm
> [2] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html
> [3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016249.html
> [4] https://blockstream.com/2018/01/23/musig-key-aggregation-schnorr-signatures/
>
>
>
^ permalink raw reply [relevance 99%]
* [bitcoin-dev] Card Shuffle To Bitcoin Seed
@ 2019-02-02 19:51 99% rhavar
2019-02-04 6:49 99% ` Adam Ficsor
2019-02-04 21:05 99% ` James MacWhyte
0 siblings, 2 replies; 58+ results
From: rhavar @ 2019-02-02 19:51 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2454 bytes --]
More of a shower-thought than a BIP, but it's something I've long wished (hardware) wallets supported:
---
Abstract: Bitcoin wallets generally ask us to trust that their seed generation is both correct and honest. Especially for hardware and air-gapped wallets, this is both a big ask and more or less impossible to practically verify. So we propose a bring-your-own-entropy approach in which the wallet can function completely deterministically. Our method is based on shuffling a physical deck of cards. There are 52! (≈2^225.58) different shuffle orders, which is a big enough space to be secure against collision and brute-force attacks. Conveniently, a shuffled deck of cards can also serve as a physical backup which is easy to hide in plain sight with great plausible deniability.
Representation:
Each card has a suit which can be represented by one of SCHD (spades, clubs, hearts, diamonds) and a value of one of 23456789TJQKA where the numbers are obvious and (T=ten, J=jack, Q=queen, K=king, A=ace) so "7 of clubs" would be represented by "7C" and a "Ten of Hearts" would be represented with "TH".
A deck of cards looks like:
2S,3S,4S,5S,6S,7S,8S,9S,TS,JS,QS,KS,AS,2C,3C,4C,5C,6C,7C,8C,9C,TC,JC,QC,KC,AC,2H,3H,4H,5H,6H,7H,8H,9H,TH,JH,QH,KH,AH,2D,3D,4D,5D,6D,7D,8D,9D,TD,JD,QD,KD,AD
And can be verified by making sure that every one of the 52 cards appears exactly once.
Step 1. Shuffle your deck of cards
This is a lot harder than you'd imagine, so do it quite a few times, with quite a few different techniques. It is advised to do at *least* 7 good-quality shuffles to achieve a truly cryptographically secure shuffle. Do not look at the cards while shuffling (to avoid biasing), and don't be afraid to also shuffle them face down on the table. Err on the side of over-shuffling.
See also: https://en.wikipedia.org/wiki/Shuffling#Sufficient_number_of_shuffles
Step 2. Write out the order (comma separated)
An example shuffle is:
5C,7C,4C,AS,3C,KC,AD,QS,7S,2S,5H,4D,AC,9C,3H,6H,9D,4S,8D,TD,2H,7H,JD,QD,2D,JC,KH,9S,9H,4H,6C,7D,3D,6S,2C,AH,QC,TH,TC,JS,6D,8H,8C,JH,8S,KD,QH,5D,5S,KS,TS,3S
Step 3. Sha512 it to create a seed
In the example above you should get:
dc04e4c331b1bd347581d4361841335fe0b090d39dfe5e1c258c547255cd5cf1545e2387d8a7c4dc53e03cacca049a414a9269a2ac6954429955476c56038498
Step 4. Interpret it
e.g. For BIP32 you would treat the first 32 bytes as the private key, and the second 32 bytes as the chain code.
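The steps above can be sketched in Python (that the hash input is exactly the comma-separated ASCII string, with no trailing newline, is an assumption):

```python
import hashlib

def deck():
    # New-deck order: 2S..AS, 2C..AC, 2H..AH, 2D..AD
    return [v + s for s in "SCHD" for v in "23456789TJQKA"]

def shuffle_to_seed(shuffle_csv):
    cards = shuffle_csv.split(",")
    # Verify every one of the 52 cards appears exactly once.
    assert sorted(cards) == sorted(deck()), "not a valid 52-card shuffle"
    # Step 3: SHA-512 of the written-out order.
    digest = hashlib.sha512(shuffle_csv.encode("ascii")).digest()
    # Step 4 (BIP32-style): first 32 bytes -> key, last 32 -> chain code.
    return digest[:32], digest[32:]

key, chain_code = shuffle_to_seed(
    "5C,7C,4C,AS,3C,KC,AD,QS,7S,2S,5H,4D,AC,9C,3H,6H,9D,4S,8D,TD,2H,7H,"
    "JD,QD,2D,JC,KH,9S,9H,4H,6C,7D,3D,6S,2C,AH,QC,TH,TC,JS,6D,8H,8C,JH,"
    "8S,KD,QH,5D,5S,KS,TS,3S")
```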
-Ryan
^ permalink raw reply [relevance 99%]
* [bitcoin-dev] BIP157 server Murmel introduced, enhancement suggestion to BIP158
@ 2019-02-03 20:15 99% Tamas Blummer
0 siblings, 0 replies; 58+ results
From: Tamas Blummer @ 2019-02-03 20:15 UTC (permalink / raw)
To: bitcoin-dev; +Cc: jimpo
TLDR: I suggest adding an outpoint filter to BIP158, as it proved to be useful while developing a filter server and allows further checks in the filter client.
Murmel is my project within the rust-bitcoin community. https://github.com/rust-bitcoin/murmel
Its goal is to provide a lightweight, at least SPV security, settlement layer for the Lightning Network implementation in Rust. https://github.com/rust-bitcoin/rust-lightning
Murmel relies on BIP157 (Client Side Block Filtering). Since Bitcoin Core does not yet support this protocol extension, I also added filter and block server functionality to Murmel, and this might be useful for development purposes of any other BIP157 client project.
You may compile and run a Murmel filter server to support your client development. It bootstraps within a few hours. Follow the instructions at: https://github.com/rust-bitcoin/murmel
While implementing both client and server side I made an observation that should be considered for BIP158:
BIP158 specifies a base filter containing all scripts spent or created by a block (except those with OP_RETURN). I found it useful to also compute a filter on spent and created outpoints.
The Murmel filter server consults these outpoint filters to find the transactions with the spent scripts while computing the base script filter. Since outpoints are usually spent shortly after being created, this approach works well enough to keep up with the blockchain, although it is far too slow to rely on while bootstrapping. An advantage of this approach to looking up UTXOs is that there is nothing to be recomputed at re-org; filters are consulted following the path from the current tip back to genesis. This fits well with Murmel's storage, which is my other project Hammersbald https://github.com/rust-bitcoin/hammersbald, a highly efficient, truly append-only blockchain store in Rust.
Filter matching is also nicely parallelizable, looking up subsets of spent outputs in parallel.
A lightweight client can use outpoint filters to efficiently validate spent coins or miner reward, which goes beyond SPV guarantees. This is probabilistically possible now, and definitely once filters are committed.
For the above reasons I suggest also adding an outpoint filter to BIP158, so filter servers may support it, as Murmel does. Murmel is moving quickly; I tagged the version as of this mail with DEVLIST for later reference.
Tamas Blummer
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Card Shuffle To Bitcoin Seed
2019-02-02 19:51 99% [bitcoin-dev] Card Shuffle To Bitcoin Seed rhavar
@ 2019-02-04 6:49 99% ` Adam Ficsor
2019-02-04 21:05 99% ` James MacWhyte
1 sibling, 0 replies; 58+ results
From: Adam Ficsor @ 2019-02-04 6:49 UTC (permalink / raw)
To: rhavar, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 3161 bytes --]
Unlike mouse movement, it works in CLI software, which is great. However,
isn't there something else you can use instead of cards? Something
culture-invariant and perhaps more common.
On Sun, Feb 3, 2019 at 7:27 PM Ryan Havar via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:
> More of a shower-thought than a BIP, but it's something I've long wish
> (hardware) wallets supported:
>
> ---
>
> Abstract: Bitcoin Wallets generally ask us to trust their seed generation
> is both correct and honest. Especially for hardware and air gapped wallets,
> this is both a big ask and more or less impossible to practically verify.
> So we propose a bring-your-own-entropy approach in which the wallet can
> function completely deterministically. Our method is based on shuffling
> physical deck of cards. There are 52! (2^219.88) different shuffle order,
> which is a big enough space to be secure against collision and brute force
> attacks. Conveniently a shuffled deck of cards also can serve as a physical
> backup which is easy to hide in plain sight with great plausible
> deniability.
>
>
> Representation:
>
> Each card has a suit which can be represented by one of SCHD (spades,
> clubs, hearts, diamonds) and a value of one of 23456789TJQKA where the
> numbers are obvious and (T=ten, J=jack, Q=queen, K=king, A=ace) so "7 of
> clubs" would be represented by "7C" and a "Ten of Hearts" would be
> represented with "TH".
>
> An deck of cards looks like:
>
>
> 2S,3S,4S,5S,6S,7S,8S,9S,TS,JS,QS,KS,AS,2C,3C,4C,5C,6C,7C,8C,9C,TC,JC,QC,KC,AC,2H,3H,4H,5H,6H,7H,8H,9H,TH,JH,QH,KH,AH,2D,3D,4D,5D,6D,7D,8D,9D,TD,JD,QD,KD,AD
>
> And can be verified by making sure that every one of the 52 cards appears
> exactly once.
>
>
> Step 1. Shuffle your deck of cards
>
> This is a lot harder than you'd imagine, so do it quite a few times, with
> quite a few different techniques. It is advised to do at *least* 7 good
> quality shuffles to achieve a true cryptographically secure shuffle. Do not
> look at the cards while shuffling (to avoid biasing) and don't be afraid to
> also shuffle them face down on the table. Err on the side over
> over-shuffling.
> See also:
> https://en.wikipedia.org/wiki/Shuffling#Sufficient_number_of_shuffles
>
> Step 2. Write out the order (comma separated)
>
> And example shuffle is:
>
>
> 5C,7C,4C,AS,3C,KC,AD,QS,7S,2S,5H,4D,AC,9C,3H,6H,9D,4S,8D,TD,2H,7H,JD,QD,2D,JC,KH,9S,9H,4H,6C,7D,3D,6S,2C,AH,QC,TH,TC,JS,6D,8H,8C,JH,8S,KD,QH,5D,5S,KS,TS,3S
>
> Step 3. Sha512 it to create a seed
>
> In the example above you should get:
>
> dc04e4c331b1bd347581d4361841335fe0b090d39dfe5e1c258c547255cd5cf1545e2387d8a7c4dc53e03cacca049a414a9269a2ac6954429955476c56038498
>
> Step 4. Interpret it
>
> e.g. For bip32 you would treat the first 32 bytes as the private key, and
> the second 32 bytes as as the extension code.
>
>
>
>
> -Ryan
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists•linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
--
Best,
Ádám
^ permalink raw reply [relevance 99%]
* [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
@ 2019-02-04 11:41 99% Tamas Blummer
2019-02-04 20:18 99% ` Jim Posen
0 siblings, 1 reply; 58+ results
From: Tamas Blummer @ 2019-02-04 11:41 UTC (permalink / raw)
To: bitcoin-dev, Olaoluwa Osuntokun, jimpo, alex
TLDR: a change to BIP158 would allow deciding which filter chain is correct at lower bandwidth use
Assume there is a BIP157 client that learned a filter header chain earlier and is now offered an alternate reality by a newly connected BIP157 server.
The client notices the alternate reality by routinely asking for filter chain checkpoints after connecting to a new BIP157 server. A divergence at a checkpoint means that the server disagrees with the client's history at or before the first diverging checkpoint. The client would then request the filter headers between the last matching and first divergent checkpoint, quickly figure out which block's filter is the first that does not match the previous assumption, and request that filter from the server.
The client downloads the corresponding block, checks that its header fits the PoW-secured best header chain, re-calculates the merkle root of its transaction list to know that it is complete, and queries the filter to see if every output script of every transaction is contained in there; if not, the server is lying, the case is closed, and the server is disconnected.
Having all output scripts in the filter does not however guarantee that the filter is correct, since it might omit input scripts. Input scripts are not part of the downloaded block, but are in some blocks before it. Checking those is out of reach for a lightweight client with the tools given by the current BIP.
A remedy here would be another filter chain over created and spent outpoints, as is implemented currently by Murmel. The outpoint filter chain must offer a match for every spent output of the block with the divergent filter; otherwise the interrogated server is lying, since a PoW-secured block cannot spend coins out of nowhere. Doing this check would already force the client to download the outpoint filter history up to the point of divergence. Then the client would have to download and PoW-check every block that shows a match in outpoints until it finds that one of the spent outputs has a script that was not in the server's filter, in which case the server is lying. If everything checks out, then the previous assumption on filter history was incorrect and should be replaced by the history offered by the interrogated server.
As you see, the interrogation works with this added filter but is highly inefficient. A really light client should not be forced to download lots of blocks just to uncover a lying filter server. This would actually be an easy DoS on light BIP157 clients.
A better solution is a change to BIP158 such that the only filter contains created scripts and spent outpoints. It appears to me that this would serve both wallets and interrogation of filter servers well:
Wallets would recognize payments to their addresses through the filter, as output scripts are included; spends from the wallet would be recognized since a wallet already knows the outpoints of its previously received coins, so it can query the filters for them.
Interrogation of a filter server also simplifies, since the filter of a block can be checked entirely against the contents of the same block. The decision on filter correctness does not require more bandwidth than the download of one block at the mismatching checkpoint. The client could at most be forced to download 1/1000th of the blockchain in addition to the filter header history.
Therefore I suggest changing BIP158 to have a base filter, defined as:
A basic filter MUST contain exactly the following items for each transaction in a block:
• Spent outpoints
• The scriptPubKey of each output, aside from all OP_RETURN output scripts.
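A sketch of collecting the proposed filter items (the transaction representation is a mock; real code would serialize outpoints and scripts as BIP158 specifies before Golomb coding them):

```python
OP_RETURN = 0x6a

def basic_filter_items(block_txs):
    """Collect the proposed basic filter items for a block: every spent
    outpoint plus every output scriptPubKey, excluding OP_RETURN outputs."""
    items = set()
    for tx in block_txs:
        for outpoint in tx["inputs"]:   # spent outpoints (None = coinbase input)
            if outpoint is not None:
                items.add(outpoint)
        for script in tx["outputs"]:    # created output scripts
            if script and script[0] != OP_RETURN:
                items.add(script)
    return items
```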
Tamas Blummer
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-04 11:41 99% [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal Tamas Blummer
@ 2019-02-04 20:18 99% ` Jim Posen
2019-02-04 20:59 99% ` Tamas Blummer
0 siblings, 1 reply; 58+ results
From: Jim Posen @ 2019-02-04 20:18 UTC (permalink / raw)
To: Tamas Blummer, Bitcoin Protocol Discussion; +Cc: Jim Posen
[-- Attachment #1: Type: text/plain, Size: 6817 bytes --]
Please see the thread "BIP 158 Flexibility and Filter Size" from 2018
regarding the decision to remove outpoints from the filter [1].
Thanks for bringing this up though, because more discussion is needed on
the client protocol given that clients cannot reliably determine the
integrity of a block filter in a bandwidth-efficient manner (due to the
inclusion of input scripts).
I see three possibilities:
1) Introduce a new P2P message to retrieve all prev-outputs for a given
block (essentially the undo data in Core), and verify the scripts against
the block by executing them. While this permits some forms of input script
malleability (and thus cannot discriminate between all valid and invalid
filters), it restricts what an attacker can do. This was proposed by Laolu
AFAIK, and I believe this is how btcd is proceeding.
2) Clients track multiple possible filter header chains and essentially
consider the union of their matches. So if any filter received for a
particular block header matches, the client downloads the block. The client
can ban a peer if they 1) ever return a filter omitting some data that is
observed in the downloaded block, 2) repeatedly serve filters that trigger
false positive block downloads where such a number of false positives is
statistically unlikely, or 3) repeatedly serve filters that are
significantly larger than the expected size (essentially padding the actual
filters with garbage to waste bandwidth). I have not done the analysis yet,
but we should be able to come up with some fairly simple banning heuristics
using Chernoff bounds. The main downside is that the client logic to track
multiple possible filter chains and filters per block is more complex and
bandwidth increases if connected to a malicious server. I first heard about
this idea from David Harding.
3) Rush straight to committing the filters into the chain (via witness
reserved value or coinbase OP_RETURN) and give up on the pre-softfork BIP
157 P2P mode.
I'm in favor of option #2 despite the downsides since it requires the
smallest number of changes and is supported by the BIP 157 P2P protocol as
currently written. (Though the recommended client protocol in the BIP needs
to be updated to account for this). Another benefit of it is that it
removes some synchronicity assumptions where a peer with the correct
filters keeps timing out and is assumed to be dishonest, while the
dishonest peer is assumed to be OK because it is responsive.
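The banning heuristic that option #2 calls for can be sketched with a standard Chernoff-Hoeffding bound. This is a minimal sketch, not anything specified by BIP 157/158: the ban threshold is an illustrative assumption, and the per-item false-positive rate is the approximate 1/M rate of BIP 158's Golomb-coded sets.

```python
import math

# BIP 158's basic filter uses Golomb-coded sets with M = 784931,
# i.e. a per-item false-positive rate of roughly 1/784931.
BIP158_FP_RATE = 1.0 / 784931

def fp_ban_score(n_queries, n_false_positives, fp_rate=BIP158_FP_RATE):
    """Chernoff-Hoeffding upper bound on the probability that an honest
    server's filters yield at least n_false_positives false-positive
    block downloads out of n_queries filter matches."""
    a = n_false_positives / n_queries
    if a <= fp_rate:
        return 1.0  # no more false positives than expected
    if a >= 1.0:
        kl = math.log(1.0 / fp_rate)  # every query was a false positive
    else:
        # KL divergence between Bernoulli(a) and Bernoulli(fp_rate)
        kl = (a * math.log(a / fp_rate)
              + (1 - a) * math.log((1 - a) / (1 - fp_rate)))
    return math.exp(-n_queries * kl)

def should_ban(n_queries, n_false_positives, threshold=1e-6):
    """Ban a peer whose false-positive count is statistically implausible."""
    return fp_ban_score(n_queries, n_false_positives) < threshold
```

The bound decays exponentially in the number of queries, so a peer padding its filters with garbage is identified quickly while an honest peer's occasional false positive stays well above the threshold.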
If anyone has other ideas, I'd love to hear them.
-jimpo
[1]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016057.html
On Mon, Feb 4, 2019 at 10:53 AM Tamas Blummer via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:
> TLDR: a change to BIP158 would allow deciding which filter chain is
> correct at lower bandwidth use
>
> Assume there is a BIP157 client that learned a filter header chain earlier
> and is now offered an alternate reality by a newly connected BIP157 server.
>
> The client notices the alternate reality by routinely asking for filter
> chain checkpoints after connecting to a new BIP157 server. A divergence at
> a checkpoint means that the server disagrees with the client's history at or
> before the first diverging checkpoint. The client would then request the
> filter headers between the last matching and first divergent checkpoint,
> and quickly figure out which block’s filter is the first that does not match
> the previous assumption, and request that filter from the server.
>
> The client downloads the corresponding block, checks that its header fits
> the PoW-secured best header chain, re-calculates the merkle root of its
> transaction list to know that it is complete, and queries the filter to see
> whether every output script of every transaction is contained in it; if not,
> the server is lying, the case is closed, and the server is disconnected.
>
> Having all output scripts in the filter does not however guarantee that
> the filter is correct, since it might omit input scripts. Input scripts are
> not part of the downloaded block, but of earlier blocks. Checking those is
> out of reach for a lightweight client with the tools given by the current
> BIP.
>
> A remedy here would be another filter chain on created and spent
> outpoints, as currently implemented by Murmel. The outpoint filter chain
> must offer a match for every spent output of the block with the divergent
> filter, otherwise the interrogated server is lying since a PoW secured
> block cannot spend coins out of nowhere. Doing this check would already
> force the client to download the outpoint filter history up to the point of
> divergence. Then the client would have to download and PoW-check every
> block that shows a match in outpoints until it finds that one of the
> spent outputs has a script that was not in the server’s filter, in which
> case the server is lying. If everything checks out then the previous
> assumption on filter history was incorrect and should be replaced by the
> history offered by the interrogated server.
>
> As you see, the interrogation works with this added filter but is highly
> inefficient. A really light client should not be forced to download lots of
> blocks just to uncover a lying filter server. This would actually be an
> easy DoS on light BIP157 clients.
>
> A better solution is a change to BIP158 such that the only filter contains
> created output scripts and spent outpoints. It appears to me that this
> would serve both wallets and the interrogation of filter servers well:
>
> Wallets would recognize payments to their addresses by the filter, as
> output scripts are included; spends from the wallet would be recognized
> since a wallet already knows the outpoints of its previously received
> coins, so it can query the filters for them.
>
> Interrogation of a filter server also simplifies, since the filter of a
> block can be checked entirely against the contents of the same block. The
> decision on filter correctness requires no more bandwidth than the download
> of a block at the mismatching checkpoint. The client would at most be forced
> to download 1/1000th of the blockchain in addition to the filter
> header history.
>
> Therefore I suggest changing BIP158 to have a base filter, defined as:
>
> A basic filter MUST contain exactly the following items for each
> transaction in a block:
> • Spent outpoints
> • The scriptPubKey of each output, aside from all OP_RETURN output
> scripts.
>
> Tamas Blummer
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists•linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
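The checkpoint interrogation described above can be sketched as follows. This is a minimal sketch: the header lists are assumed to have been fetched already (via BIP 157 `getcfcheckpt` and `getcfheaders` messages), and the height arithmetic is simplified to a flat 1,000-header interval per checkpoint.

```python
CHECKPOINT_INTERVAL = 1000  # BIP 157 serves a checkpoint every 1,000 filter headers

def first_divergent_checkpoint(known, offered):
    """Index of the first checkpoint at which a newly connected server's
    filter header chain diverges from the one the client learned earlier,
    or None if they agree over the shared range."""
    for i, (ours, theirs) in enumerate(zip(known, offered)):
        if ours != theirs:
            return i
    return None

def first_divergent_height(known_headers, offered_headers, divergent_cp):
    """Given the filter headers for the interval between the last matching
    and the first divergent checkpoint (from the client's records and from
    the server), return the height of the first block whose filter header
    does not match the previous assumption. Interval k is taken to cover
    heights [k * 1000, (k + 1) * 1000)."""
    base = divergent_cp * CHECKPOINT_INTERVAL
    for i, (ours, theirs) in enumerate(zip(known_headers, offered_headers)):
        if ours != theirs:
            return base + i
    return None
```

The client then requests the filter for that single height and checks it against the downloaded block, as the message describes.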
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-04 20:18 99% ` Jim Posen
@ 2019-02-04 20:59 99% ` Tamas Blummer
2019-02-05 1:42 99% ` Olaoluwa Osuntokun
0 siblings, 1 reply; 58+ results
From: Tamas Blummer @ 2019-02-04 20:59 UTC (permalink / raw)
To: Jim Posen; +Cc: Bitcoin Protocol Discussion, Jim Posen
I participated in that discussion in 2018, but did not then have the insight I have since gained through writing both client and server implementations of BIP157/158.
Pieter Wuille considered the design choice I am now suggesting here as alternative (a) in: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016064.html
In his evaluation he recognized that a filter holding spent outpoints and output scripts would allow deciding filter correctness from the block alone.
He did not evaluate its usefulness in the context of checkpoints, which I think are an important shortcut here.
Yes, a filter collecting input and output scripts is shorter if script re-use is frequent, but I showed back in 2018 in the same thread that this saving is not significant in recent history, as address reuse is no longer that frequent.
A filter on spent outpoints is just as useful for wallets as one on spent scripts, since wallets naturally scan the blockchain forward and thereby learn about their coins by the output script before they need to check spends of those outpoints.
It seems to me that implementing an interrogation by downloading blocks at checkpoints where needed is much simpler than following multiple possible filter paths.
A spent outpoint filter also allows us to decide on coin availability based on an immutable store, without a UTXO store that must be updated and occasionally rolled back. Availability can be decided by following the filter chain from the current tip back toward genesis and
checking whether the outpoint was spent earlier. False positives can be sorted out with a block download. Murmel implements this when running in server mode, where the blocks are already there.
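The availability check just described can be sketched with plain sets standing in for the Golomb-coded filters; a real client would query GCS filters, treat every match as probabilistic, and resolve it with a block download, which `download_spends` stands in for here.

```python
def coin_unspent(filters, origin_height, outpoint, download_spends):
    """Decide whether `outpoint` (created at origin_height) is still unspent
    by walking the per-block spent-outpoint filters from the block after its
    origin up to the tip. A filter match is only a probable spend, so each
    match is confirmed against the actual block; download_spends(height)
    stands in for downloading that block and collecting its spent outpoints."""
    for height in range(origin_height + 1, len(filters)):
        if outpoint in filters[height]:              # probabilistic match
            if outpoint in download_spends(height):  # rule out false positive
                return False                         # confirmed spent here
    return True
```

This needs only an append-only filter store and the occasional block download, with no UTXO set to maintain or unwind.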
Therefore I ask for a BIP change based on better insight gained through implementation.
Tamas Blummer
* Re: [bitcoin-dev] Card Shuffle To Bitcoin Seed
2019-02-02 19:51 99% [bitcoin-dev] Card Shuffle To Bitcoin Seed rhavar
2019-02-04 6:49 99% ` Adam Ficsor
@ 2019-02-04 21:05 99% ` James MacWhyte
2019-02-05 1:37 99% ` Devrandom
1 sibling, 1 reply; 58+ results
From: James MacWhyte @ 2019-02-04 21:05 UTC (permalink / raw)
To: rhavar, Bitcoin Protocol Discussion
James
On Sun, Feb 3, 2019 at 10:27 AM Ryan Havar via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:
> Conveniently a shuffled deck of cards also can serve as a physical backup
> which is easy to hide in plain sight with great plausible deniability.
>
To make sure someone doesn't play with your cards and mix up the order, use
a permanent marker to draw a diagonal line on the side of the deck from
corner to corner. If the cards ever get mixed up, you can put them back in
order by making sure the diagonal line matches up.
* Re: [bitcoin-dev] Card Shuffle To Bitcoin Seed
2019-02-04 21:05 99% ` James MacWhyte
@ 2019-02-05 1:37 99% ` Devrandom
2019-02-06 13:48 99% ` Alan Evans
0 siblings, 1 reply; 58+ results
From: Devrandom @ 2019-02-05 1:37 UTC (permalink / raw)
To: James MacWhyte, Bitcoin Protocol Discussion
I would suggest 50+ 6-sided dice rolls, giving about 128 bits of entropy.
Compared to a shuffle, it's easier to be sure that you got the right amount
of entropy, even if the dice are somewhat biased.
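As a quick check of the arithmetic behind both suggestions:

```python
import math

def dice_entropy_bits(rolls, sides=6):
    """Entropy of independent fair dice rolls: rolls * log2(sides)."""
    return rolls * math.log2(sides)

def shuffle_entropy_bits(cards=52):
    """Entropy of a uniformly random deck order: log2(cards!)."""
    return math.log2(math.factorial(cards))

# 50 fair d6 rolls give about 129.2 bits, just over the 128-bit target;
# a uniformly shuffled 52-card deck carries about 225.6 bits.
```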
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-04 20:59 99% ` Tamas Blummer
@ 2019-02-05 1:42 99% ` Olaoluwa Osuntokun
2019-02-05 12:21 99% ` Matt Corallo
2019-02-05 20:10 99% ` Tamas Blummer
0 siblings, 2 replies; 58+ results
From: Olaoluwa Osuntokun @ 2019-02-05 1:42 UTC (permalink / raw)
To: Tamas Blummer; +Cc: Jim Posen, Bitcoin Protocol Discussion
Hi Tamas,
This is how the filter worked before the switch over to optimize for a
filter containing the minimal items needed for a regular wallet to function.
When this was proposed, I had already implemented the entire proposal from
wallet to full-node. At that point, we all more or less decided that the
space savings (along with intra-block compression) were worthwhile, we
weren't cutting off any anticipated application level use cases (at that
point we had already comprehensively integrated both filters into lnd), and
that once committed the security loss would disappear.
I think it's too late into the current deployment of the BIPs to change
things around yet again. Instead, the BIP already has measures in place for
adding _new_ filter types in the future. This, along with a few other
candidates, may be a worthwhile addition as a new filter type.
-- Laolu
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-05 1:42 99% ` Olaoluwa Osuntokun
@ 2019-02-05 12:21 99% ` Matt Corallo
2019-02-06 0:05 99% ` Olaoluwa Osuntokun
2019-02-05 20:10 99% ` Tamas Blummer
1 sibling, 1 reply; 58+ results
From: Matt Corallo @ 2019-02-05 12:21 UTC (permalink / raw)
To: Olaoluwa Osuntokun, Bitcoin Protocol Discussion, Tamas Blummer; +Cc: Jim Posen
On 2/4/19 8:18 PM, Jim Posen via bitcoin-dev wrote:
- snip -
> 1) Introduce a new P2P message to retrieve all prev-outputs for a given
> block (essentially the undo data in Core), and verify the scripts
> against the block by executing them. While this permits some forms of
> input script malleability (and thus cannot discriminate between all
> valid and invalid filters), it restricts what an attacker can do. This
> was proposed by Laolu AFAIK, and I believe this is how btcd is
proceeding.
I'm somewhat confused by this - how does the undo data help you without
seeing the full (mistate compressed) transaction? In (the realistic)
threat model where an attacker is trying to blind you from some output,
they can simply give you "undo data" where scriptPubKeys are OP_TRUE
instead of the real script and you'd be none the wiser.
On 2/5/19 1:42 AM, Olaoluwa Osuntokun via bitcoin-dev wrote:
- snip -
> I think it's too late into the current deployment of the BIPs to change
> things around yet again. Instead, the BIP already has measures in place for
> adding _new_ filter types in the future. This along with a few other filter
> types may be worthwhile additions as new filter types.
- snip -
Huh? I don't think we should seriously consider
only-one-codebase-has-deployed-anything-with-very-limited-in-the-wild-use
as "too late into the current deployment"?
Matt
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-05 1:42 99% ` Olaoluwa Osuntokun
2019-02-05 12:21 99% ` Matt Corallo
@ 2019-02-05 20:10 99% ` Tamas Blummer
2019-02-06 0:17 99% ` Olaoluwa Osuntokun
1 sibling, 1 reply; 58+ results
From: Tamas Blummer @ 2019-02-05 20:10 UTC (permalink / raw)
To: Olaoluwa Osuntokun; +Cc: Jim Posen, Bitcoin Protocol Discussion
Hi Laolu,
The only advantage I see in the current design choice is filter size, but even that is less
impressive in recent history and going forward, as address re-use is much less frequent nowadays
than it was in Bitcoin’s early days.
I calculated total filter sizes since block 500,000:
input script + output script (current BIP): 1.09 GB
spent outpoint + output script: 1.26 GB
Both filters are equally useful for a wallet to discover relevant transactions, but the current design
choice seriously limits, and practically disables, a light client's ability to prove that the filter is correct.
Clear advantages of moving to spent outpoint + output script filter:
1. Filter correctness can be proven by downloading the block in question only.
2. Calculation of the filter on server side does not need UTXO.
3. Spent outpoints in the filter enable light clients to do further probabilistic checks, and even more once committed.
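Advantage 1 can be sketched in a few lines: with spent outpoints plus output scripts, the expected item set is recomputable from the block alone. Plain sets stand in for the Golomb-coded set here, and the block is a simplified dict rather than consensus serialization.

```python
OP_RETURN = 0x6a  # first opcode of an OP_RETURN output script

def expected_filter_items(block):
    """Recompute the proposed filter's item set from the block alone:
    all spent outpoints plus every non-OP_RETURN output script."""
    items = set()
    for tx in block["txs"]:
        items.update(tx["spent_outpoints"])  # empty for the coinbase
        items.update(script for script in tx["output_scripts"]
                     if script[0] != OP_RETURN)
    return items

def filter_is_correct(block, served_items):
    """With this filter content, correctness is decidable from the block
    itself; no prev-out data from earlier blocks is needed."""
    return served_items == expected_filter_items(block)
```

Under the current BIP 158 content (input scripts instead of spent outpoints), the analogous check is impossible without fetching the previous transactions, which is the whole point of the proposed change.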
The current design choice offers lower security than is now attainable. This certainly improves with
a commitment, but that is not even on the roadmap yet, or is it?
Should a filter containing spent outpoints be committed, then such a filter would be even more useful:
A client could decide on the availability of the spent coins of a transaction without maintaining the UTXO set, by
checking the filters whether a coin was spent after its origin, proven in an SPV manner, eliminating false positives
with a block download if necessary. This would be slower than having the UTXO set, but would require only an immutable store, no unwinds, and
the download of only a few blocks.
Since Bitcoin Core is not yet serving any filters, I do not think this discussion comes too late.
Tamas Blummer
>
> Tamas Blummer
>
>> On Feb 4, 2019, at 21:18, Jim Posen <jim.posen@gmail•com> wrote:
>>
>> Please see the thread "BIP 158 Flexibility and Filter Size" from 2018 regarding the decision to remove outpoints from the filter [1].
>>
>> Thanks for bringing this up though, because more discussion is needed on the client protocol given that clients cannot reliably determine the integrity of a block filter in a bandwidth-efficient manner (due to the inclusion of input scripts).
>>
>> I see three possibilities:
>> 1) Introduce a new P2P message to retrieve all prev-outputs for a given block (essentially the undo data in Core), and verify the scripts against the block by executing them. While this permits some forms of input script malleability (and thus cannot discriminate between all valid and invalid filters), it restricts what an attacker can do. This was proposed by Laolu AFAIK, and I believe this is how btcd is proceeding.
>> 2) Clients track multiple possible filter header chains and essentially consider the union of their matches. So if any filter received for a particular block header matches, the client downloads the block. The client can ban a peer if they 1) ever return a filter omitting some data that is observed in the downloaded block, 2) repeatedly serve filters that trigger false positive block downloads where such a number of false positives is statistically unlikely, or 3) repeatedly serves filters that are significantly larger than the expected size (essentially padding the actual filters with garbage to waste bandwidth). I have not done the analysis yet, but we should be able to come up with some fairly simple banning heuristics using Chernoff bounds. The main downside is that the client logic to track multiple possible filter chains and filters per block is more complex and bandwidth increases if connected to a malicious server. I first heard about this idea from David Harding.
>> 3) Rush straight to committing the filters into the chain (via witness reserved value or coinbase OP_RETURN) and give up on the pre-softfork BIP 157 P2P mode.
>>
>> I'm in favor of option #2 despite the downsides since it requires the smallest number of changes and is supported by the BIP 157 P2P protocol as currently written. (Though the recommended client protocol in the BIP needs to be updated to account for this). Another benefit of it is that it removes some synchronicity assumptions where a peer with the correct filters keeps timing out and is assumed to be dishonest, while the dishonest peer is assumed to be OK because it is responsive.
>>
>> If anyone has other ideas, I'd love to hear them.
>>
>> -jimpo
>>
>> [1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016057.html
>>
>>
>>
>> On Mon, Feb 4, 2019 at 10:53 AM Tamas Blummer via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
>> TLDR: a change to BIP158 would allow decision on which filter chain is correct at lower bandwith use
>>
>> Assume there is a BIP157 client that learned a filter header chain earlier and is now offered an alternate reality by a newly connected BIP157 server.
>>
>> The client notices the alternate reality by routinely asking for filter chain checkpoints after connecting to a new BIP157 server. A divergence at a checkpoint means that the server disagrees the client's history at or before the first diverging checkpoint. The client would then request the filter headers between the last matching and first divergent checkpoint, and quickly figure which block’s filter is the first that does not match previous assumption, and request that filter from the server.
>>
>> The client downloads the corresponding block, checks that its header fits the PoW secured best header chain, re-calculates merkle root of its transaction list to know that it is complete and queries the filter to see if every output script of every transaction is contained in there, if not the server is lying, the case is closed, the server disconnected.
>>
>> Having all output scripts in the filter does not however guarantee that the filter is correct since it might omit input scripts. Inputs scripts are not part of the downloaded block, but are in some blocks before that. Checking those are out of reach for lightweight client with tools given by the current BIP.
>>
>> A remedy here would be an other filter chain on created and spent outpoints as is implemented currently by Murmel. The outpoint filter chain must offer a match for every spent output of the block with the divergent filter, otherwise the interrogated server is lying since a PoW secured block can not spend coins out of nowhere. Doing this check would already force the client to download the outpoint filter history up-to the point of divergence. Then the client would have to download and PoW check every block that shows a match in outpoints until it figures that one of the spent outputs has a script that was not in the server’s filter, in which case the server is lying. If everything checks out then the previous assumption on filter history was incorrect and should be replaced by the history offered by the interrogated server.
>>
>> As you see the interrogation works with this added filter but is highly ineffective. A really light client should not be forced to download lots of blocks just to uncover a lying filter server. This would actually be an easy DoS on light BIP157 clients.
>>
>> A better solution is a change to BIP158 such that the only filter contains created scripts and spent outpoints. It appears to me that this would serve well both wallets and interrogation of filter servers well:
>>
>> Wallets would recognize payments to their addresses by the filter as output scripts are included, spends from the wallet would be recognized as a wallet already knows outpoints of its previously received coins, so it can query the filters for them.
>>
>> Interrogation of a filter server also simplifies, since the filter of the block can be checked entirely against the contents of the same block. The decision on filter correctness does not require more bandwith then download of a block at the mismatching checkpoint. The client could only be forced at max. to download 1/1000 th of the blockchain in addition to the filter header history.
>>
>> Therefore I suggest to change BIP158 to have a base filter, defined as:
>>
>> A basic filter MUST contain exactly the following items for each transaction in a block:
>> • Spent outpoints
>> • The scriptPubKey of each output, aside from all OP_RETURN output scripts.
>>
>> Tamas Blummer
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists•linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-05 12:21 99% ` Matt Corallo
@ 2019-02-06 0:05 99% ` Olaoluwa Osuntokun
0 siblings, 0 replies; 58+ results
From: Olaoluwa Osuntokun @ 2019-02-06 0:05 UTC (permalink / raw)
To: Matt Corallo; +Cc: Bitcoin Protocol Discussion, Jim Posen
Hi Matt,
> In (the realistic) threat model where an attacker is trying to blind you
> from some output, they can simply give you "undo data" where scriptPubKeys
> are OP_TRUE instead of the real script and you'd be none the wiser.
It depends on the input. If I'm trying to verify an input that's P2WSH,
since the witness script is included in the witness (the last element), I
can easily verify that the pkScript given is the proper witness program.
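The P2WSH check above can be made concrete: per BIP 141, the scriptPubKey of a P2WSH output is fully determined by the witness script, so a light client can recompute it from the last witness element. A minimal sketch (the transaction plumbing around it is omitted):

```python
import hashlib

def p2wsh_script_pubkey(witness_script: bytes) -> bytes:
    # BIP 141: a P2WSH scriptPubKey is OP_0 followed by a push of
    # sha256(witness_script), i.e. 0x00 0x20 <32-byte program>.
    return b"\x00\x20" + hashlib.sha256(witness_script).digest()

def prevout_script_is_consistent(claimed_spk: bytes, witness: list) -> bool:
    # The witness script is the last witness stack element of the spending
    # input, so a server handing out OP_TRUE as the "undo data" scriptPubKey
    # is detected by recomputing the witness program.
    return claimed_spk == p2wsh_script_pubkey(witness[-1])
```

This only covers P2WSH inputs; other output types do not reveal their scriptPubKey in the witness.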
> Huh? I don't think we should seriously consider
> only-one-codebase-has-deployed-anything-with-very-limited-in-the-wild-use
> as "too late into the current deployment"?
I'd wager that most developers reading this email right now are familiar
with neutrino as a project. Many even routinely use "neutrino" to refer to
BIP 157+158. There are several projects in the wild that have already
deployed applications built on lnd+neutrino live on mainnet. lnd+neutrino is
also the only project (as far as I'm aware) that has fully integrated the
p2p BIP 157+158 into a wallet, and also uses the filters for higher level
applications.
I'm no stranger to this argument, as I made the exact same one 7 months ago
when the change was originally discussed. Since then I realized that using
input scripts can be even _more_ flexible as light clients can use them as
set up or triggers for multi-party protocols such as atomic swaps. Using
scripts also allows for faster rescans if one knows all their keys ahead of
time, as the checks can be parallelized. Additionally, the current filter
also lends better to an eventual commitment as you literally can't remove
anything from it, and still have it be useful for the traditional wallet use
case.
As I mentioned in my last email, this can be added as an additional filter
type, leaving it up to the full node implementations that have deployed the
base protocol to integrate it or not.
-- Laolu
On Tue, Feb 5, 2019 at 4:21 AM Matt Corallo <lf-lists@mattcorallo•com>
wrote:
>
> On 2/4/19 8:18 PM, Jim Posen via bitcoin-dev wrote:
> - snip -
> > 1) Introduce a new P2P message to retrieve all prev-outputs for a given
> > block (essentially the undo data in Core), and verify the scripts
> > against the block by executing them. While this permits some forms of
> > input script malleability (and thus cannot discriminate between all
> > valid and invalid filters), it restricts what an attacker can do. This
> > was proposed by Laolu AFAIK, and I believe this is how btcd is
> proceeding.
>
> I'm somewhat confused by this - how does the undo data help you without
> seeing the full (midstate compressed) transaction? In (the realistic)
> threat model where an attacker is trying to blind you from some output,
> they can simply give you "undo data" where scriptPubKeys are OP_TRUE
> instead of the real script and you'd be none the wiser.
>
> On 2/5/19 1:42 AM, Olaoluwa Osuntokun via bitcoin-dev wrote:
> - snip -
> > I think it's too late into the current deployment of the BIPs to change
> > things around yet again. Instead, the BIP already has measures in place
> for
> > adding _new_ filter types in the future. This along with a few other
> filter
> > types may be worthwhile additions as new filter types.
> - snip -
>
> Huh? I don't think we should seriously consider
> only-one-codebase-has-deployed-anything-with-very-limited-in-the-wild-use
> as "too late into the current deployment"?
>
> Matt
>
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-05 20:10 99% ` Tamas Blummer
@ 2019-02-06 0:17 99% ` Olaoluwa Osuntokun
2019-02-06 8:09 99% ` Tamas Blummer
0 siblings, 1 reply; 58+ results
From: Olaoluwa Osuntokun @ 2019-02-06 0:17 UTC (permalink / raw)
To: Tamas Blummer; +Cc: Jim Posen, Bitcoin Protocol Discussion
Hi Tamas,
> The only advantage I see in the current design choice is filter size, but
> even that is less impressive in recent history and going forward, as address
> re-use is much less frequent nowadays than it was Bitcoin’s early days.
Gains aren't only had with address re-use, it's also the case that if an
input is spent in the same block as it was created, then only a single item
is inserted into the filter. Filters spanning across several blocks would
also see savings due to the usage of input scripts.
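The same-block saving can be illustrated with a toy block (simplified block representation with hypothetical field names; a real BIP 158 filter Golomb-codes hashes of these items rather than storing them raw):

```python
def current_filter_items(block):
    # Current BIP 158 basic filter contents, simplified: the scriptPubKey of
    # every created output plus the scriptPubKey of every spent prevout,
    # collected as a *set*, so duplicates collapse into one item.
    items = set()
    for tx in block:
        items.update(tx.get("output_scripts", ()))
        items.update(tx.get("prevout_scripts", ()))
    return items

toy_block = [
    {"output_scripts": [b"scriptA"]},    # tx1 creates an output paying scriptA
    {"prevout_scripts": [b"scriptA"],    # tx2 spends that output in the same block
     "output_scripts": [b"scriptB"]},
]
# scriptA is counted once: the create and the spend contribute the same item.
assert current_filter_items(toy_block) == {b"scriptA", b"scriptB"}
```

An outpoint-based filter would instead carry a distinct item for the spend, since the outpoint differs from the output script.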
Another advantage of using input scripts is that it allows rescans where all
keys are known ahead of time to proceed in parallel, which can serve to
greatly speed up rescans in bitcoind. Additionally, it allows light clients
to participate in protocols like atomic swaps using the input scripts as
triggers for state transitions. If outpoints were used, then the party that
initiated the swap would need to send the cooperating party all possible
txid's that may be generated due to fee bumps (RBF or sighash single
tricks). Using the script, the light client simply waits for it to be
revealed in a block (P2WSH) and then it can carry on the protocol.
> Clear advantages of moving to spent outpoint + output script filter:
> 1. Filter correctness can be proven by downloading the block in question only.
Yep, as is they can verify half the filter. With auxiliary data, they can
verify the entire thing. Once committed, they don't need to verify at all.
We're repeating a discussion that played out 7 months ago with no new
information or context.
> 2. Calculation of the filter on server side does not need UTXO.
This is incorrect. Filter calculation can use the spentness journal (or undo
blocks) that many full node implementations utilize.
> This certainly improves with a commitment, but that is not even on the
> roadmap yet, or is it?
I don't really know of any sort of roadmaps in Bitcoin development. However,
I think there's relatively strong support to adding a commitment, once the
current protocol gets more usage in the wild, which it already is today on
mainnet.
> Should a filter be committed that contains spent outpoints, then such
> filter would be even more useful
Indeed, this can be added as a new filter type, optionally adding created
outpoints as you referenced in your prior email.
> Since Bitcoin Core is not yet serving any filters, I do not think this
> discussion is too late.
See my reply to Matt on the current state of deployment. It's also the case
that bitcoind isn't the only full node implementation used in the wild.
Further changes would also serve to delay inclusion into bitcoind. The
individuals proposing these PRs to bitcoind have participated in this
discussion 7 months ago (along with many of the contributors to this
project). Based on that conversation, it's my understanding
that all parties are aware of the options and tradeoffs to be had.
-- Laolu
On Tue, Feb 5, 2019 at 12:10 PM Tamas Blummer <tamas.blummer@gmail•com>
wrote:
> Hi Laolu,
>
> The only advantage I see in the current design choice is filter size, but
> even that is less
> impressive in recent history and going forward, as address re-use is much
> less frequent nowadays
> than it was Bitcoin’s early days.
>
> I calculated total filter sizes since block 500,000:
>
> input script + output script (current BIP): 1.09 GB
> spent outpoint + output script: 1.26 GB
>
> Both filters are equally useful for a wallet to discover relevant
> transactions, but the current design
> choice seriously limits, practically disables a light client, to prove
> that the filter is correct.
>
> Clear advantages of moving to spent outpoint + output script filter:
>
> 1. Filter correctness can be proven by downloading the block in question
> only.
> 2. Calculation of the filter on server side does not need UTXO.
> 3. Spent outpoints in the filter enable light clients to do further
> probabilistic checks and even more if committed.
>
> The current design choice offers lower security than now attainable. This
> certainly improves with
> a commitment, but that is not even on the roadmap yet, or is it?
>
> Should a filter be committed that contains spent outpoints, then such
> filter would be even more useful:
> A client could decide on availability of spent coins of a transaction
> without maintaining the UTXO set, by
> checking the filters if the coin was spent after its origin proven in an
> SPV manner, evtl. eliminating false positives
> with a block download. This would be slower than having UTXO but require
> only immutable store, no unwinds and
> only download of a few blocks.
>
> Since Bitcoin Core is not yet serving any filters, I do not think this
> discussion is too late.
>
> Tamas Blummer
>
>
> > On Feb 5, 2019, at 02:42, Olaoluwa Osuntokun <laolu32@gmail•com> wrote:
> >
> > Hi Tamas,
> >
> > This is how the filter worked before the switch over to optimize for a
> > filter containing the minimal items needed for a regular wallet to
> function.
> > When this was proposed, I had already implemented the entire proposal
> from
> > wallet to full-node. At that point, we all more or less decided that the
> > space savings (along with intra-block compression) were worthwhile, we
> > weren't cutting off any anticipated application level use cases (at that
> > point we had already comprehensively integrated both filters into lnd),
> and
> > that once committed the security loss would disappear.
> >
> > I think it's too late into the current deployment of the BIPs to change
> > things around yet again. Instead, the BIP already has measures in place
> for
> > adding _new_ filter types in the future. This along with a few other
> filter
> > types may be worthwhile additions as new filter types.
> >
> > -- Laolu
> >
> > On Mon, Feb 4, 2019 at 12:59 PM Tamas Blummer <tamas.blummer@gmail•com>
> wrote:
> > I participated in that discussion in 2018, but have not had the insight
> gathered by now though writing both client and server implementation of
> BIP157/158
> >
> > Pieter Wuille considered the design choice I am now suggesting here as
> alternative (a) in:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016064.html
> > In his evaluation he recognized that a filter having spent output and
> output scripts would allow decision on filter correctness by knowing the
> block only.
> > He did not evaluate the usefulness in the context of checkpoints, which
> I think are an important shortcut here.
> >
> > Yes, a filter that is collecting input and output scripts is shorter if
> script re-use is frequent, but I showed back in 2018 in the same thread
> that this saving is not that significant in recent history as address reuse
> is no longer that frequent.
> >
> > A filter on spent outpoint is just as useful for wallets as is one on
> spent script, since they naturally scan the blockchain forward and thereby
> learn about their coins by the output script before they need to check
> spends of those outpoints.
> >
> > It seems to me that implementing an interrogation by evtl. downloading
> blocks at checkpoints is much simpler than following multiple possible
> filter paths.
> >
> > A spent outpoint filter allows us to decide on coin availability based
> on immutable store, without updated and eventually rolled back UTXO store.
> The availability could be decided by following the filter path to current
> tip to genesis and
> > check is the outpoint was spent earlier. False positives can be sorted
> out with a block download. Murmel implements this if running in server
> mode, where blocks are already there.
> >
> > Therefore I ask for a BIP change based on better insight gained through
> implementation.
> >
> > Tamas Blummer
> >
> >> On Feb 4, 2019, at 21:18, Jim Posen <jim.posen@gmail•com> wrote:
> >>
> >> Please see the thread "BIP 158 Flexibility and Filter Size" from 2018
> regarding the decision to remove outpoints from the filter [1].
> >>
> >> Thanks for bringing this up though, because more discussion is needed
> on the client protocol given that clients cannot reliably determine the
> integrity of a block filter in a bandwidth-efficient manner (due to the
> inclusion of input scripts).
> >>
> >> I see three possibilities:
> >> 1) Introduce a new P2P message to retrieve all prev-outputs for a given
> block (essentially the undo data in Core), and verify the scripts against
> the block by executing them. While this permits some forms of input script
> malleability (and thus cannot discriminate between all valid and invalid
> filters), it restricts what an attacker can do. This was proposed by Laolu
> AFAIK, and I believe this is how btcd is proceeding.
> >> 2) Clients track multiple possible filter header chains and essentially
> consider the union of their matches. So if any filter received for a
> particular block header matches, the client downloads the block. The client
> can ban a peer if they 1) ever return a filter omitting some data that is
> observed in the downloaded block, 2) repeatedly serve filters that trigger
> false positive block downloads where such a number of false positives is
> statistically unlikely, or 3) repeatedly serves filters that are
> significantly larger than the expected size (essentially padding the actual
> filters with garbage to waste bandwidth). I have not done the analysis yet,
> but we should be able to come up with some fairly simple banning heuristics
> using Chernoff bounds. The main downside is that the client logic to track
> multiple possible filter chains and filters per block is more complex and
> bandwidth increases if connected to a malicious server. I first heard about
> this idea from David Harding.
> >> 3) Rush straight to committing the filters into the chain (via witness
> reserved value or coinbase OP_RETURN) and give up on the pre-softfork BIP
> 157 P2P mode.
> >>
> >> I'm in favor of option #2 despite the downsides since it requires the
> smallest number of changes and is supported by the BIP 157 P2P protocol as
> currently written. (Though the recommended client protocol in the BIP needs
> to be updated to account for this). Another benefit of it is that it
> removes some synchronicity assumptions where a peer with the correct
> filters keeps timing out and is assumed to be dishonest, while the
> dishonest peer is assumed to be OK because it is responsive.
> >>
> >> If anyone has other ideas, I'd love to hear them.
> >>
> >> -jimpo
> >>
> >> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016057.html
> >>
> >>
> >>
> >> On Mon, Feb 4, 2019 at 10:53 AM Tamas Blummer via bitcoin-dev <
> bitcoin-dev@lists•linuxfoundation.org> wrote:
> >> TLDR: a change to BIP158 would allow decision on which filter chain is
> correct at lower bandwith use
> >>
> >> Assume there is a BIP157 client that learned a filter header chain
> earlier and is now offered an alternate reality by a newly connected BIP157
> server.
> >>
> >> The client notices the alternate reality by routinely asking for filter
> chain checkpoints after connecting to a new BIP157 server. A divergence at
> a checkpoint means that the server disagrees the client's history at or
> before the first diverging checkpoint. The client would then request the
> filter headers between the last matching and first divergent checkpoint,
> and quickly figure which block’s filter is the first that does not match
> previous assumption, and request that filter from the server.
> >>
> >> The client downloads the corresponding block, checks that its header
> fits the PoW secured best header chain, re-calculates merkle root of its
> transaction list to know that it is complete and queries the filter to see
> if every output script of every transaction is contained in there, if not
> the server is lying, the case is closed, the server disconnected.
> >>
> >> Having all output scripts in the filter does not however guarantee that
> the filter is correct since it might omit input scripts. Inputs scripts are
> not part of the downloaded block, but are in some blocks before that.
> Checking those are out of reach for lightweight client with tools given by
> the current BIP.
> >>
> >> A remedy here would be an other filter chain on created and spent
> outpoints as is implemented currently by Murmel. The outpoint filter chain
> must offer a match for every spent output of the block with the divergent
> filter, otherwise the interrogated server is lying since a PoW secured
> block can not spend coins out of nowhere. Doing this check would already
> force the client to download the outpoint filter history up-to the point of
> divergence. Then the client would have to download and PoW check every
> block that shows a match in outpoints until it figures that one of the
> spent outputs has a script that was not in the server’s filter, in which
> case the server is lying. If everything checks out then the previous
> assumption on filter history was incorrect and should be replaced by the
> history offered by the interrogated server.
> >>
> >> As you see the interrogation works with this added filter but is highly
> ineffective. A really light client should not be forced to download lots of
> blocks just to uncover a lying filter server. This would actually be an
> easy DoS on light BIP157 clients.
> >>
> >> A better solution is a change to BIP158 such that the only filter
> contains created scripts and spent outpoints. It appears to me that this
> would serve well both wallets and interrogation of filter servers well:
> >>
> >> Wallets would recognize payments to their addresses by the filter as
> output scripts are included, spends from the wallet would be recognized as
> a wallet already knows outpoints of its previously received coins, so it
> can query the filters for them.
> >>
> >> Interrogation of a filter server also simplifies, since the filter of
> the block can be checked entirely against the contents of the same block.
> The decision on filter correctness does not require more bandwith then
> download of a block at the mismatching checkpoint. The client could only be
> forced at max. to download 1/1000 th of the blockchain in addition to the
> filter header history.
> >>
> >> Therefore I suggest to change BIP158 to have a base filter, defined as:
> >>
> >> A basic filter MUST contain exactly the following items for each
> transaction in a block:
> >> • Spent outpoints
> >> • The scriptPubKey of each output, aside from all OP_RETURN
> output scripts.
> >>
> >> Tamas Blummer
> >> _______________________________________________
> >> bitcoin-dev mailing list
> >> bitcoin-dev@lists•linuxfoundation.org
> >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> >
>
>
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-06 0:17 99% ` Olaoluwa Osuntokun
@ 2019-02-06 8:09 99% ` Tamas Blummer
2019-02-06 18:17 99% ` Gregory Maxwell
0 siblings, 1 reply; 58+ results
From: Tamas Blummer @ 2019-02-06 8:09 UTC (permalink / raw)
To: Olaoluwa Osuntokun; +Cc: Jim Posen, Bitcoin Protocol Discussion
Hi Laolu,
space savings come with the rather serious current disadvantage that a light client is not
in a position to check the filter. The advanced uses you mention are also subject to this, for now.
Building more on a shaky foundation does not make it look better.
Now that we have seen the advantages of both filters, what keeps Core from offering both?
Computing the additional spent-outpoint output-script filter is cheaper than the current one, as
it can be done with the block as the only context; it needs neither the UTXO set, nor undo blocks, journals, or
anything else. I do not see how my statement regarding this was incorrect.
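The block-as-only-context claim can be sketched directly (simplified transaction representation with hypothetical field names; real items would be hashed and Golomb-coded per BIP 158):

```python
OP_RETURN = 0x6a

def proposed_filter_items(block):
    # Spent outpoints (txid, vout) appear verbatim in each input, and output
    # scripts in each output, so nothing beyond the block itself is consulted:
    # no UTXO set, no undo blocks, no journal.
    items = set()
    for tx in block:
        for prev_txid, prev_vout in tx.get("inputs", ()):
            items.add(prev_txid + prev_vout.to_bytes(4, "little"))
        for spk in tx.get("output_scripts", ()):
            if spk and spk[0] != OP_RETURN:  # the proposal excludes OP_RETURN outputs
                items.add(spk)
    return items
```

By contrast, the current input-script filter needs each input's prevout scriptPubKey, which lives in earlier blocks and must come from the UTXO set or undo data.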
There is a political issue, though, which is why I favor the better provable uncommitted filter:
I am skeptical that a commitment of any filter will come into Core soon.
The reason for my skepticism is political, not technical.
A committed filter makes light clients much more reliable and attractive, for some tastes too much more.
Clients that merely follow PoW are not significant on the current network. Core nodes enforce many more rules,
some as important as miners' reward. A committed filter would strengthen light clients
significantly, such that perhaps too many would be compelled to use them instead of a Core node.
Would the remaining Core nodes be sufficient to enforce the checks not covered? I see how this is a dilemma.
Is this a dilemma only because we think in black and white? Lighter clients might implement checks that are more
than blind PoW trust, even if fewer than all Core checks. Has the time come to allow for this?
Tamas Blummer
> On Feb 6, 2019, at 01:17, Olaoluwa Osuntokun <laolu32@gmail•com> wrote:
>
> Hi Tamas,
>
> > The only advantage I see in the current design choice is filter size, but
> > even that is less impressive in recent history and going forward, as address
> > re-use is much less frequent nowadays than it was Bitcoin’s early days.
>
> Gains aren't only had with address re-use, it's also the case that if an
> input is spent in the same block as it was created, then only a single items
> is inserted into the filter. Filters spanning across several blocks would
> also see savings due to the usage of input scripts.
>
> Another advantage of using input scripts is that it allows rescans where all
> keys are known ahead of time to proceed in parallel, which can serve to
> greatly speed up rescans in bitcoind. Additionally, it allows light clients
> to participate in protocols like atomic swaps using the input scripts as
> triggers for state transitions. If outpoints were used, then the party that
> initiated the swap would need to send the cooperating party all possible
> txid's that may be generated due to fee bumps (RBF or sighash single
> tricks). Using the script, the light client simply waits for it to be
> revealed in a block (P2WSH) and then it can carry on the protocol.
>
> > Clear advantages of moving to spent outpoint + output script filter:
>
> > 1. Filter correctness can be proven by downloading the block in question only.
>
> Yep, as is they can verify half the filter. With auxiliary data, they can
> verify the entire thing. Once committed, they don't need to verify at all.
> We're repeating a discussion that played out 7 months ago with no new
> information or context.
>
> > 2. Calculation of the filter on server side does not need UTXO.
>
> This is incorrect. Filter calculation can use the spentness journal (or undo
> blocks) that many full node implementations utilize.
>
> > This certainly improves with a commitment, but that is not even on the
> > roadmap yet, or is it?
>
> I don't really know of any sort of roadmaps in Bitcoin development. However,
> I think there's relatively strong support to adding a commitment, once the
> current protocol gets more usage in the wild, which it already is today on
> mainnet.
>
> > Should a filter be committed that contains spent outpoints, then such
> > filter would be even more useful
>
> Indeed, this can be added as a new filter type, optionally adding created
> outpoints as you referenced in your prior email.
>
> > Since Bitcoin Core is not yet serving any filters, I do not think this
> > discussion is too late.
>
> See my reply to Matt on the current state of deployment. It's also the case
> that bitcoind isn't the only full node implementation used in the wild.
> Further changes would also serve to delay inclusion into bitcoind. The
> individuals proposing these PRs to bitcoind have participated in this
> discussion 7 months ago (along with many of the contributors to this
> project). Based on this conversation 7 months ago, it's my understanding
> that all parties are aware of the options and tradeoffs to be had.
>
> -- Laolu
>
>
> On Tue, Feb 5, 2019 at 12:10 PM Tamas Blummer <tamas.blummer@gmail•com> wrote:
> Hi Laolu,
>
> The only advantage I see in the current design choice is filter size, but even that is less
> impressive in recent history and going forward, as address re-use is much less frequent nowadays
> than it was Bitcoin’s early days.
>
> I calculated total filter sizes since block 500,000:
>
> input script + output script (current BIP): 1.09 GB
> spent outpoint + output script: 1.26 GB
>
> Both filters are equally useful for a wallet to discover relevant transactions, but the current design
> choice seriously limits (practically disables) a light client's ability to prove that the filter is correct.
>
> Clear advantages of moving to spent outpoint + output script filter:
>
> 1. Filter correctness can be proven by downloading the block in question only.
> 2. Calculation of the filter on server side does not need UTXO.
> 3. Spent outpoints in the filter enable light clients to do further probabilistic checks and even more if committed.
>
> The current design choice offers lower security than now attainable. This certainly improves with
> a commitment, but that is not even on the roadmap yet, or is it?
>
> Should a filter be committed that contains spent outpoints, then such filter would be even more useful:
> A client could decide on the availability of a transaction's coins without maintaining the UTXO set, by
> checking the filters for whether the coin was spent after its SPV-proven origin, possibly eliminating false positives
> with a block download. This would be slower than having the UTXO set, but would require only an immutable store,
> no unwinds, and the download of only a few blocks.
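The availability check described above can be sketched as follows. This is a toy illustration under stated assumptions: per-block filters are modeled as plain sets (standing in for Golomb-coded sets), and the function name and structures are hypothetical, not from Murmel or any other implementation.

```python
# Sketch of the spentness check: walk the filters from a coin's origin block
# to the tip and report heights whose filter matches the outpoint. Each match
# is only a candidate spend; false positives are resolved by downloading the
# matching block.

def candidate_spend_heights(outpoint, filters, origin_height):
    """outpoint -- bytes identifying the coin (txid || 4-byte vout index)
    filters  -- list indexed by block height; each entry is a set of items
    """
    matches = []
    for height in range(origin_height + 1, len(filters)):
        if outpoint in filters[height]:   # a GCS match in the real protocol
            matches.append(height)
    return matches

op = b'\xaa' * 32 + (0).to_bytes(4, 'little')
chain = [set(), set(), {op}, set()]       # toy chain of per-block filters
assert candidate_spend_heights(op, chain, origin_height=1) == [2]
```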
>
> Since Bitcoin Core is not yet serving any filters, I do not think this discussion is too late.
>
> Tamas Blummer
>
>
> > On Feb 5, 2019, at 02:42, Olaoluwa Osuntokun <laolu32@gmail•com> wrote:
> >
> > Hi Tamas,
> >
> > This is how the filter worked before the switch over to optimize for a
> > filter containing the minimal items needed for a regular wallet to function.
> > When this was proposed, I had already implemented the entire proposal from
> > wallet to full-node. At that point, we all more or less decided that the
> > space savings (along with intra-block compression) were worthwhile, we
> > weren't cutting off any anticipated application level use cases (at that
> > point we had already comprehensively integrated both filters into lnd), and
> > that once committed the security loss would disappear.
> >
> > I think it's too late into the current deployment of the BIPs to change
> > things around yet again. Instead, the BIP already has measures in place for
> > adding _new_ filter types in the future. This along with a few other filter
> > types may be worthwhile additions as new filter types.
> >
> > -- Laolu
> >
> > On Mon, Feb 4, 2019 at 12:59 PM Tamas Blummer <tamas.blummer@gmail•com> wrote:
> > I participated in that discussion in 2018, but had not yet gathered the insight I now have through writing both client and server implementations of BIP157/158.
> >
> > Pieter Wuille considered the design choice I am now suggesting here as alternative (a) in: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016064.html
> > In his evaluation he recognized that a filter containing spent outpoints and output scripts would allow deciding filter correctness from the block alone.
> > He did not evaluate the usefulness in the context of checkpoints, which I think are an important shortcut here.
> >
> > Yes, a filter that is collecting input and output scripts is shorter if script re-use is frequent, but I showed back in 2018 in the same thread that this saving is not that significant in recent history as address reuse is no longer that frequent.
> >
> > A filter on spent outpoints is just as useful for wallets as one on spent scripts, since wallets naturally scan the blockchain forward and thereby learn about their coins from the output scripts before they need to check spends of those outpoints.
> >
> > It seems to me that implementing an interrogation by occasionally downloading blocks at checkpoints is much simpler than following multiple possible filter paths.
> >
> > A spent-outpoint filter allows us to decide on coin availability based on an immutable store, without a UTXO store that must be updated and occasionally rolled back. Availability can be decided by following the filter path from the current tip back to genesis and
> > checking whether the outpoint was spent earlier. False positives can be sorted out with a block download. Murmel implements this when running in server mode, where the blocks are already there.
> >
> > Therefore I ask for a BIP change based on better insight gained through implementation.
> >
> > Tamas Blummer
> >
> >> On Feb 4, 2019, at 21:18, Jim Posen <jim.posen@gmail•com> wrote:
> >>
> >> Please see the thread "BIP 158 Flexibility and Filter Size" from 2018 regarding the decision to remove outpoints from the filter [1].
> >>
> >> Thanks for bringing this up though, because more discussion is needed on the client protocol given that clients cannot reliably determine the integrity of a block filter in a bandwidth-efficient manner (due to the inclusion of input scripts).
> >>
> >> I see three possibilities:
> >> 1) Introduce a new P2P message to retrieve all prev-outputs for a given block (essentially the undo data in Core), and verify the scripts against the block by executing them. While this permits some forms of input script malleability (and thus cannot discriminate between all valid and invalid filters), it restricts what an attacker can do. This was proposed by Laolu AFAIK, and I believe this is how btcd is proceeding.
> >> 2) Clients track multiple possible filter header chains and essentially consider the union of their matches. So if any filter received for a particular block header matches, the client downloads the block. The client can ban a peer if they 1) ever return a filter omitting some data that is observed in the downloaded block, 2) repeatedly serve filters that trigger false positive block downloads where such a number of false positives is statistically unlikely, or 3) repeatedly serves filters that are significantly larger than the expected size (essentially padding the actual filters with garbage to waste bandwidth). I have not done the analysis yet, but we should be able to come up with some fairly simple banning heuristics using Chernoff bounds. The main downside is that the client logic to track multiple possible filter chains and filters per block is more complex and bandwidth increases if connected to a malicious server. I first heard about this idea from David Harding.
> >> 3) Rush straight to committing the filters into the chain (via witness reserved value or coinbase OP_RETURN) and give up on the pre-softfork BIP 157 P2P mode.
> >>
> >> I'm in favor of option #2 despite the downsides since it requires the smallest number of changes and is supported by the BIP 157 P2P protocol as currently written. (Though the recommended client protocol in the BIP needs to be updated to account for this). Another benefit of it is that it removes some synchronicity assumptions where a peer with the correct filters keeps timing out and is assumed to be dishonest, while the dishonest peer is assumed to be OK because it is responsive.
> >>
> >> If anyone has other ideas, I'd love to hear them.
> >>
> >> -jimpo
> >>
> >> [1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016057.html
> >>
> >>
> >>
> >> On Mon, Feb 4, 2019 at 10:53 AM Tamas Blummer via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
> >> TLDR: a change to BIP158 would allow deciding which filter chain is correct at lower bandwidth use
> >>
> >> Assume there is a BIP157 client that learned a filter header chain earlier and is now offered an alternate reality by a newly connected BIP157 server.
> >>
> >> The client notices the alternate reality by routinely asking for filter chain checkpoints after connecting to a new BIP157 server. A divergence at a checkpoint means that the server disagrees with the client's history at or before the first diverging checkpoint. The client would then request the filter headers between the last matching and first divergent checkpoints, quickly figure out which block's filter is the first that does not match the previous assumption, and request that filter from the server.
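The checkpoint narrowing described here can be sketched in a few lines. This is a toy model under stated assumptions: the 1000-block checkpoint spacing follows BIP157, headers are modeled as opaque byte strings already held in memory, and the function names are illustrative (in reality each list would be fetched over the P2P protocol).

```python
# Sketch: narrow a filter-header disagreement to a single block using
# checkpoints (every 1000 headers per BIP157), then a linear scan of the
# headers inside the first divergent interval.

INTERVAL = 1000

def first_divergent_checkpoint(local_cps, server_cps):
    """Index of the first checkpoint (heights 1000, 2000, ...) that differs."""
    for i, (ours, theirs) in enumerate(zip(local_cps, server_cps)):
        if ours != theirs:
            return i
    return None

def first_divergent_height(local_headers, server_headers, lo, hi):
    """First height in [lo, hi) where the filter headers disagree."""
    for h in range(lo, hi):
        if local_headers[h] != server_headers[h]:
            return h
    return None

# Toy data: the server's chain diverges at height 1500.
local = [bytes([h % 251]) for h in range(2500)]
server = list(local)
for h in range(1500, 2500):
    server[h] = b'\xff'
l_cps = [local[(i + 1) * INTERVAL] for i in range(2)]    # heights 1000, 2000
s_cps = [server[(i + 1) * INTERVAL] for i in range(2)]

cp = first_divergent_checkpoint(l_cps, s_cps)
assert cp == 1                                           # checkpoint at height 2000
h = first_divergent_height(local, server, cp * INTERVAL, (cp + 1) * INTERVAL)
assert h == 1500                                         # block to interrogate
```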
> >>
> >> The client downloads the corresponding block, checks that its header fits the PoW-secured best header chain, re-calculates the merkle root of its transaction list to know that it is complete, and queries the filter to see if every output script of every transaction is contained in it. If not, the server is lying; the case is closed, and the server is disconnected.
> >>
> >> Having all output scripts in the filter does not, however, guarantee that the filter is correct, since it might omit input scripts. Input scripts are not part of the downloaded block, but are in some blocks before it. Checking those is out of reach for a lightweight client with the tools given by the current BIP.
> >>
> >> A remedy here would be another filter chain over created and spent outpoints, as is currently implemented by Murmel. The outpoint filter chain must offer a match for every spent output of the block with the divergent filter; otherwise the interrogated server is lying, since a PoW-secured block cannot spend coins out of nowhere. Doing this check would already force the client to download the outpoint filter history up to the point of divergence. Then the client would have to download and PoW-check every block that shows a match in outpoints until it figures out that one of the spent outputs has a script that was not in the server’s filter, in which case the server is lying. If everything checks out, then the previous assumption on filter history was incorrect and should be replaced by the history offered by the interrogated server.
> >>
> >> As you see, the interrogation works with this added filter, but it is highly inefficient. A really light client should not be forced to download lots of blocks just to uncover a lying filter server. This would actually be an easy DoS vector against light BIP157 clients.
> >>
> >> A better solution is a change to BIP158 such that the only filter contains created scripts and spent outpoints. It appears to me that this would serve both wallets and the interrogation of filter servers well:
> >>
> >> Wallets would recognize payments to their addresses by the filter, as output scripts are included; spends from the wallet would be recognized because the wallet already knows the outpoints of its previously received coins, so it can query the filters for them.
> >>
> >> Interrogation of a filter server also simplifies, since the filter of a block can be checked entirely against the contents of that same block. The decision on filter correctness does not require more bandwidth than the download of a block at the mismatching checkpoint. The client could be forced to download at most 1/1000th of the blockchain in addition to the filter header history.
> >>
> >> Therefore I suggest to change BIP158 to have a base filter, defined as:
> >>
> >> A basic filter MUST contain exactly the following items for each transaction in a block:
> >> • Spent outpoints
> >> • The scriptPubKey of each output, aside from all OP_RETURN output scripts.
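The proposed filter contents can be sketched directly from that definition. This is an illustrative model, not a normative encoding: transactions are simplified dicts, a plain set stands in for the BIP158 Golomb-coded set, and the coinbase handling (no real prevouts) is an assumption made explicit in the code.

```python
# Sketch of the proposed basic filter: spent outpoints plus output scripts,
# with OP_RETURN outputs excluded. Simplified transaction structures.

def proposed_filter_items(block_txs):
    """block_txs -- list of dicts with 'inputs' (outpoints as bytes,
    txid || 4-byte index) and 'outputs' (scriptPubKeys as bytes).
    The first transaction is assumed to be the coinbase."""
    items = set()
    for i, tx in enumerate(block_txs):
        if i > 0:                            # coinbase spends no real prevout
            items.update(tx['inputs'])       # spent outpoints
        for spk in tx['outputs']:
            if not spk.startswith(b'\x6a'):  # drop OP_RETURN scripts
                items.add(spk)
    return items

coinbase = {'inputs': [], 'outputs': [b'\x00\x14' + b'\xaa' * 20]}
spend = {'inputs': [b'\x11' * 36],
         'outputs': [b'\x6a\x00', b'\x00\x14' + b'\xbb' * 20]}
assert len(proposed_filter_items([coinbase, spend])) == 3
```

With this item set, both wallet matching (by output script or known outpoint) and block-only filter verification fall out of the same structure.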
> >>
> >> Tamas Blummer
> >> _______________________________________________
> >> bitcoin-dev mailing list
> >> bitcoin-dev@lists•linuxfoundation.org
> >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> >
>
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Card Shuffle To Bitcoin Seed
2019-02-05 1:37 99% ` Devrandom
@ 2019-02-06 13:48 99% ` Alan Evans
2019-02-06 13:51 99% ` Alan Evans
0 siblings, 1 reply; 58+ results
From: Alan Evans @ 2019-02-06 13:48 UTC (permalink / raw)
To: Devrandom, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2351 bytes --]
It's not quite enough to just do SHA512; you missed out this condition
(incredibly rare as it is):
> In case IL is 0 or ≥n, the master key is invalid.
Also I can't see how I would use this to seed a hardware wallet that
requires a BIP39 seed as mentioned in your abstract.
For both of those reasons, you may want to just invent/formalize a scheme
that takes Cards -> Entropy.
From that Entropy one can generate BIP39, and non-BIP39 fans can just
continue, generate and store their root xprv.
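The validity check Alan refers to is the BIP32 master-key step: plain SHA512 hashing of the entropy omits the requirement that the left half parse to a valid secp256k1 secret. A minimal sketch of that step, following BIP32 (the function name is illustrative):

```python
# BIP32 master key derivation, including the check that IL is neither 0
# nor >= n (the secp256k1 group order) -- incredibly rare, but required.
import hmac
import hashlib

SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def master_key_from_seed(seed: bytes):
    """Return (master_secret, chain_code), or raise if the seed is invalid."""
    I = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
    IL, IR = I[:32], I[32:]
    k = int.from_bytes(IL, 'big')
    if k == 0 or k >= SECP256K1_N:
        raise ValueError("invalid master key; a new seed must be chosen")
    return k, IR

secret, chain_code = master_key_from_seed(b"\x01" * 32)
assert 0 < secret < SECP256K1_N and len(chain_code) == 32
```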
Prior art: Note that Ian Coleman's BIP39 site already supports Cards (and
Dice), see the logic here:
https://github.com/iancoleman/bip39/blob/master/src/js/entropy.js
[image: image.png]
Note it detected "full deck". It also calculates the Total Bits of Entropy
and can handle card replacement and multiple decks.
PS, you're a bit out on your entropy calculation, log2(52!) ~= 225.58 bits,
not 219.
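The corrected figure is easy to verify: the entropy of a uniformly shuffled 52-card deck is log2(52!), computable exactly from the integer factorial.

```python
# Entropy of a uniform shuffle of a 52-card deck: log2(52!).
import math

deck_entropy_bits = math.log2(math.factorial(52))
assert abs(deck_entropy_bits - 225.58) < 0.01   # ~225.58 bits, not 219
```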
On Tue, 5 Feb 2019 at 02:08, Devrandom via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:
> I would suggest 50+ 6-sided dice rolls, giving about 128 bits of entropy.
> Compared to a shuffle, it's easier to be sure that you got the right amount
> of entropy, even if the dice are somewhat biased.
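Devrandom's figure checks out for fair dice: each 6-sided roll contributes log2(6) ≈ 2.585 bits, so 50 rolls comfortably exceed 128 bits (biased dice contribute less per roll, hence the suggestion of 50+).

```python
# Number of fair 6-sided dice rolls needed for 128 bits of entropy.
import math

bits_per_roll = math.log2(6)                 # ~2.585 bits per fair roll
rolls_needed = math.ceil(128 / bits_per_roll)
assert rolls_needed == 50
assert 50 * bits_per_roll > 128              # 50 rolls give ~129.2 bits
```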
>
>
> On Mon, Feb 4, 2019 at 2:33 PM James MacWhyte via bitcoin-dev <
> bitcoin-dev@lists•linuxfoundation.org> wrote:
>
>>
>> James
>>
>>
>> On Sun, Feb 3, 2019 at 10:27 AM Ryan Havar via bitcoin-dev <
>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>
>>> Conveniently a shuffled deck of cards also can serve as a physical
>>> backup which is easy to hide in plain sight with great plausible
>>> deniability.
>>>
>>
>> To make sure someone doesn't play with your cards and mix up the order,
>> use a permanent marker to draw a diagonal line on the side of the deck from
>> corner to corner. If the cards ever get mixed up, you can put them back in
>> order by making sure the diagonal line matches up.
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists•linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists•linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 4197 bytes --]
* Re: [bitcoin-dev] Card Shuffle To Bitcoin Seed
2019-02-06 13:48 99% ` Alan Evans
@ 2019-02-06 13:51 99% ` Alan Evans
2019-02-07 2:42 99% ` James MacWhyte
0 siblings, 1 reply; 58+ results
From: Alan Evans @ 2019-02-06 13:51 UTC (permalink / raw)
To: Devrandom, Bitcoin Protocol Discussion
[-- Attachment #1.1: Type: text/plain, Size: 2564 bytes --]
Image didn't seem to attach:
[image: image.png]
On Wed, 6 Feb 2019 at 09:48, Alan Evans <thealanevans@gmail•com> wrote:
> It's not quite enough to just do SHA512, you missed out this condition
> (incredibly rare as it is):
>
> > In case IL is 0 or ≥n, the master key is invalid.
>
> Also I can't see how I would use this to seed a hardware wallet that
> requires a BIP39 seed as mentioned in your abstract.
>
> For both of those reasons, you may want to just invent/formalize a scheme
> that takes Cards -> Entropy.
> From that Entropy one can generate BIP39, and non-BIP39 fans can just
> continue, generate and store their root xprv.
>
> Prior art: Note that Ian Coleman's BIP39 site already supports Cards (and
> Dice), see the logic here:
> https://github.com/iancoleman/bip39/blob/master/src/js/entropy.js
>
> [image: image.png]
>
> Note it detected "full deck". It also calculates the Total Bits of Entropy
> and can handle card replacement and multiple decks.
>
> PS, you're a bit out on your entropy calculation, log2(52!) ~= 225.58
> bits, not 219.
>
>
> On Tue, 5 Feb 2019 at 02:08, Devrandom via bitcoin-dev <
> bitcoin-dev@lists•linuxfoundation.org> wrote:
>
>> I would suggest 50+ 6-sided dice rolls, giving about 128 bits of
>> entropy. Compared to a shuffle, it's easier to be sure that you got the
>> right amount of entropy, even if the dice are somewhat biased.
>>
>>
>> On Mon, Feb 4, 2019 at 2:33 PM James MacWhyte via bitcoin-dev <
>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>
>>>
>>> James
>>>
>>>
>>> On Sun, Feb 3, 2019 at 10:27 AM Ryan Havar via bitcoin-dev <
>>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>>
>>>> Conveniently a shuffled deck of cards also can serve as a physical
>>>> backup which is easy to hide in plain sight with great plausible
>>>> deniability.
>>>>
>>>
>>> To make sure someone doesn't play with your cards and mix up the order,
>>> use a permanent marker to draw a diagonal line on the side of the deck from
>>> corner to corner. If the cards ever get mixed up, you can put them back in
>>> order by making sure the diagonal line matches up.
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists•linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists•linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>
[-- Attachment #1.2: Type: text/html, Size: 4713 bytes --]
[-- Attachment #2: image.png --]
[-- Type: image/png, Size: 176797 bytes --]
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-06 8:09 99% ` Tamas Blummer
@ 2019-02-06 18:17 99% ` Gregory Maxwell
2019-02-06 19:48 99% ` Tamas Blummer
0 siblings, 1 reply; 58+ results
From: Gregory Maxwell @ 2019-02-06 18:17 UTC (permalink / raw)
To: Tamas Blummer; +Cc: Jim Posen, Bitcoin Protocol Discussion
On Wed, Feb 6, 2019 at 8:10 AM Tamas Blummer <tamas.blummer@gmail•com> wrote:
> I am skeptical that commitment of any filter will come into Core soon. [...] A committed filter makes light clients much more reliable and attractive, for some taste too much more.
You keep repeating this smear. Please stop.
If you would actually bother reading the threads where this was
discussed previously you will see that there was significant interest
from bitcoin developers to eventually commit an output filter, and a
significant investment of effort in improving the proposal to that
end. It is really disheartening to see you continue to repeat your
negative assumptions about other people's wishes when you haven't even
invested the time required to read their words.
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-06 18:17 99% ` Gregory Maxwell
@ 2019-02-06 19:48 99% ` Tamas Blummer
[not found] ` <CAAS2fgQX_02_Uwu0hCu91N_11N4C4Scm2FbAXQ-0YibroeqMYg@mail.gmail.com>
0 siblings, 1 reply; 58+ results
From: Tamas Blummer @ 2019-02-06 19:48 UTC (permalink / raw)
To: Gregory Maxwell; +Cc: Jim Posen, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 1223 bytes --]
I do not think this ad hominem attack by you on me was justified.
I wrote code, and gathered and shared data, both now and back in 2018. I showed
understanding of the non-technical issues. Is there an actual action that
contradicts my observation that a commitment is not yet in sight?
Is there anything technically wrong in what I wrote?
If not, you should stop.
Tamas Blummer
On Wed, 6 Feb 2019, 18:17 Gregory Maxwell <greg@xiph•org wrote:
> On Wed, Feb 6, 2019 at 8:10 AM Tamas Blummer <tamas.blummer@gmail•com>
> wrote:
> > I am skeptical that commitment of any filter will come into Core soon.
> [...] A committed filter makes light clients much more reliable and
> attractive, for some taste too much more.
>
> You keep repeating this smear. Please stop.
>
> If you would actually bother reading the threads where this was
> discussed previously you will see that there was significant interest
> from bitcoin developers to eventually commit an output filter, and a
> significant investment of effort in improving the proposal to that
> end. It is really disheartening to see you continue to repeat your
> negative assumptions about other people's wishes when you haven't even
> invested the time required to read their words.
>
[-- Attachment #2: Type: text/html, Size: 1782 bytes --]
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
[not found] ` <CAAS2fgQX_02_Uwu0hCu91N_11N4C4Scm2FbAXQ-0YibroeqMYg@mail.gmail.com>
@ 2019-02-06 21:17 99% ` Tamas Blummer
2019-02-07 20:36 99% ` Pieter Wuille
0 siblings, 1 reply; 58+ results
From: Tamas Blummer @ 2019-02-06 21:17 UTC (permalink / raw)
To: Gregory Maxwell; +Cc: Jim Posen, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 1942 bytes --]
The attack was in your implication that I would assume ill intent of those
who contributed to the proposal. That is not my position. I explained why, I
think, rolling out a commitment could face opposition. This foreseeable
opposition, which need not come from you, makes me prefer a provable
uncommitted filter for now.
I am myself concerned about the implications if many nodes would blindly
follow PoW.
I restarted the discussion, which I read and participated in at its first
instance, because implementing the current proposal taught me how
problematic it is as long as it is not committed, and because I have not
seen a sign that a commitment was imminent.
This is not just missing code. AFAIK we do not even have a consensus on how
any future soft fork would be activated.
While trying to build useful software I have to make assumptions about the
timeline of dependencies, and in my personal evaluation a commitment is not
yet something to build on.
I and others learned new arguments in this new discussion, such as that of
atomic swaps by Laolu. If nothing else, this was worth the learning.
It appears to me that it is rather you assuming ill intent on my side, which
hurts, given that I have contributed to the ecosystem for many years and
have never been caught hurting the project.
Tamas Blummer
On Wed, 6 Feb 2019, 20:16 Gregory Maxwell <gmaxwell@gmail•com wrote:
> On Wed, Feb 6, 2019 at 7:48 PM Tamas Blummer <tamas.blummer@gmail•com>
> wrote:
> > I do not think this ad hominem attack of you on me was justified.
>
> I apologize if I have offended you, but I am at a loss to find in my
> words you found to be an attack. Can you help me out?
>
> On reread the only thing I'm saying is that you hadn't even read the
> prior discussion. Am I mistaken? If so, why did you simply propose
> reverting prior improvements without addressing the arguments given
> the first time around or even acknowledging that you were rehashing an
> old discussion?
>
[-- Attachment #2: Type: text/html, Size: 2657 bytes --]
* Re: [bitcoin-dev] Card Shuffle To Bitcoin Seed
2019-02-06 13:51 99% ` Alan Evans
@ 2019-02-07 2:42 99% ` James MacWhyte
0 siblings, 0 replies; 58+ results
From: James MacWhyte @ 2019-02-07 2:42 UTC (permalink / raw)
To: Alan Evans, Bitcoin Protocol Discussion
[-- Attachment #1.1: Type: text/plain, Size: 3052 bytes --]
Oooh, that's cool. I didn't realize Ian's support for cards looks so slick
now!
Thanks for the image.
James
On Wed, Feb 6, 2019 at 7:55 AM Alan Evans via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:
> Image didn't seem to attach:
> [image: image.png]
>
> On Wed, 6 Feb 2019 at 09:48, Alan Evans <thealanevans@gmail•com> wrote:
>
>> It's not quite enough to just do SHA512, you missed out this condition
>> (incredibly rare as it is):
>>
>> > In case IL is 0 or ≥n, the master key is invalid.
>>
>> Also I can't see how I would use this to seed a hardware wallet that
>> requires a BIP39 seed as mentioned in your abstract.
>>
>> For both of those reasons, you may want to just invent/formalize a scheme
>> that takes Cards -> Entropy.
>> From that Entropy one can generate BIP39, and non-BIP39 fans can just
>> continue, generate and store their root xprv.
>>
>> Prior art: Note that Ian Coleman's BIP39 site already supports Cards (and
>> Dice), see the logic here:
>> https://github.com/iancoleman/bip39/blob/master/src/js/entropy.js
>>
>> [image: image.png]
>>
>> Note it detected "full deck". It also calculates the Total Bits of
>> Entropy and can handle card replacement and multiple decks.
>>
>> PS, you're a bit out on your entropy calculation, log2(52!) ~= 225.58
>> bits, not 219.
>>
>>
>> On Tue, 5 Feb 2019 at 02:08, Devrandom via bitcoin-dev <
>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>
>>> I would suggest 50+ 6-sided dice rolls, giving about 128 bits of
>>> entropy. Compared to a shuffle, it's easier to be sure that you got the
>>> right amount of entropy, even if the dice are somewhat biased.
>>>
>>>
>>> On Mon, Feb 4, 2019 at 2:33 PM James MacWhyte via bitcoin-dev <
>>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>>
>>>>
>>>> James
>>>>
>>>>
>>>> On Sun, Feb 3, 2019 at 10:27 AM Ryan Havar via bitcoin-dev <
>>>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>>>
>>>>> Conveniently a shuffled deck of cards also can serve as a physical
>>>>> backup which is easy to hide in plain sight with great plausible
>>>>> deniability.
>>>>>
>>>>
>>>> To make sure someone doesn't play with your cards and mix up the order,
>>>> use a permanent marker to draw a diagonal line on the side of the deck from
>>>> corner to corner. If the cards ever get mixed up, you can put them back in
>>>> order by making sure the diagonal line matches up.
>>>> _______________________________________________
>>>> bitcoin-dev mailing list
>>>> bitcoin-dev@lists•linuxfoundation.org
>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>>
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists•linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>
>> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists•linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #1.2: Type: text/html, Size: 5822 bytes --]
[-- Attachment #2: image.png --]
[-- Type: image/png, Size: 176797 bytes --]
* Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal
2019-02-06 21:17 99% ` Tamas Blummer
@ 2019-02-07 20:36 99% ` Pieter Wuille
0 siblings, 0 replies; 58+ results
From: Pieter Wuille @ 2019-02-07 20:36 UTC (permalink / raw)
To: Tamas Blummer, Bitcoin Protocol Discussion; +Cc: Gregory Maxwell, Jim Posen
On Thu, 7 Feb 2019 at 12:19, Tamas Blummer via bitcoin-dev
<bitcoin-dev@lists•linuxfoundation.org> wrote:
> I did restart the discussion which I read and participated in at its first instance because implementing the current proposal taught me how problematic as is until not committed and because I have not seen a sign to assume commitment was imminent.
Hi Tamas,
I think you're confusing the lack of a sign of imminent commitment for a
sign that it isn't the end goal. Changes in consensus rules take a while,
and I think adoption of BIP157 in a limited setting, where it is offered by
trusted nodes, is necessary before we will see a big push for that.
In my personal view (and I respect other opinions in this regard),
BIP157 as a public network-facing service offered by untrusted full
nodes is fairly uninteresting. If the goal weren't to have it eventually
become a commitment, I don't think I would be interested in helping
improve it. There are certainly heuristics that reduce the risk of
using it without a commitment, but they come at the cost of software
complexity, extra bandwidth, and a number of assumptions on the types of scripts
involved in the transactions. I appreciate work in exploring more
possibilities, but for a BIP157-that-eventually-becomes-a-commitment,
I think they're a distraction. Unless you feel that changes actually
benefit that end goal, I think the current BIP157 filter definition
should be kept.
There is no problem, however, in optionally supporting other filters
which make different trade-offs and are intended to be offered by
(semi-)trusted nodes. Still, for the reasons above I would very much
like to keep those discussions separate from the
to-be-committed-filter.
Cheers,
--
Pieter
* [bitcoin-dev] Implementing Confidential Transactions in extension blocks
@ 2019-02-08 10:12 99% Kenshiro []
2019-02-11 4:29 99% ` ZmnSCPxj
0 siblings, 1 reply; 58+ results
From: Kenshiro [] @ 2019-02-08 10:12 UTC (permalink / raw)
To: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 698 bytes --]
Greetings,
What do you think about implementing Confidential Transactions in extension blocks? CT transactions would go from extension block to extension block, passing through normal blocks. It looks like the perfect solution:
- Soft fork: old nodes see CT transactions as "sendtoany" transactions
- Safe: if there is a software bug in CT it's impossible to create new coins because the coins move from normal block to normal block as public transactions
- Legal: Exchanges can use public transactions so regulators can monitor their activity
- Capacity increase: the CT signature is stored in the extension block, so CT transactions increase the maximum number of transactions per block
Regards
[-- Attachment #2: Type: text/html, Size: 1213 bytes --]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
@ 2019-02-08 19:01 99% ` Jonas Nick
2019-02-09 10:01 99% ` Alejandro Ranchal Pedrosa
2019-02-09 10:15 99% ` Johnson Lau
2019-02-19 19:04 99% ` Luke Dashjr
2 siblings, 2 replies; 58+ results
From: Jonas Nick @ 2019-02-08 19:01 UTC (permalink / raw)
To: Johnson Lau, Bitcoin Protocol Discussion
Output tagging may result in reduced fungibility in multiparty eltoo channels.
If one party is unresponsive, the remaining participants want to remove
the party from the channel without downtime. This is possible by creating
settlement transactions which pay off the unresponsive party and fund a new
channel with the remaining participants.
When the party becomes unresponsive, the channel is closed by broadcasting the
update transaction as usual. As soon as that happens the remaining
participants can start to update their new channel. Their update signatures
must use SIGHASH_NOINPUT. This is because in eltoo the settlement txid is not
final (because the update tx is not confirmed and may have to rebind to another
output). Therefore, the funding output of the new channel must be NOINPUT
tagged. Assuming the remaining parties later settle cooperatively, this loss
of fungibility would not have happened without output tagging.
funding output          update output                                         settlement outputs              update output
[ A & B & C ] -> ... -> [ (A & B & C & state CLTV) | (As & Bs & Cs) ] -> [ NOINPUT tagged: (A' & B'), C' ] -> ...
If the expectation is that the unresponsive party returns, fungibility is
not reduced due to output tagging because the above scheme can be used
off-chain until the original channel can be continued.
Side note: I was not able to come up with a similar, eltoo-like protocol that works
if you can't predict in advance who will become absent.
On 12/13/18 12:32 PM, Johnson Lau via bitcoin-dev wrote:
> NOINPUT is very powerful, but the tradeoff is the risk of signature replay. While key holders are expected not to reuse key pairs, little could be done to stop payers from reusing an address. Unfortunately, key-pair reuse has been a social and technical norm since the creation of Bitcoin (the first tx made in block 170 reused the previous public key). I don’t see any hope of changing this norm any time soon, if possible at all.
>
> As the people who are designing the layer-1 protocol, we could always blame the payer and/or payee for their stupidity, just like those people who laughed at victims of Ethereum dumb contracts (DAO, Parity multisig, etc). The existing bitcoin script language is very restrictive. It disallows many useful smart contracts, but at the same time prevents many dumb contracts. After all, “smart” and “dumb” are non-technical judgements. The DAO contract was always faithfully executed. It’s dumb only for those invested in the project. For me, it was just a comedy show.
>
> So NOINPUT brings us more smart contract capacity, and at the same time we are one step closer to dumb contracts. The target is to find a design that exactly enables the smart contracts we want, while minimising the risks of misuse.
>
> The risk I am trying to mitigate is a payer mistakenly paying to a previous address with exactly the same amount, when the previous UTXO has been spent using NOINPUT. Accidental double payment is not uncommon. Even if the payee was honest and willing to refund, the money might already have been spent with a replayed NOINPUT signature. Once people lose a significant amount of money this way, payers (mostly exchanges) may refuse to send money to anything other than P2PKH, native-P2WPKH and native-P2WSH (the only 3 types without the possibility of NOINPUT).
>
> The proposed solution is that an output must be “tagged” for it to be spendable with NOINPUT, and the “tag” must be made explicitly by the payer. There are 2 possible ways to do the tagging:
>
> 1. A certain bit in the tx version must be set
> 2. A certain bit in the scriptPubKey must be set
>
> I will analyse the pros and cons later.
>
> Using eltoo as an example: the setup utxo is a simple 2-of-2 multisig, and should not be tagged. This makes it indistinguishable from a normal 1-of-1 utxo. The trigger tx, which spends the setup utxo, should be tagged, so the update txs could spend the trigger utxo with NOINPUT. Similarly, all update txs should be tagged, so they could be spent by other update txs and the settlement tx with NOINPUT. As the final destination, there is no need to tag in the settlement tx.
>
> From the payer’s perspective, tagging means “I believe this address is for one-time use only”. Since we can’t control how other people manage their addresses, we should never do tagging when paying to other people.
>
> I mentioned 2 ways of tagging, and they have pros and cons. First of all, tagging either way should not complicate the eltoo protocol in any way, nor bring extra block space overhead.
>
> A clear advantage of tagging with the scriptPubKey is that we could tag on a per-output basis. However, scriptPubKey tagging is only possible with native segwit, not P2SH. That means we have to disallow NOINPUT in P2SH-segwit (otherwise, *all* P2SH addresses would become “risky” for payers). This should be ok for eltoo, since it has no reason to use P2SH-segwit in intermediate txs, which is more expensive.
>
> Another problem with scriptPubKey tagging is that all the existing bech32 implementations will not understand the special tag, and will pay to a tagged address as usual. An upgrade would be needed for them to refuse sending to tagged addresses by default.
>
> On the other hand, tagging with the tx version will also protect P2SH-segwit, and all existing wallets are protected by default. However, it is somewhat a layer violation, and you could only tag all or no outputs in the same tx. Also, as Bitcoin Core has just removed the tx version from the UTXO database, adding it back could be a little bit annoying, but doable.
>
> There is an extension to the version tagging which could make NOINPUT even safer. In addition to the tagging requirement, NOINPUT will also sign the version of the previous tx. If the wallet always uses a randomised tx version, accidental replay becomes very unlikely. However, that will burn a few more bits in the tx version field.
>
> While this seems fully compatible with eltoo, are there any other proposals that require NOINPUT and are adversely affected by either way of tagging?
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists•linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT
@ 2019-02-09 0:39 99% ` Pieter Wuille
0 siblings, 0 replies; 58+ results
From: Pieter Wuille @ 2019-02-09 0:39 UTC (permalink / raw)
To: Rusty Russell, Bitcoin Protocol Discussion
On Wed, 19 Dec 2018 at 18:06, Rusty Russell via bitcoin-dev
<bitcoin-dev@lists•linuxfoundation.org> wrote:
>
> Meanwhile, both SIGHASH_NOINPUT and OP_MASK have the reuse-is-dangerous
> property; with OP_MASK the danger is limited to reuse-on-the-same-script
> (ie. if you use the same key for a non-lightning output and a lightning
> output, you're safe with OP_MASK. However, this is far less likely in
> practice).
Having had some more time to consider this and seeing discussions
about alternatives, I agree. It doesn't seem that OP_MASK protects
against any likely failure modes. I do think that there are realistic
risks around NOINPUT, but output tagging (as suggested in another ML
thread) seems to match those much better than masking does.
Cheers,
--
Pieter
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-08 19:01 99% ` Jonas Nick
@ 2019-02-09 10:01 99% ` Alejandro Ranchal Pedrosa
2019-02-09 16:48 99% ` Johnson Lau
2019-02-09 16:54 99% ` Jonas Nick
2019-02-09 10:15 99% ` Johnson Lau
1 sibling, 2 replies; 58+ results
From: Alejandro Ranchal Pedrosa @ 2019-02-09 10:01 UTC (permalink / raw)
To: bitcoin-dev
Hi all,
>
> Side note: I was not able to come up with an similar, eltoo-like
> protocol that works
> if you can't predict in advance who will become absent.
>
An eltoo-like protocol that works (without going on-chain) even if you
can't predict in advance who will become absent would be a childchain. If
the off-chain protocol can continue updating in the absence of some
parties, then those parties' signatures must not be required when they
are not involved in the off-chain state update. If their signatures are
not required, there must be a way of establishing a common verifiable
'last state' to prevent a party from simultaneously 'fork'ing the state
with two different parties and double-spending. A solution for this is a
childchain for Bitcoin. An example of such a double-spend is what is
known as a 'Broken Factory' attack [1]
(https://bitcoin.stackexchange.com/questions/77434/how-does-channel-factory-act/81005#81005)
> If the expectation is that the unresponsive party returns, fungibility is
> not reduced due to output tagging because the above scheme can be used
> off-chain until the original channel can be continued.
I believe that in many cases the other parties won't be able to continue
until the unresponsive parties go back online. Your scheme might work in
particular scenarios, but generally speaking, the party might have gone
unresponsive during a factory-level update (i.e. an off-chain closing and
opening of channels), while some parties might have given out their
signature for the update without receiving a fully signed transaction.
In this case they do not even know which channel they have open (the one
for the old state that they have fully signed, or the one for the new
state that they have given out their signature for). This is known as a
'Stale Factory', and can be exploited by an adversary in a 'Stale
Factory' attack [1]. Even if they knew which state they are in (i.e. the
party went unresponsive but not during a factory-level update), some of
them might have run out of funds in some of their channels of the
factory and might want to update, while being unwilling to wait for a
party to come back online (something for which they have zero
guarantees).
As for an eltoo-like protocol that works (allowing going on-chain) even
if you can't predict in advance who will become absent: this is
precisely why 'Transaction Fragments' have been suggested. They allow an
eltoo-like protocol even when one cannot predict in advance who will
become absent or malicious (by publishing invalid states), because the
non-absent parties can unite their fragments and create a valid,
spendable factory-level transaction that effectively kicks out the
malicious parties while leaving the rest of the factory as it was. To
the best of my understanding, the original eltoo proposal also allows
this, though.
Best,
Alejandro.
[1]: Scalable Lightning Factories for Bitcoin,
https://eprint.iacr.org/2018/918.pdf
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-08 19:01 99% ` Jonas Nick
2019-02-09 10:01 99% ` Alejandro Ranchal Pedrosa
@ 2019-02-09 10:15 99% ` Johnson Lau
2019-02-09 16:52 99% ` Jonas Nick
1 sibling, 1 reply; 58+ results
From: Johnson Lau @ 2019-02-09 10:15 UTC (permalink / raw)
To: Jonas Nick; +Cc: bitcoin-dev
This is really interesting. If I get it correctly, I think the fungibility hit could be avoided just by making one more signature, without affecting blockchain space usage.
Just some terminology first. In a 3-party channel, “main channel” means the one that requires all parties to update, and “branch channel” the one that requires only 2 parties to update.
By what you describe, I think the most realistic scenario is “C is going offline soon, and may or may not return. So the group wants to keep the main channel open, and create a branch channel for A and B during the absence of C”. I guess this is what you mean by being able to "predict in advance who will become absent”.
I call this process “semi-cooperative channel closing” (SCCC). During a SCCC, the settlement tx will have 2 outputs: one as (A & B), one as (C). Therefore, a branch channel could be opened with the (A & B) output. The channel opening must use a NOINPUT signature, since we don’t know the txid of the settlement tx. With the output tagging requirement, (A & B) must be tagged, leading to the fungibility loss you described.
However, it is possible to make 2 settlement txs during SCCC. Outputs of the settlement tx X are tagged(A&B) and C. Outputs of the settlement tx Y are untagged(A&B) and C. Both X and Y are BIP68 relative-time-locked, but Y has a longer time lock.
The branch channel is opened on top of the tagged output of tx X. If A and B want to close the channel without C, they need to publish the last update tx of the main channel. Once the update tx is confirmed, its txid becomes permanent, so are the txids of X and Y. If A and B decide to close the channel cooperatively, they could do it on top of the untagged output of tx Y, without using NOINPUT. There won’t be any fungibility loss. Other people will only see the uncooperative closing of the main channel, and couldn’t even tell the number of parties in the main channel. Unfortunately, the unusual long lock time of Y might still tell something.
If anything goes wrong, A or B could publish X before the lock time of Y, and settle it through the usual eltoo style. Since this is an uncooperative closing anyway, the extra fungibility loss due to tagging is next to nothing. However, it may suggest that the main channel was a multi-party one.
For C, the last update tx of the main channel and the settlement tx Y are the only things he needs to get the money back. C has to sign tx X, but he shouldn’t get the complete tx X. Otherwise, C might have an incentive to publish X in order to get the money back earlier, at the cost of fungibility loss of the branch channel.
To minimise the fungibility loss, we’d better make it a social norm: if you sign your tx with NOINPUT, always try to make all outputs tagged to be NOINPUT-spendable. (NOTE: you can still spend tagged outputs with normal signatures, so this won’t permanently taint your coins as NOINPUT-spendable.) It makes sense because the use of a NOINPUT signature strongly suggests that you don’t know the txid of the parent tx, so you most likely want your outputs to be NOINPUT-spendable as well. I thought of making this a policy or consensus rule, but maybe it’s just overkill.
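The two-settlement-tx trick above can be sketched as a toy model. The BIP68 delays of 144 and 1008 blocks and the output values are illustrative assumptions, not numbers from the post:

```python
from collections import namedtuple

Settlement = namedtuple("Settlement", "name outputs csv_delay")

# X: tagged (A & B) output, shorter relative lock; anchors the branch channel.
X = Settlement("X", [("tagged(A&B)", 5), ("C", 6)], csv_delay=144)
# Y: untagged (A & B) output, longer BIP68 delay; the cooperative path.
Y = Settlement("Y", [("untagged(A&B)", 5), ("C", 6)], csv_delay=1008)

def spendable(settlement, blocks_since_update_confirmed):
    """BIP68-style check: the settlement can confirm only after its delay."""
    return blocks_since_update_confirmed >= settlement.csv_delay

# Shortly after the main channel's update tx confirms, only X is usable:
assert spendable(X, 200) and not spendable(Y, 200)
# If A and B stay cooperative past Y's longer delay, they can settle via
# the untagged output and avoid the fungibility loss entirely:
assert spendable(Y, 1200)
```

Publishing X early is the fallback path when something goes wrong; waiting out Y's longer delay is the cooperative, untagged path that preserves fungibility.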
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-09 10:01 99% ` Alejandro Ranchal Pedrosa
@ 2019-02-09 16:48 99% ` Johnson Lau
2019-02-10 4:46 99% ` Anthony Towns
2019-02-09 16:54 99% ` Jonas Nick
1 sibling, 1 reply; 58+ results
From: Johnson Lau @ 2019-02-09 16:48 UTC (permalink / raw)
To: Alejandro Ranchal Pedrosa, bitcoin-dev
In a 3-party channel, say the balances for A, B, and C are 2, 3, and 6 BTC respectively; there are a few ways they could make the settlement tx.
The first type we may call a “simple settlement”, which has 3 outputs with A=2, B=3, C=6.
The second type we may call a “fully combinatorial settlement”, which has 3 outputs: (A & B), (B & C), and (A & C). The value distribution is flexible, but never exceeds the total balance of the involved parties. For example, (A & B) may have any value between 0 and 5 BTC. For the following example, I will use (A & B) = 3; (B & C) = 6; (A & C) = 2, but there are infinitely many valid combinations.
The third type we may call a “partially combinatorial settlement”. It may have 2 multi-sig outputs, for example (A & B) = 4 and (B & C) = 7; or 1 multi-sig output and 1 single-sig output, for example (A & B) = 5 and C = 6 (known as “semi-cooperative channel closing”, SCCC, in my last post).
I’ll just focus on the fully combinatorial settlement. The partial type works in the same way, with benefits and limitations.
In a combinatorial settlement, the multi-sig outputs are actually eltoo-style scripts. Therefore, A and B will further distribute the value of (A & B) by a 2-party eltoo channel (“branch channels"). Again, there are infinitely many valid ways to distribute the values. If the AB branch channel is distributed as A=1 and B=2, then the BC channel must be B=1 and C=5, and the AC channel must be A=1 and C=1.
A clear benefit of this model is that any 2 parties could trade with each other, in the absence of any other party(s), as long as there is enough liquidity in their branch channel. There is also no way to “fork” the state, because liquidity is restricted to each branch channel. In some ways, this is like the existing lightning network where the 3 parties have direct channels with each other. However, this is superior to the lightning network, because when the 3 parties are online simultaneously, they could re-distribute the channel capacities without closing any channels. They could even change it to a partially combinatorial settlement. If they find that A and C rarely trade with each other, they could remove the (A & C) output and improve the capacities of the remaining channels. If C is going offline for a week, they could make it (A & B) and C (aka SCCC), which will maximise the capacity of the AB branch channel and minimise the cost in case C is not coming back.
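The consistency constraints in this example can be checked mechanically. This sketch just encodes the numbers given above:

```python
# On-chain balances from the example: A=2, B=3, C=6 BTC.
balances = {"A": 2, "B": 3, "C": 6}

# One chosen capacity distribution for the three 2-of-2 settlement outputs.
channel_caps = {("A", "B"): 3, ("B", "C"): 6, ("A", "C"): 2}

# The capacities must sum to the total balance of all parties.
assert sum(channel_caps.values()) == sum(balances.values())

# Branch-channel state: AB is A=1/B=2, so BC must be B=1/C=5 and AC A=1/C=1.
branch = {
    ("A", "B"): {"A": 1, "B": 2},
    ("B", "C"): {"B": 1, "C": 5},
    ("A", "C"): {"A": 1, "C": 1},
}

# Each branch channel distributes exactly its settlement output's capacity...
for pair, dist in branch.items():
    assert sum(dist.values()) == channel_caps[pair]

# ...and each party's branch balances sum to their overall balance.
for party, total in balances.items():
    assert sum(d.get(party, 0) for d in branch.values()) == total
```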
A problem with combinatorial settlement is the increased cost of uncooperative settlement: it gets more expensive as more parties are missing. Simple settlement has the same settlement cost for any number of missing parties. However, if even one party is missing, a simply settled channel will cease to function and force an immediate on-chain settlement. In combinatorial settlement, the surviving parties may keep trading, possibly with reduced capacity depending on the exact settlement model, and in the meantime hope that the missing parties return.
It requires 6 outputs for 4 parties doing fully combinatorial settlement, 10 outputs for 5 parties, 15 outputs for 6 parties, etc. However, with many parties, not every party will want to trade with all the other parties, and those branch channels might be omitted to improve the capacities of the other channels. If some pairs want to trade without a direct branch channel, they might try to find a third (internal) party to forward the tx. The next time all parties are online, they could rearrange the branch channel capacities at no cost.
The combinatorial settlement model could be generalised to a hierarchical settlement model, where we might have 4 settlement outputs (A&B&C), (A&B&D), (A&C&D), (B&C&D) for a 4-party channel, and each settlement output will have 3 branch channels. If A is missing, for example, we will still have one BC branch channel, one BD branch channel, one CD branch channel, and one BCD 3-party branch channel. The benefit of having a BCD 3-party branch channel is that the 3 parties could rearrange the channel capacities without involving A. Let’s say D is going on vacation; he could do a SCCC in the BCD branch channel to maximise the capacity of its BC channel. Without the involvement of A, however, the capacities of the other BC, BD, and CD branch channels are not modifiable, and B and C’s balances in the BD/CD channels are frozen during the absence of D.
As the number of parties increases, the number of settlement txs grows factorially in a fully hierarchical settlement model, and will soon be out of control. The result could be catastrophic if many parties are gone. So the group needs to continuously evaluate the risk of each party going missing, and modify the settlement model accordingly.
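The output counts quoted above are just the number of unordered pairs of parties, C(n, 2). A quick check (requires Python 3.8+ for `math.comb`):

```python
from math import comb

def pairwise_outputs(n_parties):
    """Settlement outputs in a fully combinatorial settlement: one 2-of-2
    output per pair of parties, i.e. C(n, 2) = n*(n-1)/2."""
    return comb(n_parties, 2)

# Matches the figures in the post: 3, 4, 5, 6 parties need 3, 6, 10, 15 outputs.
assert [pairwise_outputs(n) for n in (3, 4, 5, 6)] == [3, 6, 10, 15]

# The hierarchical variant uses one output per (n-1)-party subset at the top
# level, e.g. 4 outputs for a 4-party channel, each with its own sub-channels.
assert comb(4, 3) == 4
```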
> On 9 Feb 2019, at 6:01 PM, Alejandro Ranchal Pedrosa via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
>
> Hi all,
>>
>> Side note: I was not able to come up with an similar, eltoo-like protocol that works
>> if you can't predict in advance who will become absent.
>>
> An eltoo-like protocol that works (without going on-chain) if you can't predict in advance who will become absent would be a childchain. If the off-chain protocol can continue updating in the abscence of other parties, it means that other parties' signatures must not be required when they are not involved in the off-chain state update. If other parties' signatures must not be required, there must be a way of having a common verifiable 'last state' to prevent a party to simultaneously 'fork' the state with two different parties, and double-spend. A solution for this is a childchain for Bitcoin. An example of this is what is known as a 'Broken Factory' attack [1] (https://bitcoin.stackexchange.com/questions/77434/how-does-channel-factory-act/81005#81005)
>
>> If the expectation is that the unresponsive party returns, fungibility is
>> not reduced due to output tagging because the above scheme can be used
>> off-chain until the original channel can be continued.
>
> I believe that in many cases other parties won't be able to continue until the unresponsive parties go back online. That might be true in particular scenarios, but generally speaking, the party might have gone unresponsive during a factory-level update (i.e. off-chain closing and opening of channels), while some parties might have given out their signature for the update without receiving a fully signed transaction. In this case they do not even know which channel they have open (the one of the old state that they have fully signed, or the one for the new state that they have given out their signature for). This is known as a 'Stale Factory', and can be exploited by an adversary in a 'Stale Factory' attack [1]. Even if they knew which state they are in (i.e. the party went unresponsive but not during a factory-level update), some of them might have run out of funds in some of their channels of the factory, and might want to update, while they will not be willing to wait for a party to go back online (something for which they also have zero guarantees of).
>
> As for an eltoo-like protocol that works (allowing going on-chain) if you can't predict in advance who will become absent: this is precisely why 'Transaction Fragments' have been suggested. They allow an eltoo-like protocol even when one cannot predict in advance who will become absent, or malicious (by publishing invalid states), because the non-absent parties can unite their fragments and create a valid spendable factory-level transaction that effectively kicks out the malicious parties, while leaving the rest of the factory as it was. To the best of my understanding, the eltoo original proposal also allows this though.
>
> Best,
>
> Alejandro.
>
> [1]: Scalable Lightning Factories for Bitcoin, https://eprint.iacr.org/2018/918.pdf
>
>
> On 08/02/2019 20:01, Jonas Nick via bitcoin-dev wrote:
>> Output tagging may result in reduced fungibility in multiparty eltoo channels.
>> If one party is unresponsive, the remaining participants want to remove
>> the party from the channel without downtime. This is possible by creating
>> settlement transactions which pay off the unresponsive party and fund a new
>> channel with the remaining participants.
>>
>> When the party becomes unresponsive, the channel is closed by broadcasting the
>> update transaction as usual. As soon as that happens the remaining
>> participants can start to update their new channel. Their update signatures
>> must use SIGHASH_NOINPUT. This is because in eltoo the settlement txid is not
>> final (because update tx is not confirmed and may have to rebind to another
>> output). Therefore, the funding output of the new channel must be NOINPUT
>> tagged. Assuming the remaining parties later settle cooperatively, this loss
>> of fungibility would not have happened without output tagging.
>>
>> funding output           update output                                     settlement outputs                 update output
>> [ A & B & C ]  -> ... -> [ (A & B & C & state CLTV) | (As & Bs & Cs) ] ->  [ NOINPUT tagged: (A' & B'), C' ] -> ...
>> If the expectation is that the unresponsive party returns, fungibility is
>> not reduced due to output tagging because the above scheme can be used
>> off-chain until the original channel can be continued.
>>
>> Side note: I was not able to come up with a similar, eltoo-like protocol that works
>> if you can't predict in advance who will become absent.
>>
>> On 12/13/18 12:32 PM, Johnson Lau via bitcoin-dev wrote:
>>> NOINPUT is very powerful, but the tradeoff is the risk of signature replay. While key holders are expected not to reuse key pairs, little could be done to stop payers from reusing an address. Unfortunately, key-pair reuse has been a social and technical norm since the creation of Bitcoin (the first tx made in block 170 reused the previous public key). I don’t see any hope to change this norm any time soon, if possible at all.
>>>
>>> As the people who are designing the layer-1 protocol, we could always blame the payer and/or payee for their stupidity, just like those people who laughed at victims of Ethereum dumb contracts (DAO, Parity multisig, etc). The existing bitcoin script language is so restrictive. It disallows many useful smart contracts, but at the same time prevents many dumb contracts. After all, “smart” and “dumb” are non-technical judgements. The DAO contract has always been faithfully executed. It’s dumb only for those invested in the project. For me, it was just a comedy show.
>>>
>>> So NOINPUT brings us more smart contract capacity, and at the same time we are one step closer to dumb contracts. The target is to find a design that exactly enables the smart contracts we want, while minimising the risks of misuse.
>>>
>>> The risk I am trying to mitigate is a payer mistakenly paying a previous address with exactly the same amount, where the previous UTXO has been spent using NOINPUT. Accidental double payment is not uncommon. Even if the payee was honest and willing to refund, the money might have been spent with a replayed NOINPUT signature. Once people lose a significant amount of money this way, payers (mostly exchanges) may refuse to send money to anything other than P2PKH, native-P2WPKH and native-P2WSH (the only 3 types without the possibility of NOINPUT).
>>>
>>> The proposed solution is that an output must be “tagged” for it to be spendable with NOINPUT, and the “tag” must be made explicitly by the payer. There are 2 possible ways to do the tagging:
>>>
>>> 1. A certain bit in the tx version must be set
>>> 2. A certain bit in the scriptPubKey must be set
>>>
>>> I will analyse the pros and cons later.
>>>
>>> Using eltoo as an example. The setup utxo is a simple 2-of-2 multisig, and should not be tagged. This makes it indistinguishable from a normal 1-of-1 utxo. The trigger tx, which spends the setup utxo, should be tagged, so the update txs could spend the trigger utxo with NOINPUT. Similarly, all update txs should be tagged, so they could be spent by other update txs and the settlement tx with NOINPUT. As the final destination, there is no need for tagging in the settlement tx.
>>>
>>> From the payer’s perspective, tagging means “I believe this address is for one-time use only”. Since we can’t control how other people manage their addresses, we should never do tagging when paying to other people.
>>>
>>> I mentioned 2 ways of tagging, and they have pros and cons. First of all, tagging in either way should not complicate the eltoo protocol in any way, nor bring extra block space overhead.
>>>
>>> A clear advantage of tagging with scriptPubKey is we could tag on a per-output basis. However, scriptPubKey tagging is only possible with native-segwit, not P2SH. That means we have to disallow NOINPUT in P2SH-segwit (Otherwise, *all* P2SH addresses would become “risky” for payers) This should be ok for eltoo, since it has no reason to use P2SH-segwit in intermediate txs, which is more expensive.
>>>
>>> Another problem with scriptPubKey tagging is all the existing bech32 implementations will not understand the special tag, and will pay to a tagged address as usual. An upgrade would be needed for them to refuse sending to tagged addresses by default.
>>>
>>> On the other hand, tagging with tx version will also protect P2SH-segwit, and all existing wallets are protected by default. However, it is somewhat a layer violation and you could only tag all or none output in the same tx. Also, as Bitcoin Core has just removed the tx version from the UTXO database, adding it back could be a little bit annoying, but doable.
>>>
>>> There is an extension to the version tagging, which could make NOINPUT even safer. In addition to tagging requirement, NOINPUT will also sign the version of the previous tx. If the wallet always uses a randomised tx version, it makes accidental replay very unlikely. However, that will burn a few more bits in the tx version field.
>>>
>>> While this seems fully compatible with eltoo, are there any other proposals that require NOINPUT and would be adversely affected by either way of tagging?
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists•linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-09 10:15 99% ` Johnson Lau
@ 2019-02-09 16:52 99% ` Jonas Nick
2019-02-09 17:43 99% ` Johnson Lau
0 siblings, 1 reply; 58+ results
From: Jonas Nick @ 2019-02-09 16:52 UTC (permalink / raw)
To: Johnson Lau; +Cc: bitcoin-dev
Johnson's modification solves the issue I pointed out.
Moreover, as Johnson and I discussed in private, using different locktimes for
X and Y is not necessary. They can have the same relative locktime. If A and B
only sign Y once the update tx is confirmed, there is no risk of Y
changing its txid and therefore invalidating updates built on it.
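The txid argument above can be sketched in miniature. This is a hypothetical toy model, not real Bitcoin serialization; the byte strings and the `settlement` helper are illustrative assumptions, chosen only to show why signing Y before the update tx confirms risks invalidation:

```python
import hashlib

def txid(tx_bytes: bytes) -> str:
    """Double-SHA256 txid, as Bitcoin computes it."""
    return hashlib.sha256(hashlib.sha256(tx_bytes).digest()).hexdigest()

def settlement(prev_txid: str, outputs: str) -> bytes:
    """Toy serialization: a settlement tx commits to its parent's txid."""
    return f"{prev_txid}:{outputs}".encode()

# Before the update tx confirms, it may rebind to a different output, so
# its own bytes (and txid) are not final, and neither is any child txid.
update_unconfirmed = b"update-tx-bound-to-output-1"
update_confirmed = b"update-tx-bound-to-output-2"  # rebinding changed the bytes

y_early = settlement(txid(update_unconfirmed), "untagged(A&B), C")
y_late = settlement(txid(update_confirmed), "untagged(A&B), C")
assert txid(y_early) != txid(y_late)  # signing Y early risks invalidation

# Once the update tx is confirmed, its txid is permanent, so A and B can
# then sign Y (a normal, non-NOINPUT spend) with a stable txid.
y1 = settlement(txid(update_confirmed), "untagged(A&B), C")
y2 = settlement(txid(update_confirmed), "untagged(A&B), C")
assert txid(y1) == txid(y2)
```

The same reasoning is why both X and Y can share one relative locktime: the locktime only starts counting from confirmation of the update tx, at which point all parent txids are fixed.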
On 2/9/19 10:15 AM, Johnson Lau wrote:
> This is really interesting. If I get it correctly, I think the fungibility hit could be avoided, just by making one more signature, and not affecting the blockchain space usage.
>
> Just some terminology first. In a 3-party channel, “main channel” means the one that requires all parties to update, and “branch channel” one that requires only 2 parties to update.
>
> By what you describe, I think the most realistic scenario is “C is going offline soon, and may or may not return. So the group wants to keep the main channel open, and create a branch channel for A and B, during the absence of C”. I guess this is what you mean by being able to "predict in advance who will become absent”.
>
> I call this process “semi-cooperative channel closing” (SCCC). During an SCCC, the settlement tx will have 2 outputs: one as (A & B), one as (C). Therefore, a branch channel could be opened with the (A & B) output. The channel opening must use a NOINPUT signature, since we don’t know the txid of the settlement tx. With the output tagging requirement, (A & B) must be tagged, leading to the fungibility loss you described.
>
> However, it is possible to make 2 settlement txs during SCCC. Outputs of the settlement tx X are tagged(A&B) and C. Outputs of the settlement tx Y are untagged(A&B) and C. Both X and Y are BIP68 relative-time-locked, but Y has a longer time lock.
>
> The branch channel is opened on top of the tagged output of tx X. If A and B want to close the channel without C, they need to publish the last update tx of the main channel. Once the update tx is confirmed, its txid becomes permanent, and so are the txids of X and Y. If A and B decide to close the channel cooperatively, they could do it on top of the untagged output of tx Y, without using NOINPUT. There won’t be any fungibility loss. Other people will only see the uncooperative closing of the main channel, and couldn’t even tell the number of parties in the main channel. Unfortunately, the unusually long lock time of Y might still tell something.
>
> If anything goes wrong, A or B could publish X before the lock time of Y, and settle it through the usual eltoo style. Since this is an uncooperative closing anyway, the extra fungibility loss due to tagging is next to nothing. However, it may suggest that the main channel was a multi-party one.
>
> For C, the last update tx of the main channel and the settlement tx Y are the only things he needs to get the money back. C has to sign tx X, but he shouldn’t get the complete tx X. Otherwise, C might have an incentive to publish X in order to get the money back earlier, at the cost of fungibility loss of the branch channel.
>
> To minimise the fungibility loss, we’d better make it a social norm: if you sign your tx with NOINPUT, always try to make all outputs tagged to be NOINPUT-spendable. (NOTE: you can still spend tagged outputs with normal signatures, so this won’t permanently taint your coins as NOINPUT-spendable) It makes sense because the use of a NOINPUT signature strongly suggests that you don’t know the txid of the parent tx, so you may most likely want your outputs to be NOINPUT-spendable as well. I thought of making this a policy or consensus rule, but maybe it’s just overkill.
>
>
>
>> On 9 Feb 2019, at 3:01 AM, Jonas Nick <jonasdnick@gmail•com> wrote:
>>
>> Output tagging may result in reduced fungibility in multiparty eltoo channels.
>> If one party is unresponsive, the remaining participants want to remove
>> the party from the channel without downtime. This is possible by creating
>> settlement transactions which pay off the unresponsive party and fund a new
>> channel with the remaining participants.
>>
>> When the party becomes unresponsive, the channel is closed by broadcasting the
>> update transaction as usual. As soon as that happens the remaining
>> participants can start to update their new channel. Their update signatures
>> must use SIGHASH_NOINPUT. This is because in eltoo the settlement txid is not
>> final (because update tx is not confirmed and may have to rebind to another
>> output). Therefore, the funding output of the new channel must be NOINPUT
>> tagged. Assuming the remaining parties later settle cooperatively, this loss
>> of fungibility would not have happened without output tagging.
>>
>> funding output           update output                                     settlement outputs                 update output
>> [ A & B & C ]  -> ... -> [ (A & B & C & state CLTV) | (As & Bs & Cs) ] ->  [ NOINPUT tagged: (A' & B'), C' ] -> ...
>> If the expectation is that the unresponsive party returns, fungibility is
>> not reduced due to output tagging because the above scheme can be used
>> off-chain until the original channel can be continued.
>>
>> Side note: I was not able to come up with a similar, eltoo-like protocol that works
>> if you can't predict in advance who will become absent.
>>
>> On 12/13/18 12:32 PM, Johnson Lau via bitcoin-dev wrote:
>>> NOINPUT is very powerful, but the tradeoff is the risk of signature replay. While key holders are expected not to reuse key pairs, little could be done to stop payers from reusing an address. Unfortunately, key-pair reuse has been a social and technical norm since the creation of Bitcoin (the first tx made in block 170 reused the previous public key). I don’t see any hope to change this norm any time soon, if possible at all.
>>>
>>> As the people who are designing the layer-1 protocol, we could always blame the payer and/or payee for their stupidity, just like those people who laughed at victims of Ethereum dumb contracts (DAO, Parity multisig, etc). The existing bitcoin script language is so restrictive. It disallows many useful smart contracts, but at the same time prevents many dumb contracts. After all, “smart” and “dumb” are non-technical judgements. The DAO contract has always been faithfully executed. It’s dumb only for those invested in the project. For me, it was just a comedy show.
>>>
>>> So NOINPUT brings us more smart contract capacity, and at the same time we are one step closer to dumb contracts. The target is to find a design that exactly enables the smart contracts we want, while minimising the risks of misuse.
>>>
>>> The risk I am trying to mitigate is a payer mistakenly paying a previous address with exactly the same amount, where the previous UTXO has been spent using NOINPUT. Accidental double payment is not uncommon. Even if the payee was honest and willing to refund, the money might have been spent with a replayed NOINPUT signature. Once people lose a significant amount of money this way, payers (mostly exchanges) may refuse to send money to anything other than P2PKH, native-P2WPKH and native-P2WSH (the only 3 types without the possibility of NOINPUT).
>>>
>>> The proposed solution is that an output must be “tagged” for it to be spendable with NOINPUT, and the “tag” must be made explicitly by the payer. There are 2 possible ways to do the tagging:
>>>
>>> 1. A certain bit in the tx version must be set
>>> 2. A certain bit in the scriptPubKey must be set
>>>
>>> I will analyse the pros and cons later.
>>>
>>> Using eltoo as an example. The setup utxo is a simple 2-of-2 multisig, and should not be tagged. This makes it indistinguishable from a normal 1-of-1 utxo. The trigger tx, which spends the setup utxo, should be tagged, so the update txs could spend the trigger utxo with NOINPUT. Similarly, all update txs should be tagged, so they could be spent by other update txs and the settlement tx with NOINPUT. As the final destination, there is no need for tagging in the settlement tx.
>>>
>>> From the payer’s perspective, tagging means “I believe this address is for one-time use only”. Since we can’t control how other people manage their addresses, we should never do tagging when paying to other people.
>>>
>>> I mentioned 2 ways of tagging, and they have pros and cons. First of all, tagging in either way should not complicate the eltoo protocol in any way, nor bring extra block space overhead.
>>>
>>> A clear advantage of tagging with scriptPubKey is we could tag on a per-output basis. However, scriptPubKey tagging is only possible with native-segwit, not P2SH. That means we have to disallow NOINPUT in P2SH-segwit (Otherwise, *all* P2SH addresses would become “risky” for payers) This should be ok for eltoo, since it has no reason to use P2SH-segwit in intermediate txs, which is more expensive.
>>>
>>> Another problem with scriptPubKey tagging is all the existing bech32 implementations will not understand the special tag, and will pay to a tagged address as usual. An upgrade would be needed for them to refuse sending to tagged addresses by default.
>>>
>>> On the other hand, tagging with tx version will also protect P2SH-segwit, and all existing wallets are protected by default. However, it is somewhat a layer violation and you could only tag all or none output in the same tx. Also, as Bitcoin Core has just removed the tx version from the UTXO database, adding it back could be a little bit annoying, but doable.
>>>
>>> There is an extension to the version tagging, which could make NOINPUT even safer. In addition to tagging requirement, NOINPUT will also sign the version of the previous tx. If the wallet always uses a randomised tx version, it makes accidental replay very unlikely. However, that will burn a few more bits in the tx version field.
>>>
>>> While this seems fully compatible with eltoo, are there any other proposals that require NOINPUT and would be adversely affected by either way of tagging?
>>>
>
>
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-09 10:01 99% ` Alejandro Ranchal Pedrosa
2019-02-09 16:48 99% ` Johnson Lau
@ 2019-02-09 16:54 99% ` Jonas Nick
1 sibling, 0 replies; 58+ results
From: Jonas Nick @ 2019-02-09 16:54 UTC (permalink / raw)
To: Alejandro Ranchal Pedrosa via bitcoin-dev
<--- not replying to list as this is off-topic ---->
Hey Alejandro,
thanks for the pointer. Is there a summary of what the opcode you're proposing would look like?
Is pairing crypto strictly necessary or would interactive key aggregation schemes like Bellare-Neven
work as well?
Best,
Jonas
On 2/9/19 10:01 AM, Alejandro Ranchal Pedrosa via bitcoin-dev wrote:
> Hi all,
>>
>> Side note: I was not able to come up with a similar, eltoo-like protocol that works
>> if you can't predict in advance who will become absent.
>>
> An eltoo-like protocol that works (without going on-chain) if you can't predict in advance who will become absent would be a childchain. If the off-chain protocol can continue updating in the absence
> of other parties, it means that other parties' signatures must not be required when they are not involved in the off-chain state update. If other parties' signatures must not be required, there must
> be a way of having a common verifiable 'last state' to prevent a party from simultaneously 'fork'ing the state with two different parties and double-spending. A solution for this is a childchain for Bitcoin.
> An example of this risk is what is known as a 'Broken Factory' attack [1] (https://bitcoin.stackexchange.com/questions/77434/how-does-channel-factory-act/81005#81005)
>
>> If the expectation is that the unresponsive party returns, fungibility is
>> not reduced due to output tagging because the above scheme can be used
>> off-chain until the original channel can be continued.
>
> I believe that in many cases other parties won't be able to continue until the unresponsive parties go back online. That might be true in particular scenarios, but generally speaking, the party might
> have gone unresponsive during a factory-level update (i.e. off-chain closing and opening of channels), while some parties might have given out their signature for the update without receiving a fully
> signed transaction. In this case they do not even know which channel they have open (the one of the old state that they have fully signed, or the one for the new state that they have given out their
> signature for). This is known as a 'Stale Factory', and can be exploited by an adversary in a 'Stale Factory' attack [1]. Even if they knew which state they are in (i.e. the party went unresponsive
> but not during a factory-level update), some of them might have run out of funds in some of their channels of the factory, and might want to update, while they will not be willing to wait for a party
> to go back online (something for which they have no guarantee).
>
> As for an eltoo-like protocol that works (allowing going on-chain) if you can't predict in advance who will become absent: this is precisely why 'Transaction Fragments' have been suggested. They allow an
> eltoo-like protocol even when one cannot predict in advance who will become absent, or malicious (by publishing invalid states), because the non-absent parties can unite their fragments and create a
> valid spendable factory-level transaction that effectively kicks out the malicious parties, while leaving the rest of the factory as it was. To the best of my understanding, the eltoo original
> proposal also allows this though.
>
> Best,
>
> Alejandro.
>
> [1]: Scalable Lightning Factories for Bitcoin, https://eprint.iacr.org/2018/918.pdf
>
>
> On 08/02/2019 20:01, Jonas Nick via bitcoin-dev wrote:
>> Output tagging may result in reduced fungibility in multiparty eltoo channels.
>> If one party is unresponsive, the remaining participants want to remove
>> the party from the channel without downtime. This is possible by creating
>> settlement transactions which pay off the unresponsive party and fund a new
>> channel with the remaining participants.
>>
>> When the party becomes unresponsive, the channel is closed by broadcasting the
>> update transaction as usual. As soon as that happens the remaining
>> participants can start to update their new channel. Their update signatures
>> must use SIGHASH_NOINPUT. This is because in eltoo the settlement txid is not
>> final (because update tx is not confirmed and may have to rebind to another
>> output). Therefore, the funding output of the new channel must be NOINPUT
>> tagged. Assuming the remaining parties later settle cooperatively, this loss
>> of fungibility would not have happened without output tagging.
>>
>> funding output           update output                                     settlement outputs                 update output
>> [ A & B & C ]  -> ... -> [ (A & B & C & state CLTV) | (As & Bs & Cs) ] ->  [ NOINPUT tagged: (A' & B'), C' ] -> ...
>> If the expectation is that the unresponsive party returns, fungibility is
>> not reduced due to output tagging because the above scheme can be used
>> off-chain until the original channel can be continued.
>>
>> Side note: I was not able to come up with a similar, eltoo-like protocol that works
>> if you can't predict in advance who will become absent.
>>
>> On 12/13/18 12:32 PM, Johnson Lau via bitcoin-dev wrote:
>>> NOINPUT is very powerful, but the tradeoff is the risk of signature replay. While key holders are expected not to reuse key pairs, little could be done to stop payers from reusing an address.
>>> Unfortunately, key-pair reuse has been a social and technical norm since the creation of Bitcoin (the first tx made in block 170 reused the previous public key). I don’t see any hope to change this
>>> norm any time soon, if possible at all.
>>>
>>> As the people who are designing the layer-1 protocol, we could always blame the payer and/or payee for their stupidity, just like those people laughed at victims of Ethereum dumb contracts (DAO,
>>> Parity multisig, etc). The existing bitcoin script language is so restrictive. It disallows many useful smart contracts, but at the same time prevents many dumb contracts. After all, “smart” and
>>> “dumb” are non-technical judgements. The DAO contract has always been faithfully executed. It’s dumb only for those invested in the project. For me, it was just a comedy show.
>>>
>>> So NOINPUT brings us more smart contract capacity, and at the same time we are one step closer to dumb contracts. The target is to find a design that exactly enables the smart contracts we want,
>>> while minimising the risks of misuse.
>>>
>>> The risk I am trying to mitigate is a payer mistakenly paying a previous address with exactly the same amount, where the previous UTXO has been spent using NOINPUT. Accidental double payment is not
>>> uncommon. Even if the payee was honest and willing to refund, the money might have been spent with a replayed NOINPUT signature. Once people lose a significant amount of money this way, payers
>>> (mostly exchanges) may refuse to send money to anything other than P2PKH, native-P2WPKH and native-P2WSH (the only 3 types without the possibility of NOINPUT).
>>>
>>> The proposed solution is that an output must be “tagged” for it to be spendable with NOINPUT, and the “tag” must be made explicitly by the payer. There are 2 possible ways to do the tagging:
>>>
>>> 1. A certain bit in the tx version must be set
>>> 2. A certain bit in the scriptPubKey must be set
>>>
>>> I will analyse the pros and cons later.
>>>
>>> Using eltoo as an example. The setup utxo is a simple 2-of-2 multisig, and should not be tagged. This makes it indistinguishable from a normal 1-of-1 utxo. The trigger tx, which spends the setup utxo,
>>> should be tagged, so the update txs could spend the trigger utxo with NOINPUT. Similarly, all update txs should be tagged, so they could be spent by other update txs and the settlement tx with NOINPUT.
>>> As the final destination, there is no need for tagging in the settlement tx.
>>>
>>> From the payer’s perspective, tagging means “I believe this address is for one-time use only”. Since we can’t control how other people manage their addresses, we should never do tagging when paying to
>>> other people.
>>>
>>> I mentioned 2 ways of tagging, and they have pros and cons. First of all, tagging in either way should not complicate the eltoo protocol in any way, nor bring extra block space overhead.
>>>
>>> A clear advantage of tagging with scriptPubKey is we could tag on a per-output basis. However, scriptPubKey tagging is only possible with native-segwit, not P2SH. That means we have to disallow
>>> NOINPUT in P2SH-segwit (Otherwise, *all* P2SH addresses would become “risky” for payers) This should be ok for eltoo, since it has no reason to use P2SH-segwit in intermediate txs, which is more
>>> expensive.
>>>
>>> Another problem with scriptPubKey tagging is all the existing bech32 implementations will not understand the special tag, and will pay to a tagged address as usual. An upgrade would be needed for
>>> them to refuse sending to tagged addresses by default.
>>>
>>> On the other hand, tagging with tx version will also protect P2SH-segwit, and all existing wallets are protected by default. However, it is somewhat a layer violation and you could only tag all or
>>> none output in the same tx. Also, as Bitcoin Core has just removed the tx version from the UTXO database, adding it back could be a little bit annoying, but doable.
>>>
>>> There is an extension to the version tagging, which could make NOINPUT even safer. In addition to tagging requirement, NOINPUT will also sign the version of the previous tx. If the wallet always
>>> uses a randomised tx version, it makes accidental replay very unlikely. However, that will burn a few more bits in the tx version field.
>>>
>>> While this seems fully compatible with eltoo, are there any other proposals that require NOINPUT and would be adversely affected by either way of tagging?
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-09 16:52 99% ` Jonas Nick
@ 2019-02-09 17:43 99% ` Johnson Lau
0 siblings, 0 replies; 58+ results
From: Johnson Lau @ 2019-02-09 17:43 UTC (permalink / raw)
To: Jonas Nick; +Cc: bitcoin-dev
And this scheme could be generalised to the combinatorial settlement model in my earlier post.
Let’s say the settlement tx has 3 outputs: (A&B),(B&C),(A&C). There will be 4 versions of this tx:
tx-X: all 3 outputs are tagged, signed by all 3 parties
tx-Y-AB: output (A&B) is untagged, the other 2 outputs are tagged. Signed only by C
tx-Y-AC: output (A&C) is untagged, the other 2 outputs are tagged. Signed only by B
tx-Y-BC: ………
All 4 txs will have the same relative lock time.
If C is missing at the time of settlement, A and B will settle upon tx-Y-AB with a simple signature
If B and C are missing, A will settle upon tx-X
However, I think this is just overkill, and hardly improves fungibility. It is very clear that this is an uncooperative eltoo closing (due to the update tx of the main channel), and that this is a multi-party channel (due to multiple settlement outputs). There is little doubt that the remaining parties would like to continue trading. So there is actually no secret to hide, and it might be easier to just tag all outputs.
Nonetheless, this example shows that the fungibility impact of output tagging is quite manageable. Most likely you just need to prepare more versions of intermediate txs, and only use the tagged one when things go against you.
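The combinatorial construction above can be sketched as a short enumeration. This is a toy illustration only; the function name, the "tx-X"/"tx-Y-…" labels, and the tagged/untagged strings are assumptions mirroring the naming in the text, not part of any proposal:

```python
from itertools import combinations

def settlement_versions(parties):
    """Enumerate settlement tx versions for a multi-party channel:
    one tx-X with all pairwise outputs tagged, plus one tx-Y per pair
    in which only that pair's output is untagged."""
    pairs = [frozenset(p) for p in combinations(parties, 2)]
    versions = {"tx-X": {pair: "tagged" for pair in pairs}}
    for untagged in pairs:
        name = "tx-Y-" + "".join(sorted(untagged))
        versions[name] = {
            pair: ("untagged" if pair == untagged else "tagged")
            for pair in pairs
        }
    return versions

v = settlement_versions(["A", "B", "C"])
assert len(v) == 4                                   # tx-X plus one tx-Y per pair
assert v["tx-Y-AB"][frozenset({"A", "B"})] == "untagged"
assert v["tx-Y-AB"][frozenset({"A", "C"})] == "tagged"
```

For n parties this is 1 + C(n, 2) pre-signed settlement versions, which grows quadratically; that growth is one way to see why tagging all outputs may be the simpler choice, as the text concludes.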
> On 10 Feb 2019, at 12:52 AM, Jonas Nick <jonasdnick@gmail•com> wrote:
>
> Johnson's modification solves the issue I pointed out.
>
> Moreover, as Johnson and I discussed in private, using different locktimes for
> X and Y is not necessary. They can have the same relative locktime. If A and B
> only sign Y once the update tx is confirmed, there is no risk of Y
> changing its txid and therefore invalidating updates built on it.
>
>
> On 2/9/19 10:15 AM, Johnson Lau wrote:
>> This is really interesting. If I get it correctly, I think the fungibility hit could be avoided, just by making one more signature, and not affecting the blockchain space usage.
>>
>> Just some terminology first. In a 3-party channel, “main channel” means the one that requires all parties to update, and “branch channel” one that requires only 2 parties to update.
>>
>> By what you describe, I think the most realistic scenario is “C is going offline soon, and may or may not return. So the group wants to keep the main channel open, and create a branch channel for A and B, during the absence of C”. I guess this is what you mean by being able to "predict in advance who will become absent”.
>>
>> I call this process “semi-cooperative channel closing” (SCCC). During an SCCC, the settlement tx will have 2 outputs: one as (A & B), one as (C). Therefore, a branch channel could be opened with the (A & B) output. The channel opening must use a NOINPUT signature, since we don’t know the txid of the settlement tx. With the output tagging requirement, (A & B) must be tagged, leading to the fungibility loss you described.
>>
>> However, it is possible to make 2 settlement txs during SCCC. Outputs of the settlement tx X are tagged(A&B) and C. Outputs of the settlement tx Y are untagged(A&B) and C. Both X and Y are BIP68 relative-time-locked, but Y has a longer time lock.
>>
>> The branch channel is opened on top of the tagged output of tx X. If A and B want to close the channel without C, they need to publish the last update tx of the main channel. Once the update tx is confirmed, its txid becomes permanent, and so are the txids of X and Y. If A and B decide to close the channel cooperatively, they could do it on top of the untagged output of tx Y, without using NOINPUT. There won’t be any fungibility loss. Other people will only see the uncooperative closing of the main channel, and couldn’t even tell the number of parties in the main channel. Unfortunately, the unusually long lock time of Y might still tell something.
>>
>> If anything goes wrong, A or B could publish X before the lock time of Y, and settle it through the usual eltoo style. Since this is an uncooperative closing anyway, the extra fungibility loss due to tagging is next to nothing. However, it may suggest that the main channel was a multi-party one.
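The timing relationship between the two settlement txs above can be sketched as follows. This is a toy model, not real Bitcoin serialization, and the CSV values are illustrative assumptions, not numbers from the original mail:

```python
# Sketch of the SCCC timing logic: two pre-signed settlement txs spend
# the main channel's final update tx. X has the tagged (A & B) output
# and the shorter BIP68 relative lock; Y has the untagged output and
# the longer lock.

CSV_X = 100  # relative locktime (blocks) on settlement tx X -- assumed value
CSV_Y = 200  # settlement tx Y has the longer lock -- assumed value

def spendable_settlements(blocks_since_update):
    """Which settlement txs are broadcastable, given how many blocks
    have passed since the main channel's update tx confirmed."""
    options = []
    if blocks_since_update >= CSV_X:
        options.append("X")  # uncooperative path: NOINPUT branch channel
    if blocks_since_update >= CSV_Y:
        options.append("Y")  # cooperative path: no NOINPUT, no tagging
    return options
```

Between the two locks only X is usable (the uncooperative path); once Y's longer lock matures, A and B may instead close cooperatively on the untagged output.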
>>
>> For C, the last update tx of the main channel and the settlement tx Y are the only things he needs to get the money back. C has to sign tx X, but he shouldn’t get the complete tx X. Otherwise, C might have an incentive to publish X in order to get the money back earlier, at the cost of fungibility loss of the branch channel.
>>
>> To minimise the fungibility loss, we’d better make it a social norm: if you sign your tx with NOINPUT, always try to make all outputs tagged as NOINPUT-spendable. (NOTE: you can still spend tagged outputs with normal signatures, so this won’t permanently taint your coins as NOINPUT-spendable.) It makes sense because the use of a NOINPUT signature strongly suggests that you don’t know the txid of the parent tx, so you most likely want your outputs to be NOINPUT-spendable as well. I thought of making this a policy or consensus rule, but maybe it’s just overkill.
>>
>>
>>
>>> On 9 Feb 2019, at 3:01 AM, Jonas Nick <jonasdnick@gmail•com> wrote:
>>>
>>> Output tagging may result in reduced fungibility in multiparty eltoo channels.
>>> If one party is unresponsive, the remaining participants want to remove
>>> the party from the channel without downtime. This is possible by creating
>>> settlement transactions which pay off the unresponsive party and fund a new
>>> channel with the remaining participants.
>>>
>>> When the party becomes unresponsive, the channel is closed by broadcasting the
>>> update transaction as usual. As soon as that happens the remaining
>>> participants can start to update their new channel. Their update signatures
>>> must use SIGHASH_NOINPUT. This is because in eltoo the settlement txid is not
>>> final (because update tx is not confirmed and may have to rebind to another
>>> output). Therefore, the funding output of the new channel must be NOINPUT
>>> tagged. Assuming the remaining parties later settle cooperatively, this loss
>>> of fungibility would not have happened without output tagging.
>>>
>>>   funding output            update output                                 settlement outputs              update output
>>> [ A & B & C ] -> ... -> [ (A & B & C & state CLTV) | (As & Bs & Cs) ] -> [ NOINPUT tagged: (A' & B'), C' ] -> ...
>>> If the expectation is that the unresponsive party returns, fungibility is
>>> not reduced due to output tagging because the above scheme can be used
>>> off-chain until the original channel can be continued.
>>>
>>> Side note: I was not able to come up with a similar, eltoo-like protocol that works
>>> if you can't predict in advance who will become absent.
>>>
>>> On 12/13/18 12:32 PM, Johnson Lau via bitcoin-dev wrote:
>>>> NOINPUT is very powerful, but the tradeoff is the risk of signature replay. While key holders are expected not to reuse key pairs, little can be done to stop payers from reusing an address. Unfortunately, key-pair reuse has been a social and technical norm since the creation of Bitcoin (the first tx made in block 170 reused the previous public key). I don’t see any hope of changing this norm any time soon, if possible at all.
>>>>
>>>> As the people who are designing the layer-1 protocol, we could always blame the payer and/or payee for their stupidity, just like those people who laughed at victims of Ethereum dumb contracts (DAO, Parity multisig, etc). The existing bitcoin script language is so restrictive that it disallows many useful smart contracts, but at the same time it prevents many dumb contracts. After all, “smart” and “dumb” are non-technical judgements. The DAO contract has always been faithfully executed. It’s dumb only for those invested in the project. For me, it was just a comedy show.
>>>>
>>>> So NOINPUT brings us more smart contract capacity, and at the same time we are one step closer to dumb contracts. The target is to find a design that exactly enables the smart contracts we want, while minimising the risks of misuse.
>>>>
>>>> The risk I am trying to mitigate is a payer mistakenly paying to a previous address with exactly the same amount, where the previous UTXO has been spent using NOINPUT. Accidental double payment is not uncommon. Even if the payee was honest and willing to refund, the money might have been spent with a replayed NOINPUT signature. Once people lose a significant amount of money this way, payers (mostly exchanges) may refuse to send money to anything other than P2PKH, native-P2WPKH and native-P2WSH (as the only 3 types without the possibility of NOINPUT).
>>>>
>>>> The proposed solution is that an output must be “tagged” for it to be spendable with NOINPUT, and the “tag” must be made explicitly by the payer. There are 2 possible ways to do the tagging:
>>>>
>>>> 1. A certain bit in the tx version must be set
>>>> 2. A certain bit in the scriptPubKey must be set
>>>>
>>>> I will analyse the pros and cons later.
>>>>
>>>> Using eltoo as an example: the setup utxo is a simple 2-of-2 multisig, and should not be tagged. This makes it indistinguishable from a normal 1-of-1 utxo. The trigger tx, which spends the setup utxo, should be tagged, so the update txs could spend the trigger utxo with NOINPUT. Similarly, all update txs should be tagged, so they could be spent by other update txs and the settlement tx with NOINPUT. As the final destination, there is no need to tag in the settlement tx.
>>>>
>>>> From the payer’s perspective, tagging means “I believe this address is for one-time use only.” Since we can’t control how other people manage their addresses, we should never do tagging when paying to other people.
>>>>
>>>> I mentioned 2 ways of tagging, and they have pros and cons. First of all, tagging in either way should not complicate the eltoo protocol in any way, nor bring extra block space overhead.
>>>>
>>>> A clear advantage of tagging with scriptPubKey is we could tag on a per-output basis. However, scriptPubKey tagging is only possible with native-segwit, not P2SH. That means we have to disallow NOINPUT in P2SH-segwit (Otherwise, *all* P2SH addresses would become “risky” for payers) This should be ok for eltoo, since it has no reason to use P2SH-segwit in intermediate txs, which is more expensive.
>>>>
>>>> Another problem with scriptPubKey tagging is all the existing bech32 implementations will not understand the special tag, and will pay to a tagged address as usual. An upgrade would be needed for them to refuse sending to tagged addresses by default.
>>>>
>>>> On the other hand, tagging with the tx version will also protect P2SH-segwit, and all existing wallets are protected by default. However, it is somewhat a layer violation, and you could only tag all or none of the outputs in the same tx. Also, as Bitcoin Core has just removed the tx version from the UTXO database, adding it back could be a little bit annoying, but doable.
>>>>
>>>> There is an extension to the version tagging, which could make NOINPUT even safer. In addition to tagging requirement, NOINPUT will also sign the version of the previous tx. If the wallet always uses a randomised tx version, it makes accidental replay very unlikely. However, that will burn a few more bits in the tx version field.
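The randomised-version extension above can be sketched numerically. The number of free version bits below is an assumption for illustration, not a proposed layout:

```python
import random

# If NOINPUT also signs the previous tx's version and a wallet fills
# k free version bits at random, an accidental replay additionally
# requires the reused address's new funding tx to have drawn the same
# bits: probability 2**-k per accidental repayment.

FREE_BITS = 16  # assumed number of version bits available for randomization

def random_version_bits(rng):
    """Draw the free version bits from the wallet's RNG."""
    return rng.getrandbits(FREE_BITS)

def replay_binds(old_bits, new_bits):
    # A replayed NOINPUT signature only validates if the committed
    # version bits of the old tx match the new tx's version bits.
    return old_bits == new_bits

# Chance of an accidental match with 16 free bits:
collision_probability = 2.0 ** -FREE_BITS
```

With 16 free bits an accidental replay becomes a roughly 1-in-65536 event, at the cost of burning those bits of the version field.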
>>>>
>>>> While this seems fully compatible with eltoo, are there any other proposals that require NOINPUT and would be adversely affected by either way of tagging?
>>>> _______________________________________________
>>>> bitcoin-dev mailing list
>>>> bitcoin-dev@lists•linuxfoundation.org
>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>>
>>
>>
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-09 16:48 99% ` Johnson Lau
@ 2019-02-10 4:46 99% ` Anthony Towns
0 siblings, 0 replies; 58+ results
From: Anthony Towns @ 2019-02-10 4:46 UTC (permalink / raw)
To: Johnson Lau; +Cc: bitcoin-dev
On Sun, Feb 10, 2019 at 12:48:40AM +0800, Johnson Lau wrote:
> In a 3-party channel, let’s say the balances for A, B, C are 2, 3, and 6 BTC respectively; there are a few ways they could make the settlement tx.
The way I look at this is:
* you can have a "channel factory" of 3 or more members (A,B,C,...)
* it's protected by an n-of-n multisig output
* it contains some combination of:
- spends directly to members
- lightning channels between pairs of members
- channel factories between subgroups of members
* when initially setup, the factory just has direct spends to each
member matching the amount they contributed to the factory
* whether you create a lightning channel or a sub-factory is the same
decision as whether you create a lightning channel or a factory
on-chain, so there's no combinatorial explosion.
You can close any channel factory by publishing it (and any higher level
channel factories it was a subgroup of) to the blockchain (at which point
the lower level channel factories and lightning channels remain open),
or you can update a channel factory off-chain by having everyone agree
to a new state -- which is only possible if everyone is online, of course.
Updates to transactions in a lightning channel in a factory, or updates
to a subfactory, don't generally involve updating the containing factory
at all, I think.
I don't think there's much use to having sub-factories -- maybe if you
have a subgroup that's much more active and wants to change channel
balances between each other more frequently than the least active member
of the main factory is online?
As far as NOINPUT goes, this impacts channel factories because cheating
could be by any member of the group, so you can't easily penalise the
cheater. So an eltoo-esque setup where you publish a commitment to the
state that's spendable only by any later state, and is then redeemed
after a timelock seems workable. In that case closing a factory when
you can't get all group members to cooperatively close looks like:
  funding tx:       n-of-n multisig

  state commitment: n-of-n multisig
                    spends funding tx or earlier state commitment
                    spendable by later state commitment or settlement

  settlement:       n-of-n multisig
                    relative timelock
                    spends state commitment
                    spends to members, channels or sub-factories
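The state/settlement rules above can be modelled as a toy check. The class and field names are illustrative, not from any implementation:

```python
# Toy model of the eltoo-style factory close structure: a published
# state commitment can be replaced by any later state, and the
# settlement spends it only after the relative timelock matures with
# no newer state having appeared.

from dataclasses import dataclass

@dataclass
class State:
    number: int  # monotonically increasing eltoo state number

def can_replace(published: State, candidate: State) -> bool:
    # "spendable by later state commitment": strictly newer states win.
    return candidate.number > published.number

def settlement_spendable(published: State, newest_known: State,
                         blocks_elapsed: int, csv_delay: int) -> bool:
    # Settlement needs the published state to be final (no newer state
    # exists to replace it) and the relative timelock to have matured.
    return (not can_replace(published, newest_known)
            and blocks_elapsed >= csv_delay)
```

Publishing a stale state therefore only delays the honest parties by the CSV window, during which any later state can still claim the output.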
The settlement tx has to spend with a NOINPUT sig, because the state
commitment could have had to spend different things. If it's a
sub-factory, the funding tx will have been in a factory, so the state
commitment would also have had to be a NOINPUT spend. So tagging
NOINPUT-spendable outputs would mean:
- tagging state commitment outputs (which will be spent shortly with
NOINPUT by the settlement tx, so no real loss here)
- tagging settlement tx outputs if they're lightning channels or
sub-factories (which is something of a privacy loss, I think, since
they could continue off-chain for an indefinite period before being
spent)
I think Johnson's suggested elsewhere that if you spend an input with a
NOINPUT signature, you should make all the outputs be tagged NOINPUT (as
a "best practice rule", rather than consensus-enforced or standardness).
That would avoid the privacy loss here, I think, but might be confusing.
If you wanted to close your factory and send your funds to an external
third-party (a cold-wallet, custodial wallet, or just paying someone
for something), you'd presumably do that via a cooperative close of the
factory, which doesn't require the state/settlement pair or NOINPUT
spends, so the NOINPUT-in means NOINPUT-tagged-outputs doesn't cause
a problem for that use case.
FWIW, I think an interesting way to improve this model might be to *add*
centralisation and trust; so that instead of having the factory have
an n-of-n multisig, have it be protected by k-of-n plus a trusted third
party. If you have the trusted third party check that the only balances
that change in the factory are from the "k" signers, that allows (n-k)
members to be offline at any time, but the remaining members to rebalance
their channels happily. (Theoretically you could do this trustlessly
with covenants, but the spending proofs on chain would be much larger)
Of course, this allows k-signers plus the trusted party to steal funds.
It might be possible for the trusted party to store audit logs of the
partial signatures from each of the k-signers for each transaction to
provide accountability -- where the lack of such logs implies the
trusted third party was cheating.
Cheers,
aj
>
>
>
> > On 9 Feb 2019, at 6:01 PM, Alejandro Ranchal Pedrosa via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
> >
> > Hi all,
> >>
> >> Side note: I was not able to come up with an similar, eltoo-like protocol that works
> >> if you can't predict in advance who will become absent.
> >>
> > An eltoo-like protocol that works (without going on-chain) if you can't predict in advance who will become absent would be a childchain. If the off-chain protocol can continue updating in the absence of other parties, it means that the other parties' signatures must not be required when they are not involved in the off-chain state update. If other parties' signatures are not required, there must be a way of having a common verifiable 'last state' to prevent a party from simultaneously 'forking' the state with two different parties and double-spending. A solution for this is a childchain for Bitcoin. An example of this is what is known as a 'Broken Factory' attack [1] (https://bitcoin.stackexchange.com/questions/77434/how-does-channel-factory-act/81005#81005)
> >
> >> If the expectation is that the unresponsive party returns, fungibility is
> >> not reduced due to output tagging because the above scheme can be used
> >> off-chain until the original channel can be continued.
> >
> > I believe that in many cases the other parties won't be able to continue until the unresponsive parties come back online. That might be true in particular scenarios, but generally speaking, the party might have gone unresponsive during a factory-level update (i.e. off-chain closing and opening of channels), while some parties might have given out their signature for the update without receiving a fully signed transaction. In this case they do not even know which channel they have open (the one of the old state that they have fully signed, or the one of the new state that they have given out their signature for). This is known as a 'Stale Factory', and can be exploited by an adversary in a 'Stale Factory' attack [1]. Even if they knew which state they are in (i.e. the party went unresponsive but not during a factory-level update), some of them might have run out of funds in some of their channels in the factory and might want to update, while they will not be willing to wait for the party to come back online (something for which they have zero guarantees).
> >
> > As for an eltoo-like protocol that works (allowing going on-chain) if you can't predict in advance who will become absent: this is precisely why 'Transaction Fragments' have been suggested. They allow an eltoo-like protocol even when one cannot predict in advance who will become absent, or malicious (by publishing invalid states), because the non-absent parties can unite their fragments and create a valid spendable factory-level transaction that effectively kicks out the malicious parties, while leaving the rest of the factory as it was. To the best of my understanding, the original eltoo proposal also allows this, though.
> >
> > Best,
> >
> > Alejandro.
> >
> > [1]: Scalable Lightning Factories for Bitcoin, https://eprint.iacr.org/2018/918.pdf
> >
> >
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Implementing Confidential Transactions in extension blocks
2019-02-08 10:12 99% [bitcoin-dev] Implementing Confidential Transactions in extension blocks Kenshiro []
@ 2019-02-11 4:29 99% ` ZmnSCPxj
2019-02-11 10:19 99% ` Kenshiro []
` (2 more replies)
0 siblings, 3 replies; 58+ results
From: ZmnSCPxj @ 2019-02-11 4:29 UTC (permalink / raw)
To: Kenshiro [], Bitcoin Protocol Discussion
Good morning Kenshiro,
> - Soft fork: old nodes see CT transactions as "sendtoany" transactions
There is a position that fullnodes must be able to get a view of the UTXO set, and extension blocks (which are invisible to pre-extension-block fullnodes) means that fullnodes no longer have an accurate view of the UTXO set.
SegWit still provides pre-SegWit fullnodes with a view of the UTXO set, although pre-SegWit fullnodes could be convinced that a particular UTXO is anyone-can-spend even though they are no longer anyone-can-spend.
Under this point of view, then, an extension block is "not" a soft fork.
It is an "evil" soft fork, since older nodes are forced to upgrade as their intended functionality becomes impossible.
In this point of view, it is no better than a hard fork, which at least is very noisy about how older fullnode versions will simply stop working.
> - Safe: if there is a software bug in CT it's impossible to create new coins because the coins move from normal block to normal block as public transactions
I think more relevant here is the issue of a future quantum computing breach of the algorithms used to implement confidentiality.
I believe this is also achievable with a non-extension-block approach by implementing a globally-verified publicly-visible counter of the total amount in all confidential transaction outputs.
Then it becomes impossible to move from confidential to public transactions with a value more than this counter, thus preventing inflation even if a future QC breach allows confidential transaction value commitments to be opened to any value.
(do note that a non-extension-block approach is a definite hardfork)
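The globally-verified counter idea can be sketched as follows. The class and method names are illustrative assumptions, not part of any proposal:

```python
# Sketch of a publicly visible counter of the total value held in
# confidential outputs. Any confidential -> public move is capped by
# the counter, so even a future break of the value commitments (e.g.
# by a quantum computer) cannot inflate the public supply.

class ConfidentialPool:
    def __init__(self):
        self.total_confidential = 0  # publicly visible counter (satoshis)

    def shield(self, amount):
        """Public -> confidential: the counter grows by the public value."""
        assert amount > 0
        self.total_confidential += amount

    def unshield(self, claimed_amount):
        """Confidential -> public: consensus rejects claims above the
        counter, even if a (broken) range proof would accept them."""
        if claimed_amount > self.total_confidential:
            raise ValueError("would inflate supply")
        self.total_confidential -= claimed_amount
```

The worst a commitment break can then do is redistribute value among confidential holders, not create new coins.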
> - Capacity increase: the CT signature is stored in the extension block, so CT transactions increase the maximum number of transactions per block
This is not an unalloyed positive: block size increase, even via extension block, translates to greater network capacity usage globally on all fullnodes.
Regards,
ZmnSCPxj
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Implementing Confidential Transactions in extension blocks
2019-02-11 4:29 99% ` ZmnSCPxj
@ 2019-02-11 10:19 99% ` Kenshiro []
2019-02-12 17:27 99% ` Trey Del Bonis
2019-02-14 22:32 99% ` Hampus Sjöberg
2 siblings, 0 replies; 58+ results
From: Kenshiro [] @ 2019-02-11 10:19 UTC (permalink / raw)
To: Bitcoin Protocol Discussion, ZmnSCPxj
Good morning ZmnSCPxj,
Thank you for your answer.
> There is a position that fullnodes must be able to get a view of the UTXO set, and extension blocks (which are invisible to pre-extension-block fullnodes) means that fullnodes no longer have an accurate view of the UTXO set.
I think old nodes don't need to know the CT part of the UTXO set. It would be possible to move coins from a normal address to a CT address and back; these moves would be written as "anyone-can-spend" transactions in the main block, so old nodes are fully aware of them. Miners would enforce that the "anyone-can-spend" transactions are valid. The full details of the transactions involving CT would be in the extension block. CT-to-CT transactions don't need to be written in the main block at all. Maybe I'm missing some technical detail here, but it looks good to me.
> > - Capacity increase: the CT signature is stored in the extension block, so CT transactions increase the maximum number of transactions per block
> This is not an unalloyed positive: block size increase, even via extension block, translates to greater network capacity usage globally on all fullnodes.
Yes, there is an increase in block size and network usage, but I think it would still be possible for people with regular computers to run a full node, and people in developing countries could use light wallets.
Regards
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Implementing Confidential Transactions in extension blocks
2019-02-11 4:29 99% ` ZmnSCPxj
2019-02-11 10:19 99% ` Kenshiro []
@ 2019-02-12 17:27 99% ` Trey Del Bonis
2019-02-14 22:32 99% ` Hampus Sjöberg
2 siblings, 0 replies; 58+ results
From: Trey Del Bonis @ 2019-02-12 17:27 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
>Under this point-of-view, then, extension block is "not" soft fork.
>It is "evil" soft fork since older nodes are forced to upgrade as their intended functionality becomes impossible.
>In this point-of-view, it is no better than a hard fork, which at least is very noisy about how older fullnode versions will simply stop working
Offtopic: I believe that this kind of "evil soft fork" where nodes who
don't upgrade can continue to read the blockchain, update their
utxoset, etc. but can't actually spend some or all of the coins they
have has been referred to as a "firm fork". I think this is a pretty
useful term to pass around when talking about potential future forks.
The earliest reference I can find to that term from a quick search is
this talk from 2016 by Adam Back:
http://diyhpl.us/wiki/transcripts/adam3us-bitcoin-scaling-tradeoffs/
-Trey Del Bonis
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)
@ 2019-02-13 4:22 99% ` Rusty Russell
0 siblings, 0 replies; 58+ results
From: Rusty Russell @ 2019-02-13 4:22 UTC (permalink / raw)
To: Matt Corallo; +Cc: bitcoin-dev, lightning-dev
Matt Corallo <lf-lists@mattcorallo•com> writes:
>>> Thus, even if you imagine a steady-state mempool growth, unless the
>>> "near the top of the mempool" criteria is "near the top of the next
>>> block" (which is obviously *not* incentive-compatible)
>>
>> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
>> block, and assumed you'd only allow RBF if the old package wasn't in the
>> top and the replacement would be. That seems incentive compatible; more
>> than the current scheme?
>
> My point was, because of block time variance, even that criteria doesn't hold up. If you assume a steady flow of new transactions and one or two blocks come in "late", suddenly "top 4MWeight" isn't likely to get confirmed until a few blocks come in "early". Given block variance within a 12 block window, this is a relatively likely scenario.
[ Digging through old mail. ]
Doesn't really matter. Lightning close algorithm would be:
1. Give bitcoind the unilateral close tx.
2. Ask bitcoind what the current expedited fee is (or survey your mempool).
3. Give bitcoind child "push" tx at that total feerate.
4. If next block doesn't contain unilateral close tx, goto 2.
In this case, if you allow a simplified RBF where 'you can replace if
1. feerate is higher, 2. new tx is in first 4 Msipa of mempool, 3. old tx isn't',
it works.
It allows someone 100k of free tx spam, sure. But it's simple.
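The replacement rule above can be sketched as a small predicate (illustrative only; the feerate and mempool-position inputs are assumptions about how such a policy check would be wired into real mempool code):

```python
def may_replace(old_feerate, new_feerate, old_in_top_4msipa, new_in_top_4msipa):
    """Simplified RBF rule: allow replacement only when the new package
    pays a higher feerate and would land in the first 4 Msipa of the
    mempool (i.e. the next block), while the package it evicts would not."""
    return (new_feerate > old_feerate
            and new_in_top_4msipa
            and not old_in_top_4msipa)
```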
We could further restrict it by marking the unilateral close somehow to
say "gonna be pushed" and further limiting the child tx weight (say,
5kSipa?) in that case.
Cheers,
Rusty.
* Re: [bitcoin-dev] Implementing Confidential Transactions in extension blocks
@ 2019-02-14 21:14 99% Kenshiro []
0 siblings, 0 replies; 58+ results
From: Kenshiro [] @ 2019-02-14 21:14 UTC (permalink / raw)
To: bitcoin-dev
Greetings,
I think extension blocks could be optional, and there could be many different extension blocks with different functionalities, like Confidential Transactions or smart contracts. Only the interested nodes would enable these extension blocks; the rest would see only the classic blockchain without them. So it's not a matter of "old" and "new" nodes: all are updated nodes, with extension blocks enabled or not. The only ones that need to understand the protocols of all existing extension blocks are the miners, which must verify that transactions from "anyone-can-spend" to a "classic" address are legal.
So this is what a node with all extension blocks disabled would see in the blockchain:
* Classic address to classic address: as always
* Classic address to extension block address: transaction to "anyone-can-spend"
* Extension block address to classic address: transaction from "anyone-can-spend"
* Extension block address to extension block address: it doesn't see it because it doesn't download the extension blocks, only the main blocks.
All coins that are in extension blocks are also in the "anyone-can-spend" address of the main blocks, so basic nodes are aware of the total number of coins. It's totally safe.
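The accounting described above can be expressed as a one-line invariant (hypothetical names; this only illustrates the claim, not real node code): the balance parked in the "anyone-can-spend" anchor output must always equal the sum of the coins held inside the extension block.

```python
def supply_is_consistent(anchor_balance, extension_utxos):
    """True iff the main-chain anchor output exactly backs every coin held
    inside the extension block, so nodes that ignore extension blocks still
    account for the full coin supply."""
    return anchor_balance == sum(extension_utxos.values())
```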
So for the particular case of Confidential Transactions, it would work as explained. The CT transaction details would be in the extension block which could have the same size as the main block so the total size of the blockchain (main blocks + extension blocks) would be double.
With this method bitcoin could add new features without losing the "store of value" property, as the base protocol never changes. Again, maybe I'm missing some technical detail here; I'm still learning 😊
Regards
* Re: [bitcoin-dev] Implementing Confidential Transactions in extension blocks
2019-02-11 4:29 99% ` ZmnSCPxj
2019-02-11 10:19 99% ` Kenshiro []
2019-02-12 17:27 99% ` Trey Del Bonis
@ 2019-02-14 22:32 99% ` Hampus Sjöberg
2 siblings, 0 replies; 58+ results
From: Hampus Sjöberg @ 2019-02-14 22:32 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
Hi ZmnSCPxj.
> There is a position that fullnodes must be able to get a view of the UTXO
set, and extension blocks (which are invisible to pre-extension-block
fullnodes) means that fullnodes no longer have an accurate view of the UTXO
set.
> SegWit still provides pre-SegWit fullnodes with a view of the UTXO set,
although pre-SegWit fullnodes could be convinced that a particular UTXO is
anyone-can-spend even though they are no longer anyone-can-spend.
> Under this point-of-view, then, extension block is "not" soft fork.
There's a way to do CT without an extension block while still
maintaining a correct UTXO set for old nodes. Perhaps it is similar to
what you meant with this comment (I believe you don't need to do a
hardfork, though):
> Then it becomes impossible to move from confidential to public
transactions with a value more than this counter, thus preventing inflation
even if a future QC breach allows confidential transaction value
commitments to be opened to any value.
> (do note that a non-extension-block approach is a definite hardfork)
Anyway, the method goes like this:
Funds that go in to CT-mode are placed in a consensus/miner controlled
reserve pool. To go out from CT back to normal, funds are then transferred
back to the user from this pool.
CT transactions, as seen by a non-upgraded node, will be transactions with
0-sat outputs. The actual rangeproof commitment could be placed in the
script output or perhaps somewhere else.
To enter CT-mode, you'll need to make a commitment. The transaction
contains two outputs, one to the reserve pool containing the funds that can
only be reclaimed when you go back to normal and one CT-output that you can
start doing CT transactions from.
I believe this could be made seamless with just a new bech32 address
specifically for CT. Sending to a CT address could be done as easily as
sending to a P2SH. In other words, it doesn't have to take two steps to
send to someone in CT space.
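A minimal sketch of the reserve-pool flow described above (class and method names are hypothetical; this only illustrates the conservation argument, not consensus code). Old nodes see the pool output and 0-sat CT outputs, so their UTXO totals stay correct: coins can only leave CT-space by drawing on funds that visibly entered the pool.

```python
class CtReservePool:
    def __init__(self):
        self.balance = 0  # pool output value, visible on-chain to all nodes

    def enter_ct(self, amount):
        # One output funds the pool; a CT output is created alongside it.
        self.balance += amount

    def exit_ct(self, amount):
        # Exiting transfers funds back out of the pool; it can never pay
        # out more than was deposited, bounding inflation for old nodes.
        if amount > self.balance:
            raise ValueError("withdrawal exceeds reserve pool")
        self.balance -= amount
```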
> It is "evil" soft fork since older nodes are forced to upgrade as their
intended functionality becomes impossible.
> In this point-of-view, it is no better than a hard fork, which at least
is very noisy about how older fullnode versions will simply stop working.
Regarding normal extension blocks, I think it is definitely better than a
hardfork since there's no way to be derailed from the network, even though
you do not understand the rules fully.
Sidenote, I think Trey Del Bonis is right regarding the terminology here,
evil softforks/soft hardforks usually mean that you abandon the old chain
to force all nodes to upgrade (https://petertodd.org/2016/forced-soft-forks
).
Best
Hampus
Den tis 12 feb. 2019 kl 13:49 skrev ZmnSCPxj via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org>:
> Good morning Kenshiro,
>
> > - Soft fork: old nodes see CT transactions as "sendtoany" transactions
>
> There is a position that fullnodes must be able to get a view of the UTXO
> set, and extension blocks (which are invisible to pre-extension-block
> fullnodes) means that fullnodes no longer have an accurate view of the UTXO
> set.
> SegWit still provides pre-SegWit fullnodes with a view of the UTXO set,
> although pre-SegWit fullnodes could be convinced that a particular UTXO is
> anyone-can-spend even though they are no longer anyone-can-spend.
>
> Under this point-of-view, then, extension block is "not" soft fork.
> It is "evil" soft fork since older nodes are forced to upgrade as their
> intended functionality becomes impossible.
> In this point-of-view, it is no better than a hard fork, which at least is
> very noisy about how older fullnode versions will simply stop working.
>
> > - Safe: if there is a software bug in CT it's impossible to create new
> coins because the coins move from normal block to normal block as public
> transactions
>
> I think more relevant here is the issue of a future quantum computing
> breach of the algorithms used to implement confidentiality.
>
> I believe this is also achievable with a non-extension-block approach by
> implementing a globally-verified publicly-visible counter of the total
> amount in all confidential transaction outputs.
> Then it becomes impossible to move from confidential to public
> transactions with a value more than this counter, thus preventing inflation
> even if a future QC breach allows confidential transaction value
> commitments to be opened to any value.
>
> (do note that a non-extension-block approach is a definite hardfork)
>
> > - Capacity increase: the CT signature is stored in the extension block,
> so CT transactions increase the maximum number of transactions per block
>
> This is not an unalloyed positive: block size increase, even via extension
> block, translates to greater network capacity usage globally on all
> fullnodes.
>
> Regards,
> ZmnSCPxj
* Re: [bitcoin-dev] [BIP Proposal] Simple Proof-of-Reserves Transactions
@ 2019-02-15 15:18 99% ` Luke Dashjr
2019-02-16 16:49 99% ` [bitcoin-dev] NIH warning (was Re: [BIP Proposal] Simple Proof-of-Reserves Transactions) Pavol Rusnak
0 siblings, 1 reply; 58+ results
From: Luke Dashjr @ 2019-02-15 15:18 UTC (permalink / raw)
To: bitcoin-dev, Steven Roose
On Tuesday 29 January 2019 22:03:04 Steven Roose via bitcoin-dev wrote:
> The existence of the first input (which is just a commitment hash) ensures
> that this transaction is invalid and can never be confirmed.
But nodes can never prove the transaction is invalid, so if it is sent to
them, they will likely cache the "transaction", taking up memory. I'm not
sure if this is an actual problem, as an attacker can fabricate such
transactions anyway.
> #:Not all systems that will be used for verification have access to a full
> index of all transactions. However, proofs should be easily verifiable
> even after some of the UTXOs used in the proof are no longer unspent.
> Metadata present in the proof allows for relatively efficient verification
> of proofs even if no transaction index is available.
I don't see anything in the format that would prove unspentness...
> The proposed proof-file format provides a standard way of combining
> multiple proofs and associated metadata. The specification of the format
> is in the Protocol
> Buffers<ref>https://github.com/protocolbuffers/protobuf/</ref> format.
IIRC, this has been contentious for its use in BIP70 and may hinder adoption.
> message OutputMeta {
> // Identify the outpoint.
> bytes txid = 1;
> uint32 vout = 2;
>
> // The block hash of the block where this output was created.
> bytes block_hash = 3;
This isn't really sufficient. There should probably be a merkle proof.
Luke
* [bitcoin-dev] NIH warning (was Re: [BIP Proposal] Simple Proof-of-Reserves Transactions)
2019-02-15 15:18 99% ` Luke Dashjr
@ 2019-02-16 16:49 99% ` Pavol Rusnak
2019-02-17 18:00 99% ` William Casarin
0 siblings, 1 reply; 58+ results
From: Pavol Rusnak @ 2019-02-16 16:49 UTC (permalink / raw)
To: Luke Dashjr, Bitcoin Protocol Discussion, Steven Roose
On 15/02/2019 16:18, Luke Dashjr via bitcoin-dev wrote:
>> The proposed proof-file format provides a standard way of combining
>> multiple proofs and associated metadata. The specification of the format
>> is in the Protocol
>> Buffers<ref>https://github.com/protocolbuffers/protobuf/</ref> format.
>
> IIRC, this has been contentious for its use in BIP70 and may hinder adoption.
Off-topic to main discussion of this thread. But I need to voice my opinion.
We've been using Protocol Buffers in Trezor since the beginning, and so
far it has proven to be a great choice.
While I agree it is always risky to add an exotic dependency to a
software project, this one has lots of interoperable implementations in
all possible languages you can name and it's very easy to work with.
In the past, the Bitcoin dev community used the same arguments with
regards to PSBT and we ended up with something that is almost as complex
as protobuf, but it's de-facto proprietary to Bitcoin.
Cherry on top is that PSBT format can be easily translated back and
forth to PB making it even more obvious that PB should have been used in
the first place.
Now everyone ELSE needs to implement this proprietary format, and it is
this, not the use of Protocol Buffers, that actually hinders adoption. If
they had been used from the beginning, there would be much more PSBT usage
already.
--
Best Regards / S pozdravom,
Pavol "stick" Rusnak
CTO, SatoshiLabs
* [bitcoin-dev] BIP proposal - Signatures of Messages using Bitcoin Private Keys
@ 2019-02-17 14:14 99% Christopher Gilliard
2019-02-17 19:42 99% ` Adam Ficsor
2019-02-18 22:59 99% ` Aymeric Vitte
0 siblings, 2 replies; 58+ results
From: Christopher Gilliard @ 2019-02-17 14:14 UTC (permalink / raw)
To: bitcoin-dev
I have written up a proposed BIP. It has to do with Signature formats when
using Bitcoin Private keys. It is here:
https://github.com/cgilliard/BIP/blob/master/README.md
This BIP was written up as suggested in this github issue:
https://github.com/bitcoin/bitcoin/issues/10542
Note that the proposal is in line with the implementation that Trezor
provided in the above issue.
Any feedback would be appreciated. Please let me know what the steps are
with regards to getting a BIP number assigned or any other process steps
required.
Regards,
Chris
* Re: [bitcoin-dev] NIH warning (was Re: [BIP Proposal] Simple Proof-of-Reserves Transactions)
2019-02-16 16:49 99% ` [bitcoin-dev] NIH warning (was Re: [BIP Proposal] Simple Proof-of-Reserves Transactions) Pavol Rusnak
@ 2019-02-17 18:00 99% ` William Casarin
0 siblings, 0 replies; 58+ results
From: William Casarin @ 2019-02-17 18:00 UTC (permalink / raw)
To: Pavol Rusnak, Bitcoin Protocol Discussion, Luke Dashjr,
Bitcoin Protocol Discussion, Steven Roose
Pavol Rusnak via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org>
writes:
> We've been using Protocol buffers in Trezor since the beginning and so
> far it has proven to be as a great choice.
>
> While I agree it is always risky to add an exotic dependency to a
> software project, this one has lots of interoperable implementations in
> all possible languages you can name and it's very easy to work with.
>
> In the past, the Bitcoin dev community used the same arguments with
> regards to PSBT and we ended up with something that is almost as complex
> as protobuf, but it's de-facto proprietary to Bitcoin.
>
> Cherry on top is that PSBT format can be easily translated back and
> forth to PB making it even more obvious that PB should have been used in
> the first place.
One argument against Protobuf is that people are already moving away
from it in favor of FlatBuffers, Google's successor to Protobuf that
doesn't require serialization/deserialization of structures.
Do we really want to be chasing the latest serialization library fad
each time a new one comes out? I do think there is value in having
accessible serialization formats, which is why I think it's a good idea
to provide custom format to protobuf conversion tools.
This way users who prefer not to include large dependencies don't have
to, and protobuf users can just do an extra step to convert it into
their preferred format.
Cheers,
Will
--
https://jb55.com
* Re: [bitcoin-dev] BIP proposal - Signatures of Messages using Bitcoin Private Keys
2019-02-17 14:14 99% [bitcoin-dev] BIP proposal - Signatures of Messages using Bitcoin Private Keys Christopher Gilliard
@ 2019-02-17 19:42 99% ` Adam Ficsor
2019-02-18 22:59 99% ` Aymeric Vitte
1 sibling, 0 replies; 58+ results
From: Adam Ficsor @ 2019-02-17 19:42 UTC (permalink / raw)
To: Christopher Gilliard, Bitcoin Protocol Discussion
In Wasabi Wallet we are at the finish line of an encryption manager.
I'll ask the OP to review your BIP, and I'll probably do it myself too
before I merge. Feel free to review/test our PR, too:
https://github.com/zkSNACKs/WalletWasabi/pull/1127
On Sun, Feb 17, 2019 at 6:01 PM Christopher Gilliard via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:
> I have written up a proposed BIP. It has to do with Signature formats when
> using Bitcoin Private keys. It is here:
> https://github.com/cgilliard/BIP/blob/master/README.md
>
> This BIP was written up as suggested in this github issue:
> https://github.com/bitcoin/bitcoin/issues/10542
>
> Note that the proposal is inline with the implementation that Trezor
> implemented in the above issue.
>
> Any feedback would be appreciated. Please let me know what the steps are
> with regards to getting a BIP number assigned or any other process steps
> required.
>
> Regards,
> Chris
--
Best,
Ádám
* [bitcoin-dev] BIP proposal - addrv2 message
@ 2019-02-18 7:56 99% Wladimir J. van der Laan
0 siblings, 0 replies; 58+ results
From: Wladimir J. van der Laan @ 2019-02-18 7:56 UTC (permalink / raw)
To: bitcoin-dev
See https://gist.github.com/laanwj/4fe8470881d7b9499eedc48dc9ef1ad1 for formatted version,
Look under "Considerations" for topics that might still need to be discussed.
<pre>
BIP: ???
Layer: Peer Services
Title: addrv2 message
Author: Wladimir J. van der Laan <laanwj@gmail•com>
Comments-Summary: No comments yet.
Comments-URI:
Status: Draft
Type: Standards Track
Created: 2018-06-01
License: BSD-2-Clause
</pre>
==Introduction==
===Abstract===
This document proposes a new P2P message to gossip longer node addresses over the P2P network.
This is required to support new-generation Onion addresses, I2P, and potentially other networks
that have longer endpoint addresses than fit in the 128 bits of the current <code>addr</code> message.
===Copyright===
This BIP is licensed under the 2-clause BSD license.
===Motivation===
Tor v3 hidden services are part of the stable release of Tor since version 0.3.2.9. They have
various advantages compared to the old hidden services, among which better encryption and privacy
<ref>[https://gitweb.torproject.org/torspec.git/tree/rend-spec-v3.txt Tor Rendezvous Specification - Version 3]</ref>.
These services have 256 bit addresses and thus do not fit in the existing <code>addr</code> message, which encapsulates onion addresses in OnionCat IPv6 addresses.
Other transport-layer protocols such as I2P have always used longer
addresses. This change would make it possible to gossip such addresses over the
P2P network, so that other peers can connect to them.
==Specification==
<blockquote>
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in RFC 2119<ref>[https://tools.ietf.org/html/rfc2119 RFC 2119]</ref>.
</blockquote>
The <code>addrv2</code> message is defined as a message where <code>pchCommand == "addrv2"</code>.
It is serialized in the standard encoding for P2P messages.
Its format is similar to the current <code>addr</code> message format
<ref>[https://bitcoin.org/en/developer-reference#addr Bitcoin Developer Reference: addr message]</ref>, with the difference that the
fixed 16-byte IP address is replaced by a network ID and a variable-length address, and the time and services format has been changed to VARINT.
This means that the message contains a serialized <code>std::vector</code> of the following structure:
{| class="wikitable" style="width: auto; text-align: center; font-size: smaller; table-layout: fixed;"
!Type
!Name
!Description
|-
| <code>VARINT</code> (unsigned)
| <code>time</code>
| Time that this node was last seen as connected to the network. A time in Unix epoch time format, up to 64 bits wide.
|-
| <code>VARINT</code> (unsigned)
| <code>services</code>
| Service bits. A bit field 64 bits wide.
|-
| <code>uint8_t</code>
| <code>networkID</code>
| Network identifier. An 8-bit value that specifies which network is addressed.
|-
| <code>std::vector<uint8_t></code>
| <code>addr</code>
| Network address. The interpretation depends on networkID.
|-
| <code>uint16_t</code>
| <code>port</code>
| Network port. If not relevant for the network this MUST be 0.
|}
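A sketch of serializing one table entry follows, under three assumptions worth checking against the reference implementation: that "VARINT" means Bitcoin Core's base-128 serialization VARINT, that the <code>addr</code> vector is length-prefixed with a CompactSize, and that the port is big-endian as in the legacy <code>addr</code> message.

```python
import struct

def write_varint(n):
    # Bitcoin Core-style base-128 VARINT (MSB continuation, +1 offset
    # between groups); assumed to be the "VARINT" meant in the table.
    out = bytearray()
    while True:
        out.append((n & 0x7F) | (0x80 if out else 0x00))
        if n <= 0x7F:
            break
        n = (n >> 7) - 1
    return bytes(reversed(out))

def write_compact_size(n):
    assert n < 253  # one-byte form suffices for 32-byte-max addresses
    return bytes([n])

def serialize_addrv2_entry(time, services, network_id, addr, port):
    return (write_varint(time)
            + write_varint(services)
            + bytes([network_id])
            + write_compact_size(len(addr))
            + addr
            + struct.pack(">H", port))  # assumed network byte order
```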
One message can contain up to 1,000 addresses. Clients SHOULD reject messages with more addresses.
Field <code>addr</code> has a variable length, with a maximum of 32 bytes (256 bits). Clients SHOULD reject
longer addresses.
The list of reserved network IDs is as follows:
{| class="wikitable" style="width: auto; text-align: center; font-size: smaller; table-layout: fixed;"
!Network ID
!Enumeration
!Address length (bytes)
!Description
|-
| <code>0x01</code>
| <code>IPV4</code>
| 4
| IPv4 address (globally routed internet)
|-
| <code>0x02</code>
| <code>IPV6</code>
| 16
| IPv6 address (globally routed internet)
|-
| <code>0x03</code>
| <code>TORV2</code>
| 10
| Tor v2 hidden service address
|-
| <code>0x04</code>
| <code>TORV3</code>
| 32
| Tor v3 hidden service address
|-
| <code>0x05</code>
| <code>I2P</code>
| 32
| I2P overlay network address
|-
| <code>0x06</code>
| <code>CJDNS</code>
| 16
| Cjdns overlay network address
|}
To allow for future extensibility, clients MUST ignore address types that they do not know about.
Clients MAY store and gossip address formats that they do not know about. Further network ID numbers MUST be reserved in a new BIP document.
Clients SHOULD reject addresses that have a different length than specified in this table for a specific address ID, as these are meaningless.
See the appendices for the address encodings to be used for the various networks.
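The acceptance rules above can be condensed into a small check (a sketch; the function name is made up):

```python
# Address lengths from the network ID table above.
ADDR_LENGTHS = {0x01: 4, 0x02: 16, 0x03: 10, 0x04: 32, 0x05: 32, 0x06: 16}

def address_acceptable(network_id, addr):
    """Apply the rules above: nothing longer than 32 bytes; known network
    IDs must match their specified length exactly; unknown IDs are
    tolerated (clients MAY store and gossip them)."""
    if len(addr) > 32:
        return False
    if network_id not in ADDR_LENGTHS:
        return True
    return len(addr) == ADDR_LENGTHS[network_id]
```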
==Compatibility==
Send <code>addrv2</code> messages only, and exclusively, when the peer has a certain protocol version (or higher):
<source lang="c++">
//! gossiping using `addrv2` messages starts with this version
static const int GOSSIP_ADDRV2_VERSION = 70016;
</source>
For older peers keep sending the legacy <code>addr</code> message, ignoring addresses with the newly introduced address types.
==Reference implementation==
The reference implementation is available at (to be done)
==Considerations==
(to be discussed)
* ''Clients MAY store and gossip address formats that they do not know about'': does it ever make sense to gossip addresses outside a certain overlay network? Say, I2P addresses to Tor? I'm not sure. Especially for networks that have no exit nodes, as there is no overlap with the globally routed internet at all.
* Lower precision of <code>time</code> field? Seconds precision seems overkill, and can even be harmful: there have been attacks that exploited high-precision timestamps for mapping the current network topology.
** (gmaxwell) If you care about space time field could be reduced to 16 bits easily. Turn it into a "time ago seen" quantized to 1 hour precision. (IIRC we quantize times to 2hrs regardless).
* Rolling <code>port</code> into <code>addr</code>, or making the port optional, would make it possible to shave off two bytes for address types that don't have ports (however, all of the currently listed formats have a concept of port.). It could also be an optional data item (see below).
* (gmaxwell) Optional (per-service) data could be useful for various things:
** Node-flavors for striping (signalling which slice of the blocks the node has in selective pruning)
** Payload for is alternative ports for other transports (e.g. UDP ports)
** If we want optional flags. I guess the best thing would just be a byte to include the count of them, then a byte "type" for each one where the type also encodes if the payload is 0/8/16/32 bits. (using the two MSB of the type to encode the length). And then bound the count of them so that the total is still reasonably sized.
==Acknowledgements==
- Jonas Schnelli: change <code>services</code> field to VARINT, to make the message more compact in the likely case instead of always using 8 bytes.
- Luke-Jr: change <code>time</code> field to VARINT, for post-2038 compatibility.
- Gregory Maxwell: various suggestions regarding extensibility
==Appendix A: Tor v2 address encoding==
The new message introduces a separate network ID for <code>TORV2</code>.
Clients MUST send Tor hidden service addresses with this network ID, with the 80-bit hidden service ID in the address field. This is the same as the representation in the legacy <code>addr</code> message, minus the 6 byte prefix of the OnionCat wrapping.
Clients SHOULD ignore OnionCat (<code>fd87:d87e:eb43::/48</code>) addresses on receive if they come with the <code>IPV6</code> network ID.
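The unwrapping described above can be sketched as follows (illustrative helper; the name is made up):

```python
ONIONCAT_PREFIX = bytes.fromhex("fd87d87eeb43")  # fd87:d87e:eb43::/48

def torv2_from_onioncat(ipv6):
    """Per Appendix A: a legacy OnionCat address is the 6-byte
    fd87:d87e:eb43 prefix followed by the 80-bit hidden service ID;
    the TORV2 addr field carries only the latter 10 bytes."""
    assert len(ipv6) == 16
    if not ipv6.startswith(ONIONCAT_PREFIX):
        return None  # not an OnionCat-wrapped onion address
    return ipv6[6:]
```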
==Appendix B: Tor v3 address encoding==
According to the spec <ref>[https://gitweb.torproject.org/torspec.git/tree/rend-spec-v3.txt Tor Rendezvous Specification - Version 3: Encoding onion addresses]</ref>, next-gen <code>.onion</code> addresses are encoded as follows:
<pre>
onion_address = base32(PUBKEY | CHECKSUM | VERSION) + ".onion"
CHECKSUM = H(".onion checksum" | PUBKEY | VERSION)[:2]
where:
- PUBKEY is the 32 bytes ed25519 master pubkey of the hidden service.
- VERSION is an one byte version field (default value '\x03')
- ".onion checksum" is a constant string
- CHECKSUM is truncated to two bytes before inserting it in onion_address
</pre>
Tor v3 addresses MUST be sent with the <code>TORV3</code> network ID, with the 32-byte PUBKEY part in the address field. As VERSION will always be '\x03' in the case of v3 addresses, this is enough to reconstruct the onion address.
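The reconstruction can be sketched directly from the quoted formula (H is SHA3-256 in the Tor v3 spec; the helper name is made up):

```python
import base64
import hashlib

def onion_v3_from_pubkey(pubkey):
    """Rebuild a v3 .onion address from the 32-byte ed25519 pubkey carried
    in the TORV3 addr field: base32(PUBKEY | CHECKSUM | VERSION) + ".onion",
    with CHECKSUM = SHA3-256(".onion checksum" | PUBKEY | VERSION)[:2]."""
    assert len(pubkey) == 32
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"
```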
==Appendix C: I2P address encoding==
Like Tor, I2P naming uses a base32-encoded address format<ref>[https://geti2p.net/en/docs/naming#base32 I2P: Naming and address book]</ref>.
I2P uses 52 characters (256 bits) to represent the full SHA-256 hash, followed by <code>.b32.i2p</code>.
I2P addresses MUST be sent with the <code>I2P</code> network ID, with the decoded SHA-256 hash as address field.
==Appendix D: Cjdns address encoding==
Cjdns addresses are simply IPv6 addresses in the <code>fc00::/8</code> range<ref>[https://github.com/cjdelisle/cjdns/blob/6e46fa41f5647d6b414612d9d63626b0b952746b/doc/Whitepaper.md#pulling-it-all-together Cjdns whitepaper: Pulling It All Together]</ref>. They MUST be sent with the <code>CJDNS</code> network ID.
==References==
<references/>
* Re: [bitcoin-dev] BIP proposal - Signatures of Messages using Bitcoin Private Keys
2019-02-17 14:14 99% [bitcoin-dev] BIP proposal - Signatures of Messages using Bitcoin Private Keys Christopher Gilliard
2019-02-17 19:42 99% ` Adam Ficsor
@ 2019-02-18 22:59 99% ` Aymeric Vitte
2019-02-18 23:24 99% ` Christopher Gilliard
1 sibling, 1 reply; 58+ results
From: Aymeric Vitte @ 2019-02-18 22:59 UTC (permalink / raw)
To: Christopher Gilliard, Bitcoin Protocol Discussion
Then, since you wrote this proposal, maybe you should add a very
precise description of the signing/verification process, since it is
documented nowhere.

I don't get the use of the discussion of keys, when it should focus
on signatures, which are summarized in a vague sentence inspired by your
ref [2], with a not very logical link to the next paragraph stating that
r,s should be 32 bytes and the whole thing 65 bytes with a 1-byte header.
You did not invent it, that's probably the rule, but I am not sure where
it is specified or for what purpose; the header seems completely of no
use, especially when you extend to segwit/bech32, since you just have to
check that the related compressed key matches.
Le 17/02/2019 à 15:14, Christopher Gilliard via bitcoin-dev a écrit :
> I have written up a proposed BIP. It has to do with Signature formats
> when using Bitcoin Private keys. It is
> here: https://github.com/cgilliard/BIP/blob/master/README.md
>
> This BIP was written up as suggested in this github
> issue: https://github.com/bitcoin/bitcoin/issues/10542
>
> Note that the proposal is inline with the implementation that Trezor
> implemented in the above issue.
>
> Any feedback would be appreciated. Please let me know what the steps
> are with regards to getting a BIP number assigned or any other process
> steps required.
>
> Regards,
> Chris
>
--
Move your coins by yourself (browser version): https://peersm.com/wallet
Bitcoin transactions made simple: https://github.com/Ayms/bitcoin-transactions
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
* Re: [bitcoin-dev] BIP proposal - Signatures of Messages using Bitcoin Private Keys
2019-02-18 22:59 99% ` Aymeric Vitte
@ 2019-02-18 23:24 99% ` Christopher Gilliard
2019-02-18 23:50 99% ` Aymeric Vitte
0 siblings, 1 reply; 58+ results
From: Christopher Gilliard @ 2019-02-18 23:24 UTC (permalink / raw)
To: Aymeric Vitte; +Cc: Bitcoin Protocol Discussion
The proposal includes actual code that does verification, but I didn't
include code for signing. I thought it could be inferred, but I could at
least include a description of how to sign. I am not sure exactly what part
you are referring to by "keys speech", but the signatures are done with ECDSA
keys, so it's hard not to include anything about keys even though that's not
the main topic. The "Background on ECDSA keys" section was mainly meant to
give background about what kind of keys Bitcoin uses; people who already
know that can easily skip it, so I think it's best to leave it in. Maybe it
should be moved to the end as an addendum, though. Yes, I did not invent any
of this; I'm just documenting what people actually seem to do, because I had
to verify signatures as part of a project I'm working on. I would have liked
to have had this document when I started the project, so I thought it might
be useful to others, since as far as I can tell this was not specified
anywhere. The reason for including this data in the header is the same
reason compressed/uncompressed is included in the header: so that you know
which type of key the signature is from and you don't have to try all the
options to see if any matches. This is why Trezor did it that way and why I
documented it. I'm sure there are other ways to do this, but since this is
out there in the field being used and is a reasonable solution, I thought
I'd write it up.
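[Editor's note: for reference, the header-byte convention discussed here, as
Trezor implemented it (later written up as BIP 137), can be sketched as
follows. `decode_header` is an illustrative helper name, not code from the
proposal.]

```python
# Header byte -> (address type, recovery id, compressed?), per the ranges
# Trezor uses (later standardized as BIP 137).

def decode_header(header: int):
    """Decode a message-signature header byte into key/address metadata."""
    if 27 <= header <= 30:
        return ("p2pkh", header - 27, False)       # uncompressed P2PKH
    if 31 <= header <= 34:
        return ("p2pkh", header - 31, True)        # compressed P2PKH
    if 35 <= header <= 38:
        return ("p2sh-segwit", header - 35, True)  # segwit wrapped in P2SH
    if 39 <= header <= 42:
        return ("bech32", header - 39, True)       # native segwit
    raise ValueError("invalid signature header byte: %d" % header)
```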
On Mon, Feb 18, 2019 at 2:59 PM Aymeric Vitte <vitteaymeric@gmail•com>
wrote:
> Then, since you wrote this proposal, maybe you should add a very precise
> description of the signing/verification process, since it is documented
> nowhere.
>
> I don't get the use of the speech regarding keys, while the proposal should
> focus on the signatures, which are summarized in a vague sentence inspired
> by your ref [2], with a not very logical link to the next paragraph stating
> that r,s should be 32B and the whole thing 65B with a header of 1B. You did
> not invent it, that's probably the rule, though I am not sure where it is
> specified or for what purpose; the header seems completely of no use,
> especially when you extend to segwit/bech32, since you just have to check
> that the related compressed key matches
> On 17/02/2019 at 15:14, Christopher Gilliard via bitcoin-dev wrote:
>
> I have written up a proposed BIP. It has to do with Signature formats when
> using Bitcoin Private keys. It is here:
> https://github.com/cgilliard/BIP/blob/master/README.md
>
> This BIP was written up as suggested in this github issue:
> https://github.com/bitcoin/bitcoin/issues/10542
>
> Note that the proposal is in line with the implementation that Trezor
> implemented in the above issue.
>
> Any feedback would be appreciated. Please let me know what the steps are
> with regards to getting a BIP number assigned or any other process steps
> required.
>
> Regards,
> Chris
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists•linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 6796 bytes --]
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] BIP proposal - Signatures of Messages using Bitcoin Private Keys
2019-02-18 23:24 99% ` Christopher Gilliard
@ 2019-02-18 23:50 99% ` Aymeric Vitte
2019-02-19 0:29 99% ` Christopher Gilliard
0 siblings, 1 reply; 58+ results
From: Aymeric Vitte @ 2019-02-18 23:50 UTC (permalink / raw)
To: Christopher Gilliard; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5388 bytes --]
Ah, OK, it's of course a good thing to document this undocumented (and
strange) stuff. As a matter of fact I implemented it after reading your
post (because it had been on my todo list for some time) and got annoyed
quickly, mainly by what formatMessageForSigning is doing (which is quite
trivial once you know it, but would be good to document precisely).
So, yes, it's a good idea to write this. Regarding the header, I still
don't see the use; testing the different possibilities is not a big
deal. Why the signature format is not the same as the transaction one is
mysterious too
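[Editor's note: for anyone hitting the same wall, the usual construction
behind a formatMessageForSigning-style helper is sketched below, assuming
the conventional "Bitcoin Signed Message:\n" magic and compact-size length
prefixes; the function name here is illustrative.]

```python
import hashlib

def format_message_for_signing(message: bytes) -> bytes:
    """Digest that wallets actually sign: compact-size-prefixed magic
    string, compact-size-prefixed message, then double SHA-256."""
    def compact_size(n: int) -> bytes:
        if n < 0xfd:
            return bytes([n])
        if n <= 0xffff:
            return b"\xfd" + n.to_bytes(2, "little")
        return b"\xfe" + n.to_bytes(4, "little")

    magic = b"Bitcoin Signed Message:\n"   # 24 bytes, hence the 0x18 prefix
    payload = compact_size(len(magic)) + magic + compact_size(len(message)) + message
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()
```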
On 19/02/2019 at 00:24, Christopher Gilliard wrote:
> [...]
[-- Attachment #2: Type: text/html, Size: 10516 bytes --]
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] BIP proposal - Signatures of Messages using Bitcoin Private Keys
2019-02-18 23:50 99% ` Aymeric Vitte
@ 2019-02-19 0:29 99% ` Christopher Gilliard
0 siblings, 0 replies; 58+ results
From: Christopher Gilliard @ 2019-02-19 0:29 UTC (permalink / raw)
To: Aymeric Vitte; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5977 bytes --]
Trying the four possible options (p2pkh compressed, p2pkh uncompressed,
p2sh-segwit, and bech32) is certainly a possibility, and in fact that's what
I ended up doing, because not every wallet implements something like this.
But if there is a header field currently in use, it seemed reasonable to me
to use it to specify which type of key is being used. If the header indicates
whether the key is compressed or uncompressed, it seems logical to include
all the data about what type of key it is, and not just that one piece of
information. That's why I thought the solution made sense and I wrote it up.
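[Editor's note: that fallback loop can be sketched as below; the
`verify_with_type` callback is hypothetical, standing in for per-type
pubkey recovery and address comparison.]

```python
# Fallback when no trusted header byte is available: try each address
# type in turn until the recovered key matches the address.

ADDRESS_TYPES = ("p2pkh-uncompressed", "p2pkh-compressed", "p2sh-segwit", "bech32")

def verify_any(address, message, signature, verify_with_type):
    """Return the first address type for which verification succeeds,
    or None if all four fail."""
    for addr_type in ADDRESS_TYPES:
        if verify_with_type(address, message, signature, addr_type):
            return addr_type
    return None
```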
On Mon, Feb 18, 2019 at 3:50 PM Aymeric Vitte <vitteaymeric@gmail•com>
wrote:
> Ah, OK, that's of course a good thing to document this undocumented (and
> strange) stuff, as a matter of fact I implemented it after reading your
> post (because this was on my todo list since some time) and got annoyed
> quickly, mainly by what is doing formatMessageForSigning (which is quite
> trivial when you know it but would be good to document precisely)
>
> So, yes, it's a good idea to write this, regarding the header I still
> don't see the use, testing the different possibilities is not a big deal,
> why the signature format is not the same as transactions one is mysterious
> too
> On 19/02/2019 at 00:24, Christopher Gilliard wrote:
> [...]
[-- Attachment #2: Type: text/html, Size: 11937 bytes --]
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-08 19:01 99% ` Jonas Nick
@ 2019-02-19 19:04 99% ` Luke Dashjr
2019-02-19 19:22 99% ` Johnson Lau
2 siblings, 1 reply; 58+ results
From: Luke Dashjr @ 2019-02-19 19:04 UTC (permalink / raw)
To: bitcoin-dev, Johnson Lau
On Thursday 13 December 2018 12:32:44 Johnson Lau via bitcoin-dev wrote:
> While this seems fully compatible with eltoo, are there any other proposals
> that require NOINPUT, and are they adversely affected by either way of tagging?
Yes, this seems to break the situation where a wallet wants to use NOINPUT for
everything, including normal L1 payments. For example, in the scenario where
address reuse will be rejected/ignored by the recipient unconditionally, and
the payer is considered to have burned their bitcoins by attempting it.
Luke
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-19 19:04 99% ` Luke Dashjr
@ 2019-02-19 19:22 99% ` Johnson Lau
2019-02-19 20:24 99% ` Luke Dashjr
0 siblings, 1 reply; 58+ results
From: Johnson Lau @ 2019-02-19 19:22 UTC (permalink / raw)
To: Luke Dashjr; +Cc: bitcoin-dev
This only depends on the contract between the payer and payee. If the contract says address reuse is unacceptable, it's unacceptable. It has nothing to do with how the payee spends the coin. We can't ban address reuse at the protocol level (unless we never prune the chain), so address reuse can only be prevented at the social level.
Using NOINPUT is also a very weak excuse: NOINPUT always commits to the value. If the payer reused an address but for a different amount, the payee can't claim the coin is lost due to previous NOINPUT use. A much stronger way is to publish the key after a coin is well confirmed.
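[Editor's note: a toy sketch of the value-commitment point. This is not the
real sighash serialization — `toy_noinput_digest` is purely illustrative —
but it shows why a digest that commits to the spent value cannot be rebound
across different amounts.]

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_noinput_digest(script_pubkey: bytes, value_sats: int, outputs: bytes) -> bytes:
    """Toy stand-in for a NOINPUT sighash: no input outpoint committed,
    but the spent output's value still is."""
    return sha256d(script_pubkey + value_sats.to_bytes(8, "little") + outputs)

# A signature over the 1 BTC digest cannot be replayed against a reused
# address holding 2 BTC: the digests differ.
d1 = toy_noinput_digest(b"reused-script", 100_000_000, b"outs")
d2 = toy_noinput_digest(b"reused-script", 200_000_000, b"outs")
assert d1 != d2
```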
> On 20 Feb 2019, at 3:04 AM, Luke Dashjr <luke@dashjr•org> wrote:
>
> On Thursday 13 December 2018 12:32:44 Johnson Lau via bitcoin-dev wrote:
>> While this seems fully compatible with eltoo, are there any other proposals
>> that require NOINPUT, and are they adversely affected by either way of tagging?
>
> Yes, this seems to break the situation where a wallet wants to use NOINPUT for
> everything, including normal L1 payments. For example, in the scenario where
> address reuse will be rejected/ignored by the recipient unconditionally, and
> the payer is considered to have burned their bitcoins by attempting it.
>
> Luke
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-19 19:22 99% ` Johnson Lau
@ 2019-02-19 20:24 99% ` Luke Dashjr
2019-02-19 20:36 99% ` Johnson Lau
0 siblings, 1 reply; 58+ results
From: Luke Dashjr @ 2019-02-19 20:24 UTC (permalink / raw)
To: Johnson Lau; +Cc: bitcoin-dev
Even besides NOINPUT, such a wallet would simply never show a second payment
to the same address (or at least never show it as confirmed, until
successfully spent).
At least if tx versions are used, it isn't possible to indicate this
requirement in current Bitcoin L1 addresses. Encoding it in the
scriptPubKey might not be impossible, but it isn't really clear what the
purpose of doing so would be.
If people don't want to use NOINPUT, they should just not use it. Trying to
implement a nanny in the protocol is inappropriate, and limits what
developers who actually want the feature can do.
Luke
On Tuesday 19 February 2019 19:22:07 Johnson Lau wrote:
> [...]
^ permalink raw reply [relevance 99%]
* Re: [bitcoin-dev] Safer NOINPUT with output tagging
2019-02-19 20:24 99% ` Luke Dashjr
@ 2019-02-19 20:36 99% ` Johnson Lau
0 siblings, 0 replies; 58+ results
From: Johnson Lau @ 2019-02-19 20:36 UTC (permalink / raw)
To: Luke Dashjr; +Cc: bitcoin-dev
> On 20 Feb 2019, at 4:24 AM, Luke Dashjr <luke@dashjr•org> wrote:
>
> Even besides NOINPUT, such a wallet would simply never show a second payment
> to the same address (or at least never show it as confirmed, until
> successfully spent).
This is totally unrelated to NOINPUT. You can make a wallet like this today already, and tell your payers not to reuse addresses.
>
> At least if tx versions are used, it isn't possible to indicate this
> requirement in current Bitcoin L1 addresses. scriptPubKey might not be
> impossible to encode, but it isn't really clear what the purpose of doing so
> is.
It sounds like you actually want to tag such outputs in the scriptPubKey, so you could encode this requirement in the address?
If we allow NOINPUT unconditionally (i.e. all v1 addresses are spendable with NOINPUT), you could only indicate such special requirements with a separate proposal
>
> If people don't want to use NOINPUT, they should just not use it. Trying to
> implement a nanny in the protocol is inappropriate and limits what developers
> can do who actually want the features.
>
> Luke
>
>
> On Tuesday 19 February 2019 19:22:07 Johnson Lau wrote:
>> [...]
^ permalink raw reply [relevance 99%]
* [bitcoin-dev] Privacy literature review
@ 2019-02-23 20:17 99% Chris Belcher
0 siblings, 0 replies; 58+ results
From: Chris Belcher @ 2019-02-23 20:17 UTC (permalink / raw)
To: bitcoin-dev
Hello list,
For the last few weeks I've been working on a literature review for
bitcoin privacy:
https://en.bitcoin.it/wiki/Privacy
It aims to cover about all privacy issues in bitcoin, including
Lightning network, and has a bunch of examples to help demonstrate how
the concepts work in practice.
There is also a new wiki category with smaller related articles:
https://en.bitcoin.it/wiki/Category:Privacy
Regards
CB
^ permalink raw reply [relevance 99%]
* [bitcoin-dev] BIP - Symbol for satoshi
@ 2019-02-23 22:10 99% Amine Chakak
0 siblings, 0 replies; 58+ results
From: Amine Chakak @ 2019-02-23 22:10 UTC (permalink / raw)
To: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 514 bytes --]
Hi,
I don't know if this is the right place to do so, but the website says to
first propose ideas for BIPs to the mailing list.
I would like to propose @bitficus' idea for a satoshi symbol (monetary).
The idea has been floated around of switching to the satoshi as the base unit.
The Lightning network uses satoshis as a base unit.
Here is the proposal:
https://twitter.com/bitficus/status/1097979724515557377
Please let me know if it would be appropriate to write a BIP for it.
Thank you for your consideration.
[-- Attachment #2: Type: text/html, Size: 918 bytes --]
^ permalink raw reply [relevance 99%]
* [bitcoin-dev] Vulnerability relating to 64-byte transactions in Bitcoin Core 0.13 branch
@ 2019-02-25 19:29 99% Suhas Daftuar
0 siblings, 0 replies; 58+ results
From: Suhas Daftuar @ 2019-02-25 19:29 UTC (permalink / raw)
To: Bitcoin Dev
[-- Attachment #1.1: Type: text/plain, Size: 3764 bytes --]
Hi,
I'm writing to report a consensus vulnerability affecting Bitcoin Core
versions 0.13.0, 0.13.1, and 0.13.2. These software versions reached
end-of-life on 2018-08-01.
The issue surrounds a fundamental design flaw in Bitcoin's Merkle tree
construction. Last year, the vulnerability (CVE-2017-12842) around 64-byte
transactions being used to trick light clients into thinking that a
transaction was committed in a block was discussed on this mailing list
(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016091.html).
There is a related attack resulting from the ambiguity around 64-byte
transactions which could be used to cause a vulnerable full-node
implementation to fall out of consensus.
The attack on light clients discussed previously was centered on the idea
of claiming the Merkle tree has one more level in it than it actually has,
to prove that a candidate transaction is in the chain by having its hash
match one side of a 64-byte transaction. The vulnerability I am describing
here involves going the other direction: find a row of interior nodes in
the Merkle tree that successfully deserialize as transactions, in order to
make a block appear to be invalid. (This is of a similar character to the
attack described by Sergio Demian Lerner on
https://bitslog.wordpress.com/2018/06/09/leaf-node-weakness-in-bitcoin-merkle-tree-design/,
in the section titled "An (expensive) attack to partition Bitcoin".)
It has long been recognized that malleating a block's transactions in a way
that produces the same Merkle root could be used to cause a node to fall
out of consensus, because of the logic in Bitcoin Core to cache the
invalidity of blocks (i.e. to avoid re-validation of known-invalid ones,
which would otherwise make the software vulnerable to DoS). Malleation by
"going up" the Merkle tree, and claiming that some interior row is in fact
the set of (64-byte) transactions in a block, could be used to cause the
Bitcoin Core 0.13 branch to incorrectly mark as invalid a block that in
fact has a valid set of transactions. Moreover, this requires very little
work to accomplish -- less than 22 bits of work in all.
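[Editor's note: the ambiguity is easy to demonstrate. A Merkle inner node
is the double-SHA256 of its two 32-byte children, i.e. of a 64-byte string;
if that same string also deserializes as a transaction, the two cases are
indistinguishable by hash alone. A toy illustration, not real validation
code:]

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Two 32-byte child hashes somewhere in a block's Merkle tree...
left = sha256d(b"some tx")
right = sha256d(b"another tx")
inner_node = sha256d(left + right)  # how the parent node is computed

# ...look exactly like the txid of a 64-byte "transaction" whose raw
# serialization happens to be those same 64 bytes.
fake_tx = left + right
assert len(fake_tx) == 64
assert sha256d(fake_tx) == inner_node
```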
I have attached a writeup that I put together for my own memory and notes,
which goes into more detail (along with a summary of other Merkle tree
issues, including the duplicate transactions issue from CVE-2012-2459 and
the SPV issue); please see sections 3.1 and 4.1 for a discussion. The bug
in 0.13 was introduced as an unintended side-effect of a change I authored
(https://github.com/bitcoin/bitcoin/pull/7225). Once I learned of this
category of Merkle malleation issues, I realized that the change
inadvertently introduced a vulnerability to this issue that did not
previously exist (in any prior version of the software, as far as I can
tell). A bug fix that effectively reverted the change
(https://github.com/bitcoin/bitcoin/pull/9765) was made just before the
0.14 version of Bitcoin Core was released, and no later versions of the
software are affected.
Also, I have scanned the blockchain looking for instances where the first
two hashes in any row of the Merkle tree would deserialize validly as a
64-byte transaction, and I have found zero such instances. So in particular
there are no blocks on Bitcoin's main chain (as of this writing) that could
be used to attack an 0.13 node.
I thought it best to withhold disclosure of this vulnerability before a
mitigation was in place for the related SPV issue (which I assumed would
become obvious with this disclosure); once that became public last summer
and a mitigation was deployed (by making 64-byte transactions nonstandard),
that concern was eliminated.
Thanks to Johnson Lau and Greg Maxwell for originally alerting me to this
issue.
[-- Attachment #1.2: Type: text/html, Size: 4838 bytes --]
[-- Attachment #2: BitcoinMerkle.pdf --]
[-- Type: application/pdf, Size: 200880 bytes --]
^ permalink raw reply [relevance 99%]
* [bitcoin-dev] Fortune Cookies to Bitcoin Seed
@ 2019-02-28 3:48 99% Trey Del Bonis
0 siblings, 0 replies; 58+ results
From: Trey Del Bonis @ 2019-02-28 3:48 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
Hello all,
This might be another proto-BIP similar to the post about using a card
shuffle as a wallet seed that was posted here a few weeks back:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-February/016645.html
This is an idea I had for deriving a wallet seed from the lucky numbers
on fortunes from fortune cookies [1].
On one side is some silly fortune, which we don't really care about
here. But depending on the brand, on the other side there's 2 parts:
* "Learn Chinese", with a word in English and its translation into
Chinese characters and the (probably) pinyin.
* "Lucky Numbers", followed by usually 6 or 7 numbers, presumably in
the range of 1 to 99. Someone can correct me on this if I'm wrong.
So each number should have around 6.6 bits of entropy, which means
you could generate a "very secure" wallet seed with about 7 fortunes.
We can remember the order of the numbers on these fortunes based on
the English words, which we can commit to memory.
It's considered a rule of thumb that you can remember "7 things" at
once, which is pretty convenient for this. Sometimes the numbers are
sorted, which decreases the entropy a bit, but that can be remedied
with just more fortunes. This also splits up the information required
to reconstruct the seed into both something physical and something
remembered, and there isn't any particular ordering that someone can
mess up by, say, shuffling the card deck. Although someone is
arguably more likely to throw away random fortunes than they are to
throw away a deck of cards which is a weakness of this scheme.
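[Editor's note: the entropy estimate and the derivation sketched above can
be written out as below; `seed_from_fortunes` and its hashing scheme are
purely illustrative, not a standard.]

```python
import hashlib
import math

# Each lucky number is presumably drawn from 1..99, so roughly log2(99)
# bits of entropy per number.
print(round(math.log2(99), 1))  # ~6.6

def seed_from_fortunes(fortunes):
    """fortunes: list of (english_word, lucky_numbers) pairs, in the
    remembered order; returns a 32-byte seed.  Illustrative scheme only."""
    material = "|".join(
        word + ":" + ",".join(str(n) for n in numbers)
        for word, numbers in fortunes
    )
    return hashlib.sha256(material.encode()).digest()

seed = seed_from_fortunes([("dragon", [4, 8, 15, 16, 23, 42, 77])])
assert len(seed) == 32
```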
It also arguably has better deniability. If you keep a pile of 20
fortunes (with different "Learn Chinese" words) and remember which 7
of them are for your key, but pick another 7 you can use to make a
decoy wallet to use if being forced to reveal a wallet. Keeping 20
around is a little excessive but it gives 390700800 possible wallets.
So security can be trivially parameterized based on how secure you
want your wallet to be if someone finds your stash.
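[Editor's note: the 390700800 figure is the number of *ordered* picks of 7
fortunes out of 20 — order matters here, since the remembered word order
fixes the seed. An unordered pick would give far fewer:]

```python
import math

# Ordered choices of 7 fortunes out of 20: P(20, 7) = 20! / 13!
assert math.perm(20, 7) == 390700800
# If order did not matter, the decoy space would be much smaller.
assert math.comb(20, 7) == 77520
```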
I wrote a little Python script to generate a key with this; it's not
very clean and could be much improved, but it works pretty well as a
proof of concept: https://gitlab.com/delbonis/chinese-wallet
-Trey Del Bonis
[1] https://en.wikipedia.org/wiki/Fortune_cookie
^ permalink raw reply [relevance 99%]
Results 1-58 of 58
-- links below jump to the message on this page --
2018-11-19 22:37 [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT Pieter Wuille
2018-12-12 9:42 ` Rusty Russell
2018-12-12 20:00 ` Johnson Lau
2018-12-12 23:49 ` Rusty Russell
2018-12-13 0:37 ` Rusty Russell
2018-12-14 9:30 ` Anthony Towns
2018-12-16 6:55 ` Rusty Russell
2018-12-17 19:08 ` Johnson Lau
2018-12-19 0:39 ` Rusty Russell
2019-02-09 0:39 99% ` Pieter Wuille
2018-11-29 19:37 [bitcoin-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning) Matt Corallo
2018-12-04 3:33 ` [bitcoin-dev] [Lightning-dev] " Rusty Russell
2019-01-07 15:18 ` Matt Corallo
2019-01-08 5:50 ` Rusty Russell
2019-01-08 14:46 ` Matt Corallo
2019-02-13 4:22 99% ` Rusty Russell
2018-12-13 12:32 [bitcoin-dev] Safer NOINPUT with output tagging Johnson Lau
2018-12-21 11:40 ` ZmnSCPxj
2018-12-21 15:37 ` Johnson Lau
2018-12-22 14:25 ` ZmnSCPxj
2018-12-22 16:56 ` Johnson Lau
2018-12-24 11:47 ` ZmnSCPxj
2019-01-31 6:04 ` Anthony Towns
2019-02-01 9:36 99% ` ZmnSCPxj
2019-02-08 19:01 99% ` Jonas Nick
2019-02-09 10:01 99% ` Alejandro Ranchal Pedrosa
2019-02-09 16:48 99% ` Johnson Lau
2019-02-10 4:46 99% ` Anthony Towns
2019-02-09 16:54 99% ` Jonas Nick
2019-02-09 10:15 99% ` Johnson Lau
2019-02-09 16:52 99% ` Jonas Nick
2019-02-09 17:43 99% ` Johnson Lau
2019-02-19 19:04 99% ` Luke Dashjr
2019-02-19 19:22 99% ` Johnson Lau
2019-02-19 20:24 99% ` Luke Dashjr
2019-02-19 20:36 99% ` Johnson Lau
2019-01-18 22:59 [bitcoin-dev] Proof-of-Stake Bitcoin Sidechains Matt Bell
2019-02-01 9:19 99% ` ZmnSCPxj
2019-01-29 22:03 [bitcoin-dev] [BIP Proposal] Simple Proof-of-Reserves Transactions Steven Roose
2019-02-15 15:18 99% ` Luke Dashjr
2019-02-16 16:49 99% ` [bitcoin-dev] NIH warning (was Re: [BIP Proposal] Simple Proof-of-Reserves Transactions) Pavol Rusnak
2019-02-17 18:00 99% ` William Casarin
2019-01-31 23:44 [bitcoin-dev] Predicate Tree in ZkVM: a variant of Taproot/G'root Oleg Andreev
2019-02-01 17:56 99% ` Oleg Andreev
2019-02-02 19:51 99% [bitcoin-dev] Card Shuffle To Bitcoin Seed rhavar
2019-02-04 6:49 99% ` Adam Ficsor
2019-02-04 21:05 99% ` James MacWhyte
2019-02-05 1:37 99% ` Devrandom
2019-02-06 13:48 99% ` Alan Evans
2019-02-06 13:51 99% ` Alan Evans
2019-02-07 2:42 99% ` James MacWhyte
2019-02-03 20:15 99% [bitcoin-dev] BIP157 server Murmel introduced, enchancement suggestion to BIP158 Tamas Blummer
2019-02-04 11:41 99% [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal Tamas Blummer
2019-02-04 20:18 99% ` Jim Posen
2019-02-04 20:59 99% ` Tamas Blummer
2019-02-05 1:42 99% ` Olaoluwa Osuntokun
2019-02-05 12:21 99% ` Matt Corallo
2019-02-06 0:05 99% ` Olaoluwa Osuntokun
2019-02-05 20:10 99% ` Tamas Blummer
2019-02-06 0:17 99% ` Olaoluwa Osuntokun
2019-02-06 8:09 99% ` Tamas Blummer
2019-02-06 18:17 99% ` Gregory Maxwell
2019-02-06 19:48 99% ` Tamas Blummer
[not found] ` <CAAS2fgQX_02_Uwu0hCu91N_11N4C4Scm2FbAXQ-0YibroeqMYg@mail.gmail.com>
2019-02-06 21:17 99% ` Tamas Blummer
2019-02-07 20:36 99% ` Pieter Wuille
2019-02-08 10:12 99% [bitcoin-dev] Implementing Confidential Transactions in extension blocks Kenshiro []
2019-02-11 4:29 99% ` ZmnSCPxj
2019-02-11 10:19 99% ` Kenshiro []
2019-02-12 17:27 99% ` Trey Del Bonis
2019-02-14 22:32 99% ` Hampus Sjöberg
2019-02-14 21:14 99% Kenshiro []
2019-02-17 14:14 99% [bitcoin-dev] BIP proposal - Signatures of Messages using Bitcoin Private Keys Christopher Gilliard
2019-02-17 19:42 99% ` Adam Ficsor
2019-02-18 22:59 99% ` Aymeric Vitte
2019-02-18 23:24 99% ` Christopher Gilliard
2019-02-18 23:50 99% ` Aymeric Vitte
2019-02-19 0:29 99% ` Christopher Gilliard
2019-02-18 7:56 99% [bitcoin-dev] BIP proposal - addrv2 message Wladimir J. van der Laan
2019-02-23 20:17 99% [bitcoin-dev] Privacy literature review Chris Belcher
2019-02-23 22:10 99% [bitcoin-dev] BIP - Symbol for satoshi Amine Chakak
2019-02-25 19:29 99% [bitcoin-dev] Vulnerability relating to 64-byte transactions in Bitcoin Core 0.13 branch Suhas Daftuar
2019-02-28 3:48 99% [bitcoin-dev] Fortune Cookies to Bitcoin Seed Trey Del Bonis