On Thu, Oct 10, 2019 at 5:20 PM Braydon Fuller via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> It would be interesting to have a succinct chainwork proof
> for all cases. Chainwork being a sum of the total proof-of-work in a
> chain. Such proofs currently only require a few headers for common cases
> and the other cases can be identified.

I wonder if a "seed"-based system would be useful.

A seed is defined as a header with a very low digest. 

When a new peer connects, you ask it to send you the header with the lowest digest on its main chain.

Chains ending at the strongest seeds are kept preferentially when discarding chains.
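
As a minimal sketch of that selection rule (Python; representing each chain as a list of raw 32-byte header hashes is an assumption for illustration):

    # Sketch: rank competing header chains by their strongest "seed",
    # i.e. the header whose digest is numerically lowest.

    def strongest_seed(chain):
        """Lowest digest in the chain, as an integer (lower = stronger)."""
        return min(int.from_bytes(h, "big") for h in chain)

    def discard_order(chains):
        """Sort so that chains ending at the strongest seeds come first;
        when memory pressure forces discarding, drop from the back."""
        return sorted(chains, key=strongest_seed)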

This requires a way to download chains backwards, which the protocol doesn't support at the moment.

The most-chainwork chain is overwhelmingly likely to contain the header with the strongest digest: every hash attempt is an independent uniform draw, so the probability that the globally lowest digest lies in a given chain is proportional to that chain's share of the total work.

This means that the honest peer's chain would be kept preferentially.

It also means that a node that is synced to the main chain can easily discard noise from dishonest peers.  Before downloading, it could ask the peer to provide a header with at least 1% of the POW of the best header on the main chain after the fork point.  If the peer can't, its fork probably has less POW than the main chain.
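
As a rough sketch of that pre-download challenge (Python; the 1% threshold is from the paragraph above, and the digest-to-work conversion is the standard expected-work estimate):

    # A digest d proves roughly 2**256 / (d + 1) expected hash attempts,
    # so the node can compare the peer's sample header against the
    # strongest main-chain header after the claimed fork point.

    def work_of(digest):
        return (1 << 256) // (digest + 1)

    def fork_worth_downloading(peer_digest, best_main_digest_after_fork):
        """Accept the download only if the peer's sample header shows at
        least 1% of the work of our best post-fork main-chain header."""
        return 100 * work_of(peer_digest) >= work_of(best_main_digest_after_fork)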
 
> A peer could
> broadcast a few low-work header chains, reconnect and repeat ad nauseam.

I meant "connected peer" rather than "peer".  If a peer disconnects and then reconnects as a new peer, its allocation of bandwidth/RAM resets to zero.

Each peer would be allocated a certain bandwidth per minute for headers, as in a token bucket system.  New peers would start with empty buckets.

If an active (outgoing) peer is building on a header chain, then that chain is preferentially kept.  Essentially, the last chain that each outgoing peer built on may not be discarded.

In retrospect, that works out to the same thing as throttling peer download, just with a different throttling method.

In your system, peers that extend the best chain don't get throttled, while the other peers do (with a gradual transition).

This could be accomplished by adding 80 bytes into the peer's bucket whenever it extends the main chain.
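
A minimal token-bucket sketch along those lines (Python; the refill rate and bucket capacity are illustrative assumptions, while the empty start and the 80-byte credit are from the discussion above):

    import time

    HEADER_BYTES = 80

    class HeaderBucket:
        def __init__(self, rate_bytes_per_min=8000, cap_bytes=80000):
            self.tokens = 0.0                # new peers start empty
            self.rate = rate_bytes_per_min / 60.0
            self.cap = cap_bytes
            self.last = time.monotonic()

        def _refill(self):
            now = time.monotonic()
            self.tokens = min(self.cap, self.tokens + (now - self.last) * self.rate)
            self.last = now

        def try_accept_header(self):
            """Debit 80 bytes per header; if the bucket is dry, throttle."""
            self._refill()
            if self.tokens >= HEADER_BYTES:
                self.tokens -= HEADER_BYTES
                return True
            return False

        def credit_main_chain_extension(self):
            """Refund 80 bytes when a peer's header extends the main chain,
            so peers extending the best chain are never throttled."""
            self.tokens = min(self.cap, self.tokens + HEADER_BYTES)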
 
> For example, let's assume a case that the initial chain of headers was
> dishonest and with low chainwork. The initial block download retrieves
> the header chain from a single loader peer first. Once recent time is
> reached, header chains are downloaded from all outgoing peers.

The key is that it must not be possible to prevent a single honest peer from making progress by flooding the node with other peers and getting the honest peer's chain discarded.

I think parallel downloading would be better than focusing on one peer initially.  Otherwise, a dishonest peer can send its headers slowly to prevent the node from ever moving to parallel mode.

Each connected peer is given a bandwidth and RAM allowance.  If a connected peer forks off their own chain before reaching current time, then the fork is just discarded.

The RAM allowance would be sufficient to hold one header per minute since genesis.

The header chain is relatively small (~50 MB), so it is not unreasonable to expect an honest peer to send the entire chain in one go.
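
The rough arithmetic behind both figures (Python; the block count and elapsed time are approximations for late 2019):

    # Full header chain: ~600,000 headers at 80 bytes each.
    print(600_000 * 80 / 1e6)                 # ~48 MB, the "50 MB" above

    # RAM allowance of one header per minute since genesis (Jan 2009):
    minutes_since_genesis = 10.8 * 365.25 * 24 * 60   # ~5.7 million
    print(minutes_since_genesis * 80 / 1e6)           # ~450 MB per peer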

I wonder if there is a formula that gives the minimum chain work required to have a particular chain length by now.

1 minute per header would mean that the difficulty would increase every adjustment, so it couldn't be maintained without an exponentially rising total chain work.
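
A sketch of that growth (Python; uses the 2016-block retarget with the 4x upward clamp, counting work in difficulty-1 block units):

    # At 1 header/min a 2016-block period spans 2016 minutes against the
    # 20160-minute target, a 10x ratio that the retarget rule clamps to a
    # 4x difficulty increase, so each sustained period costs ~4x the last.

    def min_chainwork(periods):
        work, diff = 0, 1
        for _ in range(periods):
            work += 2016 * diff              # work for one retarget period
            diff *= 4                        # clamped maximum adjustment
        return work

    for k in (1, 5, 10):
        print(k, min_chainwork(k))           # grows geometrically in k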
 
On Sat, Oct 12, 2019 at 2:41 AM Braydon Fuller via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>  - Nodes are vulnerable during the initial sync when joining the
> network until the minimum chainwork is achieved.

Nodes should stay "headers-only" until they have hit the threshold.

It isn't really any different from a checkpoint anyway. 

"Download headers until you hit this header" is about the same as "download headers until you hit this chainwork".

It would be different if header chains were downloaded from the final checkpoint backwards.

You would start at a final checkpoint and work backwards.  Each ancestor header is committed to by the final checkpoint, so it would not be possible for a dishonest peer to fool the node during IBD.
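
A sketch of that backwards walk (Python; the 80-byte header layout and double-SHA256 are Bitcoin's, the rest is illustrative):

    import hashlib

    def dsha256(b):
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def verify_backwards(raw_headers, checkpoint_hash):
        """raw_headers[0] is the checkpoint header; each later entry is
        the next-older 80-byte header. hashPrevBlock (bytes 4:36) commits
        to every ancestor, so a dishonest peer can't substitute one."""
        expected = checkpoint_hash
        for raw in raw_headers:
            if dsha256(raw) != expected:
                return False                 # not on the checkpointed chain
            expected = raw[4:36]             # parent's hash to check next
        return True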
 
> This is possible if the
> loader peer is the attacker. To mitigate this there would need to be a
> minimum chainwork defined based on the current chainwork. However, such
> could also be used to prevent nodes from joining the network as it's
> rejecting rather than throttling.

I think mixing two different concepts makes this problem more complex than needed.

It looks like they are aiming for hard-coding:

A) "The main chain has at least C chainwork"
B) "All blocks after A is satisfied have at least X POW"

To me, this is equivalent to a checkpoint without it being called a checkpoint.

The point of excluding checkpoints is that (in theory) two clients can't end up on incompatible forks due to having different checkpoints.

The "checkpoint" is replaced by a statement by the dev team that

"There exists at least one valid chain with C chainwork"

which is equivalent to

"The longest valid chain has at least C chainwork"

Two clients making those statements can't cause a permanent incompatibility.  If they pick different values of C, then eventually, once the main chain has more chainwork than the larger C, they will agree again.
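
A toy illustration of that healing property (Python; the C values are arbitrary, and this resembles how Bitcoin Core's nMinimumChainWork parameter is used):

    # Each client rejects any tip whose total chainwork is below its own
    # hard-coded C; once the main chain's work exceeds the larger C, both
    # accept the same chain again.

    def acceptable(total_chainwork, C):
        return total_chainwork >= C

    C1, C2 = 10**21, 3 * 10**21          # two clients, different statements
    main_work = 5 * 10**21               # the growing main chain
    assert acceptable(main_work, C1) and acceptable(main_work, C2)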

Checkpoints don't automatically heal.

Adding in a minimum POW requirement (B) could break that self-healing guarantee.

Just because B was met on the original main chain doesn't mean a competing fork is required to meet it, so a client enforcing B could permanently reject the higher-work chain.

>  - It's technically a consensus change each time the minimum difficulty
> or best chainwork is updated. It is a similar consensus change as
> maintaining the last checkpoint, as it's used to prevent forking prior
> to the last checkpoint.

I agree on the min difficulty being a consensus change.

The minimum chain work is just the devs making a true statement and then using it to optimize things.