ZmnSCPxj,

 I think that this implication affects other applications built on the blockchain, not just the PubRef proposal:

 > There is a potential for a targeted attack where a large payout going to a `scriptPubKey` that uses `OP_PUBREF` on a recently-confirmed transaction finds that recently-confirmed transaction is replaced with one that pays to a different public key, via a history-rewrite attack.
 > Such an attack is doable by miners, and if we consider that we accept 100 blocks for miner coinbase maturity as "acceptably low risk" against miner shenanigans, then we might consider that 100 blocks might be acceptable for this also.
 > Whether 100 is too high or not largely depends on your risk appetite.

I agree 100% that this attack is unexpected and very interesting.  However, I find the arbitrary '100' unsatisfying, so I'll have to do some more digging. It would be interesting to trigger this on testnet to see what happens.  Do you know if anyone has pushed these limits?  I am so taken by this attack that I might attempt it.
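For concreteness, here is a rough sketch of how a coinbase-style maturity rule for `OP_PUBREF` targets might look. The names (PUBREF_MATURITY, IsPubRefUsable) are my own placeholders, and the 100 is simply borrowed from coinbase maturity rather than derived from any analysis:

    // Sketch only: a maturity rule for OP_PUBREF targets, mirroring the
    // 100-block coinbase maturity. The constant and names are hypothetical.
    static const int PUBREF_MATURITY = 100;

    // referenced_height: height of the block containing the referenced pushdata
    // tip_height: height of the chain tip at validation time
    bool IsPubRefUsable(int referenced_height, int tip_height)
    {
        int confirmations = tip_height - referenced_height + 1;
        return confirmations >= PUBREF_MATURITY;
    }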

 > Data derived from > 220Gb of perpetually-growing blockchain is hardly, to my mind, "only needs an array".

There are other open source projects that have to deal with larger data sets and have accounted for real-world limits on computation. Apache HTTPD's Bucket-Brigade comes to mind, which has been well tested and can account for limited RAM when accessing linear data structures. For a more general-purpose utility, leveldb (BSD-licensed) provides random access to arbitrary data collections.  Pruning can also be a real asset for PubRef: if all transactions for a wallet have been pruned, then there is no need to index that PubRef - a validator can safely skip over it.
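To make the indexing idea concrete, here is a minimal leveldb sketch in C++. The keying scheme (a sequential PubRef ordinal) and the value layout are my own assumptions, not part of the proposal; the point is only that lookups stay on disk rather than in RAM, and a missing (pruned) entry can simply be skipped:

    #include <cassert>
    #include <string>
    #include "leveldb/db.h"

    // Sketch of a disk-backed PubRef index; keys and values are assumptions.
    int main()
    {
        leveldb::DB* db;
        leveldb::Options options;
        options.create_if_missing = true;
        leveldb::Status status =
            leveldb::DB::Open(options, "/tmp/pubref-index", &db);
        assert(status.ok());

        // While connecting a block, record each pushdata element under a
        // hypothetical sequential reference number.
        std::string ref = "0000000017";              // assumed key: zero-padded PubRef ordinal
        std::string pushdata = "<33-byte public key bytes>";
        db->Put(leveldb::WriteOptions(), ref, pushdata);

        // During script validation, OP_PUBREF resolves the reference from disk.
        std::string value;
        status = db->Get(leveldb::ReadOptions(), ref, &value);
        if (status.IsNotFound()) {
            // Entry was pruned (or never indexed): the validator can skip it,
            // as suggested above.
        }

        delete db;
        return 0;
    }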

Best Regards,
Mike

On Sun, Jul 28, 2019 at 6:46 PM ZmnSCPxj <ZmnSCPxj@protonmail.com> wrote:
Good morning Mike,


Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Sunday, July 28, 2019 4:03 AM, Mike Brooks <m@ib.tc> wrote:

> Hey ZmnSCPxj,
>
> As to your first point: I wasn't aware there was so much volatility at the tip, and 100 blocks is quite the difference!  I agree no one could reference a transaction in a newly formed block, but I'm curious how this number was chosen. Do you have any documentation or code that you can share related to how re-orgs are handled? Do we have a kind of 'consensus checkpoint' when a re-org is no longer possible? This is a very interesting topic.
>

Miner coinbases need 100 blocks for maturity, which is the basis of my suggestion to use 100 blocks.
It might be too high, but I doubt there will be a good reason to go lower than 100.

There is a potential for a targeted attack where a large payout going to a `scriptPubKey` that uses `OP_PUBREF` on a recently-confirmed transaction finds that recently-confirmed transaction is replaced with one that pays to a different public key, via a history-rewrite attack.
Such an attack is doable by miners, and if we consider that we accept 100 blocks for miner coinbase maturity as "acceptably low risk" against miner shenanigans, then we might consider that 100 blocks might be acceptable for this also.
Whether 100 is too high or not largely depends on your risk appetite.

>  A validator only needs an array of PUSHDATA elements and can then validate any given SCRIPT at O(1).  

Data derived from > 220Gb of perpetually-growing blockchain is hardly, to my mind, "only needs an array".
Such an array would not fit in memory for many devices that today are practical for running fullnodes.
It is keeping that array and indexing it which is the problem, i.e. the devil in the details.

Reiterating also: current pruned nodes did not retain that data and would be forced to re-download the entire blockchain,
unless you propose that `OP_PUBREF` can refer only to `OP_PUSHDATA` elements that appear after activation of `OP_PUBREF`.

Regards,
ZmnSCPxj