Indeed, as I mentioned in my first mail, nodes can be told how much bandwidth they're allowed to use and then prioritize within that budget, so I don't see any way convergence can fail (a rough sketch of what I mean is at the end of this mail). And regardless, I used 10 Mbit/s for the calculations, which isn't exactly unlimited: my home internet connection is better than that. It's just an arbitrary choice that lets us get a feel for the numbers. We can see that even with a lot of replacements, an attacker would have a hard time matching up his flood with the moment a block is actually solved.

On the wider point: how many people DoS things with their own bandwidth? The point of DNS reflection and/or botnets is that you use other people's bandwidth. The attacks on Mt Gox are supposedly 80 gigabit+, which is enough to take out the entire main network simultaneously. We can't do anything about that. So I agree we should work to avoid opening up new DoS attacks, but we should also be realistic about what can be accomplished. The kind of people trying to manipulate Mt Gox could nuke the entire P2P network off the face of the internet with the flick of a switch; presumably the reason they aren't doing that is that it would, to use Satoshi's phrasing, "undermine the validity of their own wealth".

> sure it's worth doing, at least immediately. Weakening the non-final ==
> non-standard test to give a window of, say, 3 blocks, would be fine I
> think.

Sure. I think Gavin wants some kind of wider memory pool limiter policy which would encompass such a thing already.
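For anyone who wants the arithmetic behind "get a feel for the numbers", here it is spelled out. It's purely illustrative, and the ~250 byte transaction size is an assumption I'm making here, not a figure from the earlier mail:

    10 Mbit/s                 = 1,250,000 bytes/s
    assume ~250 bytes per replacement transaction
    1,250,000 / 250           = ~5,000 replacements/s, at most, per node at that cap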
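And here is the sketch I promised of what "told how much bandwidth they're allowed to use and then prioritize within that" could look like. It's only an illustration under my own assumptions (a simple per-second byte budget, with replacements ordered by how much extra fee they pay); the names and numbers are made up and it is not a patch against the reference client:

    // Rough sketch only: a per-node relay budget for replacement transactions.
    #include <cstdint>
    #include <cstdio>
    #include <queue>
    #include <vector>

    struct Replacement {
        uint64_t feeDelta;   // how much extra fee this replacement pays
        size_t   sizeBytes;  // serialized size
    };

    // Order the queue so the replacements paying the largest fee increase go first.
    struct ByFeeDelta {
        bool operator()(const Replacement& a, const Replacement& b) const {
            return a.feeDelta < b.feeDelta;  // priority_queue keeps the largest on top
        }
    };

    class ReplacementRelayBudget {
    public:
        explicit ReplacementRelayBudget(uint64_t bytesPerSecond)
            : budgetPerSecond(bytesPerSecond) {}

        void Enqueue(const Replacement& r) { queue.push(r); }

        // Called once per second: relay the best replacements until the
        // bandwidth budget for this interval is used up.
        std::vector<Replacement> RelayTick() {
            std::vector<Replacement> relayed;
            uint64_t spent = 0;
            while (!queue.empty() && spent + queue.top().sizeBytes <= budgetPerSecond) {
                relayed.push_back(queue.top());
                spent += queue.top().sizeBytes;
                queue.pop();
            }
            return relayed;  // anything still queued waits for the next tick
        }

    private:
        uint64_t budgetPerSecond;
        std::priority_queue<Replacement, std::vector<Replacement>, ByFeeDelta> queue;
    };

    int main() {
        // 10 Mbit/s == 1,250,000 bytes/s, the arbitrary cap used for the calculations.
        ReplacementRelayBudget budget(1250000);
        budget.Enqueue({1000, 250});
        budget.Enqueue({50, 250});
        budget.Enqueue({5000, 400});
        for (const Replacement& r : budget.RelayTick())
            std::printf("relayed: feeDelta=%llu size=%zu\n",
                        (unsigned long long)r.feeDelta, r.sizeBytes);
        return 0;
    }

The point being that whatever doesn't fit under the budget simply waits, so a flood can't push a node above its configured rate; it can only crowd out lower-priority replacements.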
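On the quoted suggestion about weakening the non-final == non-standard test: here is roughly how a 3-block window could be expressed. Again, this is only a sketch under my assumptions; the constants and helper names are hypothetical, it is not the reference client's IsFinalTx/IsStandard code, and the time-based locktime case is left out for brevity:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical constants for this sketch.
    static const uint32_t LOCKTIME_THRESHOLD = 500000000u; // below this, nLockTime is a block height
    static const uint32_t NONFINAL_WINDOW_BLOCKS = 3;      // proposed leniency window

    // A height-locked transaction is final once nLockTime is strictly less than
    // the height of the block that would include it; nLockTime == 0 is always final.
    // Time-based locktimes (>= LOCKTIME_THRESHOLD) are ignored to keep this short.
    bool IsFinalAtHeight(uint32_t nLockTime, uint32_t nBlockHeight) {
        if (nLockTime == 0) return true;
        if (nLockTime >= LOCKTIME_THRESHOLD) return false; // time-based: out of scope here
        return nLockTime < nBlockHeight;
    }

    // Today: non-final implies non-standard, so the tx never enters the memory pool.
    // Proposed: also accept it if it would become final within the next few blocks.
    bool AcceptableLockTime(uint32_t nLockTime, uint32_t nNextBlockHeight) {
        return IsFinalAtHeight(nLockTime, nNextBlockHeight + NONFINAL_WINDOW_BLOCKS);
    }

    int main() {
        uint32_t nextHeight = 230000; // pretend the next block to be mined has this height
        std::printf("locktime %u: %s\n", nextHeight + 2,
                    AcceptableLockTime(nextHeight + 2, nextHeight) ? "accept" : "reject");
        std::printf("locktime %u: %s\n", nextHeight + 100,
                    AcceptableLockTime(nextHeight + 100, nextHeight) ? "accept" : "reject");
        return 0;
    }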