I don't understand this comment. The bandwidth gains are not from
address reuse; they are from the observed property that false
positives are independent between two filters. That is, clients that
connect once a day will probably download 2-3 filters at most if they
had nothing relevant in the last ~144 blocks.
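
To put a rough number on the "nothing relevant" case, here is a sketch in Python. The parameters are my own assumptions for illustration (digests spanning 64 blocks, 1000 watched items, and the current 1/784931 per-item rate at the digest layer), not values from either proposal:

    import math

    # Rough model of a once-a-day client (~144 new blocks) with nothing
    # relevant in that window. All parameters are illustrative
    # assumptions, not values from either proposal.
    BLOCKS = 144
    DIGEST_SPAN = 64       # blocks covered per digest filter (assumed)
    M = 784931             # inverse per-item FP rate (current BIP 158 value)
    WATCHED = 1000         # wallet items queried per filter (assumed)

    digests = math.ceil(BLOCKS / DIGEST_SPAN)    # 3 top-layer downloads
    p_spurious = 1 - (1 - 1 / M) ** WATCHED      # ~0.0013 per filter
    # A spurious digest match forces, at worst, fetching every
    # per-block filter beneath that digest.
    extra = digests * p_spurious * DIGEST_SPAN
    print(f"{digests} digests + ~{extra:.2f} expected extra downloads")

With false matches that rare, the handful of top-layer filters is almost always all the client fetches.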

Your multi-layer digest proposal (https://bc-2.jp/bfd-profile.pdf) uses a different type of filter, which, if I understand correctly, is more like a compressed Bloom filter. Appendix A shows how the FP rate increases with the number of elements.
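
For reference, that is the textbook behavior of a fixed-size Bloom filter (the standard approximation, not a number taken from the paper): with m bits and k hash functions, the FP rate is roughly (1 - e^(-k*n/m))^k, which climbs as the element count n grows:

    import math

    def bloom_fp(m_bits: int, k_hashes: int, n_elements: int) -> float:
        """Standard approximation of a Bloom filter's FP rate."""
        return (1 - math.exp(-k_hashes * n_elements / m_bits)) ** k_hashes

    # Fixed-size filter (100k bits, 7 hashes): FP rate degrades with load.
    for n in (1_000, 5_000, 10_000):
        print(n, f"{bloom_fp(100_000, 7, n):.2e}")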

With Golomb-Coded Sets, the filter size increases linearly in the number of elements for a fixed FP rate. We are currently targeting a ~1/2^20 rate (actually 1/784931 now), which gives filter sizes of ~20 bits * N for N elements.

With a 1-layer digest covering, say, 16 blocks, you could drop the FP rate on the digest filters and the block filters each to ~10 bits per element and, by your independence argument, get the same FP rate for a given block. But then the digest is only half the size of the 16 combined filters, and there's a high probability of downloading the other half anyway. So unless there is greater duplication of elements in the digest filters, it's not clear to me that there are great bandwidth savings. But maybe there are. Even so, I think we should just ship the block filters and consider multi-layer digests later.
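
Concretely, under the assumptions above (16 blocks of N elements each, ~10 bits per element at each layer, and no deduplication across blocks; N is an arbitrary placeholder), the arithmetic comes out as follows:

    N = 10_000                  # elements per block (placeholder)
    BLOCKS = 16

    # Current scheme: one ~20-bit-per-element filter per block.
    current = BLOCKS * 20 * N

    # Two-layer scheme at ~10 bits per element each layer; the per-item
    # FP rate is 2**-10 * 2**-10 = 2**-20 by the independence argument.
    digest = BLOCKS * 10 * N            # digest over all 16 blocks' elements
    block_filters = BLOCKS * 10 * N     # the 16 per-block filters

    print(f"current scheme:          {current:,} bits")
    print(f"digest alone:            {digest:,} bits (half of current)")
    print(f"digest + block filters:  {digest + block_filters:,} bits (parity)")

So the savings only materialize to the extent that elements repeat across the 16 blocks and the digest stores them once.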