Preface: As a longstanding policy, whenever I buy a new hard disk or decommission an old one, I immediately `dd` it from start to end with a pseudorandom byte stream. The result is indistinguishable from my disk encryption setup, which leaves no apparent on-disk headers. I don’t do this for “plausibility” reasons, but rather:

0. to ensure that, immediately upon use, any sectors written with disk encryption cannot be distinguished from unwritten sectors; and
1. to make things overall more fun for potential cryptanalysts.

I do realize the small problem that I can’t affirmatively prove that any particular disk in my possession does *not* contain decryptable data; and many of them don’t! (I think that next, I may start writing my disks with headers for LUKS, which I do not use...)

Whereupon, I challenge plausible-deniability designers to `dd` a 6TB disk with pseudorandom bytes, then try walking it across the U.S. border until it gets searched. What could possibly go wrong? Should you be ordered to decrypt it, the disk *could* *plausibly* be filled with pseudorandom bytes; and you would not be committing the crime of lying to an officer when you truly state that, in fact, it *is* filled with pseudorandom bytes. Please, I want to see this “plausible deniability” theory in action. You owe it to your users to test the theory empirically, in circumstances in which users here have reported applying it.

Now, in reply: On 2018-01-13 at 02:11:08 +0000, Damian Williamson wrote:

>The same problems exist for users of whole disk encrypted operating
>systems. Once the device (or, the initial password authentication) is
>found, the adversary knows that there is something to see.

Or PGP. Or in a broader sense, Tor. Or in the physical world, a high-security safe bolted to your floor. Security systems attract attention.
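As an aside, for anyone who wants to try the wipe described in the preface: reading `/dev/urandom` directly is slow on multi-terabyte disks, so one common recipe (a sketch of the general technique, not necessarily my exact invocation) keys a fast AES-CTR keystream from urandom and encrypts `/dev/zero`. On a real disk the output would go to the raw device (e.g. `/dev/sdX`, destructively); here it goes to a 1 MiB file for illustration.

```shell
# Sketch only: fill a target with a pseudorandom stream.  Against a real disk,
# the redirection target would be the raw device (e.g. /dev/sdX) -- DESTRUCTIVE.
# Key a fast AES-256-CTR keystream from /dev/urandom, then encrypt /dev/zero;
# this is far faster than reading /dev/urandom directly for terabytes of data.
key="$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64)"
openssl enc -aes-256-ctr -nosalt -pass pass:"$key" </dev/zero 2>/dev/null \
  | head -c 1048576 > wiped.img   # stand-in for /dev/sdX
```

A plain `dd if=/dev/urandom of=/dev/sdX bs=1M` does the same job, only more slowly; either way, the on-disk result carries no headers and is indistinguishable from a headerless encrypted volume.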
Smart people develop appropriate threat models, keep their security systems confidential where it is practical to do so (don’t brag about your high-security safe), and work to increase the popularity of network security systems (PGP, HTTPS, Tor...) to reduce how much they stand out.

In the context of this discussion, it does help that Bitcoin is becoming popular. It would help much more if Trezors and similar devices were as commonplace as iGadgets. But when considering the potential threats to any specific individual, the only “plausibility” shield is to not seem like someone who is likely to have *much*. Of course, this is not a problem specific to Bitcoin. Depending on the threat, the same danger applies to owning a substantial amount of gold, cash, or even money in a bank.

>The objective of plausible deniability is to present some acceptable
>(plausible) alternative while keeping the actual hidden (denied).
>
>If the adversary does not believe you, you do indeed risk everything.

And therein lies the trick. Unsophisticated adversaries such as common criminals may be fooled, or may not care if they can quickly grab *something* of value and run away. But if your threat model may include any adversaries possessed of both brains and patience, “plausible deniability” solves nothing. Such an adversary will not likely be satisfied with the standard of “plausibility”. More likely, the prevailing standard will be: “I wasn’t born yesterday, and I *know* that you are hiding something.”

>[snip extended prior quotations]

-- 
nullius@nym.zone | PGP ECC: 0xC2E91CD74A4C57A105F6C21B5A00591B2F307E0C
Bitcoin: bc1qcash96s5jqppzsp8hy8swkggf7f6agex98an7h
(Segwit nested: 3NULL3ZCUXr7RDLxXeLPDMZDZYxuaYkCnG)
(PGP RSA: 0x36EBB4AB699A10EE)

“‘If you’re not doing anything wrong, you have nothing to hide.’ No! Because I do nothing wrong, I have nothing to show.” — nullius