> The entire point of the definition of eventual consistency is that your computer system runs continuously and *does not* have a final state, and therefore you must be able to describe its behavior when responses to queries across time may be either perfectly consistent *or not* perfectly consistent.

This is not the definition of eventual consistency. From https://en.wikipedia.org/wiki/Eventual_consistency:
"Eventual consistency is a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value."

The actual definition makes it quite clear that a system need not have a final state to be evaluated for its consistency properties. Almost all practical database systems execute continuously without a final state.
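
To make the quoted guarantee concrete, here is a minimal sketch (in Python, with illustrative names -- Replica, anti_entropy -- not any particular system's implementation) of a last-writer-wins replicated register: reads disagree while an update propagates, and converge on the last updated value once updates stop.

    class Replica:
        def __init__(self):
            self.value = None
            self.timestamp = 0  # last-writer-wins ordering

        def write(self, value, timestamp):
            # Accept only updates newer than what we already hold.
            if timestamp > self.timestamp:
                self.value, self.timestamp = value, timestamp

        def read(self):
            return self.value

    def anti_entropy(replicas):
        # Gossip the newest (timestamp, value) pair to every replica.
        newest = max(replicas, key=lambda r: r.timestamp)
        for r in replicas:
            r.write(newest.value, newest.timestamp)

    replicas = [Replica() for _ in range(3)]
    replicas[0].write("v1", timestamp=1)     # update lands on one replica
    assert replicas[1].read() is None        # reads disagree for a while...
    anti_entropy(replicas)                   # ...until updates quiesce
    assert all(r.read() == "v1" for r in replicas)  # last updated value

Note that this system, too, runs continuously with no final state; the guarantee is about what happens whenever updates to an item stop.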

> And Bitcoin by default *does not* ignore the contents of the last X blocks. A Bitcoin node queried about the current blockchain state WILL give inconsistent answers when there are block rearrangements = no strong consistency.
 
One could split hairs here by pedantically defining "Bitcoin by default" -- for instance, by referring only to the reference client code and ignoring the shim code in the app that interfaces with the client -- but that would drag us into a fruitless mailing-list-style discussion from which no one would emerge any wiser. I'll avoid that, and instead dryly note that the reference client's listreceivedbyaddress returns the number of confirmations by default, and every application then checks that the confirmations value exceeds the application's own omega; meanwhile, getbalance and getreceivedbyaddress take a number of confirmations as an argument, shielding the app from reorganizations of the suffix. That is precisely the point made in the post.
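
Since the point is about the RPC interface, a brief sketch may help. It is written against the reference client's JSON-RPC calls named above; the endpoint, credentials, address, and the choice of omega are all placeholders.

    import requests

    RPC_URL = "http://127.0.0.1:8332"      # placeholder node endpoint
    RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials
    OMEGA = 6                              # this application's own omega

    def rpc(method, *params):
        r = requests.post(RPC_URL, auth=RPC_AUTH,
                          json={"method": method, "params": list(params)})
        r.raise_for_status()
        return r.json()["result"]

    # Idiom 1: listreceivedbyaddress reports confirmations by default;
    # the application applies its own omega to decide what is settled.
    for entry in rpc("listreceivedbyaddress"):
        status = "settled" if entry["confirmations"] >= OMEGA else "pending"
        print(entry["address"], entry["amount"], status)

    # Idiom 2: pass omega as the minconf argument, so the node itself
    # ignores the mutable suffix of the chain when answering.
    balance = rpc("getbalance", "*", OMEGA)
    received = rpc("getreceivedbyaddress", "ADDRESS_HERE", OMEGA)  # placeholder address

Either way, the application never treats the last omega blocks as settled, which is exactly the shielding described above.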

> Not to mention that your definition ignores the nonzero probability of a block rearrangement extending beyond your constant omega.

The post covers this case. Technically, there is a difference between 0 probability and epsilon probability -- that difference is why Nakamoto Consensus was an exciting breakthrough result; it is why Lamport's results on the 3f+1 bound for the Byzantine Generals Problem do not apply to Nakamoto Consensus; and it is why it took our paper (Majority is Not Enough) to show that Nakamoto Consensus has a 33% bound, similar to that of Lamport-style consensus, when it comes to tolerating Byzantine actors.

Practically, however, there is little difference between 0 and a value that approaches 0 exponentially, given that we operate on hardware subject to random errors. The post makes the case that one can pick an omega such that the probability of your processor mis-executing your code exceeds the probability of observing a reorganization deeper than omega.
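
For concreteness, here is the calculation that claim rests on, following the attacker catch-up analysis in section 11 of the Bitcoin whitepaper; the hardware error rate used for comparison is an illustrative placeholder, not a measured figure.

    from math import exp

    def reorg_probability(q, z):
        # Probability that an attacker controlling fraction q of the
        # hashpower ever overtakes a chain that is z blocks ahead
        # (Bitcoin whitepaper, section 11).
        p = 1.0 - q
        if q >= p:
            return 1.0
        lam = z * (q / p)
        total = 1.0
        poisson = exp(-lam)            # Poisson term for k = 0
        for k in range(z + 1):
            total -= poisson * (1 - (q / p) ** (z - k))
            poisson *= lam / (k + 1)   # advance to the k+1 term
        return total

    HARDWARE_ERROR = 1e-15  # placeholder per-operation soft-error rate
    for omega in (6, 12, 24, 48):
        print(omega, reorg_probability(0.10, omega))
    # Pick omega so that reorg_probability(q, omega) < HARDWARE_ERROR:
    # the chain's epsilon then drops below the machine's own epsilon.

The probability falls roughly geometrically in omega, which is why a modest constant suffices.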

> Bitcoin provides a probabilistic, accumulating guarantee. Not a perfect one.

Sometimes, non-technical people get confused about the difference between 0 and very, very small probabilities that merely approximate 0. For instance, some people get very worried about hash collisions, whose absence Bitcoin relies on for its correctness; that probability is also exponentially small, but not exactly 0. Your overall point seems to be the analogous concern that Bitcoin's exponentially dropping probability of reorganization isn't quite a "perfect" 0. If so, I agree, and the original post made this quite clear. Though I hope we can avoid that kind of discussion on this particular list.
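
For the hash-collision analogy, the usual birthday-bound approximation (standard, not from the post) shows the same shape:

    % Probability of any SHA-256 collision among n computed hashes:
    P_{\mathrm{collision}} \approx \frac{n^2}{2 \cdot 2^{256}},
    \qquad n = 2^{64} \;\Rightarrow\; P_{\mathrm{collision}} \approx 2^{-129} \approx 10^{-39}

Exponentially small, but not exactly 0; the same shape as the reorganization probability above.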

- egs