public inbox for bitcoindev@googlegroups.com
* [bitcoin-dev] Why not checkpointing the transactions?
@ 2015-10-09  3:18 telemaco
  2015-10-09  4:16 ` jl2012
  0 siblings, 1 reply; 2+ messages in thread
From: telemaco @ 2015-10-09  3:18 UTC (permalink / raw)
  To: bitcoin-dev

Hello,

I have been working in database engineering for many years, and there
are some things I don't understand very well about how the Bitcoin
architecture works. I have not written here before because I would not
like to disturb development with yet another of those far-off ideas
that do not contribute to actual code, as is sometimes said here.

In any case, today I was listening to the latest Beyond Bitcoin video
about the new BitShares 2.0 and how they are changing the transaction
structure to make it more similar to what relational database
management systems have been doing for 30 years.

Keep a checkpointed state and just carry the new transactions. In an
RDBMS, anyone who wants to do historical research can simply take the
transaction log backups and replay every single transaction since the
beginning of history.
Why is the Bitcoin network replaying every single transaction since the
beginning instead of starting from a more recent state? Why is that
information even stored on every core node? Couldn't we just keep a
checkpointed state plus the new transactions, and leave the backup of
all transactions since the beginning of history to "historical" nodes
or collectors?
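
To make the analogy concrete, here is a rough toy sketch in Python (the
balance-map state and the apply_tx helper are illustrative placeholders
only, not Bitcoin or any real RDBMS API):

    # Toy sketch: "checkpoint + recent log" vs "replay everything".
    # A transaction here is simply (sender, receiver, amount).

    def apply_tx(state, tx):
        sender, receiver, amount = tx
        state[sender] = state.get(sender, 0) - amount
        state[receiver] = state.get(receiver, 0) + amount

    def state_by_full_replay(full_log):
        state = {}                       # start from nothing (genesis)
        for tx in full_log:              # replay every transaction ever made
            apply_tx(state, tx)
        return state

    def state_from_checkpoint(checkpoint, log_since_checkpoint):
        state = dict(checkpoint)         # start from an agreed snapshot
        for tx in log_since_checkpoint:  # replay only what came after it
            apply_tx(state, tx)
        return state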

RDBMS replication has been working with this model for some time,
able to replicate at the table, column, index, row or even database
level between many datacenters/continents, and it already serves the
financial world, banks and exchanges. Their TPS is very high because
they only transfer the smallest set of transactions that nodes decide
to subscribe to; maybe a Japanese exchange just needs transactional
info for Japanese stocks on Nasdaq, or something similar. But even if
they subscribe to everything, the transactional info is to some extent
just a very small amount of data.

Couldn't we have just a very small transactional system with the
fewest number of working transactions and advancing checkpointed
states? We should be able to have nodes the size of watches with that
structure, instead of holding everything forever and for all eternity
and hoping that Moore's law keeps allowing us infinite growth. What if
five submarine internet cables get cut in an earthquake or a war, or a
shortage of materials for chip manufacturing means the network cannot
keep up with Moore's law? Shouldn't performance optimization and
capacity planning go both ways? Having a really small working
"transaction log" would let companies relay some transactional info to
little PDAs in warehouses, or relay a small amount of information to a
satellite, instead of every single transaction of the company forever.

After all, if we could have a very small transactional workload and
leave behind the weight of all the previous transactions, we could run
Bitcoin nodes on watches and have an incredibly decentralized system
that nobody could disrupt, as the decentralization would be massive.
We could even create a very small ODBC/JDBC connector in the Bitcoin
client, let any traditional RDBMS handle the heavy load, and just let
Bitcoin Core relay to everyone and his mother to the point that no one
could ever disrupt such a small amount of transactional data.

Just some thoughts. Please don't be too harsh; I am still researching
the Bitcoin code and my intentions are the best, as I could not be more
passionate about the project.

Thanks,





* Re: [bitcoin-dev] Why not checkpointing the transactions?
  2015-10-09  3:18 [bitcoin-dev] Why not checkpointing the transactions? telemaco
@ 2015-10-09  4:16 ` jl2012
  0 siblings, 0 replies; 2+ messages in thread
From: jl2012 @ 2015-10-09  4:16 UTC (permalink / raw)
  To: telemaco; +Cc: bitcoin-dev

You are mixing multiple issues.

1. It is not possible to "checkpoint" in a totally decentralized and 
trustless way. You need the whole blockchain to confirm its validity, as 
a single invalid tx in the history will invalidate ALL blocks after it, 
even if the invalid tx is irrelevant to you.
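
For illustration only, a toy sketch of why validation is all-or-nothing
(this is not Bitcoin Core's actual code; the block format and the
tx_is_valid callback are simplified placeholders):

    # Each block commits to its parent's hash, so accepting a "checkpoint"
    # without replaying history means trusting whoever produced it.

    import hashlib

    def block_hash(block):
        data = block["prev_hash"] + repr(block["txs"])
        return hashlib.sha256(data.encode()).hexdigest()

    def chain_is_valid(blocks, tx_is_valid):
        prev_hash = "00" * 32                    # genesis has no parent
        for block in blocks:
            if block["prev_hash"] != prev_hash:  # broken link: reject this and
                return False                     # every block built on top of it
            if not all(tx_is_valid(tx) for tx in block["txs"]):
                return False                     # one bad tx taints the rest
            prev_hash = block_hash(block)
        return True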

2. Downloading the whole blockchain does not mean you need to store
the whole blockchain. Spent transaction outputs can be safely removed
from your hard drive. Please read section 7 of Satoshi's paper:
https://bitcoin.org/bitcoin.pdf . This feature is already implemented
in Bitcoin Core 0.11.
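
For example, as of Bitcoin Core 0.11 a node can run in pruned mode with
a single bitcoin.conf setting (it still downloads and validates every
block, but discards old block files afterwards; 550 MiB is the minimum
target):

    # bitcoin.conf
    prune=550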

3. If you don't even want to download the whole blockchain, you can
download and validate only the portions you are interested in. Satoshi
called this Simplified Payment Verification (SPV); see section 8 of his
paper. It is secure as long as >50% of miners are honest. Android
Bitcoin Wallet is an SPV wallet based on bitcoinj.
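
For intuition, a simplified sketch of the Merkle-branch check that lets
an SPV client tie a transaction to a block header it already has (not
the actual bitcoinj code; Bitcoin's little-endian byte order and
odd-level duplication rules are omitted):

    import hashlib

    def dsha256(data):
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def verify_merkle_branch(tx_hash, branch, merkle_root):
        # branch is a list of (sibling_hash, sibling_is_right) pairs walking
        # from the transaction up to the root committed in the block header.
        h = tx_hash
        for sibling, sibling_is_right in branch:
            pair = h + sibling if sibling_is_right else sibling + h
            h = dsha256(pair)
        return h == merkle_root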

Finally, I think this kind of question would be better asked on the
bitcointalk forum. The mailing list should be kept for specific
development discussion, not vague ideas.



telemaco via bitcoin-dev wrote on 2015-10-08 23:18:
> [...]



