Proposal : Hyperledger Fabric block archiving


nekia <atsushin@...>
 

Hi,

I found that it's quite difficult to estimate the maximum metadata size in advance in order to align metadata sizes across orderers. So we've decided to handle archived data at the block level, not at the blockfile level. By managing archived data at the block level, we can avoid the following issues:
  • The layout of blockfiles with the same suffix is not always identical among the peers in an organization
  • Because of this variance in layout, a peer cannot access archived data using the original block location info recorded in its block index DB
I've put together an overview of the new approach in the following slides.
HLF Block Archiving - Google Slides
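To make the layout problem concrete, here is a small Python sketch (not Fabric code; the block sizes and the file-size limit are made up) of how an append-only blockfile store assigns blocks to files, and how a small per-peer difference in metadata size shifts a block into a different file at a different offset:

```python
BLOCKFILE_LIMIT = 100  # bytes, for illustration; real Fabric uses a much larger limit

def layout(block_sizes, limit=BLOCKFILE_LIMIT):
    """Assign each block a (file_suffix, offset), rolling over to a new
    blockfile when the current one would exceed the size limit."""
    placement, suffix, offset = {}, 0, 0
    for num, size in enumerate(block_sizes):
        if offset + size > limit and offset > 0:
            suffix, offset = suffix + 1, 0  # start the next blockfile
        placement[num] = (suffix, offset)
        offset += size
    return placement

# Same chain on two peers; on peer B, block 2 carries 15 extra bytes of
# metadata (e.g. an extra consenter signature):
peer_a = layout([30, 30, 30, 30])
peer_b = layout([30, 30, 45, 30])

print(peer_a[2], peer_b[2])  # (0, 60) (1, 0)
print(peer_a[3], peer_b[3])  # (1, 0) (1, 45)
```

Here block 2 lands in file 0 at offset 60 on one peer but in file 1 at offset 0 on the other, so a locPointer recorded on one peer is meaningless against another peer's archived blockfiles.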

In the feedback we've received in this thread, implementing this separately from the Fabric core has been mentioned several times. So far we have not been able to offer this feature as a pluggable module for the ledger layer of Fabric. One of the main reasons is that we need to hook local file access when block data is read from the local file system.

Do you think this project can complement, or coexist with, the 'ledger checkpoint and pruning' feature which is planned to be added to v2.x?

Cheers,
Atsushi


nekia <atsushin@...>
 

Hi Jay,

Thank you for your confirmation. I was able to reproduce the situation locally using BYFN from fabric-samples on Fabric v2.0.0-beta. (But I haven't yet understood how it is decided which orderer delivers blocks to which peer.)

If we can align the metadata of each block to a fixed size on every orderer, I think our assumption can still hold.
Is it possible to estimate an upper bound on the size of the SignatureHeader and Signature? If so, by padding to that maximum size on each orderer, we could make the metadata size of each block identical across orderers.
If not, we need to consider moving to block-based (rather than blockfile-based) archiving logic.
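As a sketch of the padding idea (plain Python with a made-up bound; whether a real upper bound exists for SignatureHeader + Signature is exactly the open question):

```python
MAX_METADATA = 512  # hypothetical upper bound, in bytes

def pad_metadata(metadata: bytes, max_size: int = MAX_METADATA) -> bytes:
    """Zero-pad metadata so every orderer emits the same metadata size."""
    if len(metadata) > max_size:
        raise ValueError("metadata exceeds the assumed upper bound")
    return metadata + b"\x00" * (max_size - len(metadata))

orderer1 = pad_metadata(b"short signature header + signature")
orderer2 = pad_metadata(b"a somewhat longer signature header + signature")
print(len(orderer1), len(orderer2))  # 512 512 -- sizes now align
```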

Thanks,
Atsushi


Jay Guo
 

Atsushi,

That change simply writes the data to another field in the `BlockMetadata`
array, which doesn't change the fact that blocks can be of different
sizes, since orderers individually add signatures and headers.

To reproduce this situation, you could start some orderer
nodes, produce some blocks, then pull and deserialize the blocks to inspect them.
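The inspection described above could look roughly like this (a Python sketch with fake section bytes standing in for a deserialized block; in practice you would fetch the blocks with `peer channel fetch` and decode the protobuf):

```python
def compare_sections(block_a: dict, block_b: dict) -> dict:
    """Each 'block' is {'header': bytes, 'data': bytes, 'metadata': bytes};
    return the per-section sizes side by side."""
    return {sec: (len(block_a[sec]), len(block_b[sec]))
            for sec in ("header", "data", "metadata")}

# Stand-ins for the same block as delivered by two different orderers:
from_orderer0 = {"header": b"h" * 70, "data": b"d" * 900, "metadata": b"m" * 210}
from_orderer1 = {"header": b"h" * 70, "data": b"d" * 900, "metadata": b"m" * 260}

print(compare_sections(from_orderer0, from_orderer1))
# header and data sizes match; metadata (and thus total block size) differs
```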

- J

On Wed, Dec 18, 2019 at 11:16 AM nekia <atsushin@...> wrote:

Hi Manish,

[FAB-14872] Deprecate the old consenter metadata storage location and prefer the metadata stored in the block signature. - Hyperledger JIRA

I found the above change in v2.0.0. In my understanding, the consenter signature field of the metadata is deprecated, and the block metadata is shared across orderers in a Raft-enabled network. After applying this change, can the following case still happen? If so, could you please give us details of such a situation (e.g., how to create it)? We'd like to know what kind of configuration produces this situation, so we can evaluate our solution.

the size of a given block may vary slightly on each peer. Though the header and data sections of a block are the same size across peers, the difference in overall size can be caused by the metadata section, which contains consenter signatures. In addition, on some of the peers, the metadata may also include the block commit hash as additional data.

Thanks,
Atsushi



nekia <atsushin@...>
 

Hi Manish,


Thank you for your response.

I see now. This important detail was not covered in the proposal, and hence I was under the impression that you were not modifying the core Fabric code. Given the first point above, this would require more changes in the peer core.
I see. We'll keep looking into a solution for this kind of situation and will update the proposal as well.

As I mentioned previously, it can still be event driven (event from archiver repo). My main point was point-to-point communication vs gossip.


Understood. I agree that a pub/sub messaging model might be better than broadcasting.

All I wanted to say here is that it would be good if someone could have one of the peers' file stores act as a repo as well… in other words, it would still offer everything a repo offers, and the code would live outside the core peer code anyway. But this is a less important point compared to the others, I guess.
Yes, ultimately we'd like to introduce that kind of new peer role, called 'repo', which offers the same functionality as the current repository alongside the existing peer node functionality.

Thanks,
Atsushi


Manish
 

Hi Atsushi,

My responses are in-lined below…

Thanks,
Manish

On Mon, Dec 2, 2019 at 4:10 PM Manish Sethi <manish.sethi@...> wrote:


On Fri, Nov 29, 2019 at 12:44 AM nekia <atsushin@...> wrote:
Thanks, Manish, Yacov, and Gari.
 
 
I really appreciate your feedback and insights.
 
(Feedback from Manish)
First, the fundamental assumption that you make, that all the block files are the same across peers, is incorrect. The block files are not guaranteed to contain the same number of blocks across peers, because a block file is bounded by file size and not by the number of blocks. Further, the size of a given block may vary slightly on each peer: though the header and data sections of a block are the same size across peers, the difference in overall size can be caused by the metadata section, which contains consenter signatures. ...
Thank you so much for raising such an important point. We're now reviewing and analyzing Fabric's implementation around metadata. Let me ask a question to clarify my understanding.
Say, for example, block #100 is available on channel 'mychannel' within organization 'org1'. Does it mean that the metadata of block #100 on peer0.org1 can be different from the metadata of the same block (#100) on a different peer (e.g. peer1.org1)? If yes, you are right that our assumption is incorrect. That is, our feature would not be able to serve archived block data on the repository to any peer node, because the locPointer (the offset and length of each block within a blockfile) recorded in a peer's block index is not valid for archived blockfiles on the repository.

Yes, that is the case I was highlighting…
Second, there are certain kinds of queries for which a peer assumes the presence of block files in the current way. This primarily includes history queries, block-related queries, and txid-related queries. These queries may start failing or lead to crashes or unexpected results if you simply delete the files. ...
Third, somewhat similar to the second point above, the peer has a feature wherein it rebuilds the statedb and historydb if they are dropped and the peer is simply restarted. This feature also relies on the presence of the blockfiles.
We have catered for these situations. Each peer node is still able to access all the blockfiles (even if they're archived and discarded) seamlessly via the existing interface. Even after archiving blockfiles, the blockchain's properties are still maintained on the network, so rebuilding the statedb and accessing the historydb still work under this archiving feature.
Note: we modified the Hyperledger Fabric core to handle query failures when it attempts to access deleted blockfiles.

 I see now. This important detail was not covered in the proposal, and hence I was under the impression that you were not modifying the core Fabric code. Given the first point above, this would require more changes in the peer core.
Fourth, I am not sure if gossip is the right communication mechanism that you want to employ for this communication. An archiver client perhaps can simply poll (or register for updates with) the archiver repository.
In the early stages of our development, we used a polling mechanism to trigger archiving. But for efficiency (processing and network traffic), we changed the implementation to be event-driven.
 
 As I mentioned previously, it can still be event driven (event from archiver repo). My main point was point-to-point communication vs gossip.
Finally, I would like to understand in more detail what the benefits of having a separate repository are. Why not simply let the files be there on the anchor peer and purge them from the other peers? If the answer is compression, then ideally we should explore the choice of writing the data in blockfiles in a compressed format.
Good point. We designed this archiving feature to be as simple as possible (that is, minimal code changes to the Hyperledger Fabric core). With the repository concept, we're able to access all the blockfiles (even if they're archived and discarded) seamlessly via the existing interface.
     All I wanted to say here is that it would be good if someone could have one of the peers' file stores act as a repo as well… in other words, it would still offer everything a repo offers, and the code would live outside the core peer code anyway. But this is a less important point compared to the others, I guess.
 
  
(Feedback from Yacov)
one thing I noticed while skimming the code, is that while you send the ArchivedBlockFileMsg via gossip, you are not ensuring it is eventually propagated to peers successfully.
 
This means that if a peer didn't get the message, it won't archive your file.
 
I suggest that you think of a more robust mechanism, like periodically comparing digests of ranges.
You're right, this kind of logic is missing from our current implementation. It was actually on our radar, but we have had difficulty implementing this aspect. Thank you for pointing us to reference code for pull-based gossip.
 
(Feedback from Gari)
It seems the only thing you really wanted to use the peer for was to propagate information to other peers within the same organization. ...
Yes, that is one of the reasons we integrated the archiving features into the peer binary. But the most important reason is to handle query failures when the peer attempts to access deleted blockfiles. Each peer node is still able to access all the blockfiles (even if they're archived and discarded) seamlessly via the existing interface.
 
 
Thanks,
Atsushi
 





Gari Singh <garis@...>
 

Hi Atsushi -

Thanks for sharing your efforts to date.

Overall, I like the idea of providing a utility to do this, as we generally tell people that they can do this but don't provide any tools for doing so.

I do, however, have concerns about integrating any part of this functionality into the actual peer binary itself. I don't actually think you need to do that.
I think running a separate "archive client" without modifying the peer is the way to go. It keeps this functionality clean and separate from the peer and allows it to progress on its own.

It seems the only thing you really wanted to use the peer for was to propagate information to other peers within the same organization. My take here is that you can more easily do something such as having the archive client write its status to a file in the "archiver repository". This way other archiver clients within the same organization can simply periodically poll this file for status. Additionally, you can also use this same file repository to maintain some type of process lock file such that you'll only have one archiver client actively performing the archival.
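The file-based coordination suggested above could be sketched like this (Python; all paths and the status format are made up for illustration):

```python
import json, os, tempfile

def try_acquire_lock(repo_dir: str) -> bool:
    """Atomically create a lock file; only one archiver client wins."""
    try:
        fd = os.open(os.path.join(repo_dir, "archiver.lock"),
                     os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def write_status(repo_dir: str, archived_up_to: int) -> None:
    with open(os.path.join(repo_dir, "status.json"), "w") as f:
        json.dump({"archived_up_to_block": archived_up_to}, f)

def read_status(repo_dir: str) -> int:
    with open(os.path.join(repo_dir, "status.json")) as f:
        return json.load(f)["archived_up_to_block"]

repo = tempfile.mkdtemp()          # stand-in for the archiver repository
print(try_acquire_lock(repo))      # True: this client becomes the archiver
print(try_acquire_lock(repo))      # False: a second client backs off
write_status(repo, 5000)
print(read_status(repo))           # 5000: other clients poll this value
```

The atomic `O_EXCL` create gives the process-lock behaviour; the other archiver clients in the organization periodically poll `status.json` instead of listening on gossip.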

-- G

-----------------------------------------
Gari Singh
Distinguished Engineer, CTO - IBM Blockchain
IBM Middleware
550 King St
Littleton, MA 01460
Cell: 978-846-7499
garis@...
-----------------------------------------

-----fabric@... wrote: -----
To: "Manish" <manish.sethi@...>
From: "Yacov"
Sent by: fabric@...
Date: 11/25/2019 04:21PM
Cc: nekia <atsushin@...>, "fabric@..." <fabric@...>
Subject: [EXTERNAL] Re: [Hyperledger Fabric] Proposal : Hyperledger Fabric block archiving

Hey Atsushi,

one thing I noticed while skimming the code, is that while you send the ArchivedBlockFileMsg via gossip, you are not ensuring it is eventually propagated to peers successfully.

This means that if a peer didn't get the message, it won't archive your file.

I suggest that you think of a more robust mechanism, like periodically comparing digests of ranges.

The code in https://github.com/hyperledger-labs/fabric-block-archiving/blob/master/gossip/gossip/pull/pullstore.go is a generic pull mechanism based on digests. You might want to give it a look.
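The digest comparison suggested above might be sketched as follows (Python; this is just the idea of comparing digests over ranges, not the pullstore.go implementation):

```python
import hashlib

def range_digest(archived: set, start: int, end: int) -> str:
    """Digest over which blocks in [start, end) this peer has archived."""
    member = bytes(1 if n in archived else 0 for n in range(start, end))
    return hashlib.sha256(member).hexdigest()

peer1 = {0, 1, 2, 3, 4, 5}
peer2 = {0, 1, 2, 3}  # missed the gossip message covering blocks 4-5

print(range_digest(peer1, 0, 4) == range_digest(peer2, 0, 4))  # True
print(range_digest(peer1, 0, 6) == range_digest(peer2, 0, 6))  # False
```

A mismatch on a range tells the lagging peer which blocks to pull, making propagation eventually consistent even when individual gossip messages are lost.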


- Yacov.



From: "Manish" <manish.sethi@...>
To: nekia <atsushin@...>
Cc: "fabric@..." <fabric@...>
Date: 11/25/2019 10:50 PM
Subject: [EXTERNAL] Re: [Hyperledger Fabric] Proposal : Hyperledger Fabric block archiving
Sent by: fabric@...



Hi Atsushi,

Thanks for your proposal. At a high level the objective makes sense to me, and below are my high-level observations that you may want to consider.

First, the fundamental assumption that you make, that all the block files are the same across peers, is incorrect. The block files are not guaranteed to contain the same number of blocks across peers, because a block file is bounded by file size and not by the number of blocks. Further, the size of a given block may vary slightly on each peer: though the header and data sections of a block are the same size across peers, the difference in overall size can be caused by the metadata section, which contains consenter signatures. In addition, on some of the peers, the metadata may also include the block commit hash as additional data. So, either you have to operate at the level of block numbers (i.e., during purging, the archiver client on a peer deals with a file that should be purged only partially, based on where in the file the target block is located), or, if you want to operate at the file level, the archiver client could just consider files up to the previous file.

Second, there are certain kinds of queries for which a peer assumes the presence of block files in the current way. This primarily includes history queries, block-related queries, and txid-related queries. These queries may start failing or lead to crashes or unexpected results if you simply delete the files. I did not see any details in your design on how you plan to handle these. The potential solutions may range from simply denying these kinds of queries to more sophisticated solutions such as serving the queries by involving the archiver repository. However, in either case the challenge would be knowing that the desired block/transaction has been purged from the local peer (e.g., consider blockByHash or transactionByTxid kinds of queries.)
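One way to make "has this been purged locally?" decidable is a purge watermark, sketched below (Python; the repository client is a stand-in). blockByHash/transactionByTxid queries would additionally need the local index entry to resolve to a block number first:

```python
class ArchivedLedger:
    """Route block queries either to local blockfiles or to the repo,
    based on a purge watermark."""
    def __init__(self, purged_up_to: int, local_blocks: dict, repo: dict):
        self.purged_up_to = purged_up_to  # blocks <= this exist only in repo
        self.local = local_blocks
        self.repo = repo

    def get_block(self, num: int) -> str:
        if num <= self.purged_up_to:
            return self.repo[num]   # served by the archiver repository
        return self.local[num]      # served from local blockfiles

ledger = ArchivedLedger(purged_up_to=2,
                        local_blocks={3: "blk3", 4: "blk4"},
                        repo={0: "blk0", 1: "blk1", 2: "blk2"})
print(ledger.get_block(1), ledger.get_block(4))  # blk1 blk4
```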

Third, somewhat similar to the second point above, the peer has a feature wherein it rebuilds the statedb and historydb if they are dropped and the peer is simply restarted. This feature also relies on the presence of the blockfiles.

Fourth, I am not sure if gossip is the right communication mechanism that you want to employ for this communication. An archiver client perhaps can simply poll (or register for updates with) the archiver repository.

Finally, I would like to understand in more detail what the benefits of having a separate repository are. Why not simply let the files be there on the anchor peer and purge them from the other peers? If the answer is compression, then ideally we should explore the choice of writing the data in blockfiles in a compressed format.


Hope this helps.

Thanks,
Manish

On Thu, Nov 14, 2019 at 10:26 PM nekia <atsushin@...> wrote:
Hello everybody,

I’d like to propose a new feature, ‘block archiving’, for Hyperledger Fabric. We are working on this block archiving project, which is listed under the Hyperledger Labs repository. Our current efforts are mainly focused on improving reliability. If we could get some feedback on the proposed feature from members involved in the Hyperledger Fabric implementation, it would be quite useful for further improving the user experience.

- Hyperledger Fabric Block Archiving
https://github.com/hyperledger-labs/fabric-block-archiving

This enhancement for Hyperledger Fabric aims to:

- Reduce the total amount of storage space required for an organisation to operate a Hyperledger Fabric network, by archiving block data into a repository.
- Enable organisations to operate a Hyperledger Fabric network on low-resourced nodes, such as IoT edge devices.

- Our proposal
https://github.com/hyperledger-labs/hyperledger-labs.github.io/blob/master/labs/fabric-block-archiving.md

- Technical overview
https://github.com/nekia/fabric-block-archiving/blob/techoverview/BlockVault%20-%20Technical%20Overview.pdf


Kind regards,
Atsushi Neki
RocketChat: nekia

Atsushi Neki
Senior Software Development Engineer

Fujitsu Australia Software Technology Pty Ltd
14 Rodborough Road, Frenchs Forest NSW 2086, Australia
T +61 2 9452 9036 M +61 428 223 387
AtsushiN@...
fastware.com.au



Disclaimer
The information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000 or by reply e-mail to the sender and delete the document and all copies thereof.

Whereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication or any files attached.

If you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email unsubscribe@...


Yacov
 

Hey Atsushi,

one thing I noticed while skimming the code, is that while you send the ArchivedBlockFileMsg via gossip, you are not ensuring it is eventually propagated to peers successfully.

This means that if a peer didn't get the message, it won't archive your file.

I suggest that you think of a more robust mechanism, like periodically comparing digests of ranges.

The code in https://github.com/hyperledger-labs/fabric-block-archiving/blob/master/gossip/gossip/pull/pullstore.go is a generic pull mechanism based on digests.  You might want to give it a look.


- Yacov.



From:        "Manish" <manish.sethi@...>
To:        nekia <atsushin@...>
Cc:        "fabric@..." <fabric@...>
Date:        11/25/2019 10:50 PM
Subject:        [EXTERNAL] Re: [Hyperledger Fabric] Proposal : Hyperledger Fabric block archiving
Sent by:        fabric@...




Hi Atsushi,

Thanks for your proposal and at high level the objective makes sense to me and below is my high level observations that you may want to consider. 


Manish
 

Hi Atsushi,

Thanks for your proposal; at a high level the objective makes sense to me. Below are some high-level observations that you may want to consider.

First, the fundamental assumption you make — that the block files are the same across peers — is incorrect. Block files are not guaranteed to contain the same number of blocks across peers, because a block file is bounded by file size, not by the number of blocks, and the size of a given block may vary slightly on each peer. Although the header and data sections of a block are the same size across peers, the metadata section, which contains consenter signatures, can differ; on some peers the metadata may also include the block commit hash as additional data. So either you have to operate at the level of block numbers (i.e., during purging, an archiver client on a peer deals with a file that should be purged partially, based on where in the file the target block is located), or, if you want to operate at the file level, the archiver client could simply consider files only up to the previous file.
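To make the file-boundary drift concrete, here is a small illustrative sketch (plain Python, not Fabric code; the size limit and byte counts are invented) of packing the same chain of blocks into size-bounded files on two peers whose per-block metadata sizes differ:

```python
# Hypothetical illustration: pack the same chain of blocks into
# size-bounded files on two peers. Header/data bytes are identical,
# but per-peer metadata (signatures, optional commit hash) differs,
# so the file boundaries drift apart.

FILE_SIZE_LIMIT = 1000  # bytes; stand-in for Fabric's real blockfile cap

def pack_into_files(block_sizes, limit=FILE_SIZE_LIMIT):
    """Return how many blocks land in each file."""
    files, count, used = [], 0, 0
    for size in block_sizes:
        # Start a new file once the current one would overflow.
        if used + size > limit and count > 0:
            files.append(count)
            count, used = 0, 0
        count += 1
        used += size
    if count:
        files.append(count)
    return files

header_and_data = [200] * 10                  # identical across peers
peer_a = [s + 50 for s in header_and_data]    # smaller metadata
peer_b = [s + 120 for s in header_and_data]   # commit hash + larger signatures

print(pack_into_files(peer_a))  # [4, 4, 2]
print(pack_into_files(peer_b))  # [3, 3, 3, 1]
```

With identical header/data but different metadata, the same ten blocks land in three files on one peer and four on the other, so a file with a given suffix does not hold the same blocks everywhere.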

Second, there are certain kinds of queries for which a peer currently assumes the presence of block files. These primarily include history queries, block-related queries, and txid-related queries. Such queries may start failing, or lead to crashes or unexpected results, if you simply delete the files. I did not see any details in your design on how you plan to handle these. The potential solutions range from simply denying these kinds of queries to more sophisticated ones, such as serving them by involving the archiver repository. In either case, the challenge would be knowing that the desired block or transaction has been purged from the local peer (consider, for example, blockByHash or transactionByTxid queries).
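One possible shape for that fallback, sketched with hypothetical names (ArchiveAwareBlockProvider, local_store, archive_repo are illustrative, not Fabric APIs): try the local block store first, and either deny the query or consult the archiver repository when the block has been purged.

```python
# Sketch only: a peer-side block provider that is aware of archiving.
# All class and method names here are assumptions for illustration.

class BlockNotFoundError(Exception):
    pass

class ArchiveAwareBlockProvider:
    def __init__(self, local_store, archive_repo, deny_archived=False):
        self.local = local_store          # fast local blockfile store
        self.archive = archive_repo       # remote archiver repository
        self.deny_archived = deny_archived

    def get_block(self, number):
        block = self.local.get(number)
        if block is not None:
            return block
        # Block was purged locally: either refuse the query outright...
        if self.deny_archived:
            raise BlockNotFoundError(f"block {number} archived; query denied")
        # ...or fall back to the archiver repository.
        block = self.archive.fetch(number)
        if block is None:
            raise BlockNotFoundError(f"block {number} not found anywhere")
        return block
```

Resolving blockByHash or transactionByTxid would additionally need an index (or the repository's help) to map the hash or txid to a block number before a lookup like this can run, which is exactly the challenge noted above.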

Third, somewhat similar to the second point above, a peer has a feature whereby it rebuilds the statedb and historydb if they are dropped and the peer is simply restarted. This feature also relies on the presence of block files.

Fourth, I am not sure that gossip is the right communication mechanism for this purpose. An archiver client could perhaps simply poll the archiver repository (or register with it for updates).
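A minimal sketch of that polling alternative, with made-up function names (get_archived_height and purge_below stand in for whatever the repository and local purger actually expose):

```python
# Illustrative only: one cycle of an archiver client that polls the
# repository for the archived-through block height instead of relying
# on gossip dissemination.

def poll_once(get_archived_height, purge_below, last_seen):
    """Run one poll cycle; returns the height observed this cycle."""
    height = get_archived_height()
    if height > last_seen:
        # Purge only blockfiles that lie wholly below the archived height.
        purge_below(height)
    return height
```

In a real client this would run on a timer (or as a long-poll/subscription against the repository), keeping the archiving control channel entirely out of gossip.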

Finally, I would like to understand in more detail what the benefits of having a separate repository are. Why not simply let the files remain on the anchor peer and purge them from the other peers? If the answer is compression, then ideally we should explore the choice of writing the data to block files in a compressed format.
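A quick, synthetic back-of-the-envelope for the compression option (the payload below is an invented stand-in; real block data, being repetitive protobuf and certificate material, also tends to compress well, though actual ratios will differ):

```python
# Rough check of how much generic compression reclaims on repetitive,
# certificate-like payloads. Synthetic data only; not real Fabric blocks.
import zlib

fake_blockfile = (
    b"-----BEGIN CERTIFICATE-----" + b"MIIB" * 128 + b"-----END CERTIFICATE-----"
) * 50
compressed = zlib.compress(fake_blockfile, 6)
print(f"{len(fake_blockfile)} -> {len(compressed)} bytes "
      f"({len(compressed) / len(fake_blockfile):.1%})")
```

If in-place compressed blockfiles recover most of the space, the remaining argument for a separate repository would have to rest on something else, such as freeing low-resourced peers of the storage entirely.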


Hope this helps.

Thanks,
Manish


On Thu, Nov 14, 2019 at 10:26 PM nekia <atsushin@...> wrote:

Hello everybody,

 

 

I’d like to propose a new feature, ‘block archiving’, for Hyperledger Fabric. We are working on this block archiving project, which is listed under the Hyperledger Labs repositories; our current efforts are focused mainly on improving reliability. Feedback on the proposed feature from members involved in the Hyperledger Fabric implementation would be very helpful for further improving the user experience.

 

- Hyperledger Fabric Block Archiving

    https://github.com/hyperledger-labs/fabric-block-archiving

 

    This enhancement for Hyperledger Fabric aims to:

 

        - Reduce the total amount of storage space required for an organisation to operate a Hyperledger Fabric network, by archiving block data into a repository.

        - Enable organisations to operate a Hyperledger Fabric network with low-resourced nodes, such as IoT edge devices.

 

- Our proposal

    https://github.com/hyperledger-labs/hyperledger-labs.github.io/blob/master/labs/fabric-block-archiving.md

 

- Technical overview

    https://github.com/nekia/fabric-block-archiving/blob/techoverview/BlockVault%20-%20Technical%20Overview.pdf

 

 

Kind regards,

Atsushi Neki

RocketChat:  nekia

 

Atsushi Neki
Senior Software Development Engineer

Fujitsu Australia Software Technology Pty Ltd

14 Rodborough Road, Frenchs Forest NSW 2086, Australia
T +61 2 9452 9036 M +61 428 223 387
AtsushiN@...
fastware.com.au


Disclaimer

The information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000 or by reply e-mail to the sender and delete the document and all copies thereof.


Whereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication or any files attached.


If you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email unsubscribe@...

