
Re: Transaction read/write sets are bleeding into each other

Tsvetan Georgiev
 

Hi Curtis,

Marcos has a good point; you should check your client logic.

To understand this better, could you also share how many peers participate in the transaction endorsement? And are those failed transactions part of the block built by the ordering service but marked invalid by the concurrency-control version check?

If your transactions don't contain the read/write sets you expect, then you definitely have to re-check your chaincode and what your client is actually sending to the peers during the endorsement step.
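
For example, you can check this from the client with something like the following sketch (Fabric Gateway Go client for Fabric v2.4+; the chaincode function name and arguments are placeholders for yours). It submits a transaction asynchronously and then reads the validation code the committing peers assigned, which is where MVCC_READ_CONFLICT (the version check) or ENDORSEMENT_POLICY_FAILURE would show up:

package txstatus

import (
	"log"

	"github.com/hyperledger/fabric-gateway/pkg/client"
)

// checkValidationCode submits one transaction and reports the validation code
// the committing peers assign to it. Obtaining the connected *client.Contract
// is omitted; "UpdateAAndCreateB" and its arguments are placeholders.
func checkValidationCode(contract *client.Contract) {
	_, commit, err := contract.SubmitAsync("UpdateAAndCreateB",
		client.WithArguments("A1", `{"id":"1","type":"B"}`))
	if err != nil {
		log.Fatalf("failed to submit: %v", err)
	}

	status, err := commit.Status()
	if err != nil {
		log.Fatalf("failed to get commit status: %v", err)
	}
	if !status.Successful {
		// status.Code is the peer.TxValidationCode, e.g. MVCC_READ_CONFLICT
		// or ENDORSEMENT_POLICY_FAILURE.
		log.Printf("tx %s invalid in block %d: %v",
			status.TransactionID, status.BlockNumber, status.Code)
	}
}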

Regards,


Tsvetan Georgiev
Director, Senofi Inc.

438-494-7854 | tsvetan@...

www.senofi.ca

www.consortia.io






---- On Wed, 06 Apr 2022 18:54:10 -0400 Marcos Sarres <marcos.sarres@...> wrote ----

Please check your Hyperledger Fabric client.

 

The endorsement policy should be fulfilled by your API, which sends the transaction proposals to the endorsing peers and then receives and packages the signed proposal responses to be transmitted to the orderer.

 

I think your API (HLF client) is not collecting the correct signed proposals from the orgs required by the chaincode's endorsement policy.

 

Regards,

 

Marcos Sarres | CEO | +55 61 98116 7866


 

 








Re: Transaction read/write sets are bleeding into each other

Marcos Sarres
 

Please check your Hyperledger Fabric client.

 

The endorsement policy should be fulfilled by your API, which sends the transaction proposals to the endorsing peers and then receives and packages the signed proposal responses to be transmitted to the orderer.

 

I think your API (HLF client) is not collecting the correct signed proposals from the orgs required by the chaincode's endorsement policy.
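
Something like this sketch shows the flow I mean (Fabric Gateway Go client for Fabric v2.4+; the function and argument names are placeholders): the client creates the proposal, collects the endorsing peers' signed responses, and only then submits the endorsed transaction to the ordering service.

package txflow

import (
	"fmt"

	"github.com/hyperledger/fabric-gateway/pkg/client"
)

// endorseThenSubmit walks the explicit proposal flow; obtaining the connected
// *client.Contract is omitted.
func endorseThenSubmit(contract *client.Contract) error {
	proposal, err := contract.NewProposal("UpdateAAndCreateB",
		client.WithArguments("A1", `{"id":"1","type":"B"}`))
	if err != nil {
		return fmt.Errorf("create proposal: %w", err)
	}

	// The gateway sends the proposal to enough endorsing peers to satisfy
	// the chaincode's endorsement policy and packages their signed responses.
	transaction, err := proposal.Endorse()
	if err != nil {
		return fmt.Errorf("endorse: %w", err)
	}

	// Only the endorsed transaction is transmitted to the orderer.
	commit, err := transaction.Submit()
	if err != nil {
		return fmt.Errorf("submit: %w", err)
	}

	status, err := commit.Status()
	if err != nil {
		return err
	}
	if !status.Successful {
		return fmt.Errorf("transaction %s invalidated: %v",
			status.TransactionID, status.Code)
	}
	return nil
}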

 

Regards,

 

Marcos Sarres | CEO | +55 61 98116 7866

 

 

From: fabric@... <fabric@...> On behalf of Curtis Miles
Sent: Friday, April 1, 2022 10:32
To: fabric@...
Subject: [Hyperledger Fabric] Transaction read/write sets are bleeding into each other

 



Now: Private Chaincode Lab - 04/05/2022 #cal-notice

fabric@lists.hyperledger.org Calendar <noreply@...>
 

Private Chaincode Lab

When:
04/05/2022
8:00am to 9:00am
(UTC-07:00) America/Los Angeles

Where:
https://zoom.us/my/hyperledger.community.3?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09

Organizer: Marcus Brandenburger bur@...


Description:
Two of the Hyperledger Labs projects (Private Data Objects and Private Chaincode) are collaborating to develop a "private smart contracts" capability.

Join Zoom Meeting: https://zoom.us/j/5184947650?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09
Meeting ID: 518 494 7650
Passcode: 475869


Re: Extend network with new orderers while is running

Nikos Karamolegkos
 

OK, based on the theory, I quote:
On the one hand, the more Raft nodes that are deployed, the more nodes can be lost while maintaining that a majority of the nodes are still available (unless a majority of nodes are available, the cluster will cease to process and create blocks). A five node cluster, for example, can tolerate two down nodes, while a seven node cluster can tolerate three down nodes.

So I understand that if I have 3 orderer nodes and one is unavailable, Raft works on the concept of a quorum: as long as a majority (i.e., 2) of the Raft nodes is online, the Raft cluster stays available. Correct?

However, your answer confuses me. Specifically:

That allows the orderers to reach consensus even if one does not agree. But, that does not allow for continued operation if one of those nodes is interrupted and is not responding. If you have five orderers, the set can still reach consensus even if one is down temporarily

What do you mean by continued operation? To me, "does not agree" means the orderer is bad or dead; in either case the network can keep working with the remaining two orderers.
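
To make the arithmetic concrete, here is the quorum rule from the quoted passage as a small, self-contained Go sketch:

package main

import "fmt"

// quorum is the majority a Raft cluster of n consenters needs to stay
// available; tolerated is how many consenters can be down at the same time.
func quorum(n int) int    { return n/2 + 1 }
func tolerated(n int) int { return n - quorum(n) }

func main() {
	for _, n := range []int{2, 3, 5, 7} {
		fmt.Printf("%d consenters: quorum %d, tolerates %d down\n",
			n, quorum(n), tolerated(n))
	}
	// Output:
	// 2 consenters: quorum 2, tolerates 0 down
	// 3 consenters: quorum 2, tolerates 1 down
	// 5 consenters: quorum 3, tolerates 2 down
	// 7 consenters: quorum 4, tolerates 3 down
}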


Re: Extend network with new orderers while is running

Tom Lamm
 

As the scenarios become more complex (multi-channel, which nodes participate in each, etc.), I can only refer you to the docs.

The answer to “Are all the nodes joining the channel?” is “Some or all peers and some or all orderers, depending on your configuration.”

Also, you asked about reconfiguring the consensus rules to make changes during a node outage “until the k8s mechanism restarts the dead node”. Understand that, in practice, you won’t be able to: the K8s control plane will attempt to restart that node in a matter of minutes. The real question is, “Why did that node fail, and will K8s’s attempt to restart it succeed, or will the restart only fail again for the same reason?”

I believe you are looking for short answers to complex scenarios. I suggest reviewing the docs and experimenting with some actual tests on a network that is close to your actual requirements and goals.

On Apr 5, 2022, at 7:07 AM, Nikos Karamolegkos <nkaram@...> wrote:

Also, how can I list which orderer nodes are consenters? Are all the nodes joining the channel?


Re: Extend network with new orderers while is running

Nikos Karamolegkos
 

Also, how can I list which orderer nodes are consenters? Are all the nodes joining the channel?


Re: Extend network with new orderers while is running

Nikos Karamolegkos
 

Sorry, I misspoke: "What do I have to change for the network to keep running with one peer after the second one is down" was wrong. I meant: what happens in the case where I have only two orderer nodes and one is down? I am wondering if 3 orderers is a good choice, given that if one is (temporarily) dead the network cannot reach consensus until the k8s mechanism restarts the dead node.


Re: Extend network with new orderers while is running

Tom Lamm
 

Nikos,

Be careful about what you mean by “consensus”. In most blockchain conversations, consensus refers to an agreement about what is stored in the blocks. In Hyperledger Fabric, that is determined by Peer consensus, not by the Orderers. See


The ordering nodes determine the order of the transactions (Blocks). See


So the Peers agree on “what” is stored in the blocks, e.g., “a deposit and a withdrawal”. The Orderers agree on the order of events, e.g., “withdrawal followed by deposit”.

In your example network of one Peer and one working Orderer, consensus does not apply; there are no other Peers or Orderers to have conflicting information.



On Apr 5, 2022, at 3:04 AM, Nikos Karamolegkos <nkaram@...> wrote:

What happens if I have two orderer nodes and one is down? What do I have to change for the network to keep running with one peer after the second one is down (I believe I somehow have to change the consensus policy)? Also, how do I set the consensus to majority?


manju.venkatachalam@...
 

 
Hi all,

I created a test network in Kubernetes using BAF, with 2 orgs (each with one peer) and 1 orderer. The orgs are joined to a channel called testchannel. The orderer MSP, peer MSP, and TLS certs were set to expire within 1 day. Before they expired, I renewed all the certs using the dcm tool and kept them locally. First I updated the orderer TLS cert in the system channel and in the application channel from the orderer CLI: I fetched the channel config, decoded it, updated the renewed orderer TLS certs under the consenters section, re-encoded it, and submitted it with the peer channel update command. I received a successfully-submitted message.

Later I replaced the orderer MSP, peer MSP, and TLS certs in the vault and restarted all the services. When I checked the orderer logs, they didn't show any expiry error.

My network's previous certificates have now expired and it is running on the renewed certs. I am able to invoke and query transactions.

Now I want to add a new org, org3, to the existing channel (testchannel).

I created the new org (org3). When I tried to join it to the channel, the peer channel update step failed with the following error:

Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'testchannel': error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 2 of the 'Admins' sub-policies to be satisfied


This error is because of wrong admin certs. Only then did I find that during certificate renewal I had updated only the orderer TLS certs, while the channel config also contains each org's admin certs and CA certs. So the channel config still contains expired certs, although the orgs and the vault contain renewed certs.
This is the cause of the above-mentioned error.

Can anyone suggest a way to resolve this? How can we update an org's admin certificate in a channel config that currently holds the expired certificate?

Thanks in advance.


Re: Extend network with new orderers while is running

Nikos Karamolegkos
 

What happens if I have two orderer nodes and one is down? What do I have to change for the network to keep running with one peer after the second one is down (I believe I somehow have to change the consensus policy)? Also, how do I set the consensus to majority?


Re: [Hyperledger Fabric] Tracking products in Supply chain across Manufacturers/Wholesalers/Distributors/Retailers

Mark Rakhmilevich
 

Satheesh,
There are many such use cases. What you want to track are linked assets, and you can implement them with both shared and private details, so all participants can see the shared info but only authorized ones can see the private details. PDCs (private data collections) are one such approach, but there are others, including an on-chain ACL mechanism provided in Oracle's implementation of Fabric. You can ping me for more details on that.
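
As a rough chaincode sketch of that shared-plus-private split (the collection name, transient-map key, and contract types here are hypothetical, not from any particular implementation):

package supplychain

import (
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// SupplyChainContract is a hypothetical contract illustrating shared vs.
// private details for one order.
type SupplyChainContract struct {
	contractapi.Contract
}

// CreateOrder writes the shared order details to the channel ledger, visible
// to every org, and the sensitive details to a private data collection that
// only its member orgs store and can read.
func (c *SupplyChainContract) CreateOrder(ctx contractapi.TransactionContextInterface,
	orderID string, sharedJSON string) error {
	// Shared info: every participant on the channel sees this.
	if err := ctx.GetStub().PutState(orderID, []byte(sharedJSON)); err != nil {
		return fmt.Errorf("put shared state: %w", err)
	}

	// Private details travel in the transient map so they never appear in
	// the recorded transaction; only collection members store the bytes.
	transient, err := ctx.GetStub().GetTransient()
	if err != nil {
		return err
	}
	if private, ok := transient["orderDetails"]; ok {
		return ctx.GetStub().PutPrivateData(
			"wholesalerDistributorDetails", orderID, private)
	}
	return nil
}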

Thanks,
Mark

-----Original Message-----
From: fabric@... <fabric@...> On Behalf Of satheesh via lists.hyperledger.org
Sent: Monday, April 4, 2022 12:16 AM
To: fabric@...
Subject: [External] : [Hyperledger Fabric] Tracking products in Supply chain across Manufacturers/Wholesalers/Distributors/Retailers

Consider a supply chain use case where manufacturers, wholesalers, distributors, and retailers are involved.
The manufacturer wants to track a batch of products manufactured and shipped all the way to the retailer.
The wholesaler wants to track orders on either side: orders to the manufacturer and orders from distributors.
Similarly for distributors and retailers.

So we can see here that the assets to be tracked differ depending on whether the role played is manufacturer, wholesaler, or distributor.
But in a typical Hyperledger Fabric blockchain network, all the orgs are expected to see the same data.

How do we approach such use cases? Are there any references?

Regards,
Satheesh


Re: Extend network with new orderers while is running

Tom Lamm
 

My understanding is (and yes, I can accept correction) that the number of consenter nodes should be “enough to reach consensus”. If your network relies on majority consensus, the minimum is three. That allows the orderers to reach consensus even if one does not agree. But, that does not allow for continued operation if one of those nodes is interrupted and is not responding. If you have five orderers, the set can still reach consensus even if one is down temporarily.

I wrote an article demonstrating this, “Hyperledger and Kubernetes”, where I deliberately disabled one orderer and watched the logs as the remaining ones reorganized themselves to continue to reach consensus. Next, the Kubernetes-based network replaced the disabled orderer, and the group reorganized again to add it back in.

On Apr 1, 2022, at 5:37 AM, Nikos Karamolegkos <nkaram@...> wrote:

I created a new orderer node (in the orderer org) and I would like to join this node to the existing channel. Should I add this new orderer node to the consenters? In general, when should I add an orderer node to the consenters (i.e., why is this useful)? How many consenter nodes should I have per channel?


Tracking products in Supply chain across Manufacturers/Wholesalers/Distributors/Retailers

satheesh
 

Consider a supply chain use case where manufacturers, wholesalers, distributors, and retailers are involved.
The manufacturer wants to track a batch of products manufactured and shipped all the way to the retailer.
The wholesaler wants to track orders on either side: orders to the manufacturer and orders from distributors.
Similarly for distributors and retailers.

So we can see here that the assets to be tracked differ depending on whether the role played is manufacturer, wholesaler, or distributor.
But in a typical Hyperledger Fabric blockchain network, all the orgs are expected to see the same data.

How do we approach such use cases? Are there any references?

Regards,
Satheesh


Adding new org to the existing channel facing issue #raft #configtxgen #hyperledger-fabric

manju.venkatachalam@...
 

Hi All,

Using BAF, I have a Hyperledger Fabric v2.2.0 network with 11 orgs, 5 orderers, and 5 channels.
Org1 is a part of all the channels; other than that, each channel has 2 individual orgs, so in total 3 orgs are part of each channel. Before all the certs expired, I renewed them (orderer MSP/TLS and peer MSP/TLS certs) using the dcm tool, and I updated all the renewed orderer TLS certs (for the 5 orderers, in the 5 application channels and the system channel) in the channel configs. After the successful update, I put all the renewed certs in the vault. Finally I restarted all the services. Now it is working fine; I am able to invoke and query.

Now the problem is, when I try to add a new org to an existing channel, I get the following error:

"Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'testchannel': error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 2 of the 'Admins' sub-policies to be satisfied"
 
Then I checked the channel config file: the org admin certs in it are still the old ones. Is this the cause of my error?
If so, can anyone suggest how to update the peer certs in the channel config? My vault and peer nodes have the updated certificates; only the channel config does not.


Transaction read/write sets are bleeding into each other

curtis@...
 

Hello world!

 

I’ve hit what appears to be a strange situation while testing my soon-to-be-production network under a bit of (very light) load.  I have a very simple chaincode function that does two things within a single transaction (sketched below):
1. Retrieving a document of type A by id, updating it, and storing it back (under the same id)

2. Storing a new object of type B by id (the object is passed as a parameter to the chaincode function)
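
In rough shape, the function looks like this (an illustrative sketch, not my actual code; the type and function names are placeholders):

package chaincode

import (
	"encoding/json"
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// DocA and DocB are placeholder document types.
type DocA struct {
	ID      string `json:"id"`
	Type    string `json:"type"`
	Updates int    `json:"updates"`
}

type DocB struct {
	ID   string `json:"id"`
	Type string `json:"type"`
}

type MyContract struct {
	contractapi.Contract
}

// UpdateAAndCreateB performs both steps in one transaction: update an
// existing document of type A in place, then store a new object of type B.
func (c *MyContract) UpdateAAndCreateB(ctx contractapi.TransactionContextInterface,
	aID string, bJSON string) error {
	// Step 1: retrieve A by id, update it, and store it back under the same id.
	aBytes, err := ctx.GetStub().GetState(aID)
	if err != nil {
		return err
	}
	if aBytes == nil {
		return fmt.Errorf("document %s does not exist", aID)
	}
	var a DocA
	if err := json.Unmarshal(aBytes, &a); err != nil {
		return err
	}
	a.Updates++ // stands in for the real update
	updated, err := json.Marshal(a)
	if err != nil {
		return err
	}
	if err := ctx.GetStub().PutState(aID, updated); err != nil {
		return err
	}

	// Step 2: store the new object of type B under its own id.
	var b DocB
	if err := json.Unmarshal([]byte(bJSON), &b); err != nil {
		return err
	}
	return ctx.GetStub().PutState(b.ID, []byte(bJSON))
}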

 

When I try to execute a number of these transactions in parallel (e.g. 5) such that they are included in the same block, I’m *randomly* getting an ENDORSEMENT_POLICY_FAILURE error.  This didn’t make any sense to me because each transaction deals with distinct objects of type A and B, and nothing else is going on in the network at the time.  So I looked into the blocks that contained the failing transactions and noticed something very strange: the items in the read/write sets of the transactions appear to be bleeding into each other.  For example, a single block includes the following read/write sets within three different transactions:

 

Transaction 1 (which looks correct to me):

{
    "namespace": "projectXYZ",
    "rwset": {
        "reads": [
            {
                "key": "A1",
                "version": {
                    "block_num": "79389",
                    "tx_num": "0"
                }
            },
            {
                "key": "B1",
                "version": null
            }
        ],
        "range_queries_info": [],
        "writes": [
            {
                "key": "A1",
                "is_delete": false,
                "value": "{\"id\":\"1\", \"type\":\"A\"}"
            },
            {
                "key": "B1",
                "is_delete": false,
                "value": "{\"id\":\"1\", \"type\":\"B\"}"
            }
        ],
        "metadata_writes": []
    },
    "collection_hashed_rwset": []
}

 

Transaction 2 (has an extra read/write of A3, which I would have expected to be in a different transaction):

{
    "namespace": "projectXYZ",
    "rwset": {
        "reads": [
            {
                "key": "A2",
                "version": {
                    "block_num": "79389",
                    "tx_num": "0"
                }
            },
            {
                "key": "A3",
                "version": {
                    "block_num": "79389",
                    "tx_num": "0"
                }
            },
            {
                "key": "B2",
                "version": null
            }
        ],
        "range_queries_info": [],
        "writes": [
            {
                "key": "A2",
                "is_delete": false,
                "value": "{\"id\":\"2\", \"type\":\"A\"}"
            },
            {
                "key": "A3",
                "is_delete": false,
                "value": "{\"id\":\"3\", \"type\":\"A\"}"
            },
            {
                "key": "B2",
                "is_delete": false,
                "value": "{\"id\":\"2\", \"type\":\"B\"}"
            }
        ],
        "metadata_writes": []
    },
    "collection_hashed_rwset": []
}

 

Transaction 3 (has only a read of A3, with no write of A3 or B3):

{
    "namespace": "projectXYZ",
    "rwset": {
        "reads": [
            {
                "key": "A3",
                "version": {
                    "block_num": "79389",
                    "tx_num": "0"
                }
            }
        ],
        "range_queries_info": [],
        "writes": [],
        "metadata_writes": []
    },
    "collection_hashed_rwset": []
}

 

If I only submit one transaction per block, everything is fine (although for performance reasons, that isn’t going to work for me). 

 

Does anyone have any ideas why this might be happening?  Shouldn’t it be fundamentally impossible for these transactions to bleed into each other?  Can you think of anything I might be doing wrong that is causing this?

 

Thanks for any help you can offer!

Curtis.


Re: Extend network with new orderers while is running

Nikos Karamolegkos
 

I created a new orderer node (in the orderer org) and I would like to join this node to the existing channel. Should I add this new orderer node to the consenters? In general, when should I add an orderer node to the consenters (i.e., why is this useful)? How many consenter nodes should I have per channel?


Go protocol buffers API usage in Fabric Gateway client API

Mark Lewis
 

Hi Fabric community,

A short notice and request for feedback on the Fabric Gateway client API for Fabric v2.4 and later.

At some point in the not-too-distant future, we will publish a v1.1 release of fabric-gateway, including support for block eventing and for checkpointing to aid resuming eventing sessions, either after transient connection failures or on subsequent application runs. There is an outline milestone plan here, although not everything in that list will necessarily make the release.

Now for the feedback... One of the changes is to adopt the current Go protocol buffers API, since the API version used in fabric-protos-go was superseded over two years ago and is now deprecated. There are no wire-level changes; this is purely a dependency update within the Fabric Gateway client API code. Are you happy for the Fabric protobuf bindings built using this non-deprecated API to be published as a separate module (similar to how fabric-protos-go is published today), or would you prefer these bindings to be included directly in the fabric-gateway module?
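
For anyone unfamiliar with the two generations: only the Go module providing the API changes, not the wire format. A tiny illustration (using a well-known wrapper message rather than a Fabric one, since the new bindings module is exactly what is up for discussion):

package main

import (
	"fmt"

	// The current protobuf API; the deprecated module that fabric-protos-go
	// is built against is github.com/golang/protobuf.
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	msg := wrapperspb.String("hello fabric")

	// Marshalling through the current API produces exactly the same bytes
	// as the deprecated API would: the change is Go-API-level only.
	b, err := proto.Marshal(msg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes on the wire\n", len(b))
}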

References:

Regards,

    Mark.


Now: Fabric Contributor Meeting - 03/30/2022 #cal-notice

fabric@lists.hyperledger.org Calendar <noreply@...>
 

Fabric Contributor Meeting

When:
03/30/2022
9:00am to 10:00am
(UTC-04:00) America/New York

Where:
https://zoom.us/my/hyperledger.community.3?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09

Organizer: Dave Enyeart enyeart@...


Description:
For meeting agendas, recordings, and more details, see https://wiki.hyperledger.org/display/fabric/Contributor+Meetings

Join Zoom Meeting
https://zoom.us/j/5184947650?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09
 
Meeting ID: 518 494 7650
Passcode: 475869


Fabric Contributor Meeting - March 30, 2022

David Enyeart
 

Hyperledger Fabric Contributor Meeting

When: Every other Wednesday 9am US Eastern, 13:00 UTC

Where: https://zoom.us/j/5184947650?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09

Agendas and Recordings: https://wiki.hyperledger.org/display/fabric/Contributor+Meetings

 

----------------------------------------------------------------------------------------------------

Agenda for March 30, 2022


Fabric protocol buffer publishing – Mark Lewis

Kubernetes and chaincode-as-a-service update – Josh Kneubuhl

 


Re: Reset HLF ledger data

Tom Lamm
 

When I was building my Kubernetes-based network, I used scripts to deploy it. That way I could do just as David said: completely remove it and redeploy as needed. I highly suggest this, especially during development, when the need to “erase the whiteboard” comes up more often.

Tom Lamm


On Mar 29, 2022, at 12:39 PM, David Enyeart <enyeart@...> wrote:

You can blow away the data associated with each container, and then re-create channels, re-join peers, re-deploy chaincodes.

How you map the container data to volumes depends on your deployment, but fundamentally the container data that you need to consider consists of the following:

Peer data locations (includes block store, leveldb databases, chaincodes, snapshots):
CORE_PEER_FILESYSTEMPATH
CORE_LEDGER_SNAPSHOTS_ROOTDIR

CouchDB data location:
/opt/couchdb/data

Orderer data locations (includes block store, leveldb databases, Raft data):
ORDERER_FILELEDGER_LOCATION
ORDERER_CONSENSUS_WALDIR
ORDERER_CONSENSUS_SNAPDIR
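
As a hypothetical illustration (not an official Fabric tool), a small Go helper could clear those locations by reading the same environment variables:

package main

import (
	"fmt"
	"os"
)

// Removes the data directories named by the environment variables listed
// above. CouchDB's /opt/couchdb/data lives inside the CouchDB container and
// is usually wiped by deleting that container's volume instead.
func main() {
	locations := []string{
		"CORE_PEER_FILESYSTEMPATH",      // peer block store, leveldb, chaincodes
		"CORE_LEDGER_SNAPSHOTS_ROOTDIR", // peer snapshots
		"ORDERER_FILELEDGER_LOCATION",   // orderer block store
		"ORDERER_CONSENSUS_WALDIR",      // Raft write-ahead log
		"ORDERER_CONSENSUS_SNAPDIR",     // Raft snapshots
	}
	for _, env := range locations {
		dir := os.Getenv(env)
		if dir == "" {
			continue
		}
		if err := os.RemoveAll(dir); err != nil {
			fmt.Fprintf(os.Stderr, "remove %s (%s): %v\n", dir, env, err)
		}
	}
}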

You can re-use your existing crypto (MSP and TLS) assuming it is stored elsewhere.

This being said, I'd recommend automating deployment so that you can easily re-create environments instead of tinkering at the data level. For example, many people use the Ansible playbooks with IBM Blockchain Platform to do this: https://cloud.ibm.com/docs/blockchain?topic=blockchain-ansible.



On 3/29/22, 8:14 AM, "fabric@... on behalf of Pechimuthu T" <fabric@... on behalf of tpmuthu@...> wrote:

   Hi,

   We have set up an HLF network with 5 orderers and two orgs, with two peers per org, on a k8s platform.
   We have deployed fabcar and other test chaincode on the network and did some testing.

   Now I want to remove only the ledger data and recreate a fresh ledger.

   As per my understanding, the orderer uses LevelDB to store transaction data,
   and the peer uses LevelDB and CouchDB for the chain data and the state database.

   What do I need to do to reset (i.e., clear) the ledger data?

   Do I have to redeploy the entire network again (generate crypto config, start orderers, create channel, join peers, deploy chaincode, etc.)?

   Is any solution available?

   Thanks and Regards,
   T. Pechimuthu





