Re: Peers with different heights #fabric #database #consensus


David Enyeart

'peer node reset' should only be used if you suspect your peer's data is corrupted - it resets all channels to the genesis block so that the peer can re-pull/re-process blocks, but it wouldn't change block dissemination behavior on your network.
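
For reference, the sequence is roughly the following (a sketch; it assumes the peer process is stopped first and that your Fabric version includes the 'peer node reset' command):

      # Reset all channels on this peer to the genesis block
      peer node reset
      # Start the peer normally; it re-pulls blocks from peers/orderer and re-processes them
      peer node start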

If the CORE_PEER_GOSSIP_USELEADERELECTION and CORE_PEER_GOSSIP_ORGLEADER environment variable overrides are not set, then the peer falls back to the core.yaml configuration that is baked into the peer image; the defaults can be found here:
https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/core.yaml#L90-L100
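
The relevant section of that sample core.yaml looks roughly like this (check the core.yaml baked into your own image for the actual values):

      peer:
          gossip:
              # useLeaderElection and orgLeader are mutually exclusive; with
              # these sample defaults the peer acts as a static org leader and
              # pulls blocks from the ordering service itself.
              useLeaderElection: false
              orgLeader: true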

Note that the private data reconciliation message is different... that is a background daemon thread that always runs to check whether there is any missing private data. It does not indicate a problem with block height or that there actually is missing private data; it is just a check, and in your case it found no problems.
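
The reconciler is also configured in core.yaml; the keys look roughly like this in the 1.4 sample config (the values shown are my recollection of the defaults):

      peer:
          gossip:
              pvtData:
                  # How often the background reconciler wakes up
                  reconcileSleepInterval: 1m
                  # Maximum number of missing private data elements fetched per iteration
                  reconcileBatchSize: 10
                  # Set to false to turn the reconciler off entirely
                  reconciliationEnabled: true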


Dave Enyeart

"Joao Antunes" ---11/07/2019 07:35:10 AM---Hi, Just making a small update.

From: "Joao Antunes" <joao.antunes@...>
To: fabric@...
Date: 11/07/2019 07:35 AM
Subject: [EXTERNAL] Re: [Hyperledger Fabric] Peers with different heights #fabric #database #consensus
Sent by: fabric@...

Hi,

Just making a small update.
I received another answer that suggested doing a 'peer node reset':
      1. Took a backup of the peer docker container
      2. Took a backup of the respective CouchDB's data
      3. Stopped the chaincode container associated with this peer, if any
      4. Stopped the CouchDB container of the peer
      5. Stopped the peer
      6. Since I was starting the peer using a docker-compose file, I updated the peer startup command from 'peer node start' to 'peer node reset' (see the sketch after this list). This resets the peer's channel data to the genesis block.
      7. Next, updated the peer startup command back from 'peer node reset' to 'peer node start'. This time, since the peer has no ledger data, it pulls the blocks from the other peers and rebuilds its CouchDB data.
(Thank you Mrudav Shukla)
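
For steps 6 and 7, the docker-compose change is only the command line of the peer service, roughly like this (the service and image names here are placeholders):

      peer0-org2:
          image: hyperledger/fabric-peer:1.4.3
          # Temporarily run the reset instead of the normal startup,
          # then change this back to 'peer node start' for step 7.
          command: peer node reset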

Unfortunately, I don't have Fabric version 1.4.3, so I scrapped this solution.
I restarted the peer and its CouchDB instead. After startup, the peer started fetching the missing blocks and synchronizing.
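
(That was just an ordinary restart; with docker-compose it would be something like the following, where the service names are made up:)

      docker-compose restart couchdb-peer0-org2 peer0-org2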

At the end of this, all the peers were in sync.


On another test that we did on the same setup, it's now peer1-org1 that is out of sync.

I checked docker-compose.yml and I have no CORE_PEER_GOSSIP_USELEADERELECTION and CORE_PEER_GOSSIP_ORGLEADER variables defined, so it's using the defaults. What is the default behaviour? (And thank you, David Enyeart, for the explanation.)
I can see some gossip in the logs, but peer1-org1 is still out of sync.

For example:

2019-11-07 12:29:08.498 UTC [gossip.privdata] reconcile -> DEBU 65a92e Reconciliation cycle finished successfully. no items to reconcile

(At this stage I know that it is still out of sync.)


One question that came up: if one peer is out of sync, and we have an OR endorsement policy where we require one member from Org1 or Org2 to endorse the transaction, why is no block created and sent to the orderer?

Thank you all.



