
Network configuration file

Siddharth Jain
 

Hi,

We are looking at this line from fabric-samples:
let connectionProfile = yaml.safeLoad(fs.readFileSync('../gateway/connection-org1.yaml', 'utf8'));

but cannot find the connection-org1.yaml file in https://github.com/hyperledger/fabric-samples/tree/v2.1.0/commercial-paper/organization/digibank/gateway

Is there any document that explains the syntax of this file needed by the Gateway class?
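For what it's worth, the Gateway consumes the SDK's "common connection profile" format. A minimal sketch of its shape (every name, port, and the certificate placeholder here are illustrative assumptions, not the contents of the missing sample file):

```yaml
name: sample-network-org1
version: 1.0.0
client:
  organization: Org1
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
peers:
  peer0.org1.example.com:
    url: grpcs://localhost:7051
    tlsCACerts:
      pem: |
        -----BEGIN CERTIFICATE-----
        ...certificate bytes...
        -----END CERTIFICATE-----
```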

Sid


Re: What are the minimum component requirements for an entity to participate in a blockchain network? #network #fabric

Nicholas Leonardi
 

Just a peer is enough; you don't even need CouchDB, although it is recommended for fast rich queries. You can use a third-party orderer to send the transactions.
The steps are:
    1. Generate an identity for that peer with the right certificates
    2. Add it to a channel
    3. Fetch the block of that channel
    4. Join the channel
    5. Install a chaincode package generated and provided by an admin of the channel

Em quarta-feira, 10 de junho de 2020 18:22:15 BRT, Rui M. <rainmanmorais@...> escreveu:


Hello everyone,

So my question is exactly that: what are the minimum component requirements for an organization to fully participate in a Fabric business network? I mean, are a peer and a CouchDB enough? Do all the organizations need to have a peer node and an orderer node along with CouchDB? Can an organization have 0 components and still be able to participate or interact in any form?

Thanks in advance guys :)


What are the minimum component requirements for an entity to participate in a blockchain network? #network #fabric

Rui M.
 

Hello everyone,

So my question is exactly that: what are the minimum component requirements for an organization to fully participate in a Fabric business network? I mean, are a peer and a CouchDB enough? Do all the organizations need to have a peer node and an orderer node along with CouchDB? Can an organization have 0 components and still be able to participate or interact in any form?

Thanks in advance guys :)


How to verify RAFT health in HLF version 1.4.4 #raft

shrugupt@...
 

Hi,

I want to add/remove an orderer node in the network. I went through the orderer reconfiguration article.

My question is how RAFT health can be verified. I need it for two purposes:
1. Before adding/removing a node, I need to ensure that RAFT is stable and running.
2. When adding an orderer node, after adding the new orderer's certificate to the TLS consenter section, I want to verify that RAFT is stable before adding the new orderer's endpoint to the ordererAddresses section of the configuration block.

I found that in HLF v1.2 a metric, "consensus_etcdraft_active_nodes", was added, and that the data plane operations were enhanced to reject add/remove orderer requests that could lead to an unstable RAFT cluster. But I could not find any way in HLF v1.4.4 to verify RAFT health.

Thanks,
Shruti


Hyperledger Fabric: How to deal with node certificate expiry? #fabric-ca #hyperledger-fabric #fabric-questions

jefjos6692@...
 

What is the recommendation for dealing with an expired peer certificate? Must the network admin periodically update the peer's local MSP with the renewed certificate?


Replace crypto material generated with cryptogen with CA certificates #fabric-questions #fabric #hyperledger-fabric

Mattia Bolzonella
 

Hi, I have a running network on Fabric 1.4 and I need to upgrade it to Fabric 2.1.1.
I generated the crypto material using cryptogen, and now I want to use Fabric CA. If I regenerate the crypto material using the CA, will I be able to recover the data of the network (ledger data, chaincode, etc.)?


How does Caliper use eventSource?

sukill
 

Here is part of my Caliper network config file, configuring a list of peers with the eventSource option...

Does Caliper register transaction events on all of those peers, or only one of them?

channels:
  mychannel1:
    configBinary: networks/fabric/config_raft/mychannel1.tx
    created: false
    orderers:
    - orderer0.example.com
    - orderer1.example.com
    - orderer2.example.com
    peers:
      peer0.org0.example.com:
        eventSource: true
      peer1.org0.example.com:
        eventSource: true
      peer0.org1.example.com:
        eventSource: true
      peer1.org1.example.com:
        eventSource: true
    chaincodes:
    - id: smallbank1
      version: v0
      language: golang
      path: fabric/scenario/smallbank/go
    - id: simple1
      version: v0
      language: golang
      path: fabric/scenario/simple/go


Re: Inconsistency of read results can occur when using StateCouchDB with cache #cache #couchdb

David Enyeart
 

Speaking more generally... the JSON spec does not guarantee a particular key order or formatting. What you put into a JSON database (whether CouchDB, MongoDB, etc.) may not match exactly what you get back; only the content is guaranteed to be the same. Sometimes you get 'lucky' and it is exactly the same, but that is not guaranteed, and even if you get lucky at first, certain data, formatting, or upgraded components will likely break you in the future. If you are doing reads/writes in chaincode, your chaincode must ensure deterministic processing so that writes match across endorsing peers. This means you need to marshal the JSON in chaincode, and you need to understand the marshaling behavior of your library. 'Canonical JSON' libraries exist in most languages to provide deterministic marshaling (Go's json.Marshal() itself is deterministic, since it sorts map keys).

In a nutshell, if you want to query JSON content, then use JSON data with CouchDB state database. If you want guarantees about your bytes staying the same, then use LevelDB state database or CouchDB with non-JSON data (Fabric will automatically save non-JSON data as a CouchDB binary attachment... although performance will not be as good as LevelDB).


Dave Enyeart

"Senthil Nathan" ---06/08/2020 11:17:06 PM---Hi Jeehoon Lim, Thank you for the nice explanation.

From: "Senthil Nathan" <cendhu@...>
To: Jeehoon Lim <jholim@...>
Cc: fabric@...
Date: 06/08/2020 11:17 PM
Subject: [EXTERNAL] Re: [Hyperledger Fabric] Inconsistency of read result can be occurred when using StateCouchDB with cache #couchdb #cache
Sent by: fabric@...

Hi Jeehoon Lim,

Thank you for the nice explanation. 
      And if there's an invocation that includes GetState, it would fail with the error 'Inconsistent Response Payload from Different peers.'

The read-write set constructed by the peer should be consistent across peers:
      1. In the read-set, we include only the version, not the actual data.
      2. In the write-set, we store the value passed by the chaincode as-is.
Only if the chaincode is non-deterministic would the read-write set differ across peers.

I think that in your scenario the chaincode is including the read state as-is in the chaincode response (without unmarshaling it into the struct).

In general, the chaincode marshals the struct and passes the bytes to the peer. When the marshaling is done on the struct, the ordering of keys will be the same as the order of the fields in the struct. When the chaincode retrieves the stored value from the peer, it will never be in the same order (irrespective of the usage of the cache).

Hence, it is necessary for the chaincode to unmarshal the received value into the struct before using it for any other purpose. As we use json.Marshal on a map within the ledger code, it sorts the value by map keys (this is just a side effect and is not done intentionally). We do this because we need to add a few more internal keys to the doc; that, not sorting by keys, is the reason for the marshaling and unmarshaling via a map.

I am not sure whether making the chaincode rely on the low-level peer implementation is a good idea. It would also be an unnecessary constraint on us not to change low-level implementation details. In your case, I think you need to unmarshal the received bytes into the struct before using them, without making any assumption about the key order.

For example, assume that the user submits the bytes of the following doc in the invoke argument (without using a struct, just []byte(jsonString)):
{
  "index":{
      "fields":["docType","owner"]
  },
  "ddoc":"indexOwnerDoc",
  "name":"indexOwner",
  "type":"json"
}

There wouldn't be any inconsistent read-write set. However, when the value is read back, it won't be in the same order as passed by the user. If order matters to the receiver, it is recommended to use a struct to process the JSON.

Having said this, I am okay with explicitly making the peer always return JSON values sorted by key (as consistent behavior is recommended).

Regards,
Senthil

On Tue, Jun 9, 2020 at 4:42 AM Jeehoon Lim <jholim@...> wrote:
    Hi all.

    I have reported a bug to Hyperledger Jira. ( https://jira.hyperledger.org/browse/FAB-17954 )

    Please check whether it could be a real problem or not, and which is the better solution for this.

    ==========================================================

    1. Condition

  • Use HLF v2.0.0 ~
  • Use CouchDB as statedb with cache
  • Chaincode uses a struct as data (the fields are not ordered alphabetically)

    2. Symptom

  • With a single peer: it can occur after calling GetState between 2 invocations of PutState on the same key.
    Before the 2nd PutState, the keys in the query result would be alphabetically ordered.
    After the 2nd PutState, they would be ordered as the fields in the struct.
    But this is not a big problem.
  • With multiple peers: it can occur when one peer calls GetState between 2 invocations of PutState on the same key and another peer does not.
    In this situation, the GetState results from the 2 peers would be different.
    And if there's an invocation that includes GetState, it would fail with the error 'Inconsistent Response Payload from Different peers.'


    3. Reason
    The statecouchdb cache loads data when the chaincode calls GetState, and updates the cache when processing a transaction - only if the key already exists in the cache.

    When loading, the data is marshalled with keys alphabetically ordered (keyValToCouchDoc func).
    When updating, the data is marshalled in the chaincode in the order of the fields in the struct (in the write-set).

    4. Solutions

    It can be worked around by giving guidance on how to marshal values in the chaincode.
    But to remove the problem completely, the code should be changed.

    I propose 2 solutions for this symptom.
    Solution 1: Unmarshal and marshal data before updating the cache
         . Good points - keeps the current architecture
         . Bad points - not so efficient

    Solution 2: Do not update the cache - just remove the entry when it is updated
         . Good points - very simple
                         no need to hold the new value in the committer
         . Bad points - a cache miss can occur one more time

    I've already tested both solutions, and they solve the problem.



Updating the node CRL automatically in Hyperledger Fabric #fabric-ca #fabric-ca-client #hyperledger-fabric

chintanr97@...
 

The CRL will be generated based on the property given as "next update". Now the question is:

Is it possible to update the local MSPs of the nodes in the network with the latest CRL automatically? Do the peer nodes have a capability that allows them to periodically and automatically sync the CRL, without the org admin having to fetch it from the CA and manually update the peer's local file system?


Re: What changes have improved the performance between v2.0.1 and v2.1.0? #fabric #hyperledger-fabric

Senthil Nathan
 

Senthil, have you seen this in your performance testing? I took a look at the system test performance benchmarking, and at best I can say it's inconclusive. There isn't a wide enough range to say the variation is enough to warrant saying this occurred. We've even on occasion recorded very small swings in the opposing direction, where 2.1 performs ever so slightly slower than 2.0.

Brett, I didn't compare the performance of v2.0.1 and v2.1.0. However, I did verify that https://github.com/hyperledger/fabric/commit/78c4e58c318715249253ce85facaf120b8d9e6fd improved the performance when CouchDB was used (I haven't tested it with goleveldb). This commit was initially added in v2.1 and then backported to v2.0.

Regards,
Senthil

On Tue, Jun 9, 2020 at 11:10 AM Brett T Logan <Brett.T.Logan@...> wrote:
Yoojin,
 
There are many things that influence the performance of a network; some things to consider are:
 
How many times have you executed the performance test? Were they executed on a common host, i.e. were all components on the same host? Were they executed on a shared environment, i.e., was the network running on a non-dedicated cloud system, or was it running in a non-dedicated kubernetes environment and was the proper anti-affinity applied to ensure overcrowding didn't occur? Are the hosts co-located with local storage? Were you communicating on internal private vlans or external through the internet? When you moved from 2.0 to 2.1, did you ensure the components launch on the same hosts (did VM1 contain peer1 in 2.0 and did VM1 also contain peer1 in 2.1)?
 
Senthil, have you seen this in your performance testing? I took a look at the system test performance benchmarking, and at best I can say it's inconclusive. There isn't a wide enough range to say the variation is enough to warrant saying this occurred. We've even on occasion recorded very small swings in the opposing direction, where 2.1 performs ever so slightly slower than 2.0.
 
The takeaway from this is: there are a million factors that influence the performance of a network. The key to performance testing is repeatability; for every variation in the test setup, it takes time (and lots of log analysis) to say for sure what caused the variation in the results.
 
Brett Logan
Software Engineer, IBM Blockchain
Phone: 1-984-242-6890
 
 
 
----- Original message -----
From: "Senthil Nathan" <cendhu@...>
Sent by: fabric@...
To: cendhu@...
Cc: Yoojin Chang <arashi213@...>, fabric@...
Subject: [EXTERNAL] Re: [Hyperledger Fabric] What changes have improved the performance between v2.0.1 and v2.1.0? #fabric #hyperledger-fabric
Date: Tue, Jun 9, 2020 12:51 AM
 
   https://github.com/hyperledger/fabric/commit/78c4e58c318715249253ce85facaf120b8d9e6fd could be the cause of the improvement if you are not using the new lifecycle feature for chaincode deployment. This commit hasn't been backported to v2.0.1. At least when we use CouchDB as the state database, we know for sure that this commit improved the performance, as it reduced one entry in the read-set.
 
This is actually backported to v2.0.1 (sorry for the confusion). Please check whether your test setup has this commit.
 
Regards,
Senthil
 
On Tue, Jun 9, 2020 at 10:14 AM Senthil Nathan via lists.hyperledger.org <cendhu=gmail.com@...> wrote:
Hi Yoojin,
 
   https://github.com/hyperledger/fabric/commit/78c4e58c318715249253ce85facaf120b8d9e6fd could be the cause of the improvement if you are not using the new lifecycle feature for chaincode deployment. This commit hasn't been backported to v2.0.1. At least when we use CouchDB as the state database, we know for sure that this commit improved the performance, as it reduced one entry in the read-set.
 
   If that is not the case, can you share the request arrival rate so that we can check whether it is due to FAB-14761 (limiting concurrent requests to the endorser and deliver services), which indirectly helps reduce overload?
 
Regards,
Senthil
 
On Tue, Jun 9, 2020 at 9:39 AM Yoojin Chang <arashi213@...> wrote:

I recently tested and compared the performance of Fabric between v2.0.1 and v2.1.0 using Caliper.

As a result, Fabric v2.1.0 shows better performance than v2.0.1 when running the peer with LevelDB and calling a very simple chaincode.

(invoke TPS increased by 10% , query TPS increased by 27%)

 

Does anyone know what features or fixes have improved the performance?

v2.1.0 release document : https://github.com/hyperledger/fabric/releases/tag/v2.1.0

 

(It seems that a change in the Go gRPC version could have affected the performance, but I'm not sure.)


Re: What changes have improved the performance between v2.0.1 and v2.1.0? #fabric #hyperledger-fabric

Brett T Logan <brett.t.logan@...>
 

Yoojin,
 
There are many things that influence the performance of a network; some things to consider are:
 
How many times have you executed the performance test? Were they executed on a common host, i.e. were all components on the same host? Were they executed on a shared environment, i.e., was the network running on a non-dedicated cloud system, or was it running in a non-dedicated kubernetes environment and was the proper anti-affinity applied to ensure overcrowding didn't occur? Are the hosts co-located with local storage? Were you communicating on internal private vlans or external through the internet? When you moved from 2.0 to 2.1, did you ensure the components launch on the same hosts (did VM1 contain peer1 in 2.0 and did VM1 also contain peer1 in 2.1)?
 
Senthil, have you seen this in your performance testing? I took a look at the system test performance benchmarking, and at best I can say it's inconclusive. There isn't a wide enough range to say the variation is enough to warrant saying this occurred. We've even on occasion recorded very small swings in the opposing direction, where 2.1 performs ever so slightly slower than 2.0.
 
The takeaway from this is: there are a million factors that influence the performance of a network. The key to performance testing is repeatability; for every variation in the test setup, it takes time (and lots of log analysis) to say for sure what caused the variation in the results.
 
Brett Logan
Software Engineer, IBM Blockchain
Phone: 1-984-242-6890
 
 
 

----- Original message -----
From: "Senthil Nathan" <cendhu@...>
Sent by: fabric@...
To: cendhu@...
Cc: Yoojin Chang <arashi213@...>, fabric@...
Subject: [EXTERNAL] Re: [Hyperledger Fabric] What changes have improved the performance between v2.0.1 and v2.1.0? #fabric #hyperledger-fabric
Date: Tue, Jun 9, 2020 12:51 AM
 
   https://github.com/hyperledger/fabric/commit/78c4e58c318715249253ce85facaf120b8d9e6fd could be the cause of the improvement if you are not using the new lifecycle feature for chaincode deployment. This commit hasn't been backported to v2.0.1. At least when we use CouchDB as the state database, we know for sure that this commit improved the performance, as it reduced one entry in the read-set.
 
This is actually backported to v2.0.1 (sorry for the confusion). Please check whether your test setup has this commit.
 
Regards,
Senthil
 
On Tue, Jun 9, 2020 at 10:14 AM Senthil Nathan via lists.hyperledger.org <cendhu=gmail.com@...> wrote:
Hi Yoojin,
 
   https://github.com/hyperledger/fabric/commit/78c4e58c318715249253ce85facaf120b8d9e6fd could be the cause of the improvement if you are not using the new lifecycle feature for chaincode deployment. This commit hasn't been backported to v2.0.1. At least when we use CouchDB as the state database, we know for sure that this commit improved the performance, as it reduced one entry in the read-set.
 
   If that is not the case, can you share the request arrival rate so that we can check whether it is due to FAB-14761 (limiting concurrent requests to the endorser and deliver services), which indirectly helps reduce overload?
 
Regards,
Senthil
 
On Tue, Jun 9, 2020 at 9:39 AM Yoojin Chang <arashi213@...> wrote:

I recently tested and compared the performance of Fabric between v2.0.1 and v2.1.0 using Caliper.

As a result, Fabric v2.1.0 shows better performance than v2.0.1 when running the peer with LevelDB and calling a very simple chaincode.

(invoke TPS increased by 10% , query TPS increased by 27%)

 

Does anyone know what features or fixes have improved the performance?

v2.1.0 release document : https://github.com/hyperledger/fabric/releases/tag/v2.1.0

 

(It seems that a change in the Go gRPC version could have affected the performance, but I'm not sure.)


Re: What changes have improved the performance between v2.0.1 and v2.1.0? #fabric #hyperledger-fabric

Senthil Nathan
 

   https://github.com/hyperledger/fabric/commit/78c4e58c318715249253ce85facaf120b8d9e6fd could be the cause of the improvement if you are not using the new lifecycle feature for chaincode deployment. This commit hasn't been backported to v2.0.1. At least when we use CouchDB as the state database, we know for sure that this commit improved the performance, as it reduced one entry in the read-set.

This is actually backported to v2.0.1 (sorry for the confusion). Please check whether your test setup has this commit.

Regards,
Senthil

On Tue, Jun 9, 2020 at 10:14 AM Senthil Nathan via lists.hyperledger.org <cendhu=gmail.com@...> wrote:
Hi Yoojin,

   https://github.com/hyperledger/fabric/commit/78c4e58c318715249253ce85facaf120b8d9e6fd could be the cause of the improvement if you are not using the new lifecycle feature for chaincode deployment. This commit hasn't been backported to v2.0.1. At least when we use CouchDB as the state database, we know for sure that this commit improved the performance, as it reduced one entry in the read-set.

   If that is not the case, can you share the request arrival rate so that we can check whether it is due to FAB-14761 (limiting concurrent requests to the endorser and deliver services), which indirectly helps reduce overload?

Regards,
Senthil

On Tue, Jun 9, 2020 at 9:39 AM Yoojin Chang <arashi213@...> wrote:

I recently tested and compared the performance of Fabric between v2.0.1 and v2.1.0 using Caliper.

As a result, Fabric v2.1.0 shows better performance than v2.0.1 when running the peer with LevelDB and calling a very simple chaincode.

(invoke TPS increased by 10% , query TPS increased by 27%)

 

Does anyone know what features or fixes have improved the performance?

v2.1.0 release document : https://github.com/hyperledger/fabric/releases/tag/v2.1.0

 

(It seems that a change in the Go gRPC version could have affected the performance, but I'm not sure.)


Re: What changes have improved the performance between v2.0.1 and v2.1.0? #fabric #hyperledger-fabric

Senthil Nathan
 

Hi Yoojin,

   https://github.com/hyperledger/fabric/commit/78c4e58c318715249253ce85facaf120b8d9e6fd could be the cause of the improvement if you are not using the new lifecycle feature for chaincode deployment. This commit hasn't been backported to v2.0.1. At least when we use CouchDB as the state database, we know for sure that this commit improved the performance, as it reduced one entry in the read-set.

   If that is not the case, can you share the request arrival rate so that we can check whether it is due to FAB-14761 (limiting concurrent requests to the endorser and deliver services), which indirectly helps reduce overload?

Regards,
Senthil

On Tue, Jun 9, 2020 at 9:39 AM Yoojin Chang <arashi213@...> wrote:

I recently tested and compared the performance of Fabric between v2.0.1 and v2.1.0 using Caliper.

As a result, Fabric v2.1.0 shows better performance than v2.0.1 when running the peer with LevelDB and calling a very simple chaincode.

(invoke TPS increased by 10% , query TPS increased by 27%)

 

Does anyone know what features or fixes have improved the performance?

v2.1.0 release document : https://github.com/hyperledger/fabric/releases/tag/v2.1.0

 

(It seems that a change in the Go gRPC version could have affected the performance, but I'm not sure.)


What changes have improved the performance between v2.0.1 and v2.1.0? #fabric #hyperledger-fabric

Yoojin Chang
 

I recently tested and compared the performance of Fabric between v2.0.1 and v2.1.0 using Caliper.

As a result, Fabric v2.1.0 shows better performance than v2.0.1 when running the peer with LevelDB and calling a very simple chaincode.

(invoke TPS increased by 10% , query TPS increased by 27%)

 

Does anyone know what features or fixes have improved the performance?

v2.1.0 release document : https://github.com/hyperledger/fabric/releases/tag/v2.1.0

 

(It seems that a change in the Go gRPC version could have affected the performance, but I'm not sure.)


Re: Inconsistency of read results can occur when using StateCouchDB with cache #cache #couchdb

Senthil Nathan
 

Hi Jeehoon Lim,

Thank you for the nice explanation. 

  And if there's an invocation that includes GetState, it would fail with the error 'Inconsistent Response Payload from Different peers.'

The read-write set constructed by the peer should be consistent across peers:
  1. In the read-set, we include only the version, not the actual data.
  2. In the write-set, we store the value passed by the chaincode as-is.
Only if the chaincode is non-deterministic would the read-write set differ across peers.

I think that in your scenario the chaincode is including the read state as-is in the chaincode response (without unmarshaling it into the struct).

In general, the chaincode marshals the struct and passes the bytes to the peer. When the marshaling is done on the struct, the ordering of keys will be the same as the order of the fields in the struct. When the chaincode retrieves the stored value from the peer, it will never be in the same order (irrespective of the usage of the cache).

Hence, it is necessary for the chaincode to unmarshal the received value into the struct before using it for any other purpose. As we use json.Marshal on a map within the ledger code, it sorts the value by map keys (this is just a side effect and is not done intentionally). We do this because we need to add a few more internal keys to the doc; that, not sorting by keys, is the reason for the marshaling and unmarshaling via a map.

I am not sure whether making the chaincode rely on the low-level peer implementation is a good idea. It would also be an unnecessary constraint on us not to change low-level implementation details. In your case, I think you need to unmarshal the received bytes into the struct before using them, without making any assumption about the key order.

For example, assume that the user submits the bytes of the following doc in the invoke argument (without using a struct, just []byte(jsonString)):
{
  "index":{
      "fields":["docType","owner"]
  },
  "ddoc":"indexOwnerDoc",
  "name":"indexOwner",
  "type":"json"
}
There wouldn't be any inconsistent read-write set. However, when the value is read back, it won't be in the same order as passed by the user. If order matters to the receiver, it is recommended to use a struct to process the JSON.
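Senthil's unmarshal-into-struct recommendation can be sketched in Go (an illustrative standalone program; the Asset type and the normalize helper are invented for the example, not Fabric APIs): whatever key order the stored bytes carry, unmarshal into the struct first and re-marshal, so the chaincode never depends on the stored key order.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Asset is an assumed chaincode data type; marshaling a struct
// always follows its field declaration order.
type Asset struct {
	DocType string `json:"docType"`
	Owner   string `json:"owner"`
}

// normalize unmarshals stored bytes into the struct and marshals
// again, so the output key order is the struct's field order
// regardless of the input key order.
func normalize(stored []byte) ([]byte, error) {
	var a Asset
	if err := json.Unmarshal(stored, &a); err != nil {
		return nil, err
	}
	return json.Marshal(a)
}

func main() {
	// Two byte-level representations of the same logical value.
	v1 := []byte(`{"docType":"paper","owner":"alice"}`)
	v2 := []byte(`{"owner":"alice","docType":"paper"}`)
	n1, _ := normalize(v1)
	n2, _ := normalize(v2)
	fmt.Println(string(n1) == string(n2)) // true
}
```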

Having said this, I am okay with explicitly making the peer always return JSON values sorted by key (as consistent behavior is recommended).

Regards,
Senthil

On Tue, Jun 9, 2020 at 4:42 AM Jeehoon Lim <jholim@...> wrote:
Hi all.

I have reported a bug to Hyperledger Jira. ( https://jira.hyperledger.org/browse/FAB-17954 )

Please check whether it could be a real problem or not, and which is the better solution for this.

==========================================================

1. Condition

  • Use HLF v2.0.0 ~
  • Use CouchDB as statedb with cache
  • Chaincode uses a struct as data (the fields are not ordered alphabetically)

2. Symptom

  • With a single peer: it can occur after calling GetState between 2 invocations of PutState on the same key.
    Before the 2nd PutState, the keys in the query result would be alphabetically ordered.
    After the 2nd PutState, they would be ordered as the fields in the struct.
    But this is not a big problem.
  • With multiple peers: it can occur when one peer calls GetState between 2 invocations of PutState on the same key and another peer does not.
    In this situation, the GetState results from the 2 peers would be different.
    And if there's an invocation that includes GetState, it would fail with the error 'Inconsistent Response Payload from Different peers.'

3. Reason
The statecouchdb cache loads data when the chaincode calls GetState, and updates the cache when processing a transaction - only if the key already exists in the cache.

When loading, the data is marshalled with keys alphabetically ordered (keyValToCouchDoc func).
When updating, the data is marshalled in the chaincode in the order of the fields in the struct (in the write-set).

4. Solutions

It can be worked around by giving guidance on how to marshal values in the chaincode.

But to remove the problem completely, the code should be changed.

I propose 2 solutions for this symptom.

Solution 1: Unmarshal and marshal data before updating the cache
     . Good points - keeps the current architecture
     . Bad points - not so efficient

Solution 2: Do not update the cache - just remove the entry when it is updated
     . Good points - very simple
                             no need to hold the new value in the committer
     . Bad points - a cache miss can occur one more time

I've already tested both solutions, and they solve the problem.

 


Inconsistency of read results can occur when using StateCouchDB with cache #cache #couchdb

Jeehoon Lim
 

Hi all.

I have reported a bug to Hyperledger Jira. ( https://jira.hyperledger.org/browse/FAB-17954 )

Please check whether it could be a real problem or not, and which is the better solution for this.

==========================================================

1. Condition

  • Use HLF v2.0.0 ~
  • Use CouchDB as statedb with cache
  • Chaincode uses a struct as data (the fields are not ordered alphabetically)

2. Symptom

  • With a single peer: it can occur after calling GetState between 2 invocations of PutState on the same key.
    Before the 2nd PutState, the keys in the query result would be alphabetically ordered.
    After the 2nd PutState, they would be ordered as the fields in the struct.
    But this is not a big problem.
  • With multiple peers: it can occur when one peer calls GetState between 2 invocations of PutState on the same key and another peer does not.
    In this situation, the GetState results from the 2 peers would be different.
    And if there's an invocation that includes GetState, it would fail with the error 'Inconsistent Response Payload from Different peers.'

3. Reason
The statecouchdb cache loads data when the chaincode calls GetState, and updates the cache when processing a transaction - only if the key already exists in the cache.

When loading, the data is marshalled with keys alphabetically ordered (keyValToCouchDoc func).
When updating, the data is marshalled in the chaincode in the order of the fields in the struct (in the write-set).

4. Solutions

It can be worked around by giving guidance on how to marshal values in the chaincode.

But to remove the problem completely, the code should be changed.

I propose 2 solutions for this symptom.

Solution 1: Unmarshal and marshal data before updating the cache
     . Good points - keeps the current architecture
     . Bad points - not so efficient

Solution 2: Do not update the cache - just remove the entry when it is updated
     . Good points - very simple
                             no need to hold the new value in the committer
     . Bad points - a cache miss can occur one more time

I've already tested both solutions, and they solve the problem.
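Solution 1 could be sketched roughly as follows (an illustrative standalone Go program, not the actual Fabric patch; normalizeJSON is an invented name): round-trip the write-set value through a map before updating the cache, so the cached bytes always carry alphabetically ordered keys, matching what the load path produces.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalizeJSON re-marshals a JSON value through a map. Since Go's
// json.Marshal sorts map keys, the result always has alphabetically
// ordered keys regardless of the input key order.
func normalizeJSON(val []byte) ([]byte, error) {
	var m map[string]interface{}
	if err := json.Unmarshal(val, &m); err != nil {
		return nil, err
	}
	return json.Marshal(m)
}

func main() {
	// A write-set value marshaled in struct-field order.
	fromChaincode := []byte(`{"owner":"alice","docType":"paper"}`)
	normalized, _ := normalizeJSON(fromChaincode)
	fmt.Println(string(normalized)) // {"docType":"paper","owner":"alice"}
}
```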

 


Next Hyperledger Fabric Application Developer Community call -- this Thursday 11th June @ 3pm UTC time: 4pm UK, 11am ET, 8am PT

Paul O'Mahoney <mahoney@...>
 

dear Fabric Application Developer,


the next Fabric Application Developer community call is this Thursday, 11th June - 3pm UTC, 4pm UK time (+1), 11am ET (-5 hrs), 8am PT (-8 hrs) - other time zones here. It lasts approx. 30-60 mins FYI.

The agenda will be posted here -> https://wiki.hyperledger.org/display/fabric/Agendas%3A+Fabric+Application+Developer+Community+Call+Meetings  

This community call is held bi-weekly via Zoom webconference and is aimed at :

- helping the worldwide Hyperledger Fabric Application Developer community grow (eg. developing applications, smart contracts, client apps using the SDKs, tutorials/demos etc -  eg using NodeJS/TypeScript, Java, Go etc etc) 
- helping app developers understand / hear more about exciting new things in Fabric, eg. features upcoming or work in progress - ie things that appeal to the developer
- fostering more interest, best practices etc. in developing applications (eg developing solutions, use cases) with Hyperledger Fabric
- giving you the opportunity to ask questions of the Fabric team, eg. you may have feedback/questions on your experiences developing solutions with Fabric
- letting you share stuff you've done with the community, eg sample code / sample use cases that others may be interested in

If you wish to share content on a call, just let me know via email direct or DM me on Rocketchat (ID: mahoney1) and I'll put an item on the agenda. Provide the following:
- the topic (state whether it's a presentation, a demo, etc.)
- the full name of the presenter, and 
- approx length of your pitch in minutes


The Zoom webconference ID is https://zoom.us/my/hyperledger.community   

More information can be found on the community page -> https://wiki.hyperledger.org/display/fabric/Fabric+Application+Developer+Community+Calls

You can get calendar invites (eg iCal) here

many thanks for your time - feel free to forward this email if you think it is of interest to a colleague.

Paul O'Mahony
Community Lead - Hyperledger Fabric Developer Community
RocketChat:  mahoney1

mahoney@...


Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


Using custom-affiliation based policies by changing the "cacerts" of an OU definition #fabric-ca #hyperledger-fabric #policies

chintanr97@...
 

I want 4 intermediate CAs for a peer organization: ICA1, ICA2, ICA3 and ICA4 - one for every Node OU (peer, orderer, admin and client). 

Let's say I place ICA1 as the "cacerts" attribute in the Peer Node OU of the channel configuration. Will a peer identity issued under a different ICA (2, 3 or 4) then be able to satisfy a policy that requires a signature from "OrgMSP.peer"?

  • If yes, then how can I make sure that only the set of roles under a specific department can satisfy a policy given by OrgMSP.<role>? I do not wish to create an MSP definition for every department or team in the organization. So, is it achievable without that?
  • If no, then can I also specify a group of ICAs in the Node OU configuration of the channel for a particular OU so that I can leverage very complex policies like "Signature of one-of 'OrgMSP.peer'" and here, "cacerts" property for the "peer OU" will be ICA1 and ICA3. Is this achievable?
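For reference, the Node OU classification is defined in the MSP's config.yaml, where each OU identifier takes a single Certificate field pointing at the issuing CA's certificate. The file below is a sketch matching the four-ICA setup described above; the file names are illustrative:

```yaml
NodeOUs:
  Enable: true
  PeerOUIdentifier:
    Certificate: cacerts/ica1-cert.pem
    OrganizationalUnitIdentifier: peer
  OrdererOUIdentifier:
    Certificate: cacerts/ica2-cert.pem
    OrganizationalUnitIdentifier: orderer
  AdminOUIdentifier:
    Certificate: cacerts/ica3-cert.pem
    OrganizationalUnitIdentifier: admin
  ClientOUIdentifier:
    Certificate: cacerts/ica4-cert.pem
    OrganizationalUnitIdentifier: client
```

Since each OU identifier accepts one Certificate, identities are classified into a role by both the OU value in the certificate and the issuing CA matching that Certificate entry, which is why the question of grouping multiple ICAs under one role arises.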


Re: How to verify that a state in the couchdb matches a transaction in the blockchain? #fabric-sdk-node #database #fabric-questions

David Enyeart
 

It is recommended to use an endorsement policy requiring more than one organization, to ensure that multiple peers are in agreement on each transaction. Similarly, you can query multiple peers if the state is in doubt. This protects against a single 'bad' peer that has been corrupted or tampered with.

In order to verify the state database matches the blockchain, you'd have to read all blocks and persist the latest valid update for each key. This is exactly what a peer does when it joins a channel. Therefore you could join a new peer to the channel and compare the new peer's state database with the in doubt peer's state database, by querying all the keys (either directly using CouchDB APIs, or have a client page through the full set of data results using chaincode queries). However this itself can be challenging, especially if you are talking about live peers that are constantly getting updated. Therefore a new peer feature is being implemented that will take a snapshot and hash of current state on a running peer at a specific block height, which can then be compared with a snapshot from another existing peer or a newly built peer. For details of this feature see https://github.com/hyperledger/fabric-rfcs/pull/27.
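The rebuild procedure Dave describes - read all blocks and persist the latest valid update for each key - can be sketched in Go. The Block/Tx/Write types below are simplified stand-ins, not Fabric's actual ledger types:

```go
package main

import "fmt"

// Simplified stand-ins for ledger structures.
type Write struct {
	Key   string
	Value []byte
}
type Tx struct {
	Valid  bool // set by validation; invalid txs must be skipped
	Writes []Write
}
type Block struct {
	Txs []Tx
}

// rebuildState replays blocks in order, keeping only the latest valid
// write for each key - which is what a peer does when it joins a channel.
func rebuildState(blocks []Block) map[string][]byte {
	state := make(map[string][]byte)
	for _, b := range blocks {
		for _, tx := range b.Txs {
			if !tx.Valid {
				continue // invalid transactions never touch state
			}
			for _, w := range tx.Writes {
				state[w.Key] = w.Value // later blocks overwrite earlier ones
			}
		}
	}
	return state
}

func main() {
	blocks := []Block{
		{Txs: []Tx{{Valid: true, Writes: []Write{{"k1", []byte("v1")}}}}},
		{Txs: []Tx{
			{Valid: false, Writes: []Write{{"k1", []byte("bad")}}},
			{Valid: true, Writes: []Write{{"k1", []byte("v2")}}},
		}},
	}
	state := rebuildState(blocks)
	fmt.Println(string(state["k1"])) // v2
}
```

The resulting map can then be compared key-by-key against the in-doubt peer's state database, as described above.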


Dave Enyeart


Hyperledger Fabric Documentation Workgroup call - Western hemisphere - Fri, 06/05/2020 #cal-notice

fabric@lists.hyperledger.org Calendar <noreply@...>
 

Hyperledger Fabric Documentation Workgroup call - Western hemisphere

When:
Friday, 5 June 2020
4:00pm to 5:00pm
(GMT+01:00) Europe/London

Where:
https://zoom.us/j/6223336701

Organizer:
a_o-dowd@... +441962816761

Description:
Documentation workgroup call.
Agenda, minutes and recordings: https://wiki.hyperledger.org/display/fabric/Documentation+Working+Group
