Inconsistent read results can occur when using StateCouchDB with cache #cache #couchdb


Jeehoon Lim
 

Hi all.

I have reported a bug to Hyperledger Jira. ( https://jira.hyperledger.org/browse/FAB-17954 )

Please check whether this could be a real problem, and which solution would be better.

==========================================================

1. Condition

  • Use HLF v2.0.0 or later
  • Use CouchDB as the state DB, with the cache enabled
  • Chaincode uses a struct as its data value (and the fields are not in alphabetical order)

2. Symptom

  • With a single peer: it can occur when GetState is called between two invocations of PutState on the same key.
    Before the 2nd PutState, the keys in the query result are ordered alphabetically.
    After the 2nd PutState, they are ordered as the fields in the struct.
    By itself, this is not a big problem.
  • With multiple peers: it can occur when one peer calls GetState between two invocations of PutState on the same key, and another peer does not.
    In this situation, the GetState results from the two peers differ.
    Any invocation that then includes GetState fails with the error 'Inconsistent Response Payload from Different peers.'

3. Reason
The statecouchdb cache loads data when chaincode calls GetState, and updates the cache while committing a transaction - but only if the key already exists in the cache.

When loading, the value is marshalled with its keys in alphabetical order. ( the keyValToCouchDoc func )
When updating, the value is stored exactly as the chaincode marshalled it, i.e. in struct field order. ( from the write set )

4. Solutions

It can be mitigated by documenting how values should be marshalled in chaincode.

But to remove the problem completely, the code needs to change.

I propose two solutions for this symptom.

Solution 1 : Unmarshal and re-marshal the value before updating the cache
     . Good point - keeps the current architecture
     . Bad point - not very efficient
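Solution 1 boils down to normalizing the write-set bytes into the same canonical form the load path produces before they are put into the cache. A sketch of such a helper (the name normalize is illustrative, not Fabric's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalize re-marshals a JSON value through a map so that its keys come out
// in alphabetical order, matching the form produced on the CouchDB load path.
// Hypothetical helper; this is not the actual statecouchdb implementation.
func normalize(value []byte) ([]byte, error) {
	var doc map[string]interface{}
	if err := json.Unmarshal(value, &doc); err != nil {
		return nil, err
	}
	return json.Marshal(doc)
}

func main() {
	// Write-set bytes in struct field order...
	got, err := normalize([]byte(`{"owner":"alice","color":"red","size":5}`))
	if err != nil {
		panic(err)
	}
	// ...come out with keys sorted, ready to be cached consistently.
	fmt.Println(string(got)) // {"color":"red","owner":"alice","size":5}
}
```

The extra unmarshal/marshal per cached update is the inefficiency noted above.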

Solution 2 : Do not update the cache - just evict the key when it is updated
     . Good points - very simple
                     no need to hold the new value in the committer
     . Bad point - one additional cache miss can occur
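Solution 2 replaces the cache update with an eviction, so the next GetState falls through to CouchDB and reloads the canonical (alphabetically marshalled) form. A minimal sketch with a toy cache (the stateCache type and onCommit name are hypothetical, not Fabric's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// stateCache is a toy stand-in for the statecouchdb cache.
type stateCache struct {
	m sync.Map
}

func (c *stateCache) Get(key string) ([]byte, bool) {
	v, ok := c.m.Load(key)
	if !ok {
		return nil, false
	}
	return v.([]byte), true
}

func (c *stateCache) Put(key string, val []byte) { c.m.Store(key, val) }

// onCommit evicts the key instead of writing the new value into the cache;
// the next Get misses and repopulates from CouchDB in canonical form.
func (c *stateCache) onCommit(key string) { c.m.Delete(key) }

func main() {
	c := &stateCache{}
	c.Put("key1", []byte(`{"color":"red","owner":"alice"}`))
	c.onCommit("key1")
	_, ok := c.Get("key1")
	fmt.Println("cached after commit:", ok) // cached after commit: false
}
```

The extra cache miss mentioned above is the cost of that eviction: the first read after a commit always goes to CouchDB.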

I've already tested both solutions, and both solve the problem.

 
