Re: Transaction read/write sets are bleeding into each other


David F. D. Reis
 

Just to complement Marcos's and Tsvetan's replies:

I got similar errors from the concurrency control (MVCC) version check when running stress tests against the same asset keys.

You can check why a transaction failed by querying the QSCC (query system chaincode):

1 - Evaluate Transaction Method:

import { Contract } from 'fabric-network';

// 'logger' and 'handleTXError' are application-local helpers (structured
// logging and error translation); only the fabric-network import is external.

/*
 * Evaluate a transaction and handle any errors
 */
export const evaluateTransaction = async (contract: Contract, transactionName: string, ...transactionArgs: string[]): Promise<Buffer> => {
    const transaction = contract.createTransaction(transactionName);
    const transactionId = transaction.getTransactionId();
    logger.trace({ transaction }, 'Evaluating transaction');

    try {
        const payload = await transaction.evaluate(...transactionArgs);
        logger.trace(
            {
                transactionId: transactionId,
                payload: payload.toString(),
            },
            'Evaluate transaction response received',
        );
        return payload;
    } catch (err) {
        throw handleTXError(transactionId, err);
    }
};


2 - Check the transaction status:

import * as protos from 'fabric-protos';

/*
 * Get the validation code of the specified transaction
 * ('config' is the application's own configuration module)
 */
export const getTransactionValidationCode = async (qsccContract: Contract, transactionId: string): Promise<string> => {
    const data = await evaluateTransaction(qsccContract, 'GetTransactionByID', config.channelName, transactionId);

    // Decode the protobuf response and map the numeric validation code to its
    // name, e.g. VALID, MVCC_READ_CONFLICT, ENDORSEMENT_POLICY_FAILURE.
    const processedTransaction = protos.protos.ProcessedTransaction.decode(data);
    const validationCode = protos.protos.TxValidationCode[processedTransaction.validationCode];

    logger.debug({ transactionId }, 'Validation code: %s', validationCode);
    return validationCode;
};
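
For reference, the qsccContract above can be obtained from the channel like any other contract. A minimal sketch, assuming an already connected fabric-network Gateway and a channel named 'mychannel' (both placeholders):

const network = await gateway.getNetwork('mychannel');
const qsccContract = network.getContract('qscc');

const validationCode = await getTransactionValidationCode(qsccContract, transactionId);
if (validationCode === 'MVCC_READ_CONFLICT') {
    // The transaction lost the concurrency control version check and is safe to retry.
}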

--

I use this as part of my retry logic, implemented with the Event Sourcing and Saga patterns.
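
For illustration, a minimal retry sketch built on the two helpers above. The function name, maxAttempts parameter, and backoff are my own naming for this example, not from any SDK:

import { Contract } from 'fabric-network';

export const submitWithRetry = async (
    contract: Contract,
    qsccContract: Contract,
    transactionName: string,
    maxAttempts: number,
    ...transactionArgs: string[]
): Promise<Buffer> => {
    for (let attempt = 1; ; attempt++) {
        const transaction = contract.createTransaction(transactionName);
        try {
            return await transaction.submit(...transactionArgs);
        } catch (err) {
            // GetTransactionByID fails if the transaction never reached a block,
            // so treat any error here as non-retryable and rethrow the original.
            let validationCode: string;
            try {
                validationCode = await getTransactionValidationCode(qsccContract, transaction.getTransactionId());
            } catch {
                throw err;
            }
            if (validationCode !== 'MVCC_READ_CONFLICT' || attempt >= maxAttempts) {
                throw err;
            }
            // Brief backoff so the retried transaction lands in a later block.
            await new Promise((resolve) => setTimeout(resolve, 100 * attempt));
        }
    }
};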

Best regards.

David

On Wed, Apr 6, 2022 at 8:29 PM, Tsvetan Georgiev <tsvetan@...> wrote:

Hi Curtis,

Marcos has a good point and you should check your client logic.

To better understand the issue, could you also share how many peers participate in the transaction endorsement? Are those failed transactions part of the block built by the ordering service and marked as invalid due to the concurrency control version check?

If your transactions don't contain the read/write sets you expect, then you definitely have to re-check your chaincode and what your client is actually sending to the peers during the endorsement step.

Regards,

Senofi

Tsvetan Georgiev
Director, Senofi Inc.

438-494-7854 | tsvetan@...

www.senofi.ca

www.consortia.io

---- On Wed, 06 Apr 2022 18:54:10 -0400 Marcos Sarres <marcos.sarres@...> wrote ----

Please check your Hyperledger Fabric client.

 

The endorsement policy should be fulfilled by your API: it sends the transaction proposals to the endorsing peers, then receives and packages the signed proposal responses to be transmitted to the orderer.

 

I think your API (HLF client) is not collecting the correct signed proposals from the organizations required by the chaincode endorsement policy.
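
For comparison, this is roughly what the client side looks like when the fabric-network Gateway handles endorsement collection for you. A minimal sketch: the channel, chaincode, transaction, and identity names are placeholders, and connectionProfile is assumed to be loaded from your connection profile JSON:

import { Gateway, Wallets } from 'fabric-network';

const wallet = await Wallets.newFileSystemWallet('./wallet');
const gateway = new Gateway();
await gateway.connect(connectionProfile, {
    wallet,
    identity: 'appUser',
    // With discovery enabled, the SDK selects enough endorsing peers to
    // satisfy the chaincode endorsement policy before sending to the orderer.
    discovery: { enabled: true, asLocalhost: true },
});
try {
    const network = await gateway.getNetwork('mychannel');
    const contract = network.getContract('projectXYZ');
    await contract.submitTransaction('myTransaction', 'arg1', 'arg2');
} finally {
    gateway.disconnect();
}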

 

Regards,

 

Marcos Sarres | CEO | +55 61 98116 7866


 

 

From: fabric@... <fabric@...> On behalf of Curtis Miles
Sent: Friday, April 1, 2022 10:32
To: fabric@...
Subject: [Hyperledger Fabric] Transaction read/write sets are bleeding into each other

 

Hello world!

 

I’ve hit what appears to be a strange situation when testing my soon-to-be-production network under a bit of (very light) load. I have a very simple chaincode function that does two things within a single transaction (sketched below, after the list):
1. Retrieving a document of type A by id, updating it, and storing it back (using its same id)

2. Storing a new object of type B by id (the object is passed in a parameter to the chaincode function)
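
In essence the function looks like the following (a minimal sketch using fabric-contract-api; the class, function, and property names are made up for illustration):

import { Context, Contract, Transaction } from 'fabric-contract-api';

export class ProjectContract extends Contract {
    @Transaction()
    public async updateAAndCreateB(ctx: Context, aId: string, bId: string, bJson: string): Promise<void> {
        // 1. Retrieve document A by id, update it, and store it back under the same id.
        const aBytes = await ctx.stub.getState(aId);
        if (!aBytes || aBytes.length === 0) {
            throw new Error(`Document ${aId} does not exist`);
        }
        const a = JSON.parse(Buffer.from(aBytes).toString());
        a.counter = (a.counter || 0) + 1; // placeholder for the real update
        await ctx.stub.putState(aId, Buffer.from(JSON.stringify(a)));

        // 2. Store the new object B, passed in as a chaincode parameter.
        await ctx.stub.putState(bId, Buffer.from(bJson));
    }
}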

 

When I try to execute a number of these transactions in parallel (e.g. 5) such that they are included in the same block, I’m *randomly* getting an ENDORSEMENT_POLICY_FAILURE error. This didn’t make any sense to me, because each transaction deals with distinct objects of type A and B, and nothing else is going on in the network at the time. So I looked into the blocks that contained the failing transactions and noticed something very strange: it looks like the items in the read/write sets of the transactions are bleeding into each other. For example, a single block includes the following read/write sets within three different transactions:

 

Transaction 1 (which looks correct to me):

{
    "namespace": "projectXYZ",
    "rwset": {
        "reads": [
            {
                "key": "A1",
                "version": {
                    "block_num": "79389",
                    "tx_num": "0"
                }
            },
            {
                "key": "B1",
                "version": null
            }
        ],
        "range_queries_info": [],
        "writes": [
            {
                "key": "A1",
                "is_delete": false,
                "value": "{\"id\":\"1\", \"type\":\"A\"}"
            },
            {
                "key": "B1",
                "is_delete": false,
                "value": "{\"id\":\"1\", \"type\":\"B\"}"
            }
        ],
        "metadata_writes": []
    },
    "collection_hashed_rwset": []
}

 

Transaction 2 (has an extra read/write of A3, which I would have expected to be in a different transaction):

{
    "namespace": "projectXYZ",
    "rwset": {
        "reads": [
            {
                "key": "A2",
                "version": {
                    "block_num": "79389",
                    "tx_num": "0"
                }
            },
            {
                "key": "A3",
                "version": {
                    "block_num": "79389",
                    "tx_num": "0"
                }
            },
            {
                "key": "B2",
                "version": null
            }
        ],
        "range_queries_info": [],
        "writes": [
            {
                "key": "A2",
                "is_delete": false,
                "value": "{\"id\":\"2\", \"type\":\"A\"}"
            },
            {
                "key": "A3",
                "is_delete": false,
                "value": "{\"id\":\"3\", \"type\":\"A\"}"
            },
            {
                "key": "B2",
                "is_delete": false,
                "value": "{\"id\":\"2\", \"type\":\"B\"}"
            }
        ],
        "metadata_writes": []
    },
    "collection_hashed_rwset": []
}

 

Transaction 3 (has only a read of A3, with no write of A3 or B3):

{
    "namespace": "projectXYZ",
    "rwset": {
        "reads": [
            {
                "key": "A3",
                "version": {
                    "block_num": "79389",
                    "tx_num": "0"
                }
            }
        ],
        "range_queries_info": [],
        "writes": [],
        "metadata_writes": []
    },
    "collection_hashed_rwset": []
}

 

If I only submit one transaction per block, everything is fine (although for performance reasons, that isn’t going to work for me). 

 

Does anyone have any ideas why this might be happening?  Shouldn’t it be fundamentally impossible for these transactions to bleed into each other?  Can you think of anything I might be doing wrong that is causing this?

 

Thanks for any help you can offer!

Curtis.