Please check your Hyperledger Fabric client.
The endorsement policy has to be fulfilled by your API: it sends the transaction proposals to the endorsing peers, then receives the signed proposal responses and packages them into a transaction to be transmitted to the orderer.
I suspect your API (the HLF client) is not collecting matching signed proposal responses from all of the organizations required by the chaincode’s endorsement policy.
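For reference, here is a minimal sketch of that client-side flow using the Go Gateway SDK. The peer address, channel name, chaincode name, and credential paths are assumptions for illustration only, and the insecure connection is just to keep the sketch short. SubmitTransaction drives the whole sequence: endorsing on enough peers to satisfy the policy, gathering the signed responses, and sending the packaged transaction to the orderer.

package main

import (
	"fmt"
	"os"

	"github.com/hyperledger/fabric-gateway/pkg/client"
	"github.com/hyperledger/fabric-gateway/pkg/identity"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// must panics on error so the sketch stays short; handle errors properly in real code.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// gRPC connection to a gateway peer (address and insecure credentials are assumptions).
	conn := must(grpc.Dial("peer0.org1.example.com:7051",
		grpc.WithTransportCredentials(insecure.NewCredentials())))
	defer conn.Close()

	// Client identity: an X.509 certificate and matching private key from your MSP.
	cert := must(identity.CertificateFromPEM(must(os.ReadFile("cert.pem"))))
	id := must(identity.NewX509Identity("Org1MSP", cert))
	key := must(identity.PrivateKeyFromPEM(must(os.ReadFile("key.pem"))))
	sign := must(identity.NewPrivateKeySign(key))

	gw := must(client.Connect(id, client.WithSign(sign), client.WithClientConnection(conn)))
	defer gw.Close()

	contract := gw.GetNetwork("mychannel").GetContract("projectXYZ")

	// Endorse on the required orgs' peers, collect the signed responses,
	// submit to the orderer, and wait for the commit status.
	result := must(contract.SubmitTransaction("updateAAndCreateB", "A1", "B1", `{"id":"1","type":"B"}`))
	fmt.Printf("committed: %s\n", result)
}

If you are instead assembling proposals by hand with a lower-level SDK, check that you collect responses from peers of every organization the policy requires, and that the response payloads are identical, before packaging them for the orderer.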
Regards,
Marcos Sarres | CEO | +55 61 98116 7866

From: fabric@... <fabric@...> On Behalf Of Curtis Miles
Sent: Friday, April 1, 2022 10:32
To: fabric@...
Subject: [Hyperledger Fabric] Transaction read/write sets are bleeding into each other
Hello world!
I’ve hit what appears to be a strange situation while testing my soon-to-be production network under a bit of (very light) load. I have a very simple chaincode function that does two things within a single transaction (a rough sketch follows the list):
1. Retrieving a document of type A by id, updating it, and storing it back (using its same id)
2. Storing a new object of type B by id (the object is passed in a parameter to the chaincode function)
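The shape of the function is roughly this (a simplified sketch in Go, not the real code; the function name, parameters, and the "updated" field are illustrative only):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

type SmartContract struct {
	contractapi.Contract
}

func (s *SmartContract) UpdateAAndCreateB(ctx contractapi.TransactionContextInterface, aID, bID, bJSON string) error {
	stub := ctx.GetStub()

	// 1. Retrieve document A by id, update it, and store it back under the same id.
	data, err := stub.GetState(aID)
	if err != nil {
		return err
	}
	if data == nil {
		return fmt.Errorf("document %s does not exist", aID)
	}
	var doc map[string]interface{}
	if err := json.Unmarshal(data, &doc); err != nil {
		return err
	}
	doc["updated"] = true // stand-in for the real update logic
	updated, err := json.Marshal(doc)
	if err != nil {
		return err
	}
	if err := stub.PutState(aID, updated); err != nil {
		return err
	}

	// 2. Store the new B object passed in as a parameter. The existence check
	// is what shows up as a read of B with a null version in the dumps below.
	existing, err := stub.GetState(bID)
	if err != nil {
		return err
	}
	if existing != nil {
		return fmt.Errorf("object %s already exists", bID)
	}
	return stub.PutState(bID, []byte(bJSON))
}

func main() {
	cc, err := contractapi.NewChaincode(&SmartContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}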
When I try to execute a number of these transactions in parallel (e.g. 5) such that they are included in the same block, I’m *randomly* getting an ENDORSEMENT_POLICY_FAILURE error. This didn’t make any sense to me, because each transaction deals with distinct objects of type A and B and nothing else is going on in the network at the time.
So I looked into the blocks containing the failing transactions and noticed something very strange: the items in the read/write sets of the transactions appear to be bleeding into each other. For example, a single block includes the following read/write sets within three different transactions:
Transaction 1 (which looks correct to me):
{
  "namespace": "projectXYZ",
  "rwset": {
    "reads": [
      {
        "key": "A1",
        "version": {
          "block_num": "79389",
          "tx_num": "0"
        }
      },
      {
        "key": "B1",
        "version": null
      }
    ],
    "range_queries_info": [],
    "writes": [
      {
        "key": "A1",
        "is_delete": false,
        "value": "{\"id\":\"1\", \"type\":\"A\"}"
      },
      {
        "key": "B1",
        "is_delete": false,
        "value": "{\"id\":\"1\", \"type\":\"B\"}"
      }
    ],
    "metadata_writes": []
  },
  "collection_hashed_rwset": []
}
Transaction 2 (has an extra read/write of A3, which I would have expected to be in a different transaction):
{
  "namespace": "projectXYZ",
  "rwset": {
    "reads": [
      {
        "key": "A2",
        "version": {
          "block_num": "79389",
          "tx_num": "0"
        }
      },
      {
        "key": "A3",
        "version": {
          "block_num": "79389",
          "tx_num": "0"
        }
      },
      {
        "key": "B2",
        "version": null
      }
    ],
    "range_queries_info": [],
    "writes": [
      {
        "key": "A2",
        "is_delete": false,
        "value": "{\"id\":\"2\", \"type\":\"A\"}"
      },
      {
        "key": "A3",
        "is_delete": false,
        "value": "{\"id\":\"3\", \"type\":\"A\"}"
      },
      {
        "key": "B2",
        "is_delete": false,
        "value": "{\"id\":\"2\", \"type\":\"B\"}"
      }
    ],
    "metadata_writes": []
  },
  "collection_hashed_rwset": []
}
Transaction 3 (has only a read of A3, with no write of A3 or B3):
{
  "namespace": "projectXYZ",
  "rwset": {
    "reads": [
      {
        "key": "A3",
        "version": {
          "block_num": "79389",
          "tx_num": "0"
        }
      }
    ],
    "range_queries_info": [],
    "writes": [],
    "metadata_writes": []
  },
  "collection_hashed_rwset": []
}
If I only submit one transaction per block, everything is fine (although for performance reasons, that isn’t going to work for me).
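For context, here is roughly how I’m firing the parallel transactions (a simplified sketch using the Go Gateway SDK; the function name and id scheme are illustrative, and contract is a connected *client.Contract):

package loadtest

import (
	"fmt"
	"sync"

	"github.com/hyperledger/fabric-gateway/pkg/client"
)

// SubmitBatch fires n transactions concurrently so they land in the same block.
// Every call uses its own A/B ids, so no two transactions should touch the same keys.
func SubmitBatch(contract *client.Contract, n int) {
	var wg sync.WaitGroup
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			aID := fmt.Sprintf("A%d", i)
			bID := fmt.Sprintf("B%d", i)
			bJSON := fmt.Sprintf(`{"id":"%d","type":"B"}`, i)
			if _, err := contract.SubmitTransaction("updateAAndCreateB", aID, bID, bJSON); err != nil {
				fmt.Printf("tx %d failed: %v\n", i, err)
			}
		}(i)
	}
	wg.Wait()
}

Since each goroutine gets its own ids, my understanding is that no two of these transactions should ever read or write the same keys.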
Does anyone have any ideas why this might be happening? Shouldn’t it be fundamentally impossible for these transactions to bleed into each other? Can you think of anything I might be doing wrong that is causing this?
Thanks for any help you can offer!
Curtis.