
Re: Fabric-sdk-java-Blockevents #fabric-sdk-java #fabric

Tsvetan Georgiev
 

Hello,

You can use the HLF Java SDK channel to register a chaincode event listener.

So you can do something like:

network.getChannel().registerChaincodeEventListener(<your params here>);

You can find an example in the end-to-end integration test of the SDK.


When your listener is called, you can get the payload of the event; see the public byte[] getPayload() method of the ChaincodeEvent class.
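Decoding that payload is up to the application. As a language-neutral illustration (sketched in TypeScript to match the Node snippets later in this digest; the helper name is mine, not an SDK API), assuming the chaincode emitted UTF-8 text or JSON:

```typescript
// Illustrative sketch only: turn a chaincode event payload (raw bytes,
// e.g. what ChaincodeEvent.getPayload() returns in the Java SDK) into
// something readable. Assumes the chaincode emitted UTF-8 text.
export const decodeEventPayload = (payload: Uint8Array): unknown => {
  const text = Buffer.from(payload).toString('utf8');
  try {
    return JSON.parse(text); // structured payloads become objects
  } catch {
    return text; // otherwise fall back to the raw string
  }
};
```

In Java the same step is simply new String(event.getPayload(), StandardCharsets.UTF_8), followed by your JSON parser of choice.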

Best Regards,

Senofi

Tsvetan Georgiev
Director, Senofi Inc.

438-494-7854 | tsvetan@...

www.senofi.ca

www.consortia.io






---- On Tue, 12 Apr 2022 01:27:50 -0400 <jeff.jo95z@...> wrote ----

hi,

Is it possible to return the recorded block events in the Java SDK?
I have attached my Java file.
What modification do I have to make to return the block events in a readable format?






Now: Private Chaincode Lab - 04/12/2022 #cal-notice

fabric@lists.hyperledger.org Calendar <noreply@...>
 

Private Chaincode Lab

When:
04/12/2022
8:00am to 9:00am
(UTC-07:00) America/Los Angeles

Where:
https://zoom.us/my/hyperledger.community.3?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09

Organizer: Marcus Brandenburger bur@...

View Event

Description:
Two of the Hyperledger Labs projects (private data objects and private chaincode) are collaborating to develop a "private smart contracts" capability.

Join Zoom Meeting https://zoom.us/j/5184947650?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09 Meeting ID: 518 494 7650 Passcode: 475869


Specify chainID during the deployment of a smart contract on an EVM chaincode

Gourav Sarkar
 

Hi,

 

While deploying a smart contract on an EVM chaincode, can we specify a chainID?

 

This is an example list of existing chainIDs on Ethereum:

https://github.com/DefiLlama/chainlist/blob/main/utils/extraRpcs.json

 

Warm Regards,

Gourav.


Fabric-sdk-java-Blockevents #fabric-sdk-java #fabric

jeff.jo95z@...
 

hi,

Is it possible to return the recorded block events in the Java SDK?
I have attached my Java file.
What modification do I have to make to return the block events in a readable format?


Re: Endorsement Policy of a chaincode #chaincode

Tsvetan Georgiev
 

Hello, 
You can check the service discovery functionality.

Here is a link to the CLI:

https://hyperledger-fabric.readthedocs.io/en/release-2.2/discovery-cli.html#endorsers-query

The HLF SDKs also support the discovery service APIs.

Regards,


Senofi

Tsvetan Georgiev
Director, Senofi Inc.

438-494-7854 | tsvetan@...

www.senofi.ca

www.consortia.io





---- On Mon, 11 Apr 2022 07:57:20 -0400 jeff.jo95z@... wrote ----

hi,

How can I view the endorsement policy of an already-installed chaincode?
Is there any command for that?



Endorsement Policy of a chaincode #chaincode

jeff.jo95z@...
 

hi,

How can I view the endorsement policy of an already-installed chaincode?
Is there any command for that?


Re: Fabric Contributor Meeting - April 13th CANCELLED, Next meeting April 27th

袁怿
 

Hi Enyeart,

I would like to add the modular crypto service RFC review to the agenda.
We have a new proposal for the implementation plan.


On 04/10/2022 00:05, David Enyeart <enyeart@...> wrote:

Hyperledger Fabric Contributor Meeting

When: Every other Wednesday 9am US Eastern, 13:00 UTC

Where: https://zoom.us/j/5184947650?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09

Agendas and Recordings: https://wiki.hyperledger.org/display/fabric/Contributor+Meetings

 

----------------------------------------------------------------------------------------------------

Agenda for April 27, 2022

 

Kubernetes and chaincode-as-a-service update – Josh Kneubuhl

 

 

Please reply if you would like to add additional agenda topics!

 

 


Fabric Contributor Meeting - April 13th CANCELLED, Next meeting April 27th

David Enyeart
 

Hyperledger Fabric Contributor Meeting

When: Every other Wednesday 9am US Eastern, 13:00 UTC

Where: https://zoom.us/j/5184947650?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09

Agendas and Recordings: https://wiki.hyperledger.org/display/fabric/Contributor+Meetings

 

----------------------------------------------------------------------------------------------------

Agenda for April 27, 2022

 

Kubernetes and chaincode-as-a-service update – Josh Kneubuhl

 

 

Please reply if you would like to add additional agenda topics!

 

 


Cancelled Event: Fabric Contributor Meeting - Wednesday, April 13, 2022 #cal-cancelled

fabric@lists.hyperledger.org Calendar <noreply@...>
 

Cancelled: Fabric Contributor Meeting

This event has been cancelled.

When:
Wednesday, April 13, 2022
9:00am to 10:00am
(UTC-04:00) America/New York

Where:
https://zoom.us/my/hyperledger.community.3?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09

Organizer: Dave Enyeart enyeart@...

Description:
For meeting agendas, recordings, and more details, see https://wiki.hyperledger.org/display/fabric/Contributor+Meetings

Join Zoom Meeting
https://zoom.us/j/5184947650?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09
 
Meeting ID: 518 494 7650
Passcode: 475869


Event: Fabric Project Quarterly Update Due #tsc-project-update - 04/14/2022 #tsc-project-update #cal-reminder

fabric@lists.hyperledger.org Calendar <noreply@...>
 

Reminder: Fabric Project Quarterly Update Due #tsc-project-update

When:
04/14/2022

Organizer: community-architects@...

View Event

Description:
Please file a project status report for the TSC here:

https://wiki.hyperledger.org/display/TSC/2022+Project+Updates

https://wiki.hyperledger.org/display/TSC/2022+TSC+Project+Update+Calendar


Re: Update org's admin certificate in channel config #channel #configtxgen #fabric-peer #fabric-questions #fabric-orderer

chris.elder@...
 

The orderers need to be configured to allow the expired admin certificate to be used to sign the change to the channel configuration.
 
This can be accomplished by overriding the orderers to allow expired certificates to be used:
https://hyperledger-fabric.readthedocs.io/en/release-2.2/raft_configuration.html?highlight=noexpirationchecks#certificate-expiration-related-authentication
 
Set NoExpirationChecks to true for each orderer in orderer.yaml and restart.
 
Construct a channel update with the new admin certificates.
 
Update the channel using the expired certificate to sign the update.
 
Remove the override in the orderers and restart.
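Per the linked Raft configuration docs, the override lives under General.Authentication in orderer.yaml; a minimal fragment (to be reverted once the channel update is complete):

```yaml
General:
  Authentication:
    # Allow expired certificates while the admin cert is rotated.
    # Revert this once the channel update has been applied.
    NoExpirationChecks: true
```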


some question

Dai Shibo <binary_414@...>
 

I noticed that the addOrg3 process in Fabric 2.x is different from that in version 1.x. I'm trying to start it with the command

docker-compose -f docker/docker-compose-org3.yaml up -d

but I encounter the following error: FileNotFoundError: [Errno 2] No such file or directory: './docker/docker-compose-org3.yaml'

Your docker folder is in ./compose/docker instead of ./docker, so the command above cannot be run from the addOrg3 directory.

I also tried the addOrg3.sh shell script, but it failed in the same place.

How can I solve this problem?

 

Sent from Windows Mail

 


Re: Transaction read/write sets are bleeding into each other

Marcos Sarres
 

Good point David,

 

Please also check for MVCC (Multi-Version Concurrency Control) CONFLICT errors in your client and orderer logs.

 

Regards,

 

Marcos Sarres | CEO | +55 61 98116 7866

 

 

From: fabric@... <fabric@...> On behalf of David Faulstich Diniz Reis
Sent: Thursday, April 7, 2022 08:31
To: Curtis Miles <curtis@...>
Cc: fabric <fabric@...>
Subject: Re: [Hyperledger Fabric] Transaction read/write sets are bleeding into each other

 

Just as a complement to Marcos and Tsvetan,

 

I got similar errors due to concurrency control when doing stress tests with the same asset keys.

 

You may check why the transaction failed using the QSCC system chaincode:

 

1 - Evaluate Transaction Method:

 

/*
* Evaluate a transaction and handle any errors
*/
export const evaluateTransaction = async (contract: Contract, transactionName: string, ...transactionArgs: string[]): Promise<Buffer> => {
  const transaction = contract.createTransaction(transactionName);
  const transactionId = transaction.getTransactionId();
  logger.trace({ transaction }, 'Evaluating transaction');

  try {
    const payload = await transaction.evaluate(...transactionArgs);
    logger.trace(
      {
        transactionId: transactionId,
        payload: payload.toString(),
      },
      'Evaluate transaction response received',
    );
    return payload;
  } catch (err) {
    throw handleTXError(transactionId, err);
  }
};

 

 

2 - Check the transaction status:

 

/*
* Get the validation code of the specified transaction
*/
export const getTransactionValidationCode = async (qsccContract: Contract, transactionId: string): Promise<string> => {
  const data = await evaluateTransaction(qsccContract, 'GetTransactionByID', config.channelName, transactionId);

  const processedTransaction = protos.protos.ProcessedTransaction.decode(data);
  const validationCode = protos.protos.TxValidationCode[processedTransaction.validationCode];

  logger.debug({ transactionId }, 'Validation code: %s', validationCode);
  return validationCode;
};

 

--

 

I use this as part of my retry logic, implemented with the Event Sourcing and Saga patterns.

 

Best regards.

 

David

 

On Wed, Apr 6, 2022 at 8:29 PM, Tsvetan Georgiev <tsvetan@...> wrote:

Hi Curtis,

 

Marcos has a good point and you should check your client logic.

 

To better understand, could you also share how many peers participate in the transaction endorsement? Were those failed transactions included in the block built by the ordering service and marked as invalid by the concurrency-control version check?

 

If your transactions don't contain the read/write sets you expect then you definitely have to re-check your chaincode and what your client is actually sending to the peers during the endorsement step.

 

Regards,

 

Senofi

Tsvetan Georgiev

Director, Senofi Inc.

438-494-7854 | tsvetan@...

www.senofi.ca

www.consortia.io

 

 

 

 

---- On Wed, 06 Apr 2022 18:54:10 -0400 Marcos Sarres <marcos.sarres@...> wrote ----

 

Please check your Hyperledger Fabric client.

 

The endorsement policy should be fulfilled by your API: it sends the proposals to the endorsing peers, then collects and packages the signed proposal responses to be transmitted to the orderer.

 

I think your API (HLF client) is not getting the correct signed proposals from the organizations required by the chaincode's endorsement policy.

 

Regards,

 

Marcos Sarres | CEO | +55 61 98116 7866

 

 

From: fabric@... <fabric@...> On behalf of Curtis Miles
Sent: Friday, April 1, 2022 10:32
To: fabric@...
Subject: [Hyperledger Fabric] Transaction read/write sets are bleeding into each other

 

Hello world!

 

I’ve hit what appears to be a strange situation when testing my soon-to-be production network under a bit of (very light) load.  I have a very simple chaincode function that is doing two things within a single transaction:
1. Retrieving a document of type A by id, updating it, and storing it back (using its same id)

2. Storing a new object of type B by id (the object is passed in a parameter to the chaincode function)

 

When I try to execute a number of these transactions in parallel (e.g. 5) such that they are included in the same block, I’m *randomly* getting an ENDORSEMENT_POLICY_FAILURE error.  This didn’t make any sense to me, because each transaction deals with distinct objects of type A and B, and nothing else is going on in the network at the time.  So I looked into the blocks that contained the failing transactions and noticed something very strange: the items in the read/write sets of the transactions appear to be bleeding into each other.  For example, a single block includes the following read/write sets within three different transactions:

 

Transaction 1 (which looks correct to me):

{
    "namespace": "projectXYZ",
    "rwset": {
        "reads": [
            { "key": "A1", "version": { "block_num": "79389", "tx_num": "0" } },
            { "key": "B1", "version": null }
        ],
        "range_queries_info": [],
        "writes": [
            { "key": "A1", "is_delete": false, "value": "{\"id\":\"1\", \"type\":\"A\"}" },
            { "key": "B1", "is_delete": false, "value": "{\"id\":\"1\", \"type\":\"B\"}" }
        ],
        "metadata_writes": []
    },
    "collection_hashed_rwset": []
}

 

Transaction 2 (has an extra read/write of A3, which I would have expected to be in a different transaction):

{
    "namespace": "projectXYZ",
    "rwset": {
        "reads": [
            { "key": "A2", "version": { "block_num": "79389", "tx_num": "0" } },
            { "key": "A3", "version": { "block_num": "79389", "tx_num": "0" } },
            { "key": "B2", "version": null }
        ],
        "range_queries_info": [],
        "writes": [
            { "key": "A2", "is_delete": false, "value": "{\"id\":\"2\", \"type\":\"A\"}" },
            { "key": "A3", "is_delete": false, "value": "{\"id\":\"3\", \"type\":\"A\"}" },
            { "key": "B2", "is_delete": false, "value": "{\"id\":\"2\", \"type\":\"B\"}" }
        ],
        "metadata_writes": []
    },
    "collection_hashed_rwset": []
}

 

Transaction 3 (has only a read of A3, with no write of A3 or B3):

{
    "namespace": "projectXYZ",
    "rwset": {
        "reads": [
            { "key": "A3", "version": { "block_num": "79389", "tx_num": "0" } }
        ],
        "range_queries_info": [],
        "writes": [],
        "metadata_writes": []
    },
    "collection_hashed_rwset": []
}

 

If I only submit one transaction per block, everything is fine (although for performance reasons, that isn’t going to work for me). 

 

Does anyone have any ideas why this might be happening?  Shouldn’t it be fundamentally impossible for these transactions to bleed into each other?  Can you think of anything I might be doing wrong that is causing this?

 

Thanks for any help you can offer!

Curtis.

 

 

 

 

 


 


Re: Transaction read/write sets are bleeding into each other

David F. D. Reis
 

Just as a complement to Marcos and Tsvetan,

I got similar errors due to concurrency control when doing stress tests with the same asset keys.

You may check why the transaction failed using the QSCC system chaincode:

1 - Evaluate Transaction Method:

/*
* Evaluate a transaction and handle any errors
*/
export const evaluateTransaction = async (contract: Contract, transactionName: string, ...transactionArgs: string[]): Promise<Buffer> => {
const transaction = contract.createTransaction(transactionName);
const transactionId = transaction.getTransactionId();
logger.trace({ transaction }, 'Evaluating transaction');

try {
const payload = await transaction.evaluate(...transactionArgs);
logger.trace(
{
transactionId: transactionId,
payload: payload.toString(),
},
'Evaluate transaction response received',
);
return payload;
} catch (err) {
throw handleTXError(transactionId, err);
}
};


2 - Check the transaction status:

/*
* Get the validation code of the specified transaction
*/
export const getTransactionValidationCode = async (qsccContract: Contract, transactionId: string): Promise<string> => {
const data = await evaluateTransaction(qsccContract, 'GetTransactionByID', config.channelName, transactionId);

const processedTransaction = protos.protos.ProcessedTransaction.decode(data);
const validationCode = protos.protos.TxValidationCode[processedTransaction.validationCode];

logger.debug({ transactionId }, 'Validation code: %s', validationCode);
return validationCode;
};

--

I use this as part of my retry logic, implemented with the Event Sourcing and Saga patterns.
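The retry decision described here can be sketched as follows (the function names and control flow are illustrative, not a fabric-network API; in practice the validation code would come from getTransactionValidationCode above):

```typescript
// Hedged sketch: resubmit while the transaction keeps failing validation
// with MVCC_READ_CONFLICT, up to a fixed number of attempts. Any other
// code (VALID or a non-retryable failure) ends the loop immediately.
export const submitWithRetry = (
  submit: () => string, // returns the transaction's validation code
  maxAttempts = 3,
): string => {
  let code = '';
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    code = submit();
    if (code !== 'MVCC_READ_CONFLICT') break; // success or non-retryable
  }
  return code;
};
```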

Best regards.

David



Re: Transaction read/write sets are bleeding into each other

Tsvetan Georgiev
 

Hi Curtis,

Marcos has a good point and you should check your client logic.

To better understand, could you also share how many peers participate in the transaction endorsement? Were those failed transactions included in the block built by the ordering service and marked as invalid by the concurrency-control version check?

If your transactions don't contain the read/write sets you expect then you definitely have to re-check your chaincode and what your client is actually sending to the peers during the endorsement step.

Regards,

Senofi

Tsvetan Georgiev
Director, Senofi Inc.

438-494-7854 | tsvetan@...

www.senofi.ca

www.consortia.io










Re: Transaction read/write sets are bleeding into each other

Marcos Sarres
 

Please check your Hyperledger Fabric client.

 

The endorsement policy should be fulfilled by your API: it sends the proposals to the endorsing peers, then collects and packages the signed proposal responses to be transmitted to the orderer.

 

I think your API (HLF client) is not getting the correct signed proposals from the organizations required by the chaincode's endorsement policy.

 

Regards,

 

Marcos Sarres | CEO | +55 61 98116 7866

 

 



Now: Private Chaincode Lab - 04/05/2022 #cal-notice

fabric@lists.hyperledger.org Calendar <noreply@...>
 

Private Chaincode Lab

When:
04/05/2022
8:00am to 9:00am
(UTC-07:00) America/Los Angeles

Where:
https://zoom.us/my/hyperledger.community.3?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09

Organizer: Marcus Brandenburger bur@...

View Event

Description:
Two of the Hyperledger Labs projects (private data objects and private chaincode) are collaborating to develop a "private smart contracts" capability.

Join Zoom Meeting https://zoom.us/j/5184947650?pwd=UE90WHhEaHRqOGEyMkV3cldKa2d2dz09 Meeting ID: 518 494 7650 Passcode: 475869


Re: Extend network with new orderers while is running

Nikos Karamolegkos
 

OK, based on the theory, I quote:
On the one hand, the more Raft nodes that are deployed, the more nodes can be lost while maintaining that a majority of the nodes are still available (unless a majority of nodes are available, the cluster will cease to process and create blocks). A five node cluster, for example, can tolerate two down nodes, while a seven node cluster can tolerate three down nodes.

So I understand that if I have 3 orderer nodes and one is unavailable, Raft works on the concept of a quorum: as long as a majority (i.e. 2) of the Raft nodes are online, the Raft cluster stays available. Correct?

However, your answer confuses me. Specifically:

That allows the orderers to reach consensus even if one does not agree. But, that does not allow for continued operation if one of those nodes is interrupted and is not responding. If you have five orderers, the set can still reach consensus even if one is down temporarily

What do you mean by "continued operation"? To me, "does not agree" means that the orderer is faulty or dead; in either case, the network can keep working with the remaining two orderers.
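For reference, the availability rule quoted above is plain majority arithmetic; a tiny sketch (not Fabric code, just the rule from the docs):

```typescript
// Raft availability rule: a cluster of n consenters needs a majority
// (quorum) online, so it can tolerate floor((n - 1) / 2) failed nodes.
export const quorum = (n: number): number => Math.floor(n / 2) + 1;
export const tolerableFailures = (n: number): number => n - quorum(n);

// e.g. 3 nodes -> quorum 2, tolerates 1 failure;
//      5 nodes -> quorum 3, tolerates 2; 7 nodes -> tolerates 3.
```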


Re: Extend network with new orderers while is running

Tom Lamm
 

As the scenarios become more complex, multi-channel, which nodes participate in each, etc… I can only refer you to the docs.

The answer to “Are all the nodes joining the channel?” is “Some or all peers and some or all orderers, depending on your configuration.”

Also, you asked about reconfiguring the consensus rules to make changes during a peer outage “until the k8s mechanism restarts the dead node”. Understand that, practically, you won’t be able to: the K8s control plane will attempt to restart that node within minutes. The real question is, “Why did that node fail, and will the K8s restart succeed, or only fail again for the same reason?”

I believe you are looking for short answers to complex scenarios. I suggest reviewing the docs and experimenting through some actual tests with a network that is close to your actual requirements and goals. 

On Apr 5, 2022, at 7:07 AM, Nikos Karamolegkos <nkaram@...> wrote:

Also, how can I list which orderer nodes are consenters? Are all the nodes joining the channel?


Re: Extend network with new orderers while is running

Nikos Karamolegkos
 

Also, how can I list which orderer nodes are consenters? Are all the nodes joining the channel?