
Re: Kafka and zookeepers persistent data #kafka #fabric

Gari Singh <garis@...>
 


-----------------------------------------
Gari Singh
Distinguished Engineer, CTO - IBM Blockchain
IBM Middleware
550 King St
Littleton, MA 01460
Cell: 978-846-7499
garis@...
-----------------------------------------
 
 

----- Original message -----
From: "Joao Antunes" <joao.antunes@...>
Sent by: fabric@...
To: fabric@...
Cc:
Subject: [EXTERNAL] [Hyperledger Fabric] Kafka and zookeepers persistent data #fabric #kafka
Date: Tue, Dec 3, 2019 8:03 AM
 
Hi,

Recently, after some changes in my network and a restart, my Kafka and ZooKeeper nodes ended up in a state inconsistent with my orderers, similar to the following ticket:
https://jira.hyperledger.org/browse/FAB-15985

Unfortunately, I had to reset my network and delete all the data.
Having read this ticket, what do I need to persist in Kafka and ZooKeeper so that my network won't enter this state again?

Kind regards,
João Antunes
 


Re: identity authentication

Nicholas Leonardi
 

You can use the Kong service manager with JWT or any other form of authentication to control access to the services that submit transactions to the Fabric network. In this case, the user certificate must be present with the services.
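For illustration, here is a minimal sketch (in Go) of the signature check such a service could perform before submitting a transaction on the user's behalf. It assumes the user signs the request body with the ECDSA key matching their enrollment certificate; the function name and inputs are hypothetical, not an existing Fabric API:

package paymentauth

import (
	"crypto/ecdsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"errors"
)

// verifyRequest checks that sig is a valid ASN.1-encoded ECDSA signature over
// body, made with the private key matching the user's X.509 certificate (PEM).
func verifyRequest(certPEM, body, sig []byte) error {
	blk, _ := pem.Decode(certPEM)
	if blk == nil {
		return errors.New("no PEM certificate found")
	}
	cert, err := x509.ParseCertificate(blk.Bytes)
	if err != nil {
		return err
	}
	pub, ok := cert.PublicKey.(*ecdsa.PublicKey)
	if !ok {
		return errors.New("certificate key is not ECDSA")
	}
	digest := sha256.Sum256(body)
	if !ecdsa.VerifyASN1(pub, digest[:], sig) {
		return errors.New("signature does not verify against the user certificate")
	}
	// A real service would also verify that the certificate chains to the
	// organization's CA before trusting it.
	return nil
}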

On Tuesday, December 3, 2019 at 10:08:44 BRT, qs meng <qsmeng@...> wrote:


Hi experts, 
      In the current Fabric design, the client app is the user of Fabric. Running a client app is a heavy cost for a mobile-phone user. We are designing a payment system in which a user signs a payment request with his/her private key and submits it to a client app, which then submits it to Fabric.
The problem is: how can the identity of the user (the requestor) be authenticated? Can anyone give me some suggestions?
 Thank you.
 Best regards,
qsmeng


 


identity authentication

qs meng <qsmeng@...>
 

Hi experts, 
      In the current Fabric design, the client app is the user of Fabric. Running a client app is a heavy cost for a mobile-phone user. We are designing a payment system in which a user signs a payment request with his/her private key and submits it to a client app, which then submits it to Fabric.
The problem is: how can the identity of the user (the requestor) be authenticated? Can anyone give me some suggestions?
 Thank you.
 Best regards,
qsmeng


 


Kafka and zookeepers persistent data #kafka #fabric

Joao Antunes
 

Hi,

Recently, after some changes in my network and a restart, my Kafka and ZooKeeper nodes ended up in a state inconsistent with my orderers, similar to the following ticket:
https://jira.hyperledger.org/browse/FAB-15985

Unfortunately, I had to reset my network and delete all the data.
Having read this ticket, what do I need to persist in Kafka and ZooKeeper so that my network won't enter this state again?

Kind regards,
João Antunes
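For what it's worth, a minimal sketch of the persistence usually needed, assuming a Docker Compose deployment: Kafka's log directory (log.dirs, where the topic data lives) and ZooKeeper's data and transaction-log directories must survive container restarts. The container paths and service names below are assumptions; check what your images actually configure before relying on them:

services:
  kafka0:
    image: hyperledger/fabric-kafka
    environment:
      - KAFKA_LOG_DIRS=/var/kafka-logs   # where Kafka keeps its topic data
    volumes:
      - kafka0-data:/var/kafka-logs
  zookeeper0:
    image: hyperledger/fabric-zookeeper
    volumes:
      - zookeeper0-data:/data            # ZooKeeper dataDir
      - zookeeper0-datalog:/datalog      # ZooKeeper dataLogDir
volumes:
  kafka0-data:
  zookeeper0-data:
  zookeeper0-datalog: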


Re: Where i can find documentation about HL-Fabric protobufs structure and GRPC communications between nodes? #fabric #grpc #network

Aleksandr Kochetkov
 

Thanks for bringing this repo into the conversation, Yacov!
As we can see from the repo, there are a lot of protobufs, and sometimes one uses another as its payload/data, etc. That is exactly why I'm asking for documentation that explains how all these protobufs are connected and used in Fabric.


Re: Where i can find documentation about HL-Fabric protobufs structure and GRPC communications between nodes? #fabric #grpc #network

Yacov
 

Take a look at https://github.com/hyperledger/fabric-protos/
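As a starting point, a minimal sketch of decoding a block with the Fabric 1.4 proto packages: each entry of Block.Data.Data is a serialized common.Envelope, whose payload header carries the channel header with the channel ID, tx ID, and type. The input file name is illustrative (e.g. the output of peer channel fetch):

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/golang/protobuf/proto"
	"github.com/hyperledger/fabric/protos/common"
)

func main() {
	raw, err := ioutil.ReadFile("mychannel_0.block") // e.g. fetched with `peer channel fetch`
	if err != nil {
		panic(err)
	}
	block := &common.Block{}
	if err := proto.Unmarshal(raw, block); err != nil {
		panic(err)
	}
	fmt.Println("block number:", block.Header.Number)

	// Each entry in block.Data.Data is a serialized common.Envelope.
	env := &common.Envelope{}
	if err := proto.Unmarshal(block.Data.Data[0], env); err != nil {
		panic(err)
	}
	payload := &common.Payload{}
	if err := proto.Unmarshal(env.Payload, payload); err != nil {
		panic(err)
	}
	chdr := &common.ChannelHeader{}
	if err := proto.Unmarshal(payload.Header.ChannelHeader, chdr); err != nil {
		panic(err)
	}
	fmt.Println("channel:", chdr.ChannelId, "tx:", chdr.TxId, "type:", common.HeaderType(chdr.Type))
}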



From:        "Prasanth Sundaravelu" <prasanths96@...>
To:        Aleksandr Kochetkov <aleksandr.kochetkov@...>
Cc:        fabric@...
Date:        12/03/2019 12:33 PM
Subject:        [EXTERNAL] Re: [Hyperledger Fabric] Where i can find documentation about HL-Fabric protobufs structure and GRPC communications between nodes? #fabric #grpc #network
Sent by:        fabric@...




I would also love to get the same!


On Tue, 3 Dec 2019, 4:01 pm Aleksandr Kochetkov, <aleksandr.kochetkov@...> wrote:
I'm looking for documentation, or just a detailed description in some form, of the Fabric protobuf structure and usage. Nodes communicate with each other using gRPC, but it's not clear what exactly they send to each other. Blocks and transactions are also stored as protobufs inside LevelDB, so I need to know which protobufs to use to decode blocks. Of course it's possible to reverse engineer all that, but it takes a lot of time. Maybe these docs are currently not well formatted or not ready to be published in the official docs, but it would be great to have them even in draft form. Thanks in advance!





Re: Where i can find documentation about HL-Fabric protobufs structure and GRPC communications between nodes? #fabric #grpc #network

Prasanth Sundaravelu
 

I would also love to get the same!


On Tue, 3 Dec 2019, 4:01 pm Aleksandr Kochetkov, <aleksandr.kochetkov@...> wrote:
I'm looking for documentation, or just a detailed description in some form, of the Fabric protobuf structure and usage. Nodes communicate with each other using gRPC, but it's not clear what exactly they send to each other. Blocks and transactions are also stored as protobufs inside LevelDB, so I need to know which protobufs to use to decode blocks. Of course it's possible to reverse engineer all that, but it takes a lot of time. Maybe these docs are currently not well formatted or not ready to be published in the official docs, but it would be great to have them even in draft form. Thanks in advance!


Where i can find documentation about HL-Fabric protobufs structure and GRPC communications between nodes? #fabric #grpc #network

Aleksandr Kochetkov
 

I'm looking for documentation, or just a detailed description in some form, of the Fabric protobuf structure and usage. Nodes communicate with each other using gRPC, but it's not clear what exactly they send to each other. Blocks and transactions are also stored as protobufs inside LevelDB, so I need to know which protobufs to use to decode blocks. Of course it's possible to reverse engineer all that, but it takes a lot of time. Maybe these docs are currently not well formatted or not ready to be published in the official docs, but it would be great to have them even in draft form. Thanks in advance!


Re: #fabric #raft Orderers and organization, how to organize them? #fabric #raft

Jean-Gaël Dominé <jgdomine@...>
 

Hi all,

I cannot say why this is considered the "best practice"... I was merely trying to summarize what Joe was explaining to me, but I don't have a good explanation to give you.

Sorry


Peer Failed connecting to Orderer: Channel created but Peer is not connecting to Orderer and unable to retrieve blocks from ordering service. #fabric-sdk-node #docker #raft #consensus #cal-notice

Akshay Soni
 

We are not using the CLI. We are creating the channel using the Node SDK and creating the infrastructure using a shell script.
We are using Raft consensus with 5 orderers, and the Raft cluster is working fine.
The orderers elected a leader and are able to communicate with each other.
When we created the new channel and the peer joined, the peer created the ledger with the genesis block, joined the channel's gossip network, created the state database, and started the delivery service on the channel.
When the peer tried to retrieve blocks from the ordering service, it gave the following error:

2019-12-03 09:03:39.478 UTC [deliveryClient] StartDeliverForChannel -> INFO a88 This peer will retrieve blocks from ordering service and disseminate to other peers in the organization for channel swapchannel
2019-12-03 09:03:42.479 UTC [ConnProducer] NewConnection -> ERRO a89 Failed connecting to {localhost:51015 []} , error: context deadline exceeded
2019-12-03 09:03:45.482 UTC [ConnProducer] NewConnection -> ERRO a8a Failed connecting to {localhost:51014 []} , error: context deadline exceeded
2019-12-03 09:03:48.483 UTC [ConnProducer] NewConnection -> ERRO a8b Failed connecting to {oodjaeuen108-orderer4.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:03:51.485 UTC [ConnProducer] NewConnection -> ERRO a8c Failed connecting to {oodjaeuen108-orderer1.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:03:54.487 UTC [ConnProducer] NewConnection -> ERRO a8d Failed connecting to {localhost:51012 []} , error: context deadline exceeded
2019-12-03 09:03:57.488 UTC [ConnProducer] NewConnection -> ERRO a8e Failed connecting to {oodjaeuen108-orderer2.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:00.490 UTC [ConnProducer] NewConnection -> ERRO a8f Failed connecting to {oodjaeuen108-orderer3.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:03.492 UTC [ConnProducer] NewConnection -> ERRO a90 Failed connecting to {oodjaeuen108-orderer0.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:06.493 UTC [ConnProducer] NewConnection -> ERRO a91 Failed connecting to {localhost:51011 []} , error: context deadline exceeded
2019-12-03 09:04:09.495 UTC [ConnProducer] NewConnection -> ERRO a92 Failed connecting to {localhost:51013 []} , error: context deadline exceeded
2019-12-03 09:04:09.495 UTC [ConnProducer] NewConnection -> ERRO a93 Could not connect to any of the endpoints: [{localhost:51015 []} {localhost:51014 []} {oodjaeuen108-orderer4.orderer.com:7050 []} {oodjaeuen108-orderer1.orderer.com:7050 []} {localhost:51012 []} {oodjaeuen108-orderer2.orderer.com:7050 []} {oodjaeuen108-orderer3.orderer.com:7050 []} {oodjaeuen108-orderer0.orderer.com:7050 []} {localhost:51011 []} {localhost:51013 []}]
2019-12-03 09:04:09.495 UTC [deliveryClient] connect -> ERRO a94 Failed obtaining connection: could not connect to any of the endpoints: [{localhost:51015 []} {localhost:51014 []} {oodjaeuen108-orderer4.orderer.com:7050 []} {oodjaeuen108-orderer1.orderer.com:7050 []} {localhost:51012 []} {oodjaeuen108-orderer2.orderer.com:7050 []} {oodjaeuen108-orderer3.orderer.com:7050 []} {oodjaeuen108-orderer0.orderer.com:7050 []} {localhost:51011 []} {localhost:51013 []}]
2019-12-03 09:04:09.495 UTC [deliveryClient] try -> WARN a95 Got error: could not connect to any of the endpoints: [{localhost:51015 []} {localhost:51014 []} {oodjaeuen108-orderer4.orderer.com:7050 []} {oodjaeuen108-orderer1.orderer.com:7050 []} {localhost:51012 []} {oodjaeuen108-orderer2.orderer.com:7050 []} {oodjaeuen108-orderer3.orderer.com:7050 []} {oodjaeuen108-orderer0.orderer.com:7050 []} {localhost:51011 []} {localhost:51013 []}] , at 1 attempt. Retrying in 1s
2019-12-03 09:04:13.497 UTC [ConnProducer] NewConnection -> ERRO a96 Failed connecting to {localhost:51015 []} , error: context deadline exceeded
2019-12-03 09:04:16.498 UTC [ConnProducer] NewConnection -> ERRO a97 Failed connecting to {localhost:51014 []} , error: context deadline exceeded
2019-12-03 09:04:19.500 UTC [ConnProducer] NewConnection -> ERRO a98 Failed connecting to {oodjaeuen108-orderer4.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:22.501 UTC [ConnProducer] NewConnection -> ERRO a99 Failed connecting to {oodjaeuen108-orderer1.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:25.503 UTC [ConnProducer] NewConnection -> ERRO a9a Failed connecting to {localhost:51012 []} , error: context deadline exceeded
2019-12-03 09:04:28.505 UTC [ConnProducer] NewConnection -> ERRO a9b Failed connecting to {oodjaeuen108-orderer2.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:31.507 UTC [ConnProducer] NewConnection -> ERRO a9c Failed connecting to {oodjaeuen108-orderer3.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:34.508 UTC [ConnProducer] NewConnection -> ERRO a9d Failed connecting to {oodjaeuen108-orderer0.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:37.509 UTC [ConnProducer] NewConnection -> ERRO a9e Failed connecting to {localhost:51011 []} , error: context deadline exceeded
2019-12-03 09:04:40.511 UTC [ConnProducer] NewConnection -> ERRO a9f Failed connecting to {localhost:51013 []} , error: context deadline exceeded
2019-12-03 09:04:40.511 UTC [ConnProducer] NewConnection -> ERRO aa0 Could not connect to any of the endpoints: [{localhost:51015 []} {localhost:51014 []} {oodjaeuen108-orderer4.orderer.com:7050 []} {oodjaeuen108-orderer1.orderer.com:7050 []} {localhost:51012 []} {oodjaeuen108-orderer2.orderer.com:7050 []} {oodjaeuen108-orderer3.orderer.com:7050 []} {oodjaeuen108-orderer0.orderer.com:7050 []} {localhost:51011 []} {localhost:51013 []}]
2019-12-03 09:04:40.511 UTC [deliveryClient] connect -> ERRO aa1 Failed obtaining connection: could not connect to any of the endpoints: [{localhost:51015 []} {localhost:51014 []} {oodjaeuen108-orderer4.orderer.com:7050 []} {oodjaeuen108-orderer1.orderer.com:7050 []} {localhost:51012 []} {oodjaeuen108-orderer2.orderer.com:7050 []} {oodjaeuen108-orderer3.orderer.com:7050 []} {oodjaeuen108-orderer0.orderer.com:7050 []} {localhost:51011 []} {localhost:51013 []}]
2019-12-03 09:04:40.511 UTC [deliveryClient] try -> WARN aa2 Got error: could not connect to any of the endpoints: [{localhost:51015 []} {localhost:51014 []} {oodjaeuen108-orderer4.orderer.com:7050 []} {oodjaeuen108-orderer1.orderer.com:7050 []} {localhost:51012 []} {oodjaeuen108-orderer2.orderer.com:7050 []} {oodjaeuen108-orderer3.orderer.com:7050 []} {oodjaeuen108-orderer0.orderer.com:7050 []} {localhost:51011 []} {localhost:51013 []}] , at 2 attempt. Retrying in 2s
2019-12-03 09:04:45.512 UTC [ConnProducer] NewConnection -> ERRO aa3 Failed connecting to {localhost:51015 []} , error: context deadline exceeded
2019-12-03 09:04:48.514 UTC [ConnProducer] NewConnection -> ERRO aa4 Failed connecting to {localhost:51014 []} , error: context deadline exceeded
2019-12-03 09:04:51.515 UTC [ConnProducer] NewConnection -> ERRO aa5 Failed connecting to {oodjaeuen108-orderer4.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:54.517 UTC [ConnProducer] NewConnection -> ERRO aa6 Failed connecting to {oodjaeuen108-orderer1.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:04:57.518 UTC [ConnProducer] NewConnection -> ERRO aa7 Failed connecting to {localhost:51012 []} , error: context deadline exceeded
2019-12-03 09:05:00.520 UTC [ConnProducer] NewConnection -> ERRO aa8 Failed connecting to {oodjaeuen108-orderer2.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:05:03.522 UTC [ConnProducer] NewConnection -> ERRO aa9 Failed connecting to {oodjaeuen108-orderer3.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:05:06.523 UTC [ConnProducer] NewConnection -> ERRO aaa Failed connecting to {oodjaeuen108-orderer0.orderer.com:7050 []} , error: context deadline exceeded
2019-12-03 09:05:09.525 UTC [ConnProducer] NewConnection -> ERRO aab Failed connecting to {localhost:51011 []} , error: context deadline exceeded
2019-12-03 09:05:12.526 UTC [ConnProducer] NewConnection -> ERRO aac Failed connecting to {localhost:51013 []} , error: context deadline exceeded
2019-12-03 09:05:12.526 UTC [ConnProducer] NewConnection -> ERRO aad Could not connect to any of the endpoints: [{localhost:51015 []} {localhost:51014 []} {oodjaeuen108-orderer4.orderer.com:7050 []} {oodjaeuen108-orderer1.orderer.com:7050 []} {localhost:51012 []} {oodjaeuen108-orderer2.orderer.com:7050 []} {oodjaeuen108-orderer3.orderer.com:7050 []} {oodjaeuen108-orderer0.orderer.com:7050 []} {localhost:51011 []} {localhost:51013 []}]
2019-12-03 09:05:12.526 UTC [deliveryClient] connect -> ERRO aae Failed obtaining connection: could not connect to any of the endpoints: [{localhost:51015 []} {localhost:51014 []} {oodjaeuen108-orderer4.orderer.com:7050 []} {oodjaeuen108-orderer1.orderer.com:7050 []} {localhost:51012 []} {oodjaeuen108-orderer2.orderer.com:7050 []} {oodjaeuen108-orderer3.orderer.com:7050 []} {oodjaeuen108-orderer0.orderer.com:7050 []} {localhost:51011 []} {localhost:51013 []}]
2019-12-03 09:05:12.526 UTC [deliveryClient] try -> WARN aaf Got error: could not connect to any of the endpoints: [{localhost:51015 []} {localhost:51014 []} {oodjaeuen108-orderer4.orderer.com:7050 []} {oodjaeuen108-orderer1.orderer.com:7050 []} {localhost:51012 []} {oodjaeuen108-orderer2.orderer.com:7050 []} {oodjaeuen108-orderer3.orderer.com:7050 []} {oodjaeuen108-orderer0.orderer.com:7050 []} {localhost:51011 []} {localhost:51013 []}] , at 3 attempt. Retrying in 4s



We are using the following versions of tools:

Fabric version: 1.4.4
NodeJS version: v8.16.0
NPM version: 6.4.1
Docker version: 18.09.7
docker-compose version: 1.24.1

Please help us with this. It is a critical issue for us.
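One thing worth checking, given the localhost endpoints in the log: the peer's deliver client dials the orderer addresses published in the channel configuration (the Addresses list under the Orderer section of configtx.yaml), and localhost:510xx is only reachable from the host, not from inside the peer container. A sketch using the container-resolvable names from the logs above (verify that these names resolve and are reachable from inside the peer container):

Orderer: &OrdererDefaults
    Addresses:
        - oodjaeuen108-orderer0.orderer.com:7050
        - oodjaeuen108-orderer1.orderer.com:7050
        - oodjaeuen108-orderer2.orderer.com:7050
        - oodjaeuen108-orderer3.orderer.com:7050
        - oodjaeuen108-orderer4.orderer.com:7050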


Re: Is there a way to block chaincode access for SDK? #fabric-chaincode

Prasanth Sundaravelu
 

No, we need the data to exist on all the nodes in the network. No hiding required.

Can you elaborate a little on how you suggest using private data here? Are you thinking of some special way of exploiting it?

On Tue, 3 Dec 2019, 8:18 am Mayank Tiwari, <sidharth.mayank@...> wrote:
Prasanth, did you check the private data collection implementation in the chaincodes?


Regards,
Mayank Tiwari.


On Tue, 3 Dec 2019 at 3:26 AM, Prasanth Sundaravelu <prasanths96@...> wrote:
Thanks for the quick reply!  I will try those suggestions.

I had thought separating these chaincodes would improve manageability and might increase development pace. Although you make a valid point, I believe chaincode separation might be a good idea for our use case.

We actually have different categories of business logic that our customers can choose from; later they can add more as per their needs. Several of these services depend on a common set of services. For this pay-per-service-like flexibility, we chose this approach.

On Tue, 3 Dec 2019, 2:55 am Yacov Manevich, <YACOVM@...> wrote:
Each chaincode corresponds to a different namespace, and has a different endorsement policy.
Software engineering idioms such as "separation of business logic from tech logic" should never be a reason to separate one chaincode into several chaincodes.

That being said, you can actually prevent users from invoking certain chaincodes by writing and deploying your own authentication filter in the peer.
Take a look at https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/core.yaml#L360-L374 and at a built-in filter we have for blocking expired client certificates in https://github.com/hyperledger/fabric/blob/release-1.4/core/handlers/auth/filter/expiration.go.

Basically you need to extract the target chaincode name from the signed proposal and return an error if it doesn't fit your whitelist.



From:        "Prasanth Sundaravelu" <prasanths96@...>
To:        fabric@...
Date:        12/02/2019 11:11 PM
Subject:        [EXTERNAL] [Hyperledger Fabric] Is there a way to block chaincode access for SDK? #fabric-chaincode
Sent by:        fabric@...




Hi Guys,

I've been trying to separate one big chaincode into multiple chaincodes, partly to separate business logic from tech logic.

Here, the tech-logic service (e.g. EncryptAndSaveState) needs to be accessed by multiple other chaincodes.

I have separated the code into different chaincodes and instantiated them on the same single channel.

The problem is, if I want to use one chaincode's service from another, I have to expose the functions through the chaincode's Invoke / Query functions so that I can use stub.InvokeChaincode() to call these services. But if I expose these functions (e.g. EncryptAndSaveState), they will be accessible by the SDK as well. I don't want the tech-services chaincode to be accessed via the SDK.

Is there a way to identify whether a request is coming from a chaincode (stub.InvokeChaincode()) or from the SDK?
or,
Is there a way to set configuration at the peer so that requests coming from the SDK cannot access certain chaincodes by name?

I've tried a workaround for this by generating and storing a map of TxID and random number at the calling chaincode and attaching this random number to the invocation. When the called chaincode receives the invoke, it queries back to the calling chaincode (using stub.InvokeChaincode() again) to verify that this random number was in fact generated by that chaincode. But the last invoke (the verification) did not work; it threw: "GRPC client failed to get a proper response from the peer \"grpcs://localhost:8051\""

I would also like to know why this does not work.

Would really appreciate any clue.





Re: problem creating channel: 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies

Adhav Pavan
 

Hello Siddharth,

Thank you for the detailed log.

I had faced the same issue. As mentioned in the log ("DEBU 521 0xc0005f9180 identity 0 does not satisfy principal: The identity is not an admin under this MSP [OrdOrgMSP]: The identity does not contain OU [ADMIN], MSP: [OrdOrgMSP]"), the default cryptogen tool creates the admin user certificate with the OU set to client, while the policy expects admin.


While creating the orderer organization admin certificate, add the following to crypto-config.yaml in the orderer section:
CA:
  OrganizationalUnit: admin

Once you have added the part above, recreate the certificates and check that the orderer organization admin user certificate has its OU set to admin with the following command:

openssl x509 -in certificate.crt -text


If you are still facing any issues, let me know.

Thank you.

Heartfelt Regards,
Pavan Adhav

Blockchain Developer
Cell Phone: +91-8390114357  E-Mail: adhavpavan@...



On Mon, Dec 2, 2019 at 11:47 PM Siddharth Jain <siddjain@...> wrote:
Below are our truncated logs:

2019-12-02 09:42:46.771 PST [policies] Evaluate -> DEBU 4d3 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/ChannelCreationPolicy ==
2019-12-02 09:44:32.662 PST [policies] Evaluate -> DEBU 4ef Signature set satisfies policy /Channel/Application/Org1/Admins

2019-12-02 09:45:03.088 PST [policies] Evaluate -> DEBU 511 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers ==
2019-12-02 09:45:34.699 PST [policies] Evaluate -> DEBU 513 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Writers ==
2019-12-02 09:46:26.038 PST [policies] Evaluate -> DEBU 515 == Evaluating *cauthdsl.policy Policy /Channel/Orderer/OrdOrg/Writers ==
2019-12-02 09:46:26.039 PST [msp] satisfiesPrincipalInternalV143 -> DEBU 51b Checking if identity has been named explicitly as an admin for OrdOrgMSP
2019-12-02 09:46:26.039 PST [msp] satisfiesPrincipalInternalV143 -> DEBU 51c Checking if identity carries the admin ou for OrdOrgMSP
2019-12-02 09:46:26.040 PST [msp] hasOURole -> DEBU 51f MSP OrdOrgMSP checking if the identity is a client
2019-12-02 09:46:26.040 PST [cauthdsl] func2 -> DEBU 521 0xc0005f9180 identity 0 does not satisfy principal: The identity is not an admin under this MSP [OrdOrgMSP]: The identity does not contain OU [ADMIN], MSP: [OrdOrgMSP]
2019-12-02 09:46:26.040 PST [cauthdsl] func2 -> DEBU 522 0xc0005f9180 principal evaluation fails
2019-12-02 09:46:26.040 PST [msp] satisfiesPrincipalInternalPreV13 -> DEBU 525 Checking if identity satisfies role [CLIENT] for OrdOrgMSP
2019-12-02 09:46:26.042 PST [cauthdsl] func2 -> DEBU 52a 0xc0005f9180 identity 0 does not satisfy principal: The identity is not a [CLIENT] under this MSP [OrdOrgMSP]: The identity does not contain OU [CLIENT], MSP: [OrdOrgMSP]
2019-12-02 09:46:26.042 PST [policies] Evaluate -> DEBU 52d Signature set did not satisfy policy /Channel/Orderer/OrdOrg/Writers
2019-12-02 09:48:07.333 PST [policies] func1 -> DEBU 52f Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ OrdOrg/Writers ]
2019-12-02 09:48:07.333 PST [policies] Evaluate -> DEBU 533 Signature set did not satisfy policy /Channel/Orderer/Writers

It is true that in configtx.yaml we have defined:

Policies: &OrdPolicies
            Readers:
                Type: Signature                
                Rule: "OR('OrdOrgMSP.admin', 'OrdOrgMSP.orderer', 'OrdOrgMSP.client')"
               
            Writers:
                Type: Signature
                Rule: "OR('OrdOrgMSP.admin', 'OrdOrgMSP.client')"
               
            Admins:
                Type: Signature
                Rule: "OR('OrdOrgMSP.admin')"

and we are calling channel create as an admin of Org1, but we used the same pattern as in https://github.com/hyperledger/fabric-samples/blob/release-1.4/first-network/configtx.yaml:

Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('OrdererMSP.admin')"

so what gives?

From: Nikhil E Gupta <negupta@...>
Sent: Monday, December 2, 2019 6:00 AM
To: Siddharth Jain <siddjain@...>
Cc: fabric@... <fabric@...>
Subject: Re: [Hyperledger Fabric] problem creating channel: 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies
 
Hi Siddharth,

This error is caused by a certificate problem. You can investigate further by checking your orderer logs.

This Stack Overflow post: https://stackoverflow.com/questions/57662562/when-i-try-to-create-a-channel-using-hyperledger-fabric-the-request-fails/57662645#57662645 has a good overview of what to look for when you check your orderer logs.

Nik



-----fabric@... wrote: -----
To: "fabric@..." <fabric@...>
From: "Siddharth Jain"
Sent by: fabric@...
Date: 11/30/2019 06:32PM
Subject: [EXTERNAL] [Hyperledger Fabric] problem creating channel: 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies

we get the error below when trying to create a channel using the peer CLI
2019-11-30 20:53:15.482 UTC [orderer.common.broadcast] ProcessMessage -> WARN 00c [channel: mychannel] Rejecting broadcast of config message from 172.18.0.1:51816 because of error: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
  • we have a 3 org network (plus orderer)
  • we are using NodeOUs
  • our configtx.yaml can be found at https://gist.github.com/siddjain/4cefde4321185c81a663f877fd6b105e
  • we are running the CLI under credentials of an admin user of one of the peer organizations
  • we checked the public cert and it has OU=admin on it. it was generated using cryptogen 1.4.4
how can we fix this? what is the cause?

fwiw, in case it helps, if we try to create the channel using credentials of admin of the orderer org we get a different error

2019-11-30 20:30:53.025 UTC [orderer.common.broadcast] ProcessMessage -> WARN 008 [channel: mychannel] Rejecting broadcast of config message from 172.18.0.1:51808 because of error: error validating channel creation transaction for new channel 'tracktrace', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group]  /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied



Re: Is there a way to block chaincode access for SDK? #fabric-chaincode

Mayank Tiwari
 

Prasanth, did you check the private data collection implementation in the chaincodes?


Regards,
Mayank Tiwari.


On Tue, 3 Dec 2019 at 3:26 AM, Prasanth Sundaravelu <prasanths96@...> wrote:
Thanks for the quick reply!  I will try those suggestions.

I had thought separating these chaincodes would improve manageability and might increase development pace. Although you make a valid point, I believe chaincode separation might be a good idea for our use case.

We actually have different categories of business logic that our customers can choose from; later they can add more as per their needs. Several of these services depend on a common set of services. For this pay-per-service-like flexibility, we chose this approach.

On Tue, 3 Dec 2019, 2:55 am Yacov Manevich, <YACOVM@...> wrote:
Each chaincode corresponds to a different namespace, and has a different endorsement policy.
Software engineering idioms such as "separation of business logic from tech logic" should never be a reason to separate one chaincode into several chaincodes.

That being said, you can actually prevent users from invoking certain chaincodes by writing and deploying your own authentication filter in the peer.
Take a look at https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/core.yaml#L360-L374 and at a built-in filter we have for blocking expired client certificates in https://github.com/hyperledger/fabric/blob/release-1.4/core/handlers/auth/filter/expiration.go.

Basically you need to extract the target chaincode name from the signed proposal and return an error if it doesn't fit your whitelist.



From:        "Prasanth Sundaravelu" <prasanths96@...>
To:        fabric@...
Date:        12/02/2019 11:11 PM
Subject:        [EXTERNAL] [Hyperledger Fabric] Is there a way to block chaincode access for SDK? #fabric-chaincode
Sent by:        fabric@...




Hi Guys,

I've been trying to separate one big chaincode into multiple chaincodes, partly to separate business logic from tech logic.

Here, the tech-logic service (e.g. EncryptAndSaveState) needs to be accessed by multiple other chaincodes.

I have separated the code into different chaincodes and instantiated them on the same single channel.

The problem is, if I want to use one chaincode's service from another, I have to expose the functions through the chaincode's Invoke / Query functions so that I can use stub.InvokeChaincode() to call these services. But if I expose these functions (e.g. EncryptAndSaveState), they will be accessible by the SDK as well. I don't want the tech-services chaincode to be accessed via the SDK.

Is there a way to identify whether a request is coming from a chaincode (stub.InvokeChaincode()) or from the SDK?
or,
Is there a way to set configuration at the peer so that requests coming from the SDK cannot access certain chaincodes by name?

I've tried a workaround for this by generating and storing a map of TxID and random number at the calling chaincode and attaching this random number to the invocation. When the called chaincode receives the invoke, it queries back to the calling chaincode (using stub.InvokeChaincode() again) to verify that this random number was in fact generated by that chaincode. But the last invoke (the verification) did not work; it threw: "GRPC client failed to get a proper response from the peer \"grpcs://localhost:8051\""

I would also like to know why this does not work.

Would really appreciate any clue.





Re: Is there a way to block chaincode access for SDK? #fabric-chaincode

Prasanth Sundaravelu
 

Thanks for the quick reply!  I will try those suggestions.

I had thought separating these chaincodes would improve manageability and might increase development pace. Although you make a valid point, I believe chaincode separation might be a good idea for our use case.

We actually have different categories of business logic that our customers can choose from; later they can add more as per their needs. Several of these services depend on a common set of services. For this pay-per-service-like flexibility, we chose this approach.

On Tue, 3 Dec 2019, 2:55 am Yacov Manevich, <YACOVM@...> wrote:
Each chaincode corresponds to a different namespace, and has a different endorsement policy.
Software engineering idioms such as "separation of business logic from tech logic" should never be a reason to separate one chaincode into several chaincodes.

That being said, you can actually prevent users from invoking certain chaincodes by writing and deploying your own authentication filter in the peer.
Take a look at https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/core.yaml#L360-L374 and at a built-in filter we have for blocking expired client certificates in https://github.com/hyperledger/fabric/blob/release-1.4/core/handlers/auth/filter/expiration.go.

Basically you need to extract the target chaincode name from the signed proposal and return an error if it doesn't fit your whitelist.



From:        "Prasanth Sundaravelu" <prasanths96@...>
To:        fabric@...
Date:        12/02/2019 11:11 PM
Subject:        [EXTERNAL] [Hyperledger Fabric] Is there a way to block chaincode access for SDK? #fabric-chaincode
Sent by:        fabric@...




Hi Guys,

I've been trying to separate one big chaincode into multiple chaincodes, partly to separate business logic from tech logic.

Here, the tech-logic service (e.g. EncryptAndSaveState) needs to be accessed by multiple other chaincodes.

I have separated the code into different chaincodes and instantiated them on the same single channel.

The problem is, if I want to use one chaincode's service from another, I have to expose the functions through the chaincode's Invoke / Query functions so that I can use stub.InvokeChaincode() to call these services. But if I expose these functions (e.g. EncryptAndSaveState), they will be accessible by the SDK as well. I don't want the tech-services chaincode to be accessed via the SDK.

Is there a way to identify whether a request is coming from a chaincode (stub.InvokeChaincode()) or from the SDK?
or,
Is there a way to set configuration at the peer so that requests coming from the SDK cannot access certain chaincodes by name?

I've tried a workaround for this by generating and storing a map of TxID and random number at the calling chaincode and attaching this random number to the invocation. When the called chaincode receives the invoke, it queries back to the calling chaincode (using stub.InvokeChaincode() again) to verify that this random number was in fact generated by that chaincode. But the last invoke (the verification) did not work; it threw: "GRPC client failed to get a proper response from the peer \"grpcs://localhost:8051\""

I would also like to know why this does not work.

Would really appreciate any clue.





Re: Is there a way to block chaincode access for SDK? #fabric-chaincode

Yacov
 

Each chaincode corresponds to a different namespace, and has a different endorsement policy.
Software engineering idioms such as "separation of business logic from tech logic" should never be a reason to separate one chaincode into several chaincodes.

That being said, you can actually prevent users from invoking certain chaincodes by writing and deploying your own authentication filter in the peer.
Take a look at https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/core.yaml#L360-L374 and at a built-in filter we have for blocking expired client certificates in https://github.com/hyperledger/fabric/blob/release-1.4/core/handlers/auth/filter/expiration.go.

Basically you need to extract the target chaincode name from the signed proposal and return an error if it doesn't fit your whitelist.
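To make this concrete, below is a rough, untested sketch of such a filter against the Fabric 1.4 auth-filter interface (core/handlers/auth); the whitelist contents and chaincode names are hypothetical. Note that chaincode-to-chaincode calls via stub.InvokeChaincode() do not pass through the endorser's auth filters, so a filter like this screens only client (SDK) proposals:

package filter

import (
	"context"
	"fmt"

	"github.com/golang/protobuf/proto"
	"github.com/hyperledger/fabric/core/handlers/auth"
	"github.com/hyperledger/fabric/protos/common"
	"github.com/hyperledger/fabric/protos/peer"
)

// whitelist of chaincodes that clients may invoke directly ("businesscc" is a
// hypothetical name).
var whitelist = map[string]bool{"businesscc": true}

type chaincodeFilter struct {
	next peer.EndorserServer
}

// NewFilter returns the filter; wire it into core.yaml under
// peer.handlers.authFilters.
func NewFilter() auth.Filter {
	return &chaincodeFilter{}
}

func (f *chaincodeFilter) Init(next peer.EndorserServer) {
	f.next = next
}

// ProcessProposal extracts the target chaincode name from the signed proposal
// and rejects the proposal if the chaincode is not whitelisted.
func (f *chaincodeFilter) ProcessProposal(ctx context.Context, signedProp *peer.SignedProposal) (*peer.ProposalResponse, error) {
	prop := &peer.Proposal{}
	if err := proto.Unmarshal(signedProp.ProposalBytes, prop); err != nil {
		return nil, fmt.Errorf("failed unmarshaling proposal: %v", err)
	}
	hdr := &common.Header{}
	if err := proto.Unmarshal(prop.Header, hdr); err != nil {
		return nil, fmt.Errorf("failed unmarshaling header: %v", err)
	}
	chdr := &common.ChannelHeader{}
	if err := proto.Unmarshal(hdr.ChannelHeader, chdr); err != nil {
		return nil, fmt.Errorf("failed unmarshaling channel header: %v", err)
	}
	ext := &peer.ChaincodeHeaderExtension{}
	if err := proto.Unmarshal(chdr.Extension, ext); err != nil {
		return nil, fmt.Errorf("failed unmarshaling chaincode header extension: %v", err)
	}
	if ext.ChaincodeId != nil && !whitelist[ext.ChaincodeId.Name] {
		return nil, fmt.Errorf("chaincode %s may not be invoked by clients", ext.ChaincodeId.Name)
	}
	return f.next.ProcessProposal(ctx, signedProp)
}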



From:        "Prasanth Sundaravelu" <prasanths96@...>
To:        fabric@...
Date:        12/02/2019 11:11 PM
Subject:        [EXTERNAL] [Hyperledger Fabric] Is there a way to block chaincode access for SDK? #fabric-chaincode
Sent by:        fabric@...




Hi Guys,

I've been trying to separate one big chaincode into multiple chaincodes, partly to separate business logic from tech logic.

Here, the tech-logic service (e.g. EncryptAndSaveState) needs to be accessed by multiple other chaincodes.

I have separated the code into different chaincodes and instantiated them on the same single channel.

The problem is, if I want to use one chaincode's service from another, I have to expose the functions through the chaincode's Invoke / Query functions so that I can use stub.InvokeChaincode() to call these services. But if I expose these functions (e.g. EncryptAndSaveState), they will be accessible by the SDK as well. I don't want the tech-services chaincode to be accessed via the SDK.

Is there a way to identify whether a request is coming from a chaincode (stub.InvokeChaincode()) or from the SDK?
or,
Is there a way to set configuration at the peer so that requests coming from the SDK cannot access certain chaincodes by name?

I've tried a workaround for this by generating and storing a map of TxID and random number at the calling chaincode and attaching this random number to the invocation. When the called chaincode receives the invoke, it queries back to the calling chaincode (using stub.InvokeChaincode() again) to verify that this random number was in fact generated by that chaincode. But the last invoke (the verification) did not work; it threw: "GRPC client failed to get a proper response from the peer \"grpcs://localhost:8051\""

I would also like to know why this does not work.

Would really appreciate any clue.





Re: Proposal : Hyperledger Fabric block archiving

Manish
 

Hi Atsushi,

My responses are in-lined in the text below…

Thanks,
Manish

On Mon, Dec 2, 2019 at 4:10 PM Manish Sethi <manish.sethi@...> wrote:


On Fri, Nov 29, 2019 at 12:44 AM nekia <atsushin@...> wrote:
Thanks, Manish, Yacov, and Gari.
 
 
I really appreciate your feedback and insights.
 
(Feedback from Manish)
First, the fundamental assumption you make, that all the block files are the same across peers, is incorrect. The block files are not guaranteed to contain the same number of blocks across peers, because a block file is bounded by file size, not by the number of blocks. Further, the size of a given block may vary slightly on each peer: though the header and data sections of a block are the same size across peers, the difference in overall size can be caused by the metadata section, which contains consenter signatures. ...
Thank you so much for quite an important point. We're now reviewing and analyzing the implementation of Fabric around metadata. Let me ask a question to clarify my understanding.
Say, for example, block #100 is available on channel 'mychannel' within organization 'org1'. Does that mean the metadata of block #100 on peer0.org1 can differ from the metadata of the same block (#100) on a different peer (e.g. peer1.org1)? If yes, you are right that our assumption is incorrect. That is, our feature will not be able to refer to block data (from any peer node) residing on the archive repository, because the locPointer (the offset and length of each block within a blockfile) is not available for archived blockfiles on the repository.

Yes, that is the case I was highlighting…
Second, there are certain kinds of queries for which a peer assumes the presence of block files in the current way. This primarily includes history queries, block-related queries, and txid-related queries. These queries may start failing or lead to crashes or unexpected results if you simply delete the files. ...
Third, somewhat similar to the second point above, the peer has a feature wherein it rebuilds the statedb and historydb if they are dropped and the peer is simply restarted. For this feature as well it relies on the presence of blockfiles.
We have catered for these situations. Each peer node is still able to access all the blockfiles (even if they're archived and discarded) seamlessly via the as-is interface. Even after archiving blockfiles, the blockchain characteristics are still maintained on the network, so rebuilding the statedb and accessing the historydb still work under this archiving feature.
Note: Hyperledger Fabric core has been modified to handle query failures when it attempts to access deleted blockfiles.

I see now. This important detail was not covered in the proposal, and hence I was under the impression that you were not modifying the core Fabric code. Given the first point above, this would cause more changes in the peer core.
Fourth, I am not sure if gossip is the right communication mechanism that you want to employ for this communication. An archiver client perhaps can simply poll (or register for updates with) the archiver repository.
In an early stage of our development, we used a polling mechanism to trigger archiving. But for efficiency (process and network traffic), we changed the implementation to be event driven.
 
As I mentioned previously, it can still be event driven (an event from the archiver repo). My main point was point-to-point communication vs. gossip.
Finally, I would like to understand in more detail what the benefits of having a separate repository are. Why not simply let the files be there on the anchor peer and purge them from other peers? If the answer is compression, then ideally we should explore the choice of writing the data in blockfiles in compressed format.
Good point. We designed this archiving feature to be as simple as possible (that is, minimal code changes to Hyperledger Fabric core). With the repository concept, we're able to access all the blockfiles (even if they're archived and discarded) seamlessly via the as-is interface.
All I wanted to say here is that it would be good if one of the peers could act as the repo as well… in other words, it still has everything a repo offers, and the code will be outside the core peer code anyway. But this is a less important point compared to the others, I guess.
 
  
(Feedback from Yacov)
One thing I noticed while skimming the code is that while you send the ArchivedBlockFileMsg via gossip, you are not ensuring it is eventually propagated to peers successfully.
 
This means that if a peer didn't get the message, it won't archive your file.
 
I suggest that you think of a more robust mechanism, like periodically comparing digests of ranges.
You're right. This kind of logic is lacking from our current implementation. It was actually on our radar, but we had difficulty implementing this aspect. Thank you for pointing to reference code for pull-based gossip.
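For what it's worth, one shape the "periodically comparing digests of ranges" idea could take, with entirely hypothetical names: hash the identifiers of the already-archived blockfiles and pull the missed archive notices whenever a neighbour's digest differs.

package archiver

import (
	"crypto/sha256"
	"encoding/binary"
	"time"
)

// digestOfArchivedRange hashes the suffix numbers of archived blockfiles so
// that two peers can cheaply compare what they have archived.
func digestOfArchivedRange(archived []uint64) [32]byte {
	h := sha256.New()
	buf := make([]byte, 8)
	for _, n := range archived {
		binary.BigEndian.PutUint64(buf, n)
		h.Write(buf)
	}
	var d [32]byte
	copy(d[:], h.Sum(nil))
	return d
}

// reconcileLoop periodically compares the local digest against a neighbour's
// and pulls the archive notices this peer missed whenever they differ.
func reconcileLoop(local func() []uint64, remoteDigest func() [32]byte, pull func()) {
	for range time.Tick(time.Minute) {
		if digestOfArchivedRange(local()) != remoteDigest() {
			pull()
		}
	}
}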
 
(Feedback from Gari)
It seems the only thing you really wanted to use the peer for was to propagate information to other peers within the same organization. ...
Yes, that is one of the reasons we integrated the archiving features into the peer binary. But the most important reason is handling query failures when a peer attempts to access deleted blockfiles. And each peer node is still able to access all the blockfiles (even if they're archived and discarded) seamlessly via the as-is interface.
 
 
Thanks,
Atsushi
 


Is there a way to block chaincode access for SDK? #fabric-chaincode

Prasanth Sundaravelu
 

Hi Guys,

I've been trying to separate one big chaincode into multiple chaincodes, partly to separate business logic from tech logic.

Here, the tech-logic service (e.g. EncryptAndSaveState) needs to be accessed by multiple other chaincodes.

I have separated the code into different chaincodes and instantiated them on the same single channel.

The problem is, if I want to use one chaincode's service from another, I have to expose the functions through the chaincode's Invoke / Query functions so that I can use stub.InvokeChaincode() to call these services. But if I expose these functions (e.g. EncryptAndSaveState), they will be accessible by the SDK as well. I don't want the tech-services chaincode to be accessed via the SDK.

Is there a way to identify whether a request is coming from a chaincode (stub.InvokeChaincode()) or from the SDK?
or,
Is there a way to set configuration at the peer so that requests coming from the SDK cannot access certain chaincodes by name?

I've tried a workaround for this by generating and storing a map of TxID and random number at the calling chaincode and attaching this random number to the invocation. When the called chaincode receives the invoke, it queries back to the calling chaincode (using stub.InvokeChaincode() again) to verify that this random number was in fact generated by that chaincode. But the last invoke (the verification) did not work; it threw: "GRPC client failed to get a proper response from the peer \"grpcs://localhost:8051\""

I would also like to know why this does not work.

Would really appreciate any clue.


Re: Proposal : Hyperledger Fabric block archiving

Manish
 



On Fri, Nov 29, 2019 at 12:44 AM nekia <atsushin@...> wrote:
Thanks, Manish, Yacov, and Gari.
 
 
I really appreciate your feedback and insights.
 
(Feedback from Manish)
First, the fundamental assumption you make, that all the block files are the same across peers, is incorrect. The block files are not guaranteed to contain the same number of blocks across peers, because a block file is bounded by file size, not by the number of blocks. Further, the size of a given block may vary slightly on each peer: though the header and data sections of a block are the same size across peers, the difference in overall size can be caused by the metadata section, which contains consenter signatures. ...
Thank you so much for quite an important point. We're now reviewing and analyzing the implementation of Fabric around metadata. Let me ask a question to clarify my understanding.
Say, for example, block #100 is available on channel 'mychannel' within organization 'org1'. Does that mean the metadata of block #100 on peer0.org1 can differ from the metadata of the same block (#100) on a different peer (e.g. peer1.org1)? If yes, you are right that our assumption is incorrect. That is, our feature will not be able to refer to block data (from any peer node) residing on the archive repository, because the locPointer (the offset and length of each block within a blockfile) is not available for archived blockfiles on the repository.

Yes, that is the case I was highlighting…
Second, there are certain kinds of queries for which a peer assumes the presence of block files in the current way. This primarily includes history queries, block-related queries, and txid-related queries. These queries may start failing or lead to crashes or unexpected results if you simply delete the files. ...
Third, somewhat similar to the second point above, the peer has a feature wherein it rebuilds the statedb and historydb if they are dropped and the peer is simply restarted. For this feature as well it relies on the presence of blockfiles.
We have catered for these situations. Each peer node is still able to access all the blockfiles (even if they're archived and discarded) seamlessly via the as-is interface. Even after archiving blockfiles, the blockchain characteristics are still maintained on the network, so rebuilding the statedb and accessing the historydb still work under this archiving feature.
Note: Hyperledger Fabric core has been modified to handle query failures when it attempts to access deleted blockfiles.

I see now. This important detail was not covered in the proposal, and hence I was under the impression that you were not modifying the core Fabric code. Given the first point above, this would cause more changes in the peer core.
Fourth, I am not sure if gossip is the right communication mechanism that you want to employ for this communication. An archiver client perhaps can simply poll (or register for updates with) the archiver repository.
In an early stage of our development, we used a polling mechanism to trigger archiving. But for efficiency (process and network traffic), we changed the implementation to be event driven.
 
As I mentioned previously, it can still be event driven (an event from the archiver repo). My main point was point-to-point communication vs. gossip.
Finally, I would like to understand in more detail what the benefits of having a separate repository are. Why not simply let the files be there on the anchor peer and purge them from other peers? If the answer is compression, then ideally we should explore the choice of writing the data in blockfiles in compressed format.
Good point. We designed this archiving feature to be as simple as possible (that is, minimal code changes to Hyperledger Fabric core). With the repository concept, we're able to access all the blockfiles (even if they're archived and discarded) seamlessly via the as-is interface.
All I wanted to say here is that it would be good if one of the peers could act as the repo as well… in other words, it still has everything a repo offers, and the code will be outside the core peer code anyway. But this is a less important point compared to the others, I guess.
 
  
(Feedback from Yacov)
One thing I noticed while skimming the code is that while you send the ArchivedBlockFileMsg via gossip, you are not ensuring it is eventually propagated to peers successfully.
 
This means that if a peer didn't get the message, it won't archive your file.
 
I suggest that you think of a more robust mechanism, like periodically comparing digests of ranges.
You're right. This kind of logic is lacking from our current implementation. It was actually on our radar, but we had difficulty implementing this aspect. Thank you for pointing to reference code for pull-based gossip.
 
(Feedback from Gari)
It seems the only thing you really wanted to use the peer for was to propagate information to other peers within the same organization. ...
Yes, that is one of the reasons we integrated the archiving features into the peer binary. But the most important reason is handling query failures when a peer attempts to access deleted blockfiles. And each peer node is still able to access all the blockfiles (even if they're archived and discarded) seamlessly via the as-is interface.
 
 
Thanks,
Atsushi
 


Re: problem creating channel: 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies

Siddharth Jain
 

Below are our truncated logs:

2019-12-02 09:42:46.771 PST [policies] Evaluate -> DEBU 4d3 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/ChannelCreationPolicy ==
2019-12-02 09:44:32.662 PST [policies] Evaluate -> DEBU 4ef Signature set satisfies policy /Channel/Application/Org1/Admins

2019-12-02 09:45:03.088 PST [policies] Evaluate -> DEBU 511 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers ==
2019-12-02 09:45:34.699 PST [policies] Evaluate -> DEBU 513 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Writers ==
2019-12-02 09:46:26.038 PST [policies] Evaluate -> DEBU 515 == Evaluating *cauthdsl.policy Policy /Channel/Orderer/OrdOrg/Writers ==
2019-12-02 09:46:26.039 PST [msp] satisfiesPrincipalInternalV143 -> DEBU 51b Checking if identity has been named explicitly as an admin for OrdOrgMSP
2019-12-02 09:46:26.039 PST [msp] satisfiesPrincipalInternalV143 -> DEBU 51c Checking if identity carries the admin ou for OrdOrgMSP
2019-12-02 09:46:26.040 PST [msp] hasOURole -> DEBU 51f MSP OrdOrgMSP checking if the identity is a client
2019-12-02 09:46:26.040 PST [cauthdsl] func2 -> DEBU 521 0xc0005f9180 identity 0 does not satisfy principal: The identity is not an admin under this MSP [OrdOrgMSP]: The identity does not contain OU [ADMIN], MSP: [OrdOrgMSP]
2019-12-02 09:46:26.040 PST [cauthdsl] func2 -> DEBU 522 0xc0005f9180 principal evaluation fails
2019-12-02 09:46:26.040 PST [msp] satisfiesPrincipalInternalPreV13 -> DEBU 525 Checking if identity satisfies role [CLIENT] for OrdOrgMSP
2019-12-02 09:46:26.042 PST [cauthdsl] func2 -> DEBU 52a 0xc0005f9180 identity 0 does not satisfy principal: The identity is not a [CLIENT] under this MSP [OrdOrgMSP]: The identity does not contain OU [CLIENT], MSP: [OrdOrgMSP]
2019-12-02 09:46:26.042 PST [policies] Evaluate -> DEBU 52d Signature set did not satisfy policy /Channel/Orderer/OrdOrg/Writers
2019-12-02 09:48:07.333 PST [policies] func1 -> DEBU 52f Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ OrdOrg/Writers ]
2019-12-02 09:48:07.333 PST [policies] Evaluate -> DEBU 533 Signature set did not satisfy policy /Channel/Orderer/Writers

It is true that in configtx.yaml we have defined:

Policies: &OrdPolicies
            Readers:
                Type: Signature                
                Rule: "OR('OrdOrgMSP.admin', 'OrdOrgMSP.orderer', 'OrdOrgMSP.client')"
               
            Writers:
                Type: Signature
                Rule: "OR('OrdOrgMSP.admin', 'OrdOrgMSP.client')"
               
            Admins:
                Type: Signature
                Rule: "OR('OrdOrgMSP.admin')"

and we are calling channel create as an admin of Org1, but we used the same pattern as in https://github.com/hyperledger/fabric-samples/blob/release-1.4/first-network/configtx.yaml:

Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('OrdererMSP.admin')"

so what gives?

From: Nikhil E Gupta <negupta@...>
Sent: Monday, December 2, 2019 6:00 AM
To: Siddharth Jain <siddjain@...>
Cc: fabric@... <fabric@...>
Subject: Re: [Hyperledger Fabric] problem creating channel: 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies
 
Hi Siddharth,

This error is caused by a certificate problem. You can investigate further by checking your orderer logs.

This Stack Overflow post: https://stackoverflow.com/questions/57662562/when-i-try-to-create-a-channel-using-hyperledger-fabric-the-request-fails/57662645#57662645 has a good overview of what to look for when you check your orderer logs.

Nik



-----fabric@... wrote: -----
To: "fabric@..." <fabric@...>
From: "Siddharth Jain"
Sent by: fabric@...
Date: 11/30/2019 06:32PM
Subject: [EXTERNAL] [Hyperledger Fabric] problem creating channel: 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies

we get the error below when trying to create a channel using the peer CLI
2019-11-30 20:53:15.482 UTC [orderer.common.broadcast] ProcessMessage -> WARN 00c [channel: mychannel] Rejecting broadcast of config message from 172.18.0.1:51816 because of error: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
  • we have a 3 org network (plus orderer)
  • we are using NodeOUs
  • our configtx.yaml can be found at https://gist.github.com/siddjain/4cefde4321185c81a663f877fd6b105e
  • we are running the CLI under credentials of an admin user of one of the peer organizations
  • we checked the public cert and it has OU=admin on it. it was generated using cryptogen 1.4.4
how can we fix this? what is the cause?

fwiw, in case it helps, if we try to create the channel using credentials of admin of the orderer org we get a different error

2019-11-30 20:30:53.025 UTC [orderer.common.broadcast] ProcessMessage -> WARN 008 [channel: mychannel] Rejecting broadcast of config message from 172.18.0.1:51808 because of error: error validating channel creation transaction for new channel 'tracktrace', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group]  /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied



Re: problem creating channel: 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies

Nikhil Gupta
 

Hi Siddharth,

This error is caused by a certificate problem. You can investigate further by checking your orderer logs.

This Stack Overflow post: https://stackoverflow.com/questions/57662562/when-i-try-to-create-a-channel-using-hyperledger-fabric-the-request-fails/57662645#57662645 has a good overview of what to look for when you check your orderer logs.

Nik



-----fabric@... wrote: -----
To: "fabric@..." <fabric@...>
From: "Siddharth Jain"
Sent by: fabric@...
Date: 11/30/2019 06:32PM
Subject: [EXTERNAL] [Hyperledger Fabric] problem creating channel: 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies

we get the error below when trying to create a channel using the peer CLI
2019-11-30 20:53:15.482 UTC [orderer.common.broadcast] ProcessMessage -> WARN 00c [channel: mychannel] Rejecting broadcast of config message from 172.18.0.1:51816 because of error: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
  • we have a 3 org network (plus orderer)
  • we are using NodeOUs
  • our configtx.yaml can be found at https://gist.github.com/siddjain/4cefde4321185c81a663f877fd6b105e
  • we are running the CLI under credentials of an admin user of one of the peer organizations
  • we checked the public cert and it has OU=admin on it. it was generated using cryptogen 1.4.4
how can we fix this? what is the cause?

fwiw, in case it helps, if we try to create the channel using credentials of admin of the orderer org we get a different error

2019-11-30 20:30:53.025 UTC [orderer.common.broadcast] ProcessMessage -> WARN 008 [channel: mychannel] Rejecting broadcast of config message from 172.18.0.1:51808 because of error: error validating channel creation transaction for new channel 'tracktrace', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group]  /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied

