Switching spec.yaml file causes error


Brett Tiller
 

I decided to try the example spec.yaml file instead of my customized yaml version. Before doing so I ran the command ‘minifabric cleanup’ and verified that there were no docker containers or volumes left behind. I also removed the vars directory. I then switched to the example spec.yaml, which contains the data below.

 

fabric:
  cas:
  - "ca1.org0.example.com"
  - "ca1.org1.example.com"
  peers:
  - "peer1.org0.example.com"
  - "peer2.org0.example.com"
  - "peer1.org1.example.com"
  - "peer2.org1.example.com"
  orderers:
  - "orderer1.example.com"
  - "orderer2.example.com"
  - "orderer3.example.com"
  settings:
    ca:
      FABRIC_LOGGING_SPEC: DEBUG
    peer:
      FABRIC_LOGGING_SPEC: DEBUG
    orderer:
      FABRIC_LOGGING_SPEC: DEBUG
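As an aside, the cleanup state described above can be double-checked with plain docker commands before switching specs (a generic sketch, not minifab-specific):

```shell
# confirm nothing is left over from the previous network
docker ps -aq 2>/dev/null | wc -l        # expect 0 containers
docker volume ls -q 2>/dev/null | wc -l  # expect 0 volumes from the old network
ls -d vars 2>/dev/null || echo "vars directory removed"
```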

 

 

After running the command ./minifab up -o org0.example.com -s couch.db -e 7500, I see the messages below, which terminate with the error shown. The orderer nodes do not start up, so there are no docker containers for them.

 

btiller@BRETTTLAPTOP3:~/mywork/primii$ ./minifab up -o org0.example.com -s couch.db  -e 7500

Using spec file: /home/btiller/mywork/primii/spec.yaml

Minifab Execution Context:

    FABRIC_RELEASE=2.3.0

    CHANNEL_NAME=mychannel

    PEER_DATABASE_TYPE=couch.db

    CHAINCODE_LANGUAGE=go

    CHAINCODE_NAME=simple

    CHAINCODE_VERSION=1.0

    CHAINCODE_INIT_REQUIRED=true

    CHAINCODE_PARAMETERS="init","a","200","b","300"

    CHAINCODE_PRIVATE=false

    CHAINCODE_POLICY=

    TRANSIENT_DATA=

    BLOCK_NUMBER=newest

    EXPOSE_ENDPOINTS=7500

    CURRENT_ORG=org0.example.com

    HOST_ADDRESSES=172.28.67.254

    WORKING_DIRECTORY: /home/btiller/mywork/primii

.......

# Preparing for the following operations: *********************

  verify options, download images, generate certificates, start network, network status, channel create, channel join, anchor update, profile generation, cc install, cc approve, cc commit, cc initialize, discover

..................

# Running operation: ******************************************

  verify options

.

# Running operation: ******************************************

  download images

............

# Running operation: ******************************************

  generate certificates

...............................................................................................................................................

# Running operation: ******************************************

  start network

................

# Start all orderer nodes *************************************

  One or more items failed

    non-zero return code

    non-zero return code

    non-zero return code

# Error! ******************************************************

  docker: Error response from daemon: not a directory.

  See 'docker run --help'.

 

# STATS *******************************************************

minifab: ok=230 failed=0

 

real    1m6.330s

user    0m57.180s

sys     0m13.042s

 

 

 

I’ve tracked the error to the file playbooks/ops/netup/dockerapply.yaml, task ‘Start all orderer nodes’. My guess is that one of the variables has an incorrect value, so the volume is not being created. However, I don’t see any way to debug the minifabric image. Is there a step that I missed when I switched yaml files? Or is there a way that I can debug this issue further?

 

- name: Start all orderer nodes
  command: >-
    docker run -d --network {{ NETNAME }} --name {{ item.fullname }} --hostname {{ item.fullname }}
    --env-file {{ pjroot }}/vars/run/{{ item.fullname }}.env {{ item.portmap }}
    -v {{ hostroot }}/vars/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    -v {{ mpath }}/{{ item.org }}/orderers/{{ item.fullname }}/msp:/var/hyperledger/orderer/msp
    -v {{ mpath }}/{{ item.org }}/orderers/{{ item.fullname }}/tls:/var/hyperledger/orderer/tls
    -v {{ item.fullname }}:/var/hyperledger/production/orderer
    {{ container_options }}
    hyperledger/fabric-orderer:{{ fabric.release }}
  with_items: "{{ allorderers }}"
  register: ordererstat
  ignore_errors: yes
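Not an official minifab debugging step, but docker's "not a directory" error usually means a bind-mount source and its container target disagree in type (file vs. directory), so one thing worth checking is the type of each source path the task mounts. A rough sketch (the paths are examples relative to the minifab working directory; genesis.block must be a regular file, since its container target is a file):

```shell
# classify each bind-mount source the orderer task uses; "missing" or a
# wrong type here would explain docker's "not a directory" error
for p in vars/genesis.block vars/keyfiles; do
  if   [ -f "$p" ]; then echo "$p: regular file"
  elif [ -d "$p" ]; then echo "$p: directory"
  else                   echo "$p: missing"
  fi
done
```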

 

 

Thanks,

 

Brett Tiller

Sr. Software Engineer

984-349-4239 (mobile)

btiller@...

 

https://www.linkedin.com/company/securboration

 


email4tong@gmail.com
 

The only thing I can think of is that you may have some ports already taken by other things running on your system. When minifabric sets things up it uses quite a few ports starting at 7400; if you have other containers or apps running that use these ports, the peer or orderer nodes that try to use the same ports will fail. You can probably use a command like netstat to see whether the ports starting at 7400 are taken.
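For example, with ss from iproute2 (or the older netstat); the exact range depends on the -e value passed to minifab, and 7400 is assumed here:

```shell
# list anything already listening on ports 7400-7499
( ss -tln 2>/dev/null || netstat -tln 2>/dev/null ) \
  | grep -E ':74[0-9]{2}\b' \
  || echo "no listeners found in 7400-7499"
```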

If you still cannot figure this out, we can probably have a 15 minute zoom to take a look at it.




On Wednesday, June 22, 2022, 6:43 PM, Brett Tiller <btiller@...> wrote:

 


Brett Tiller
 

It started working now. I'm not sure if something in memory was causing the issue, as rebooting my computer seems to have cleared it up - at least for now.


email4tong@gmail.com
 

Great, good to hear. Minifabric has been really heavily tested; it has been solid for the past few years.

On Friday, June 24, 2022, 11:33:45 AM EDT, Brett Tiller <btiller@...> wrote:

