[Hyperledger Project TSC] [Hyperledger-fabric] Performance measurements


Imre Kocsis
 

Brian,

this is really exciting. What would be the way forward with this? I'm cc-ing this to the TSC list, too, because in light of the multiple, disconnected performance testing efforts emerging from many places, you may want to consider setting up a WG. (Sorry for the cross-post if the TSC list is not the right place for the suggestion.) As of now, performance testing/assurance seems to be present in the project structure only through the busywork tool (and its project proposal), and that's somewhat orthogonal to this. At the same time, it's not necessarily the smartest approach for everybody to develop or duct-tape together their own test harness (which seems to be happening), but that's another question.

Anyhow, I assume that, however big, this is a finite resource, so access to it has to be controlled: either via some form of admission control for project members to send in jobs, or simply by periodically running sets of experiments agreed upon by the interested parties. One could even argue that "Continuous Performance Testing" wouldn't be a bad idea, either. Your thoughts on this?

Best regards
Imre

---------------------------------------------------------------------------
Imre Kocsis
assistant lecturer

Fault Tolerant Systems Research Group
Department of Measurement and Information Systems
Budapest University of Technology and Economics

mail: ikocsis@...
phone: +36-1-463-2006
mobile: +36-20-514-6881
skype: kocsis_imre




From:        Brian Behlendorf <bbehlendorf@...>
To:        hyperledger-fabric@...
Date:        06/17/2016 08:13 AM
Subject:        Re: [Hyperledger-fabric] Performance measurements
Sent by:        hyperledger-fabric-bounces@...




I'm following this thread on the periphery, and wanted to highlight Imre's questions about the test environment and how it may compare to real-world deployments.  I think it's crucial to start articulating what that real world looks like.  For example, it probably involves nodes that are not all on the same local network, because each node is "owned" by a different organization that doesn't trust the others; connection latency across the chain could therefore vary up to hundreds of milliseconds and face real-world packet loss from time to time.  It is also extremely cheap these days to get high-powered bare metal, so running everything in Docker within a single VM may not reflect what's actually possible if we throw real-world hardware at the problem.  The Linux Foundation has access to resources that could be used to help the community with performance testing, if this would be helpful, potentially including a 1000-CPU server farm that has been loaned to us for part-time experiments.  Let me know if that's interesting (and no, you can't run a BTC or ETH miner on it).

Brian

On 06/16/2016 04:07 PM, ikocsis@... wrote:
Yacov,

measuring this is a good idea; however, I have a few questions and comments:


1. First and foremost: could you maybe plot this? :)

2. Especially as you use (I have to assume) a single VM for the whole P2P network: have you controlled for the state of resources other than the network? E.g., CPU saturation.

3. For the purpose of computing TPS: how do you define a "transaction" in this case? As I see it, client INVOKE requests simply ask the network to do something to the world state; when they return, the deed can be far from done. (So there is no classic atomic transaction execution.) From the point of view of the clients, the invoke request is "done" when QUERY requests begin to show its effect. So the concept of TPS is not exactly trivial. Am I right on this?
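To make that ambiguity concrete, here is a minimal sketch (in Node.js, since that is what the test scripts here use) that measures both the submit latency and the time until a QUERY shows the effect. The invoke(), query() and expectedValue parameters are hypothetical stand-ins for whatever client API and chaincode are actually in use, not anything from the fabric codebase.

// Hedged sketch: treat an invoke as "done" only once a QUERY reflects
// its effect. invoke() resolves when the request returns; query()
// returns the current world-state value. Both are hypothetical helpers.
async function measureInvoke(invoke, query, expectedValue) {
  const start = Date.now();
  await invoke(); // returns when the request is accepted, not when committed
  const submitMs = Date.now() - start;
  // Poll until the world state reflects the invoke.
  while ((await query()) !== expectedValue) {
    await new Promise(resolve => setTimeout(resolve, 100));
  }
  const commitMs = Date.now() - start;
  return { submitMs, commitMs };
}

Counting transactions against submitMs versus commitMs can yield very different TPS figures, which is exactly the non-triviality described above.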


I have a more general question, too, open for discussion, if you like. (I'm honestly interested in this.) While the guys at Digital Asset seem to use (and I know that we do use :)) EC2 VMs for performance testing (not optimal, but good enough), there seem to be others simply using a single-node (mostly meaning a single VM), purely Docker configuration. Is this really wise for performance testing? Containers can and do interfere with each other performance-wise simply by competing for access to time-shared resources, and while cgroups is there (assuming it's used), to the best of my knowledge it's not perfect for latency-sensitive scenarios. Also, slicing the HW resources of a single VM between containers can make each peer a bit anemic on its own. I understand that for off-the-cuff performance measurements, using e.g. the Vagrant devnet can be a rapid way to get some numbers, but is it representative enough of what can be expected "in production"? (The answer may very well be "yes, don't nitpick", but I'm having doubts at this point.)
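One cheap way to control for the host-side confounder that both question 2 and this paragraph worry about is to sample the load average while the test runs. A minimal Node.js sketch; withLoadMonitor is a hypothetical helper name, and the only APIs used (os.loadavg, os.cpus, setInterval) are standard Node:

const os = require('os');

// Minimal sketch: record the 1-minute load average while `run` executes,
// so CPU saturation can be ruled out (or confirmed) as a confounder.
async function withLoadMonitor(run, intervalMs = 5000) {
  const samples = [];
  const timer = setInterval(() => samples.push(os.loadavg()[0]), intervalMs);
  try {
    return await run();
  } finally {
    clearInterval(timer);
    const peak = Math.max(...samples, 0);
    // If peak approaches os.cpus().length, the host CPUs were saturated
    // and the results measure CPU contention, not just network latency.
    console.log(`peak 1-min loadavg: ${peak} (cpus: ${os.cpus().length})`);
  }
}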


Best regards

Imre





From:        "Yacov Manevich" <YACOVM@...>
To:        "Konstantinos Christidis" <kchristidis@...>, tamas@...
Cc:        Benjamin Mandler <MANDLER@...>, hyperledger-fabric@...
Date:        06/16/2016 03:20 PM
Subject:        Re: [Hyperledger-fabric] Performance measurements
Sent by:        hyperledger-fabric-bounces@...




Hi.

On the same topic, I've also been doing some performance testing.
Specifically, I'm interested in correlating network latency, node count, and transactions per second.

I made a simple test that:

1. Spawns N validating peers running classic PBFT in Docker containers

2. Deploys a chaincode to them

3. Injects network latency (a few milliseconds) into all containers but the first, using the tc command

4. Runs a Node.js script that performs a few thousand invocations, either serially or in parallel (when in parallel, each "thread" runs against a different validating peer)

5. Counts the elapsed time and computes the TPS (transactions per second) value
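For concreteness, here is a minimal sketch of steps 3-5 in Node.js. The peer count N, the container naming (vp0..vp3), the exact 5 ms delay, and the invokeOnPeer() helper are all assumptions standing in for the real script, which is not shown here.

const { execSync } = require('child_process');

const N = 4; // assumed peer count; containers assumed to be named vp0..vp3

// Step 3: inject a few milliseconds of latency inside every container
// but the first, using tc/netem (the 5 ms figure is an assumption).
for (let i = 1; i < N; i++) {
  execSync(`docker exec vp${i} tc qdisc add dev eth0 root netem delay 5ms`);
}

// Steps 4-5: run `total` invocations and compute TPS from wall-clock time.
// invokeOnPeer(peer) is a hypothetical helper that resolves when the
// invoke request returns.
async function runTest(invokeOnPeer, total, parallel) {
  const start = Date.now();
  if (parallel) {
    // each "thread" targets a different validating peer
    await Promise.all(
      Array.from({ length: total }, (_, i) => invokeOnPeer(i % N))
    );
  } else {
    for (let i = 0; i < total; i++) {
      await invokeOnPeer(0);
    }
  }
  const elapsedSec = (Date.now() - start) / 1000;
  return total / elapsedSec; // TPS
}

Note that this counts a transaction as done when the invoke request returns, which ties directly into Imre's question 3 above about what "done" means.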

The results I got are attached:




Regards,
       Yacov.




From:        "Konstantinos Christidis" <kchristidis@...>
To:        ikocsis@...
Cc:        hyperledger-fabric@...
Date:        14/06/2016 19:16
Subject:        Re: [Hyperledger-fabric] Performance measurements
Sent by:        hyperledger-fabric-bounces@...





----- Original message -----
From: ikocsis@...
To: Konstantinos Christidis/Durham/IBM@IBMUS
Cc: hyperledger-fabric@...
Subject: Re: [Hyperledger-fabric] Performance measurements
Date: Tue, Jun 14, 2016 7:24 AM


On another note: we certainly do use the default "batch" PBFT, but I also saw the "sieve" and, I think, "crypto" options in the code. And I think I found the paper that explains these (I haven't had the time to fully understand the specifics yet):
http://arxiv.org/abs/1603.07351
The question is: do these already work? We certainly should stick to deterministic transactions at this point; I'm just curious.

Imre,

Sieve is a prototype; our development efforts are currently focused on PBFT. I would strongly suggest sticking to PBFT for your experiments.

Best,
Kostas


[attachment "results.pdf.zip" deleted by Imre Kocsis/ftsrg]






--
Brian Behlendorf
Executive Director at the Hyperledger Project
bbehlendorf@...
Twitter: @brianbehlendorf


Brian Behlendorf
 

Here is the form for requesting access to the compute cluster.

https://github.com/cncf/cluster

Note that it says it's focused on CNCF (the Cloud Native Computing Foundation, also an LF project like us), but I talked with the director of the project last night, and we can use it too.  It may be worth eventually running the workloads tested here on EC2 or Azure as well, for comparison.  While I don't think we can use all 1000 nodes continuously forever, we don't currently have a lot of contention, and the more it becomes a demonstration vehicle for CNCF, the more priority I think we'll get :)  I would even feel comfortable suggesting a continuous performance testing process to them, though running it daily rather than on every commit would probably be a good idea.  But let's get together as a community and make a thoughtful, coordinated ask on that form for the cluster.

I am looking for help on our side from the Linux world in performance testing and tuning, but I'd also encourage all our members to search their own organizations for people with these skills, as they are almost as rare as blockchain skills :)  I would hesitate to break this activity out into a separate formal Working Group, only because it may then come across as Someone Else's Problem rather than one of several parallel concerns devs should always keep in mind while working on code (like security).  I would suggest this list talk about what the ideal and likely production environments will look like, use that to drive a realistic set of targets and an ideal test harness, and then build collectively.

Brian

On 06/17/2016 02:52 AM, ikocsis@... wrote:
-- 
Brian Behlendorf
Executive Director at the Hyperledger Project
bbehlendorf@...
Twitter: @brianbehlendorf


Jeremy Eder <jeder@...>
 

I'm a performance engineer for Red Hat -- we're also investigating
blockchain, and ramping up partnerships [1].

We have a scale-out project ongoing on the CNCF cluster as we speak, concerning Kubernetes.  If any Hyperledger participant has their implementation running on Kubernetes (one of the foundational projects for Red Hat OpenShift), we'd like to hear from you, specifically regarding performance testing.

[1] https://www.redhat.com/en/about/press-releases/red-hat-introduces-new-partner-initiative-blockchain-software-vendors

On Fri, Jun 17, 2016 at 11:27 AM, Brian Behlendorf via hyperledger-tsc
<hyperledger-tsc@...> wrote:


--
Jeremy Eder


Zsolt Szilágyi <zsolt@...>
 

Hi all,

I would like to share my latest performance measurement results. They were gathered on the current fabric and fabric-api master. For the measurement, I used the following:
 - the PerfTest test case with the GrpcClient HLAPI implementation, 10000 transactions, everything else unchanged
 - 4 PBFT nodes in PBFT batch mode, with default parameters

Findings (tl;dr):
 - tx/sec dropped to 30/sec (earlier, we measured 300)
 - the average transaction time (between sending a transaction in and seeing it in an event) is 15 sec
 - 2 transactions seem to be contained in two blocks each (their listeners were called multiple times, from different block events)

Non-tl;dr:
Logs of the measurement can be found here, where output.txt is the fabric-api log and the outputX.txt files belong to the vpX nodes of the PBFT network. Here is the result from output.txt:

====== Test results ======

Total: 10000 transactions in 353.49 sec
Average transaction/s: 28.29
Average transaction process time: 15027.30 ms
Listener not called for 0 transactions
Listener called multiple times for 2 transactions
0 transactions not found in the ledger
0 transactions rejected
Distribution:
     0 -   1000: tx/sec=34.83 avg_tx_time=1792.79 ms
  1000 -   2000: tx/sec=30.65 avg_tx_time=7089.22 ms
  2000 -   3000: tx/sec=26.52 avg_tx_time=13959.66 ms
  3000 -   4000: tx/sec=26.18 avg_tx_time=19009.87 ms
  4000 -   5000: tx/sec=26.60 avg_tx_time=18913.06 ms
  5000 -   6000: tx/sec=26.61 avg_tx_time=19227.61 ms
  6000 -   7000: tx/sec=27.60 avg_tx_time=18181.01 ms
  7000 -   8000: tx/sec=26.87 avg_tx_time=18792.81 ms
  8000 -   9000: tx/sec=26.39 avg_tx_time=19007.33 ms
  9000 -  10000: tx/sec=38.16 avg_tx_time=14299.58 ms
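For what it's worth, here is a sketch of how such a distribution can be derived from per-transaction timestamps, assuming (inferred from the bucket labels, not confirmed by the log) that the buckets are 1000-transaction windows in submission order; the actual PerfTest computation may differ.

// txs: array of { sentAt, doneAt } in milliseconds, ordered by
// submission time (an assumed shape, not the real PerfTest data model).
function distribution(txs, bucketSize = 1000) {
  const rows = [];
  for (let i = 0; i < txs.length; i += bucketSize) {
    const bucket = txs.slice(i, i + bucketSize);
    const firstSent = bucket[0].sentAt;
    const lastDone = Math.max(...bucket.map(t => t.doneAt));
    rows.push({
      range: `${i} - ${i + bucket.length}`,
      txPerSec: bucket.length / ((lastDone - firstSent) / 1000),
      avgTxTimeMs:
        bucket.reduce((sum, t) => sum + (t.doneAt - t.sentAt), 0) / bucket.length,
    });
  }
  return rows;
}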

Hope you find it useful.
BR
Zsolt

On Sun, Jul 3, 2016 at 2:42 AM, Jeremy Eder <jeder@...> wrote:
--
ZSOLT SZILAGYI
SOFTWARE DEVELOPER

+36 30 921 34 63
zsolt@digitalasset.com
digitalasset.com

This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.digitalasset.com/emaildisclaimer.html. If you are not the intended recipient, please delete this message.


Zsolt Szilágyi <zsolt@...>
 

Documentation about how I ran these tests can be found here.

On Fri, Jul 8, 2016 at 5:03 PM, Zsolt Szilágyi <zsolt@...> wrote:
--
ZSOLT SZILAGYI
SOFTWARE DEVELOPER

+36 30 921 34 63
zsolt@digitalasset.com
digitalasset.com

This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.digitalasset.com/emaildisclaimer.html. If you are not the intended recipient, please delete this message.