Re: Performance measurements


Brian Behlendorf <bbehlendorf@...>
 

I'm following this thread on the periphery, and wanted to highlight Imre's questions about the test environment and how it may compare to real-world deployments.  I think it's crucial to start articulating what that real world looks like.  For example, it probably involves nodes that are not all on the same local network, because each node is "owned" by a different organization that doesn't trust the others; connection latency across the chain could therefore vary up to hundreds of milliseconds and face real-world packet loss from time to time.  It is also extremely cheap these days to get high-powered bare metal, so running everything in Docker within a single VM may not reflect what's actually possible if we throw real-world hardware at the problem.  The Linux Foundation has access to resources that could be used to help the community with performance testing, if this would be helpful, potentially including a 1000-CPU server farm that has been loaned to us for part-time experiments.  Let me know if that's interesting (and no, you can't run a BTC or ETH miner on it).

Brian

On 06/16/2016 04:07 PM, ikocsis@... wrote:
Yacov,

measuring this is a good idea; however, I would have a few questions & comments:

1. First and foremost: could you maybe plot this? :)
2. Especially since you use (I have to assume) a single VM for the whole P2P network: have you controlled for resources other than the network, e.g. CPU saturation?
3. For the purpose of computing TPS: how do you define a "transaction" in this case? As I see it, client INVOKE requests simply ask the network for something to be done on the world state; when they return, the deed can be far from done. (So there is no classic atomic transaction execution.) From the point of view of the clients, the invoke request is "done" when QUERY requests begin to show its effect. So the concept of TPS is not exactly trivial. Am I right on this?
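To make the point concrete, here is a minimal sketch of what "end-to-end" TPS could mean under this reading: submit N invokes (which return as soon as the request is accepted), then keep polling a query until their effect is visible in the world state, and only then stop the clock. peer_invoke and peer_query are hypothetical placeholders for whatever client call you actually use (REST API, SDK, CLI); the guard against a sub-second run is just to keep the arithmetic sane.

```shell
#!/bin/sh
# Sketch: measure "effective" TPS from the client's point of view.
# peer_invoke / peer_query are PLACEHOLDERS, not real Fabric commands;
# replace them with your actual invoke and query calls.

peer_invoke() { :; }                 # placeholder: your actual invoke call
peer_query() { echo "expected"; }    # placeholder: your actual query call

N=1000
start=$(date +%s)

i=0
while [ "$i" -lt "$N" ]; do
    peer_invoke "$i"                 # returns once the request is accepted
    i=$((i + 1))
done

# The invokes are asynchronous: keep querying until the final write
# actually shows up in the world state.
until [ "$(peer_query)" = "expected" ]; do
    sleep 1
done

end=$(date +%s)
elapsed=$((end - start))
[ "$elapsed" -lt 1 ] && elapsed=1    # guard against sub-second runs
tps=$(awk -v n="$N" -v t="$elapsed" 'BEGIN { printf "%.2f", n / t }')
echo "effective TPS: $tps"
```

Counting only until the last invoke request returns would measure request-acceptance throughput instead, which is a different (and more flattering) number.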

I have a more general question, too, open for discussion, if you like. (I'm honestly interested in this.) While the guys at Digital Asset seem to use (and I know that we do use :)) EC2 VMs for performance testing (not optimal, but good enough), there seem to be others simply using single-node (mostly meaning a single VM), purely Docker configurations. Is this really wise for performance testing? I mean, containers can and do interfere performance-wise simply by competing for access to time-shared resources, and while cgroups is there (assuming it's used), to the best of my knowledge it's not perfect for latency-sensitive scenarios. Also, slicing the HW resources of a single VM between containers can make each peer a bit anemic on its own. I understand that for off-the-cuff perf measurements using e.g. the Vagrant devnet can be a rapid way to get some numbers, but is it representative enough of what can be expected "in production"? (The answer may very well be "yes, and don't nitpick", but I'm having doubts at this point.)
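One partial mitigation for the container-interference concern is to pin each peer container to its own CPU cores and cap its memory via Docker's cgroup flags, so the peers at least stop competing for the same time-shared cores. A sketch follows; it only prints the commands (drop the echo to run them), and the peer count, core layout, and image name are illustrative assumptions, not a tested configuration.

```shell
#!/bin/sh
# Sketch (assumptions, not a tested setup): give each validating peer
# two dedicated CPU cores (0-1, 2-3, ...) and a memory cap using
# docker's cgroup flags. Prints the commands instead of running them.

NPEERS=4
for i in $(seq 0 $((NPEERS - 1))); do
    lo=$((i * 2))                    # first core for this peer
    hi=$((i * 2 + 1))                # second core for this peer
    echo docker run -d --name "vp$i" \
        --cpuset-cpus "$lo-$hi" \
        --memory 4g \
        hyperledger/fabric-peer
done
```

This doesn't remove contention on shared caches, disk, or the host network stack, so it narrows rather than closes the gap to a real multi-host deployment.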

Best regards
Imre




From:        "Yacov Manevich" <YACOVM@...>
To:        "Konstantinos Christidis" <kchristidis@...>, tamas@...
Cc:        Benjamin Mandler <MANDLER@...>, hyperledger-fabric@...
Date:        06/16/2016 03:20 PM
Subject:        Re: [Hyperledger-fabric] Performance measurements
Sent by:        hyperledger-fabric-bounces@...




Hi.

On the same topic, I've also been doing some performance testing.
Specifically, I'm interested in correlating network latency, node count, and transactions per second.


I made a simple test that:

1. Spawns N validating peers with PBFT classic in Docker containers
2. Deploys a chaincode to them
3. Injects network latencies (a few milliseconds) into all containers but the first one using the tc command
4. Runs a node.js script that does a few thousand invocations (in parallel or not; when in parallel, each "thread" runs against a different validating peer)
5. Counts the elapsed time and calculates the TPS (transactions per second) value
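For step 3, one way to do the injection is tc's netem qdisc inside each container (which needs root or the NET_ADMIN capability; the interface name eth0 is an assumption about the container's network setup). The sketch below builds the netem spec and prints the commands rather than running them; the loss knob is there because real cross-organization links also drop packets, not just delay them.

```shell
#!/bin/sh
# Sketch: emulate WAN conditions per container with tc/netem.
# Prints the commands (drop the echo to run them). Assumes the
# container interface is eth0 and that you have NET_ADMIN/root.

DELAY_MS=5       # mean one-way delay added
JITTER_MS=1      # +/- variation around the mean
LOSS_PCT=0       # set >0 to also emulate packet loss

spec="delay ${DELAY_MS}ms ${JITTER_MS}ms"
if [ "$LOSS_PCT" -gt 0 ]; then
    spec="$spec loss ${LOSS_PCT}%"
fi

echo tc qdisc add dev eth0 root netem $spec    # install the impairment
echo tc qdisc del dev eth0 root                # remove it when done
```

Using `tc qdisc change` instead of add lets you sweep delay values between runs without tearing the qdisc down.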
The results I got are attached:



Regards,
       Yacov.




From:        "Konstantinos Christidis" <kchristidis@...>
To:        ikocsis@...
Cc:        hyperledger-fabric@...
Date:        14/06/2016 19:16
Subject:        Re: [Hyperledger-fabric] Performance measurements
Sent by:        hyperledger-fabric-bounces@...





----- Original message -----
From: ikocsis@...
To: Konstantinos Christidis/Durham/IBM@IBMUS
Cc: hyperledger-fabric@...
Subject: Re: [Hyperledger-fabric] Performance measurements
Date: Tue, Jun 14, 2016 7:24 AM


On another note: we certainly do use the default "batch" PBFT, but I saw the "sieve" and, I think, "crypto" options in the code too. And I think I found the paper that explains these (I haven't had the time to fully understand the specifics yet):
http://arxiv.org/abs/1603.07351
The question is: do these already work? We should certainly stick to deterministic transactions at this point; I'm just curious.

Imre,

Sieve is a prototype; our development efforts are currently focused on PBFT. I would strongly suggest sticking to PBFT for your experiments.

Best,
Kostas

_______________________________________________
Hyperledger-fabric mailing list
Hyperledger-fabric@...

https://lists.hyperledger.org/mailman/listinfo/hyperledger-fabric

[attachment "results.pdf.zip" deleted by Imre Kocsis/ftsrg]





-- 
Brian Behlendorf
Executive Director at the Hyperledger Project
bbehlendorf@...
Twitter: @brianbehlendorf
