Relation between number of blocks / amount of data stored in DB vs. disk writes #fabric #fabric-questions


Prasanth Sundaravelu
 

Hi,

I have the following setup: 

Hardware: 

CPU: Intel Xeon E3-1245 v5, 3.5 GHz, 4 cores
RAM: 32 GB DDR3
Storage: 3x 256 GB SATA SSD (RAID 0)
Network bandwidth: Unmetered @ 1 Gbps
OS: Ubuntu 18

Hyperledger network:
3 peers
1 organization
State database: GoLevelDB
Chaincode in Go
Node.js SDK, exposed through a Node Express HTTP server.

Load generation (via the HTTP API, generated from inside the same server):
- Load is generated continuously at a constant rate of 200 RPS.
- Each request calls a simple chaincode function that checks whether the unique ID already exists in the DB and then modifies/stores the data (JSON with 3 fields) under a composite key (see the sketch below).
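
Roughly, the chaincode function is of this shape (a simplified sketch; the struct field names, the "Record" object type and the function name are illustrative, not the exact code):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// Record is the 3-field JSON payload; the field names here are illustrative.
type Record struct {
	ID     string `json:"id"`
	Field1 string `json:"field1"`
	Field2 string `json:"field2"`
}

type SmartContract struct {
	contractapi.Contract
}

// UpsertRecord checks whether the composite key for the given ID already exists
// and then creates or overwrites the record.
func (s *SmartContract) UpsertRecord(ctx contractapi.TransactionContextInterface, id, f1, f2 string) error {
	// Composite key of the form "Record" + id (the object type name is illustrative).
	key, err := ctx.GetStub().CreateCompositeKey("Record", []string{id})
	if err != nil {
		return fmt.Errorf("creating composite key: %w", err)
	}

	// Existence check before the write.
	existing, err := ctx.GetStub().GetState(key)
	if err != nil {
		return fmt.Errorf("reading state: %w", err)
	}
	_ = existing // drives the modify-vs-create decision in the real function

	payload, err := json.Marshal(Record{ID: id, Field1: f1, Field2: f2})
	if err != nil {
		return fmt.Errorf("marshalling record: %w", err)
	}
	return ctx.GetStub().PutState(key, payload)
}

func main() {
	cc, err := contractapi.NewChaincode(&SmartContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}

Each committed transaction of this shape does one read and one write against the state database, so the data accumulates in LevelDB over the course of the run.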

 

When the test started, I observed disk I/O using iotop:

- Total and actual disk writes were around or below ~5 MB/s.

After 24 hours, about 10 million records had been stored, and when I checked iotop again:
- Total and actual disk writes were fluctuating between ~50 MB/s and ~100 MB/s, occasionally higher.

Why does this happen? Is it normal? Is there a way to reduce it as much as possible?


Senthil Nathan
 

Hi Prasanth,

    This might be caused by LevelDB compaction -- https://github.com/google/leveldb/blob/master/doc/impl.md
    If you plot the disk-write data as a complete time series and compare it against the peer logs, you should be able to pinpoint the root cause.
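
    For example, here is a minimal Go sketch that samples /proc/diskstats once per second and emits a CSV time series you can plot against the peer logs (the device name "sda" and the one-second interval are just placeholders to adjust for your host):

package main

// Samples the sectors-written counter for one block device from /proc/diskstats
// once per second and prints a CSV time series (timestamp, MB written per second).

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

const device = "sda" // assumption: change to the actual RAID/SSD device

// sectorsWritten returns the cumulative sectors-written counter for the device.
func sectorsWritten(dev string) (uint64, error) {
	data, err := os.ReadFile("/proc/diskstats")
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		f := strings.Fields(line)
		if len(f) > 9 && f[2] == dev {
			return strconv.ParseUint(f[9], 10, 64) // field 10: sectors written
		}
	}
	return 0, fmt.Errorf("device %q not found in /proc/diskstats", dev)
}

func main() {
	prev, err := sectorsWritten(device)
	if err != nil {
		panic(err)
	}
	fmt.Println("timestamp,mb_written_per_sec")
	for range time.Tick(time.Second) {
		cur, err := sectorsWritten(device)
		if err != nil {
			panic(err)
		}
		// /proc/diskstats counts in 512-byte sectors.
		mb := float64(cur-prev) * 512 / (1024 * 1024)
		fmt.Printf("%s,%.2f\n", time.Now().Format(time.RFC3339), mb)
		prev = cur
	}
}

    Running this alongside the load test and lining the CSV up against the timestamps in the peer logs should make any compaction bursts easy to spot.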

Regards,
Senthil




Prasanth Sundaravelu
 

Thanks for the answer, Senthil.

