Re: [Hyperledger Project TSC] IBM whitepaper


Mic Bowman
 

Thanks for putting this together. The white paper is a good starting point. That said, I have some concerns and a few questions. My concerns boil down to two areas: first, how do we ensure that the architecture/implementation is not limited to the usages we start with (that is, that we do not preclude additional usages); and second, how do we ensure that the architecture preserves the disintermediation that distinguishes digital ledgers from simple replicated databases.

 

·         In the abstract you reference business-to-business and business-to-customer as the two focus areas. While I fully acknowledge the “idiosyncrasies” of Bitcoin, it remains (one of?) the most popular, most active, and most mature of the digital ledgers. In general, sacrificing consumer-to-consumer usages (of which Bitcoin is one example) before we even begin to explore the architecture seems unfortunate. Clearly we can think of an ordering of usages based on time criticality, business viability, and probability of adoption… however, considering a broad set of usages ensures that decisions made for the most immediate usages don’t preclude other usages that may become the focus later.

·         You’ve provided three usages, though there seems to be significant overlap in the requirements driven by those three. If we are going to limit discussion to three or four usages, I would prefer that they stretch our thinking about the architecture. I strongly favor moving quickly on what we know (that is, picking one of the usages you propose and focusing early implementation on it), but not if that means sacrificing the flexibility to address the full spectrum of distributed ledger applications in the future. Architectures ossify very quickly with implementation (which is both good and bad). In addition to the ones you provided (which are clearly the most important in the short term), we have a couple of additional usages that we use as the basis for our thinking. I’ll publish them in a separate message.

·         The document seems to focus entirely on permissioned ledgers with vetted participants and centrally distributed identities. The architecture has a very strong dependency on what you called the “Membership Service”, but I couldn’t find any additional details about that service. From the description it looks like a centralized PKI service. Was that your intention? If so, who runs it? If not, what is your thinking about the vetting process for participants? I presume there are both technical and institutional aspects to the solution. How do you imagine multiple providers working together to provide that service? As you are probably aware, Intel is using EPID for enclave attestation in SGX. That provides a way to verify the integrity of computations based on a trusted execution environment without disclosing identity. We would certainly like to consider the role of that capability in the Registration Service (I’ve appended a rough sketch of what I mean after these bullets).

·         In most of the deployed ledgers we’ve looked at (and we found this to be the case for our own tests), there is some way to manage transaction ingress to avoid DoS attacks and manage over-subscription of capacity. I really like the Ripple approach of “burning XRPs” as a way of ensuring that there is some back-pressure on transaction submission (you can easily submit valid transactions, but it’s hard to submit enough to mount a DoS attack). Bitcoin miners are free to establish mechanisms to prioritize the transactions selected for a particular block (and there appear to be many such policies). Even in a permissioned ledger, there is a danger of one institution monopolizing a substantial portion of the resources unless there is some decentralized policy for managing resource utilization. As a group, we should think about mechanisms for incentivizing good behavior or constraining (possibly correct but) inappropriate behavior (the second sketch after these bullets shows the shape of what I mean).

·         In the Usage FAQ linked from the white paper, the expected performance numbers are 100K tps with 15 validating nodes “running in close proximity”. What does “close proximity” mean in this case? Single data center? Single metropolitan area? Implementations of traditional BFT algorithms are hard to get right and are *VERY* difficult to scale to large numbers of participants (where “large” is on the order of 50). Even if we assume that there are some (to-be-discovered) techniques for scaling that to a few hundred replicas, it severely limits the decentralization of the ledger validation service (so we end up with just a replicated database); I’ve put some rough numbers on this in the last sketch after these bullets. What are your thoughts about the organizational/institutional impact of that architecture (e.g. are there hierarchies of consortiums)? Clearly not every organization that would use the ledger would be in a position to participate in the validation of transactions. That means, at best, we have to build some kind of consortium to manage the limited number of validators or, at worst, we end up with a single provider (in which case we aren’t really talking about a distributed ledger any more).

·         And on the performance… again, it seems likely that some form of traditional BFT algorithm is going to be necessary for the financial applications where the rollback potential of the proof-of-* blockchain consensus algorithms (including the one I talked about in the review last week) is a problem. BFT algorithms force us into a small validator pool deployment. However, not all workloads are intolerant of rollback, and those workloads are amenable to more scalable algorithms (where scalable is defined in terms of the number and decentralization of validation resources). It seems unlikely that the three usages in the white paper will expose those requirements. I’m fairly certain that the IoT device registries and other usages like them would help to expand our thinking.

·         One more issue that is probably more implementation than architecture… we have some concerns about the use of containers. The most obvious is the number of Docker exploits that have been documented recently (see for example https://www.oreilly.com/ideas/five-security-concerns-when-using-docker and https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9357). The other concern is that containers limit access to unique hardware capabilities like SGX.
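To make the Membership Service question above a bit more concrete, here is a minimal sketch (Python) of the kind of attestation-gated enrollment I have in mind: the registration authority admits a validator only after checking an attestation of its execution environment. Everything here is hypothetical; the quote format, the names, and the toy MAC check merely stand in for real EPID/SGX verification and are not the white paper's API.

# Hypothetical sketch of an attestation-gated Membership Service.
# The quote format and verification are placeholders, not the EPID/SGX APIs.
import hashlib
import hmac
import secrets
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttestationQuote:
    enclave_measurement: str   # hash of the code running in the TEE
    report_data: bytes         # e.g., the candidate validator's public key
    mac: bytes                 # stand-in for the real attestation signature

TRUSTED_MEASUREMENTS = {"sha256:0123abcd"}   # measurements the consortium accepts
VERIFIER_KEY = b"demo-only-key"              # toy stand-in for verifier key material

def verify_quote(quote: AttestationQuote) -> bool:
    """Toy check: the measurement is whitelisted and the MAC verifies."""
    if quote.enclave_measurement not in TRUSTED_MEASUREMENTS:
        return False
    expected = hmac.new(VERIFIER_KEY,
                        quote.enclave_measurement.encode() + quote.report_data,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote.mac)

def enroll_validator(quote: AttestationQuote) -> Optional[str]:
    """Issue an opaque membership credential only if attestation succeeds."""
    if not verify_quote(quote):
        return None
    return secrets.token_hex(16)   # a real service would issue a certificate

The point is simply that the enrollment decision can hinge on proof of a correct execution environment rather than on a centrally issued identity.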
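On the ingress/back-pressure point, the sketch below shows the general shape of a fee-burning admission policy (loosely in the spirit of Ripple's XRP burn, not its actual rules): each submitter pre-funds a balance, every submitted transaction burns a small fee, and pending transactions are ordered by the fee offered. The numbers and names are illustrative only.

# Illustrative fee-burning admission queue; all parameters are arbitrary.
import heapq
from itertools import count

MIN_FEE = 10                      # smallest burn accepted (arbitrary units)
_balances = {}                    # submitter -> pre-funded balance
_pending = []                     # heap ordered by (-fee, seq)
_seq = count()

def fund(submitter: str, amount: int) -> None:
    _balances[submitter] = _balances.get(submitter, 0) + amount

def submit(submitter: str, tx: bytes, fee: int) -> bool:
    """Admit a transaction only if the submitter can burn the fee."""
    if fee < MIN_FEE or _balances.get(submitter, 0) < fee:
        return False              # back-pressure: flooding drains the balance
    _balances[submitter] -= fee   # the fee is destroyed, not paid to anyone
    heapq.heappush(_pending, (-fee, next(_seq), submitter, tx))
    return True

def next_batch(n: int):
    """Validators pull the n highest-fee transactions for the next block."""
    return [heapq.heappop(_pending)[3] for _ in range(min(n, len(_pending)))]

Submitting valid transactions stays cheap, but sustaining a flood is expensive, and no single institution can monopolize block space without continually burning down its own balance.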
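And to put rough numbers behind the BFT scaling concern: classical PBFT-style protocols need n >= 3f + 1 replicas to tolerate f Byzantine faults and exchange on the order of n^2 messages per agreement round, which is what makes a few hundred validators painful. A back-of-the-envelope calculation:

# Back-of-the-envelope PBFT-style numbers: n >= 3f + 1, ~O(n^2) messages per round.
def bft_profile(n: int) -> dict:
    f = (n - 1) // 3                  # Byzantine faults tolerated
    quorum = 2 * f + 1                # matching prepare/commit messages needed
    msgs = 2 * n * (n - 1)            # rough count for the prepare + commit phases
    return {"replicas": n, "faults_tolerated": f,
            "quorum": quorum, "messages_per_round": msgs}

for n in (4, 15, 50, 300):
    print(bft_profile(n))

At 15 replicas the per-round traffic is a few hundred messages; at 300 it is already around 180,000 messages per agreement, before batching or WAN latency. That is the arithmetic behind my worry that we end up with a small, consortium-managed validator pool.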

 

Again… I really like the architecture for addressing the needs of small groups of vetted organizations with high transaction-rate requirements, which is the dominant use in FSI. I look forward to working through validation of that claim and to adapting this (or another architecture, if appropriate) to a broader set of requirements.

 

--mic

 

C. Mic Bowman

Principal Engineer

Intel Corporation

 

 

From: hyperledger-tsc-bounces@... [mailto:hyperledger-tsc-bounces@...] On Behalf Of Christopher B Ferris via hyperledger-tsc
Sent: Thursday, February 18, 2016 8:32 AM
To: hyperledger-tsc@...
Subject: [Hyperledger Project TSC] IBM whitepaper

 

All, as we discussed on the TSC call, here's a link to the IBM blockchain whitepaper that we published with the open blockchain repos earlier this week. We welcome any and all feedback.

 

 

Cheers,

Christopher Ferris
IBM Distinguished Engineer, CTO Open Technology
IBM Cloud, Open Technologies
email: chrisfer@...
twitter: @christo4ferris
blog: https://developer.ibm.com/opentech/author/chrisfer/
phone: +1 508 667 0402

 
