First time impressions
Thanks for the feedback. Sorry for the slow response.
I would suggest taking a look at the Sawtooth documentation: https://sawtooth.hyperledger.org/docs/core/releases/latest/ This will help with understanding the address format and initializing the blockchain. Currently, Grid smart contracts can be run on Hyperledger Sawtooth using the Sawtooth Sabre smart contract engine: https://sawtooth.hyperledger.org/docs/sabre/nightly/master/.
I would also suggest asking questions in https://chat.hyperledger.org/channel/grid so more people can weigh in.
On Tue, Nov 5, 2019 at 3:44 AM Aaron Digulla <digulla@...> wrote:
I'm looking into blockchain for a prototype project. I know a bit about blockchain from high level articles but haven't really used it before. Below, I'm collecting my thoughts, impressions and suggestions from reading your documentation for the first time.
Introduction: How about adding a link to gs1.org for the text "GS1 product definitions"?
Re "Addressing" in "Grid Schema Transaction Family Specification"
For me, it's unclear where those hash collisions can happen. There doesn't seem to be a query API. As I understand it, the objects are just serialized and concatenated as blocks in the blockchain. The type "SchemaList" is also mentioned exactly once in the documentation (or maybe the search is broken).
The type is used 6 times in the code on GitHub, but there are no unit tests and no place where a list is created and put into the blockchain. I find this part very confusing.
Later, schemas are looked up by name instead of by address. What use is the address, then?
In a more general context, how do I initialize a Grid blockchain? I'm missing a flow chart, graph, or examples which explain how to implement certain scenarios, like catching / transporting / selling fish or ordering (replacement) parts from a supplier. Something that helps put all the parts into a realistic context.
In https://grid.hyperledger.org/docs/grid/nightly/master/transaction_family_specifications/pike_transaction_family.html is this sentence:
Agents whose addresses collide are stored in an agent list.
When I read that the first time, I had missed that hashes are computed for some objects, so this was very confusing. How about adding a link here to "Addressing": https://grid.hyperledger.org/docs/grid/nightly/master/transaction_family_specifications/grid_schema_family_specification.html?highlight=address#addressing
Also, the data structure isn't used anywhere else in the documentation. I'd expect a query API where I can ask for a list of all agents of an organization but there are no query APIs at all.
How can I find out whether an agent is allowed to do a certain operation?
If that's out of scope for the project, then how about pointing to another project which solves this?
Under this scheme, 16^2 * (16^4 - 1) = 16776960 entries
That formula is confusing to me. I understand that "16" is the number of states per hex digit and "4" is the number of digits, but I've never seen this approach in any other text. How about
256 * (2^16 - 1) = 16776960
This reuses the existing numbers in the text and the usual term (2^16) to represent 16 bits of precision (64 K).
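To double-check that both forms really give the same count, here's a quick sanity check (the numbers come straight from the spec text above):

```python
# The spec's form: 16^2 two-hex-digit prefixes times (16^4 - 1) non-zero pages.
spec_form = 16**2 * (16**4 - 1)

# The rewritten form: 256 prefixes times the usual 16-bit range (2^16 - 1).
rewritten = 256 * (2**16 - 1)

# Both should match the 16776960 figure quoted in the documentation.
assert spec_form == rewritten == 16776960
print(spec_form)  # 16776960
```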
Which made me think: what happens when I append "xxxx"? Is there a check in the code which makes sure the suffix is a hex number?
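For illustration, this is the kind of check I would expect somewhere in the code (a hypothetical sketch; I haven't found anything like it, and the function name is my own invention):

```python
import string

def is_valid_page_suffix(suffix: str) -> bool:
    # A page suffix should be exactly four hex digits, e.g. "0001".
    # Anything else ("xxxx", "00001", "") should be rejected up front.
    return len(suffix) == 4 and all(c in string.hexdigits for c in suffix)

assert is_valid_page_suffix("0001")       # valid page address suffix
assert not is_valid_page_suffix("xxxx")   # not hex: should be rejected
assert not is_valid_page_suffix("00001")  # wrong length: should be rejected
```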
Also, what's the point? Property changes are appended to the blockchain just like any other data. Why do I have to give these change "pages" an address at all? I can read all of them into a list grouped by record_id (which, again, isn't the address). That can return any number of records, which I must be able to handle anyway; there isn't really an upper limit to the number of changes I can get. I also don't see why this should be a ring buffer.
Example: fish. A fish is caught. Then we monitor the temperature of the ice chamber on the factory ship, since the fish will stay in there for weeks, maybe months. If I added one measurement per second, the "buffer" would last 16776960/3600/24 ≈ 194 days. Enough for the whole trip.
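Sanity-checking that figure:

```python
PAGES = 16776960           # page addresses available per the spec's formula
SECONDS_PER_DAY = 3600 * 24

# At one measurement per second, how many days until the pages run out?
days = PAGES / SECONDS_PER_DAY
assert int(days) == 194    # roughly 194 days, as claimed above
```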
But the temperature doesn't change very fast, so "once per second" doesn't make sense. Also, I'm not really interested in every data point: the average and highest temperature of the whole voyage are enough to determine that the fish is still good. I understand that there is an issue of trust, so having the raw data in the blockchain helps with that.
That leaves the issue of storage: how much memory would the temperature sensor need to buffer all these data points? I doubt that any sensor in the chain would buffer all the data. Instead, they're going to commit small updates every few minutes, maybe once per hour. So the Transaction Processor should never have to buffer more than a few pages.
Also unclear to me: after committing a change, does the next update start with namespace 0001? What if the next update comes from a different Transaction Processor?
What happens when a page has only 10 values? Can I "edit" it and post an update with the same "address" and 11 values? Or "save" it once more with 10 entries, but some with different values? Or do I have to start a new page?
Lastly, I really don't like the idea that page 0000 is special. I would prefer two namespaces: one for master properties (which rarely, if ever, change) and one for time-series properties (which change often). Each page should use the hash of the previous page to chain them, so there is no upper limit.
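To sketch what I mean (hypothetical; `next_page_address` and the 4-character suffix length are my own assumptions, not anything from the spec):

```python
import hashlib

def next_page_address(prev_page_bytes: bytes) -> str:
    # Derive the next page's address suffix from the hash of the
    # previous page's serialized content. Pages then form a chain
    # with no fixed upper limit, instead of a 0000-wrapping ring buffer.
    return hashlib.sha512(prev_page_bytes).hexdigest()[:4]

first_page = b"serialized content of the first page"
suffix = next_page_address(first_page)
assert len(suffix) == 4  # still fits the existing 4-hex-digit address slot
```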
Any transaction is invalid if its timestamp is greater than the validator’s system time.
Are the validator and the transaction submitter always going to share the same clock hardware? If not, then there will be differences. If the system clock is a few seconds off on either side, this will cause spurious failures. The same will happen with configuration mistakes, time zone / DST problems, etc. I'm especially wary of IoT devices like temperature sensors.
The documentation should definitely give pointers on how to prevent this. One approach could be a shared clock service that everyone has to get their timestamps from, or NTP with close monitoring, or an "offset" approach where the IoT device just counts ticks and adds them to a base timestamp from an initial record.
Alternatively, the system could just accept a time which isn't more than 1 minute in the "future" (to handle small glitches).
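As a sketch of that alternative (hypothetical; the function, the 60-second tolerance, and the names are all my own, not from the spec):

```python
import time

MAX_CLOCK_SKEW_SECONDS = 60  # tolerance for small clock glitches

def timestamp_acceptable(tx_timestamp: float, validator_now: float) -> bool:
    # Accept timestamps up to one minute ahead of the validator's clock,
    # instead of rejecting anything greater than the system time.
    return tx_timestamp <= validator_now + MAX_CLOCK_SKEW_SECONDS

now = time.time()
assert timestamp_acceptable(now + 30, now)       # 30 s ahead: accepted
assert not timestamp_acceptable(now + 120, now)  # 2 min ahead: rejected
```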
My fear is that someone "fixes" the problem by just putting 1, or some other fixed date in the past, into the timestamp.
As with agents above: How can I find out the current state of a record or proposal?
Aaron "Optimizer" Digulla a.k.a. Philmann Dark
"It's not the universe that's limited, it's our imagination.
Follow me and I'll show you something beyond the limits."