Making Dynamic Consensus More Dynamic

By | Blog, Hyperledger Sawtooth

In October 2017, the Hyperledger Sawtooth team started to implement a new consensus algorithm for Hyperledger Sawtooth. We wanted a voting-based algorithm with finality, which is very different from the Proof of Elapsed Time (PoET) consensus algorithm that has been closely associated with Hyperledger Sawtooth since its start. This project presented a number of challenges and opportunities.

The greatest challenge in implementing this new consensus algorithm with Sawtooth was in breaking apart an architecture that has been heavily influenced by a lottery-based consensus algorithm with forking. A lot of refactoring and architectural work went into making both voting-based and lottery-based algorithms work well with Sawtooth.

However, the opportunities that we discovered from this effort made overcoming these challenges more than worth it. We designed a new consensus API that simplifies the process of adding new consensus algorithms while continuing to support the existing PoET and Dev mode consensus algorithms. We completed the first prototype validator with consensus API support in July 2018. Since then, we have been able to implement two new voting-based consensus algorithms for the Hyperledger Sawtooth platform: Raft and PBFT.

We are pleased to announce that the Sawtooth 1.1 release supports the new consensus API. This release also includes consensus SDKs to make it easier to implement new consensus algorithms.

Consensus as a Process

The new consensus architecture moves consensus functionality to a separate process, called a consensus engine, and provides an API for each consensus engine to interact with the validator.

Moving the consensus functionality to a separate process allows consensus engines to be implemented in a variety of languages. Currently, SDKs are available for Python and Rust and have been used to create the consensus engines for PoET, PBFT, and Raft.

Multi-language support is important beyond providing a choice for implementing a new consensus engine. This support makes it much easier to reuse existing implementations of consensus algorithms. For example, the Sawtooth Raft consensus engine is built on the pingcap/raft-rs library. We were able to easily integrate this well-regarded Raft library, which is itself a port from the widely-used etcd Raft library.

As SDKs for additional languages are built on top of the consensus API, it will be possible to add more and more consensus algorithms into Hyperledger Sawtooth. For example, a consensus SDK for Go would bring existing implementations such as Hyperledger Labs’ MinBFT one step closer to being compatible with Sawtooth.

Driving the Blockchain with a Consensus Engine

The consensus API is centered around a new consensus engine abstraction that handles consensus-specific functionality. A consensus engine is a separate process that interacts with the validator through the consensus API using protobuf messages and ZMQ.

The role of a consensus engine is to advance the blockchain by creating new blocks and deciding which blocks should be committed. Specifically, a consensus engine must accomplish the following tasks:

  • Determine consensus-related messages to send to peers
  • Send commands to progress the blockchain
  • React to updates from the validator
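The three responsibilities above can be sketched, very loosely, as an event loop. This is an illustrative sketch only; names like `new_block` and `commit_block` are assumptions, not the actual consensus API:

```python
from queue import Queue, Empty

class ToyEngine:
    """Hypothetical consensus engine: reacts to validator updates and
    issues commands back through a service channel."""

    def __init__(self, service):
        self.service = service   # command channel back to the validator
        self.updates = Queue()   # updates pushed in by the validator

    def run(self, max_iterations=10):
        handled = []
        for _ in range(max_iterations):
            try:
                kind, payload = self.updates.get_nowait()
            except Empty:
                break
            if kind == "new_block":
                # React to an update: decide whether the block should commit.
                self.service.append(("commit_block", payload))
            elif kind == "peer_message":
                # Determine a consensus message to send to peers.
                self.service.append(("broadcast", payload))
            handled.append(kind)
        return handled
```

A real engine would block on the update channel and talk to the validator over ZMQ, but the shape of the loop is the same: consume updates, issue commands.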

The validator continues to handle the mechanics of validation, communication, and storage for blocks, batches, and transactions. The validator must perform these tasks:

  • Validate the integrity of blocks, batches, and transactions
  • Validate the signatures for blocks, batches, transactions, and messages
  • Gossip blocks, batches, and transactions
  • Handle the mechanics of block creation and storage
  • Manage the chain head directly

New Consensus API and SDKs

The validator exposes the API for consensus engines as a set of protobuf messages sent over a network interface. This API is split into two types of interactions:

  • Service: A pair of (request, response) messages that allow a consensus engine to send commands to the validator and receive information back. For example, a consensus engine can instruct the validator to commit a block or request an on-chain setting from a specific block. Services are synchronous and on-demand.
  • Updates: Information that the validator sends to a consensus engine, such as the arrival of a new block or receipt of a new consensus message from a peer. Updates are sent asynchronously as they occur.

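The difference between the two interaction styles can be illustrated with a small mock. The method and setting names below are assumptions for illustration, not the actual protobuf message definitions:

```python
class MockValidatorService:
    """Synchronous, on-demand: every request is paired with a response."""

    def __init__(self, settings):
        self._settings = settings

    def get_setting(self, block_id, key):
        # (request) -> (response): ask for an on-chain setting as of a block.
        return {"block_id": block_id, "key": key,
                "value": self._settings.get(key)}

def update_stream(events):
    """Asynchronous: the validator pushes updates as they occur;
    the engine consumes them in order."""
    for event in events:
        yield event
```

The key design point is that a service call never arrives unprompted, while updates can arrive at any time and must be handled whenever they appear.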

Although you could use the API directly to implement a new consensus engine, the recommended interface is a consensus SDK. The SDK provides several useful classes that make it easier to implement a consensus engine. Sawtooth currently provides consensus SDKs for Python and Rust. We have used these SDKs to create the consensus engines for the PoET engine (Python), PBFT engine (Rust), and Raft engine (Rust).

These SDKs have a consistent design with an abstract Engine class, an engine Driver, and a validator Service. The abstract Engine class provides a clear starting point for new consensus engine implementations. If you plan to write your own consensus SDK, we recommend conforming to this design.
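The Engine/Driver/Service split might look roughly like the following sketch. Method names here are simplified assumptions rather than the exact SDK signatures:

```python
from abc import ABC, abstractmethod

class Engine(ABC):
    """Abstract starting point for a consensus engine implementation."""

    @abstractmethod
    def start(self, updates, service, startup_state):
        """Main entry point: consume updates, issue service commands."""

    @abstractmethod
    def name(self):
        """Short identifier such as 'pbft' or 'raft'."""

    @abstractmethod
    def version(self):
        """Engine version string."""

class Driver:
    """Connects an Engine to the validator and feeds it updates."""

    def __init__(self, engine):
        self.engine = engine

    def start_engine(self, updates, service, startup_state=None):
        return self.engine.start(updates, service, startup_state)
```

A new consensus algorithm then only needs to subclass `Engine`; the driver and service plumbing stay the same across engines.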

Try it Today!

One of the most important decisions for a distributed ledger application is the choice of consensus. By opening up this interface, we hope that each application built on Hyperledger Sawtooth can select the consensus algorithm that suits it best.

Hyperledger Sawtooth Blockchain Security (Part Three)

By | Blog, Hyperledger Sawtooth

This is the conclusion of my three-part series on Hyperledger Sawtooth Security. I started with Sawtooth consensus algorithms in part one, then continued with Sawtooth node and transaction processor security in part two. Here I will conclude by discussing Sawtooth application security and Sawtooth network security.

Client Application Security

The client part of a Sawtooth application is written by the application developer. The Sawtooth client communicates with a Sawtooth node through REST API requests, which include signed transactions and batches. Signing is performed with a private key, so key management and security are important. With Bitcoin, for example, poor key management has resulted in stolen bitcoins and a “graveyard of bitcoins” that are inaccessible forever. Key management is the responsibility of the client application, as keys are not managed by Sawtooth software.

A keystore is where you securely store your keys. The public key for a keypair, used for signature verification, can be and should be distributed to anyone. The private key portion, used for signing, must be safeguarded from access by others. Here are some keystore methods, ordered from low to high security:

  • The minimum security measure is to restrict access to the private key: restrict access to the machine holding the key, restrict read access on the private key file to the signer, or (better yet) both
  • Better protection comes from a software-encrypted keystore, that is, a private keystore unlocked with a PIN
  • The best protection is a Hardware Security Module (HSM) keystore or a network-accessible key manager, accessed using the Key Management Interoperability Protocol (KMIP)
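Even the minimum option above deserves care in how the key file is created. A minimal sketch, assuming a POSIX system, that writes a private key file readable only by its owner (the path and hex encoding are illustrative):

```python
import os
import stat

def write_private_key(path, key_hex):
    """Write a private key to a file with owner-only (0600) permissions.

    The file is created with the restrictive mode *before* the secret is
    written, so there is no window where other users could read it.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(key_hex)
    # Return the effective permission bits for verification.
    return oct(stat.S_IMODE(os.stat(path).st_mode))
```

A software-encrypted or HSM-backed keystore would replace the plain file entirely; this sketch only covers the baseline file-permission case.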

Client Authentication

A Sawtooth client may take external user input, in which case it is important to authenticate that the user is who they claim to be. Authentication methods are usually categorized, from low to high security, as follows:

  • Single-factor Authentication (SFA). SFA is something you know, such as a PIN, password, passphrase, or one-time password (OTP). The main disadvantage of SFA is that a single factor can be weak or hard to remember
  • Two-factor Authentication (2FA). 2FA is SFA plus something you have, such as a security key like a U2F token (e.g., YubiKey). The main disadvantage of 2FA is that the second factor can be lost or stolen
  • Three-factor Authentication (3FA). 3FA is SFA and 2FA plus something you are (biometrics), such as a fingerprint, face recognition, or a retina scan. The main disadvantages of 3FA are that biometrics can be forged and cannot be easily changed

With 2FA and 3FA, the idea is defense-in-depth (i.e., multiple hurdles to authenticate).
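As a concrete example of a second factor, here is a minimal one-time-password implementation following HOTP (RFC 4226) and TOTP (RFC 6238), the scheme behind most authenticator apps. This is a sketch of the standard algorithm, not part of Sawtooth itself:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, now=None, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    counter = int((time.time() if now is None else now) // step)
    return hotp(secret, counter)
```

In a 2FA flow, the server verifies such a code in addition to the password, so an attacker needs both the secret device and the knowledge factor.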

Network Security

Blockchains are subject to Distributed Denial of Service (DDoS) attacks: attacks that attempt to overload blockchain nodes by flooding the targeted nodes with bogus messages. Classical public, unpermissioned blockchain networks deter DDoS attacks because transactions require spending digital currency (such as Bitcoin), making attacks costly. Also, public blockchain networks are highly distributed—with thousands of nodes—making a DDoS attack on the entire network impractical.

Private or permissioned blockchains, such as Sawtooth, are not designed to run on a public network. As such, they do not require digital currency and “mining.”

A Sawtooth network can and should be protected against DDoS attacks as follows:

  • Back pressure, a flow-control technique to reject unusually frequent client submissions. If the validator is overwhelmed, it will stop accepting new batches until it can handle more work. The number of batches the validator can accept is based on a multiplier (currently two) of a rolling average of the number of published batches.
  • Sawtooth communication uses the Zero Message Queue (ZMQ or 0MQ) message library. Sawtooth optionally enables encryption with ZMQ when the network_public_key and network_private_key settings are defined in validator.toml. For production, generate your own key pair instead of using a predefined key that may be present.
  • REST API input is validated to avoid buffer corruption or overflow attacks.
  • TCP port 4004, used for communication between internal validator node components, should be closed to outside access in any firewall configuration.
  • TCP port 5050, used to communicate between the validator node and the consensus engine, should be closed to outside access in any firewall configuration.
  • TCP port 8008, used for the REST API, should be closed to outside access in a firewall configuration, provided that all application clients accessing the REST API run on the local host.
  • If you use the Seth TP (for WASM smart contracts), TCP port 3030, used for Seth RPC, should be closed to outside access in a firewall configuration, provided that all RPC requests come from the local host.
  • TCP port 8800, used to communicate between validator nodes, must be open to outside access in any firewall configuration.
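The back-pressure rule in the first bullet above can be sketched as a small policy object. The window size and class name are assumptions for illustration; only the "multiplier of a rolling average" idea comes from the validator:

```python
from collections import deque

class BatchBackPressure:
    """Accept new batches only while the pending queue is below a
    multiplier (2x) of a rolling average of recently published batches."""

    def __init__(self, multiplier=2, window=5, minimum=10):
        self.multiplier = multiplier
        self.minimum = minimum                 # floor so an idle node still accepts work
        self.published = deque(maxlen=window)  # rolling window of published counts

    def record_published(self, batch_count):
        self.published.append(batch_count)

    def limit(self):
        if not self.published:
            return self.minimum
        avg = sum(self.published) / len(self.published)
        return max(self.minimum, int(avg * self.multiplier))

    def accepts(self, pending_batches):
        return pending_batches < self.limit()
```

The effect is that a flood of client submissions is rejected once the backlog outpaces what the validator has recently been able to publish.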

Sawtooth validator nodes should be deployed on a VPN or other private network to prevent any outside access to Sawtooth TCP ports.
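As one possible realization of the port guidance above, here is an illustrative `ufw` rule set for an Ubuntu host. This is a sketch for a single-host deployment, not an official Sawtooth configuration; adapt it to your topology. Note that `ufw` does not filter loopback traffic, so local clients can still reach the REST API (8008) and Seth RPC (3030):

```shell
# Deny everything inbound by default; this closes 4004, 5050, 8008,
# and 3030 to outside access in one stroke.
ufw default deny incoming
ufw default allow outgoing

# Validator-to-validator traffic must stay open.
ufw allow 8800/tcp

ufw enable
```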

Basically, best practices dictate closing as many network ports as possible, encrypting network communications, and deploying in a protected network environment (such as a VPN).

Further Information

Announcing Hyperledger Sawtooth 1.1

By | Blog, Hyperledger Sawtooth

It is with great excitement that we announce the release of Sawtooth version 1.1. Earlier this year we released Sawtooth 1.0, marking the production-ready status of the platform. Since then the community has been hard at work adding new features, improving the privacy and performance of the platform, and growing the ecosystem.

The Sawtooth development team has been focused on two major new features for the Sawtooth 1.1 release: an improved consensus interface and support for WebAssembly smart contracts. For a full list of new features and improvements, see the Sawtooth 1.1 Release Notes.

Improved consensus interface and new consensus options

While Sawtooth has always enabled ‘pluggable’ consensus and multiple consensus algorithms, recent experiences indicated that the existing consensus interface could be improved. Sawtooth has always aspired to be a modular platform that would enable lean experimentation and rapid adoption of new technologies, in particular, with regards to consensus. After analyzing a number of consensus algorithms that are available today, both Nakamoto (PoW/PoET) and classical (Raft/PBFT), the team decided to re-architect the consensus interface to improve the ease of integration. As a result of this new interface, the team has been able to port the existing Sawtooth consensus options, as well as add two new classical consensus options. Below is the state of these consensus options today:

    • Developer Mode (stable)
    • PoET-Simulator (Crash Fault Tolerant) (stable)
    • PoET-SGX (under development)
    • Raft (alpha)
    • PBFT (under development)

If you are interested in learning more about the new consensus interface, or writing your own, please see the detailed documentation.

Support for WebAssembly smart contracts (Sawtooth Sabre)

Sawtooth Sabre is a new smart contract engine for Sawtooth that enables the execution of WebAssembly-based smart contracts. WebAssembly (WASM) is a new web standard developed at the W3C with participation from major corporations like Apple, Google, and Microsoft. The Sawtooth Sabre project leverages an existing open source WASM interpreter from the broader blockchain community. This on-chain interpreter enables developers to write their code in a variety of languages, compile it down to WebAssembly, and then deploy it directly to the Sawtooth blockchain.

In addition to new feature development, the Sawtooth developer team has continued research and development on improving the privacy and performance of the Sawtooth platform.


On the privacy front, a new Hyperledger Lab called ‘Private Data Objects (PDO)’ has been created. PDO enables smart contracts to execute off-chain with confidentiality and integrity through the use of trusted execution environments. For more information, take a look at this video or read the paper. Private data objects are just one way of addressing blockchain confidentiality, but expect to see more techniques available to Sawtooth over the coming months.


On the performance front, much of the effort has been spent porting core Sawtooth components from Python to Rust. While Python was a great language to start with, and enabled the team to rapidly iterate and define the appropriate modularity in the architecture, it is not the most performant language. The 1.0 release stabilized many of the Sawtooth APIs, and as we began tuning the system, we identified bottlenecks arising from the design of the Python programming language. The speed and type safety of the Rust programming language made it a natural fit for the evolution of Sawtooth. As of today, roughly 40% of the Sawtooth validator components have been ported to Rust, a number that we expect will continue to increase over time.

Finally, in addition to adding new features and improving the robustness of the Sawtooth platform, we have also seen an explosion of activity in the community, with dozens of new developers and a variety of tools and applications being openly built on top of the Sawtooth infrastructure. Notable new projects in the Sawtooth ecosystem include:


  • Sawtooth Supply Chain – A platform focused on supply chain traceability with contributors from Bitwise IO and Cargill.
  • Sawtooth Next-Directory – An application focused on role-based access control with contributors from T-Mobile.


  • Truffle integration with Sawtooth-Seth – A new integration that allows you to deploy Ethereum smart contracts to Sawtooth using the leading Ethereum development tool, Truffle. Built in collaboration with the Truffle team.
  • Caliper support for Sawtooth – Benchmark Sawtooth in a variety of configurations with Hyperledger Caliper.
  • Sawtooth Explorer – A blockchain explorer built for Sawtooth by the team at PokitDok.
  • Grafana monitoring – A set of tools for data collection and visualization for live Sawtooth deployments.

Part of a Grafana dashboard for a Sawtooth Testnet running Raft

The Sawtooth ecosystem and functionality are rapidly expanding, which wouldn’t be possible without the community behind it. I’d like to thank all of the developers who have put in time building tools and applications or providing support, including, but not limited to:

Adam Gering, Adam Ludvik, Adam Parker, Al Hulaton, Amol Kulkarni, Andrea Gunderson, Andrew Backer, Andrew Donald Kennedy, Anne Chenette, Arthur Greef, Ashish Kumar Mishra, Benoit Razet, Boyd Johnson, Bridger Herman, Chris Spanton, Dan Anderson, Dan Middleton, Darian Plumb, Eloá Franca Verona, Gini Harrison, Griffin Howlett, James Mitchell, Joel Dudley, Jonathan Langlois, Kelly Olson, Keith Bloomfield, Kenneth Koski, Kevin O’Donnell, Kevin Solorio, Logan Seeley, Manoj Gopalakrishnan, Michael Nguyen, Mike Zaccardo, Nick Drozd, Pankaj Goyal, PGobz, Patrick BUI, Peter Schwarz, Rajeev Ranjan, Richard Berg, Ry Jones, Ryan Banks, Ryan Beck-Buysse, Serge Koba, Shawn T. Amundson, Sutrannu, Tom Barnes, Tomislav Markovski, Yunhang Chen, Zac Delventhal, devsatishm, feihujiang, joewright, kidrecursive, mithunshashidhara, and ruffsl.

If you’d like to join the community or learn more, you can find more information here:

Chat: #Sawtooth in Hyperledger RocketChat

Docs: Sawtooth 1.1 Documentation

Code: Sawtooth-core Github

Website: Hyperledger Sawtooth Homepage

Thanks for reading, and we look forward to more posts detailing new Sawtooth 1.1 features and improvements. We encourage developers to try these new features out and give us feedback!


Welcome Hyperledger Ursa!

By | Blog, Hyperledger Ursa

Hyperledger Ursa is the latest project to be accepted by the TSC! It is a modular, flexible cryptography library that is intended for—but not limited to—use by other projects in Hyperledger. Ursa’s objective is to make it much safer and easier for our distributed ledger projects to use not only existing, time-tested, and trusted cryptographic libraries, but also new cryptographic library implementations as they are developed.

Ursa aims to include things like a comprehensive library of modular signatures and symmetric-key primitives built on top of existing implementations, so blockchain developers can choose and modify their cryptographic schemes with a simple configuration file change. Ursa will also have implementations of newer, fancier cryptography, including things like pairing-based signatures, threshold signatures, and aggregate signatures, and also zero-knowledge primitives like SNARKs.

Ursa will be written mostly in Rust, but will have interfaces in all of the different languages that are commonly used throughout Hyperledger.

Why Ursa?

As Hyperledger has matured, the individual projects within Hyperledger have started to find a need for sophisticated cryptographic implementations. Rather than have each project implement its own cryptographic protocols, it is much better to collaborate on a shared library. There are many reasons to do this, including the following:

  1. Avoiding duplication: Crypto implementations are notoriously difficult to get correct (particularly when side channels are taken into account) and often require a lot of work in order to achieve a high level of security.  The library allows projects to share crypto implementations, avoiding unnecessary duplication and extra work.
  2. Security: Having most (or all) of the crypto code in a single location substantially simplifies the security analysis of the crypto portion of Hyperledger. In addition, the lack of duplication means maintenance is easier (and thus, hopefully, security bugs are less numerous). The availability of easy-to-use, secure crypto implementations also makes it less likely that less experienced developers will create their own, less secure implementations.
  3. Expert Review: The ability to enforce expert review of all cryptographic code should increase security as well. Having all of our cryptographic code in a single location makes it easier to concentrate the project’s cryptographic expertise and ensures that code will be well reviewed, thus decreasing the likelihood of dangerous security bugs.
  4. Cross-platform interoperability: If two projects use the same crypto libraries, it simplifies (substantially in some cases) cross-platform interoperability, since cryptographic verification involves the same protocols on both sides.
  5. Modularity: This could be the first common component/module and a step towards modular DLT platforms, which share common components.   While we have already outlined most of the advantages this modularity brings in terms of actual functionality, a successful crypto library encourages and pushes forward more modular activities.
  6. New Projects: It is easier for new projects to get off the ground if they have easy access to well-implemented, modular cryptographic abstractions.

Who Is Involved in Ursa?

On the more practical side, Ursa currently includes developers who work on the security aspects of Hyperledger Indy, Sawtooth, and Fabric. In addition, the Ursa project includes several cryptographers with an academic background in theoretical cryptography to ensure that all cryptographic algorithms meet the desired levels of security.

Our goal in creating Ursa is to combine the efforts of all the security and cryptography experts in the Hyperledger community and move all of the projects forward.

Features and Plans

Currently Ursa has two distinct modules: a library for modular, flexible, and standardized basic cryptographic algorithms, and zmix, a library for more exotic cryptography, including so-called “smart” signatures and zero-knowledge primitives.

Our first library is our “base crypto” library. Right now we are focused on our shared modular signature library, but we plan to extend this to allow easy modularization of all commonly used cryptographic primitives in Minicrypt. This work in progress includes implementations of several different signature schemes behind a common API, allowing blockchain builders to change signature schemes almost on the fly, or to use and support multiple signature schemes easily. Exact implementations and APIs have not been finalized, but they are in progress.

We note that this library does not contain raw crypto implementations; its contents are stable and generally standardized wrappers around code from existing, commonly used cryptography libraries such as the Apache Milagro Crypto Library (AMCL). The novelty here is the modularization and the API, which enable blockchain platforms to easily use a wide variety of interchangeable cryptographic algorithms without having to understand or interact with the underlying mathematics.
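The kind of configuration-driven scheme selection described above might look like the following sketch. The registry, names, and the HMAC-based scheme are all illustrative stand-ins (a real library would register asymmetric signature schemes such as Ed25519 or BLS), not anything Ursa actually provides:

```python
import hashlib
import hmac
import os

SCHEMES = {}

def register(name):
    """Register a scheme class under a name usable from a config file."""
    def wrap(cls):
        SCHEMES[name] = cls
        return cls
    return wrap

@register("hmac-sha256")   # placeholder scheme for illustration only
class HmacScheme:
    def keygen(self):
        return os.urandom(32)

    def sign(self, key, message):
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(self, key, message, signature):
        return hmac.compare_digest(self.sign(key, message), signature)

def scheme_from_config(config):
    """Pick the signing scheme named in a configuration mapping,
    e.g. {"signature_scheme": "hmac-sha256"}."""
    return SCHEMES[config["signature_scheme"]]()
```

The point of the design is that callers only ever touch the common `keygen`/`sign`/`verify` surface, so swapping the algorithm is a one-line configuration change.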

In the future, we expect other wrappings and modular code to go in this library. For instance, Indy makes use of aggregate signatures, a feature which the other platforms would also like available to them. There are also a variety of hash algorithms which provide different performance characteristics or support different signature schemes. Selecting vetted implementations and providing a common interface helps the Hyperledger community manage a growing crypto feature set in a responsible manner.

Our second initial subproject is zmix, which offers a generic way to create zero knowledge proofs that prove statements about multiple cryptographic building blocks, including signatures, commitments, and verifiable encryption. The goal of zmix is to provide a single flexible and secure implementation to construct such zero knowledge proofs. Zmix consists of C-callable code but there are also convenience wrappers for various programming languages.

Getting involved

If you’re interested in learning more about, using, or contributing to Ursa, please check out the following:

We welcome interest even from those who aren’t working with Hyperledger projects, so feel free to join us if you like!

All Are Welcome Here

By | Blog, Hyperledger Burrow, Hyperledger Fabric, Hyperledger Indy, Hyperledger Iroha, Hyperledger Sawtooth

A Minneapolis coffee shop that has fueled or at least caffeinated a lot of Hyperledger commits.

One of the first things people learn when coming to Hyperledger is that Hyperledger isn’t, like its name may imply, a ledger. It is a collection of blockchain technology projects. When we started out, it was clear almost immediately that a single project could not satisfy the broad range of uses nor explore enough creative and useful approaches to fit those needs. Having a portfolio of projects, though, enables us to have the variety of ideas and contributors to become a strong open source community. Back in January of 2016, Sawtooth and Fabric were both on the horizon, followed shortly by Iroha, but we wouldn’t have predicted that we would have Hyperledger Burrow and Hyperledger Indy, two projects that bear no resemblance to each other. Burrow is a permissioned Ethereum-based platform and Indy is a distributed identity ledger. Burrow is written in Go, and Indy was created in Python and is porting to Rust.

Both of these platforms are interesting in their own right, but Hyperledger is even more interesting for the combination of these projects with the others. Both Sawtooth and Fabric have already integrated with Burrow’s EVM. Now Hyperledger has a set of offerings that can simultaneously satisfy diverse requirements for smart contract language, permissioning, and consensus. Likewise, Sawtooth and Indy have been working together at our last several hackfests, and the results may unlock new use cases and deployment architectures for distributed identity. So it’s not that our multiplicity of projects has given us strength through numbers, but rather strength through diversity.

Hyperledger Hackfest – December 2017 at The Underground Lisboa

The hackfests we mentioned are one of the rare times that we get together face to face. Most of our collaboration happens over mailing lists, chat, and pull requests. When we do get together, though, it’s always in a new city with new faces. One of our most recent projects was hatched inside one of those buses. It wasn’t the most ergonomic meeting I’ve ever had, but there was room for everyone on that bus.

Hyperledger Hackfest in Chicago

Our hackfest in Chicago was in much more conventional surroundings (still a very cool shared creative space, with lots of lab equipment and benches out of view on the other side of the wall to the right). Looking back at this photo is fun for me. I can see a lot of separate conversations happening at each table: people sharing different ideas, helping ramp up new contributors, and advancing new concepts with existing contributors. I can see a lot of similarity but also a little variety. It’s a busy room, but there are still open chairs and room for more variety.

Our next hackfest won’t be until March 2019 (Hyperledger is hosting Hyperledger Global Forum in December in Basel though). The March hackfest will be somewhere in Asia – location to be settled soon. The dates and locations of the other 2019 hackfests aren’t set yet. I don’t know where they will be specifically, but I do know that there will be a seat available and you will be welcome there.

These face to face meetings really are more the exception than the rule at Hyperledger. There are now more than 780 contributors spread all across the globe, 165 of whom joined just in the last few months. That means that every day we have a new person contributing to Hyperledger. Most of our engagement is through the development process. People contribute bug fixes, write new documentation, develop new features, file bugs, and more. If you’ve never contributed open source code before, getting started might be intimidating. We don’t want it to be, though. There are a number of resources to help you get started. You can watch this quick video from Community Architect, Tracy Kuhrt. There’s documentation for each project, mailing lists, a chat server, working groups, and some of the projects even host weekly phone calls to help new developers get engaged. Everyone in Hyperledger abides by a Code of Conduct, so you can feel comfortable knowing that when you join any of those forums you will be treated respectfully. Anyone who wants to get involved can, regardless of “physical appearance, race, ethnic origin, genetic differences, national or social origin, name, religion, gender, sexual orientation, family or health situation, pregnancy, disability, age, education, wealth, domicile, political view, morals, employment, or union activity.” We know that to get the best ideas, best code, and best user experience, we need your involvement. Please come join our community.


As always, you can keep up with what’s new with Hyperledger on Twitter or email us with any questions:

Six Hyperledger Blockchain Projects Now in Production

By | Blog, Hyperledger Fabric, Hyperledger Indy

IT leaders have been hearing a lot about blockchain and its potential in the enterprise for the last few years, but until now they may not have heard much about how it is actually being used today for real-world business processes inside and between enterprises. So, we compiled this list of six intriguing Hyperledger blockchain initiatives that are in production today across a wide range of industries, including food supply, fine art, insurance, aviation, and accounting.

  1. Food source tracking using blockchain

Ensuring the safety and quality of a vast portion of the nation’s food supply is a huge undertaking, especially since incidents have occurred over the last several decades in which consumers have become sick or even died after eating tainted food. IBM Food Trust, powered by Hyperledger Fabric, creates unprecedented visibility and accountability in the food supply chain. It is the only network of its kind, connecting growers, processors, distributors, and retailers through a permissioned, permanent, and shared record of food system data.

The IBM Food Trust network represents the continuation of more than a year of pilot tests with major retailers and food suppliers, including Golden State Foods, McCormick and Co., Nestlé, Tyson Foods and Wal-Mart Stores Inc. These companies formed a consortium in collaboration with IBM to use its food safety blockchain in order to protect consumers and enhance trust in the food supply.

The solution provides authorized users with immediate access to actionable food supply chain data, from farm to store and ultimately the consumer. The complete history and current location of any individual food item, as well as accompanying information such as certifications, test data and temperature data, are readily available in seconds once uploaded onto the blockchain. Learn more here.

2. Blockchain for personal information control

This New Jersey-based startup’s mission is to ensure that every person on Earth has the full legal right to their own personal data as protected property. To do this, the group built its mobile #My31 app, which works with “intelligent contracts” so that personal data can be protected and treated as personal property using its Human Data Consent and Authorization Blockchain (HD-CAB). The idea is to create an environment where humans, their data, and corporations can co-exist through blockchain-backed explicit consent and authorization to use such data, giving individuals more privacy and greater control over their own information. The effort partners with the Sovrin Network, an identity network that uses Hyperledger Indy technology to bring it all together. Data isn’t stored in the app; rather, the app is designed to give users control over their data. Users can download the #My31 app for Android or iOS.

3. Blockchain for the airline industry

To help airlines improve passenger ticketing processes, NIIT Technologies developed its new Chain-m blockchain application using Hyperledger Fabric that can report on a wide range of critical information, from the number of tickets sold to fare amounts, commissions, taxes collected and more. Using a web-based interface, Chain-m adds transparency to ticketing processes, which is expected to help improve record-keeping, save money and improve security and agility in a complex business.

4. Follow the trail of Cambio Coffee with blockchain

Direct trade organic coffee seller Cambio Coffee provides a clear, traceable supply chain path for its products–from harvesting to roasting, packaging, and shipping–so customers can learn the exact details of what they are buying and drinking. To do that, the company began adding QR scan codes from ScanTrust to its coffee packaging, which, when scanned, record those details onto a Hyperledger Sawtooth blockchain network. Tying the QR codes together with the blockchain data lets coffee buyers scan the codes to see exactly where their coffee originated and how it arrived at their local store and into their grocery carts. The idea, according to Cambio Coffee, was to give its customers trust in its products and to provide transparency and traceability throughout their journey to customers. Watch the webinar here to learn more.

5. Blockchain for better enterprise operations management

China’s largest retailer offers its own JD Blockchain Open Platform to help enterprise customers streamline a wide range of operational procedures by creating, hosting and using their own blockchain applications. The platform uses Hyperledger Fabric and is an expansion of the company’s Retail-as-a-Service strategy, which offers some of its own internal initiatives to other companies as a service. The China Pacific Insurance Company is using the platform to deploy a traceable system for e-invoices, which are official receipts required in China for business. The system strengthens the security governance of e-invoices by applying unique blockchain IDs to each document, increasing efficiency and streamlining the accounting process, according to the company.

The platform allows users to create and update smart contracts on public and private enterprise clouds, while also enabling companies to streamline operational procedures such as tracking and tracing the movement of goods, charity donations, authenticity certification, property assessment, transaction settlements, digital copyrights and more.

6. Blockchain for insurance compliance data

Insurance companies are required to regularly report a significant amount of regulatory data that is subject to a wide range of compliance requirements and must be shared securely with regulators. The American Association of Insurance Services, a not-for-profit insurance advisory organization, has developed openIDL (open Insurance Data Link), which is designed to automate insurance regulatory reporting. Built on IBM Blockchain, and thus powered by Hyperledger Fabric, openIDL can help streamline regulatory and compliance requirements while improving efficiency and accuracy for both insurers and state insurance departments. openIDL is the first open blockchain platform focused on the collection and sharing of statistical data between insurance carriers and regulators, according to the group. Using this blockchain network, insurers can contribute data directly onto the secure platform, which satisfies state regulatory requirements, while historical and current data is stored on an immutable blockchain ledger. Regulators are then given permissioned access to view only the information they need to see for compliance purposes.

If you’re interested in learning about other ways Hyperledger technologies are used today to solve interesting problems, you can read through our case studies and/or visit the Blockchain Showcase.


Conducting Data with Concerto and Hyperledger Fabric

By | Blog, Hyperledger Composer, Hyperledger Fabric


Guest post: Dan Selman, Maintainer of Hyperledger Composer and Accord Project Cicero, CTO of Clause Inc.


Is the business logic in your blockchain chaincode hard to find because it is buried in code that is doing JSON serialization and data validation? Are you developing REST APIs by hand for your domain model? The good news is that there is now a better way!

In this article for developers and blockchain solution architects, I introduce the Concerto npm module and show you how you can use it from Hyperledger Fabric v1.3 Node.js chaincode.

Concerto is a lightweight, 100% JavaScript schema language and runtime. It works both in a Node.js process and in your browser. The browserified version of Concerto is about 280 KB, and the maintainers are working to make it even smaller.

Concerto has recently been modularized and moved out of Hyperledger Composer into the composer-concerto npm module, so you can use it in your Node.js applications, Fabric Node.js chaincode or even in your web browser!

Things you can do using Concerto:

  • Define an object-oriented model using a domain-specific language that is much easier to read and write than JSON/XML Schema, XMI or equivalents. The metamodel gives you “just enough” expressivity to capture real-world business models, while remaining easy to map to most runtime environments.
  • Optionally edit your models using a powerful VS Code add-on with syntax highlighting and validation
  • Create runtime instances of your model
  • Serialize your instances to JSON
  • Deserialize (and optionally validate) instances from JSON
  • Introspect the model using a powerful set of APIs
  • Convert the model to other formats including JSON Schema, XML Schema, Java, Go, Typescript, Loopback, PlantUML…
  • Generate dynamic web-forms from your data model and bind them to JSON data
  • Import models from URLs
  • Publish your reusable models to any website, including the Accord Project Open Source model repository, hosted at:

Sidebar: Why Use Models?

All software applications have a data or domain model.

Models are required to create generic tools because you need to reason about the structure of user-defined domain models. As soon as you want to implement something like an Object-Relational-Mapper or REST API browser or web-form generator, you need a data model.

The data model for your application can either be implicit (duck typing…) or explicit. If it is explicit, it can be expressed using a wide range of technology including XML Schema, JSON Schema, protobufs, NoSQL design documents, Loopback schema, Java classes, Go structs, RDBMS tables, ad-hoc JSON or YAML documents…the list is almost endless.

These different model representations make different trade-offs with respect to:

  • Integration with computation
  • Optimization of serialization/wire format
  • Cross-platform usage
  • Industry acceptance
  • Human readability and editability
  • Expressiveness of the metamodel
  • Composability and reuse

If developers define models as part of application development, they tend to favour creating Java classes, Go structs, Typescript or similar, because they want to express the model in a language they are familiar with and that integrates closely with the type system used for computation. The major downside of this approach is that it is almost impossible to then share and reuse these models outside of a single application. It also doesn’t integrate well with modern application development, where we may use different technology across the web, mobile, middle and backend tiers of an application. Out-of-sync models (or model mapping) are a huge source of anguish and bugs.

When industry-standard bodies define models, they tend to favour representations that are more cross-platform and less tied to computation, such as publishing XML Schemas. Developers tend not to like using these models because the mapping from things like XML Schema to Java classes or Go structs for use at runtime is lossy/messy/complex.

Concerto solves many of these problems by providing an Object-Oriented schema language that allows models to be moved outside of applications while mapping quite naturally to most common programming languages. We like to think of it as a “goldilocks” approach to modeling, “just enough” to cover most business use cases, with a natural mapping to most common programming languages, and with a JSON serialization.

An example

I’ve published a detailed code sample for Node.js chaincode and an HLF v1.3 client so you can start to experiment with using Concerto in HLF v1.3 Node.js chaincode:

The sample shows you how to deploy Concerto models to the blockchain and then use the models to validate instances and to serialize them to JSON for long-lived persistence.

Editing a Concerto model using the Concerto Form Generator


The sample defines a simple Concerto domain model for an employee, which is deployed to the blockchain.
Here is the definition of the Concerto model:

namespace io.clause

enum Gender {
  o MALE
}

asset Employee identified by email {
  o String email
  o String firstName
  o String lastName
  o String middleName optional
  o Gender gender
  o DateTime startDate
  o DateTime terminationDate optional
}
The sample then uses the model to validate instances and to serialize them to JSON for long-lived persistence on the blockchain.

A React form, dynamically generated from the Concerto model

First the client creates a valid instance of an employee, and the chaincode validates it and stores it on the blockchain:


{
  $class : 'io.clause.Employee',
  email : '',
  firstName: 'Bob',
  lastName: 'Concerto',
  gender: 'MALE',
  startDate :
}


When the client attempts to create an instance that is missing the required firstName field:


{
  $class : 'io.clause.Employee',
  email : '',
  lastName: 'Concerto',
  gender: 'MALE',
  startDate :
}


The instance fails validation, and the chaincode refuses to create the instance:

ValidationException: Instance missing required field firstName
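To make the failure concrete, here is a minimal Python sketch of the kind of required-field check involved. Concerto itself is JavaScript and its Serializer performs this validation; the schema dictionary below is a hypothetical stand-in for the Employee model, not the Concerto API.

```python
class ValidationException(Exception):
    """Raised when an instance does not conform to its model."""

# Hypothetical mirror of the Employee model: field name -> required?
EMPLOYEE_SCHEMA = {
    "email": True,
    "firstName": True,
    "lastName": True,
    "middleName": False,
    "gender": True,
    "startDate": True,
    "terminationDate": False,
}

def validate(instance, schema):
    """Reject any instance that omits a required field."""
    for field, required in schema.items():
        if required and field not in instance:
            raise ValidationException(
                "Instance missing required field " + field)
    return instance
```

In the real sample, the chaincode performs the equivalent check when it deserializes the submitted JSON, which is why the invalid instance never reaches the ledger.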

What’s next?

We’ve just scratched the surface of what you can do with Concerto. The metamodel is flexible enough to capture almost any business domain model.

Here are some ideas for what to do next:

  • Generate Java, Go, XML Schema or Typescript code from your Concerto models, ensuring that other parts of your application infrastructure are all in sync with your canonical Concerto model. For any Open Source models that have been published, you can download the generated code directly from the webpage.
  • Embed the Concerto Form Generator into your web application to dynamically generate web forms based on Concerto models.
  • Generate a Loopback schema from your Concerto model, and then use the Loopback framework to expose your modelled assets as a REST API.
  • Import types from the Open Source Accord Project Model Repository into your models, or publish your models to the Model Repository, or to any HTTP(S) site, for import and reuse by others.
  • You can even write Accord Project legal contracts and type-safe Ergo logic over the types you’ve modelled. That’s an article for another day, however!

Get Involved!

If you are using Concerto already or would like to get involved with improving it further, please don’t hesitate to get in touch. In the true spirit of Open Source we welcome all contributions.


Blockchain as the Next Artificial Intelligence Enabler

By | Blog

Guest post: Uri Yerushalmi

The new “Collective Intelligence”

Originally, the term “Collective Intelligence” referred to a shared intelligence that emerges from collaboration or competition of many entities in the context of political science or sociobiology. But the same concept can be used to describe a new revolution, where the combination of recent AI technologies with the potential of decentralized ecosystems is expected to change the landscape of AI economy.

Below, we describe several characteristics of the AI economy that currently “push” the ecosystem toward centralized tech giants, and show how new types of blockchain-based architectures can “flip” those same attributes to push for massive decentralization:

AI Economy Qualities

Quality #1: The more resources, the more valuable your service

Currently, in order to solve extremely difficult problems, an individual needs control over huge teams and massive amounts of data. The only entities that have this ability are what we now call the “Tech Giants.”  Unfortunately, smaller entities cannot meaningfully contribute to solving highly complex problems (unless a Tech Giant eventually purchases the entity). Therefore, even if Tech Giants didn’t exist, somebody would have needed to create them eventually.



“Flipping” potential: With the ability to have trustless relationships based on the blockchain, an individual does not need full ownership or full control over huge teams or massive amounts of data. Instead, ad hoc communities can rise to solve specific problems, controlled by smart contracts, and as more entities join in, the more valuable the service provided by the community becomes.


Quality #2: The loopback advantage in AI business models

Let’s compare two competitors providing an AI-based service, Alice and Bob, where Alice’s model is superior but Bob’s business model is slightly different: Bob gets feedback on every prediction he makes, including its accuracy. It’s clear that, in spite of the initial supremacy of Alice’s service, Bob is going to win, since the loopback advantage gives him a continuous opportunity to improve his model. This structure creates a situation where Tech Giants, who have substantially more data flow, tend to control their own loopbacks and therefore eventually get better models.


“Flipping” potential: Given the right infrastructure, Alice can collaborate with Eve, who controls the feedback data-flow, and, together, they can beat Bob.

Quality #3: Trusting small Black Box providers

AI models, unlike traditional software systems, tend to work as black boxes, because it is very difficult to define or understand the internal logic of the model. (See the restaurants example here.) Such a trend can create a phenomenon where a few giant providers dominate because of their reputation for providing good services.


“Flipping” potential: Given the immutability of predictions written on the blockchain, the reputation of AI providers would no longer depend on their size, which should raise the demand for high-quality small AI providers.


Quality #4: Data Sensitivity

More people are now aware of the fact that data is sensitive, and still we usually choose to continue sharing our data (in spite of huge scandals) with Tech Giants rather than with small entities, simply because we tend to trust Tech Giants more. But various new encryption methods present a “flipping” potential, where the Tech Giants lose their size-dependent trust advantage.


Quality #5: Transferable Models

In modern AI, models that were optimized to achieve a certain goal are often very useful for achieving very different ones. Therefore, entities that possess knowledge about a large variety of AI models tend to have an advantage in building new ones. However, a “flipping” potential arises when there is a marketplace for transferable models, something blockchain technology is well suited to enable.

Architectural Solutions

The following architectures can be enhanced by blockchain technology and “flip” the outcome of the economic forces above from “centralizing” to “decentralizing.” All of the following architectures are based on smart contract mechanisms, such as a processing engine developed using the Solidity language. Such an engine can easily be run on Hyperledger Burrow, one of the Hyperledger projects hosted by the Linux Foundation. Burrow provides a modular blockchain client with a permissioned smart contract interpreter developed in part to the specification of the Ethereum Virtual Machine (EVM).


Crowd Teaching

As a small entity that does not have the data to train a model, why not ask others to train your model and reward them in return? The reward can be in fiat currency, tokens or even equity in the model, and can be based on the “teacher’s” contribution to the teaching quality. See an example here.


Transferring Models

For entities looking to develop an AI capability that has not been developed before, the best option may be tapping similar models that can be adapted. Why would one build the capability from scratch if someone else can provide a starting model in exchange for a reward? See an example here.

Hybrid Decentralized AI

As AI architectures get more and more complex, every sub-architecture needs its own expertise and know-how. This raises the question: why not “mix and match” architectures from different providers, based on predefined smart contracts? See an example here.

Tokenized AI

In many of the examples provided, there can be a massive number of contributors to a certain AI service. The best way to create a network effect and encourage the contributors to make the service succeed is to give them ownership. The end result would be an ecosystem with a massive amount of AI services that are actually mini-businesses, each with its own community of equity holders.

Privacy Techniques

Various privacy techniques are available for allowing collaboration while keeping data secure and private: Federated Learning, Differential Privacy, Multi-Party Computation, Homomorphic Encryption.

Reputation Techniques

When using an AI service, users generally want assurance that the service has high standards and availability. Blockchain gives us two main techniques for a high-quality reputation system:

  • Curation & Staking – In a blockchain model, curators and stakers are responsible for maintaining a high-quality list of providers, meaning any damage to a provider’s reputation has a direct economic effect on the curators and stakers.
  • Immutable Activity Logs – Unlike the current ecosystem, where customers base their confidence in the accuracy of information on the service provider’s brand, the blockchain era introduces a variety of tools that reveal accuracy based on immutable logging of services. For example, a financial company predicting that the price of Bitcoin will reach a certain value by a given date would not be able to deny or “hide” its prediction.

Conditional Pricing

Today, when consuming AI services (assuming we’ve decided not to build the solution in-house), we do not have many tools to estimate the quality of a given AI service provider. However, blockchain opens the door to many types of AI deals where the payment is conditional. (The consumer pays the AI service provider if the provider was right; but if the provider was wrong, maybe the consumer should get paid instead?)
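A conditional deal of this kind is easy to express as contract logic. The sketch below is purely illustrative: the escrow shape and penalty ratio are invented parameters, and a real implementation would live in chaincode or a smart contract rather than Python.

```python
def settle(escrow_amount, prediction, outcome, penalty_ratio=0.5):
    """Return (provider_payout, consumer_refund) for an escrowed AI deal.

    If the provider's prediction matched the outcome, it keeps the escrow;
    otherwise the consumer is refunded plus a penalty taken from the
    provider's stake (penalty_ratio is a hypothetical knob).
    """
    if prediction == outcome:
        return escrow_amount, 0.0
    return 0.0, escrow_amount * (1 + penalty_ratio)
```

Because the prediction and the outcome are both recorded immutably on-chain, neither party can dispute which branch of the rule applies.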


We have seen above the possible effect of the combination of AI and blockchain on corporate structure in the global economy. In such a view, technology corporations would have less of a size advantage and be more focused on the quality of technological services and less on product and brand-name promotion. The market would be more competitive and less dependent on tech giants. Technological and AI expertise would be much more granular. Most importantly, the quality and variety of AI-based products would be much higher, with every such product backed by a whole community of entities instead of a single tech giant. We should expect a wave of new enablers driving this drastic ecosystem change.

Hyperledger Sawtooth Blockchain Security (Part Two)

By | Blog, Hyperledger Sawtooth

Guest Post by Dan Anderson, Intel


This is a continuation of my three-part series on Hyperledger Sawtooth Security. I began with Sawtooth consensus algorithms in part one. Here I will continue this series discussing Sawtooth node and transaction processor security.

Sawtooth Node and Transaction Processor Security

Sawtooth has several mechanisms to restrict and secure access to validator peer nodes. These include the following topics, which I’ll discuss below:

  • Sawtooth Permissioning, Policies and Roles
  • Network Roles
  • Challenge-Response Authorization
  • Sawtooth Encryption
  • Transaction Input/Output lists
  • Observability
  • Internal Security Mechanisms

Sawtooth Permissioning

Prelude: Configuration

Permissioning restricts who may access a Sawtooth validator node. Permissioning is set with Sawtooth configuration, so before we can discuss permissioning, we need to review configuration. After that, we will discuss various types of Sawtooth Permissioning.

Sawtooth configuration is set with on-chain configuration or off-chain configuration. On-chain configuration consists of settings recorded in the blockchain, with changes or additions made as new blocks are added to the blockchain. On-chain configuration applies to the entire Sawtooth network for that blockchain. Off-chain configuration consists of settings recorded in the local validator.toml file, located by default at /etc/sawtooth/validator.toml, and applies only to the local validator node. This allows further local restrictions for a site, if desired.

The initial permission values are configured in the genesis node (node 0), or, if not set, assume default values. On-chain settings can be modified any time by adding a transaction to the blockchain using the Settings Transaction Processor (which is the only mandatory TP). The change does not take effect until the next block (never the current block that contains the new setting).

On-chain settings are changed through a voting mechanism. Voters (individual peer nodes) are listed in an on-chain setting. The votes are signed by each peer node as a transaction and recorded on-chain. If only one voter is authorized, the change is immediate. If multiple voters are authorized, the change takes effect when the minimum percentage of votes is reached. The Settings TP manages the election results.
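The approval rule just described can be sketched as follows. This is an illustrative model only; the threshold percentage and data shapes are assumptions, not the Settings TP's actual on-chain format.

```python
def change_approved(approving_voters, authorized_voters, threshold_pct):
    """Illustrative sketch of the settings voting rule described above.

    approving_voters: set of voter ids that signed an approval transaction.
    authorized_voters: list of voters named in the on-chain setting.
    threshold_pct: minimum percentage of votes required (assumed knob).
    """
    # Only votes from authorized voters count.
    valid_votes = approving_voters & set(authorized_voters)
    if len(authorized_voters) == 1:
        # A single authorized voter makes the change immediate.
        return len(valid_votes) == 1
    return 100.0 * len(valid_votes) / len(authorized_voters) >= threshold_pct
```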

Transaction Family Permissioning

Transaction family permissioning controls which TFs are supported by the current Sawtooth network. All nodes in a Sawtooth network must support the same set of TFs and versions. The applicable setting is sawtooth.validator.transaction_families. For example:
[{"family":"sawtooth_settings", "version":"1.0"}, {"family":"xo", "version":"1.0"}]
By default, any Transaction Family is supported by a Sawtooth network.

One can also restrict transaction processors to their own namespaces (the 6-hex-character TF namespace prefix). When set, the validator prohibits reads and writes outside a TF’s namespace. For example, the first setting below imposes no namespace restriction, while the second restricts intkey to its own namespace:

[{"family":"sawtooth_settings", "version":"1.0"}, {"family":"intkey", "version":"1.0"}]
[{"family":"sawtooth_settings", "version":"1.0"}, {"family":"intkey", "version":"1.0", "namespaces":["1cf126"]}]
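A rough sketch of how a validator could enforce these two settings (the family/version allow-list plus the optional namespace restriction). This is illustrative Python, not Sawtooth's actual implementation.

```python
# Hypothetical mirror of the sawtooth.validator.transaction_families setting.
ALLOWED_FAMILIES = [
    {"family": "sawtooth_settings", "version": "1.0"},
    {"family": "intkey", "version": "1.0", "namespaces": ["1cf126"]},
]

def family_allowed(family, version, allowed=ALLOWED_FAMILIES):
    """A transaction family is accepted only if it is on the allow-list."""
    return any(e["family"] == family and e["version"] == version
               for e in allowed)

def address_allowed(family, address, allowed=ALLOWED_FAMILIES):
    """If a family lists namespaces, reads/writes must stay inside them."""
    for e in allowed:
        if e["family"] == family and "namespaces" in e:
            return any(address.startswith(ns) for ns in e["namespaces"])
    return True  # no namespace restriction configured for this family
```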

Here is an example of setting the transaction family permissions on the command line on-chain:

$ sawset proposal create --url http://localhost:8008 --key /etc/sawtooth/keys/validator.priv sawtooth.validator.transaction_families='[{"family":"sawtooth_settings", "version":"1.0"}, {"family":"intkey", "version":"1.0"}]'

The above settings can also be set off-chain in a configuration file, in which case they apply only to the local node. For example, in validator.toml :

"sawtooth.validator.transaction_families" = "[{\"family\":\"sawtooth_settings\", \"version\":\"1.0\"}, {\"family\":\"intkey\", \"version\":\"1.0\"}]"

Policies and Roles

Transaction key permissioning uses policies and roles, which are implemented using the Identity Transaction Family. A policy is just a set of PERMIT_KEY and DENY_KEY rules that are evaluated in the order listed. A role is an authorization that grants permission to perform operations and access data. Roles and policies may be stored on-chain, as blockchain transactions, or off-chain, in configuration files, in which case they apply to the local node only. An example of a role is transactor.transaction_signer.intkey, which authorizes who can sign intkey transaction family transactions. An example of a policy is
"PERMIT_KEY 03eb5418588737e1b3982f4d863e01e13fd0da03ee2ac51b090860db3bdbbf39b2" "DENY_KEY *"
which denies access to all but the signer identified by the public key beginning with 03eb.
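Since rules are evaluated in the order listed and the first match wins, policy evaluation can be sketched like this. This is an illustrative model, not the Identity Transaction Family's actual code; the default-deny when no rule matches is an assumption.

```python
def evaluate_policy(rules, public_key):
    """rules: ordered (action, pattern) pairs, e.g.
    [("PERMIT_KEY", "03eb54..."), ("DENY_KEY", "*")].
    '*' matches any key; the first matching rule decides."""
    for action, pattern in rules:
        if pattern == "*" or pattern == public_key:
            return action == "PERMIT_KEY"
    return False  # assumed default when no rule matches: deny
```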

Before roles and policies can be set, sawtooth.identity.allowed_keys must be set to the key(s) of the authorized signers of Identity Transaction Family transactions. For example, the following allows Alice to make Identity TF transactions:

$ sudo sawset proposal create --key /etc/sawtooth/keys/validator.priv sawtooth.identity.allowed_keys=$(cat ~/.sawtooth/keys/

Before roles can be set to policies, policies must be created. A policy is a sequence of PERMIT_KEY and DENY_KEY keywords followed by an identity, which is the public key of a signer. The public key is 64 hex digits, such as 03305c4911bfdbe36c3be526ba665b0638e4376a920844a351708ec94c89ae70fa . A policy can be set on-chain or off-chain. Here’s an example of an on-chain setting:


$ sawtooth identity policy create dans_policy1 \
   "PERMIT_KEY 02a1035d8a6277adf5b92e8f831f647235224fe4dc8660f8bcddf85707156307b5" \
   "PERMIT_KEY 039e4b768b2c8280501fb7b5c56992088b704fb3ef8fd0efced6204ec975d1382f" \
   "DENY_KEY *"

$ sawtooth identity policy list

In the above example, two public keys are permitted and everyone else is denied. For the public key, use a 64-hex-character public key from a .pub file.

Off-chain settings, which apply only to a single Sawtooth node, are kept by default in the directory /etc/sawtooth/policy/ . For example, the file /etc/sawtooth/policy/dans_policy1 may contain


PERMIT_KEY 02a1035d8a6277adf5b92e8f831f647235224fe4dc8660f8bcddf85707156307b5

PERMIT_KEY 039e4b768b2c8280501fb7b5c56992088b704fb3ef8fd0efced6204ec975d1382f


Once we establish policies, we can now set roles to specific policies. For example, if we want to use dans_policy1 above to guide who can submit intkey transactions, set the following on-chain role:

$ sawtooth identity role create transactor.transaction_signer.intkey dans_policy1

Or, if we prefer an off-chain role setting, which applies only to the local node, we can add something like the following to file validator.toml :



"transactor.transaction_signer.intkey" = "dans_policy1"


Note that the key is in quotes, as required by TOML format for dotted keys.

On-chain permissioning is checked with batch submissions from a client and when publishing or validating a block. Off-chain permissioning applies only to batch submissions from a client—not transactions from peer nodes. The latter prevents unnecessary blockchain forks from different permissioning among nodes.

Transaction Key Permissions

Transaction key permissioning controls what clients can submit transactions, based on the signing public key. The relevant permissioning roles are:

  • transactor.transaction_signer.<name of TF> controls what clients can sign transactions for a particular Transaction Family (TF). For example, transactor.transaction_signer.intkey controls what clients can sign intkey TF transactions
  • transactor.transaction_signer controls what clients can sign transactions for any Transaction Family (TF)
  • transactor.batch_signer controls what clients can sign batches (groups of transactions that must be processed atomically—all or none)
  • transactor controls what clients can sign transactions or batches

The most specific role takes precedence over a more general role (for example, for batches, transactor.batch_signer is checked first, and transactor is checked only if no rule was found in transactor.batch_signer). By default, anyone can sign a transaction or batch.
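The precedence rule can be sketched as a walk from the most specific dotted role name up to its parents. This is illustrative only; role storage here is a plain dict, not the Identity TF.

```python
def policy_for(roles, role_name):
    """Return the policy bound to the most specific matching role.

    roles: mapping of role name -> policy name, e.g.
    {"transactor": "general_policy",
     "transactor.batch_signer": "batch_policy"}.
    """
    parts = role_name.split(".")
    while parts:
        candidate = ".".join(parts)
        if candidate in roles:
            return roles[candidate]
        parts.pop()  # fall back to the more general parent role
    return None  # no role configured: by default anyone may sign
```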

Challenge-Response Authorization

When a Sawtooth validator node receives a connection request, it has two authorization modes for the other node—Trust Authorization and Challenge Authorization.

For Trust Authorization, a node trusts connections from other nodes. It checks the public key for role authorizations. This is intended mainly for development and is the default value.

For Challenge Authorization, a connecting node must prove who it is. On a connection request, a node sends a challenge containing a random nonce. The other node signs the nonce and sends it back to prove it is who it says it is. The node verifies that the signed nonce is the same one it sent, to guard against replay attacks.

To set the authorization type, use
$ sawtooth-validator --network-auth {trust|challenge}
on the command line, or set network = "trust" or network = "challenge" in the configuration file validator.toml .

Sawtooth Encryption

Encryption in Sawtooth is used for

  • Digests for transactions, batches, and blocks
  • Signing transaction and batch headers by the client and blocks by the validator
  • Encrypting data in transit—either between peer nodes or between components within a node

Sawtooth Transaction and Batch Signing

A Sawtooth node receives transactions from a client in the form of a batch list. A batch list contains one or more batches. A batch contains one or more transactions that must be processed, in order, as one atomic unit. For example, a batch list might contain two batches holding two transactions and one transaction, respectively.


The client creating a transaction calculates the SHA-512 digest of the payload and sets it in the transaction header. The digest ensures the payload data in a transaction cannot be altered without detection. Each transaction header also contains a client-generated nonce value. The nonce makes every transaction unique and prevents anyone from replaying the transaction. The client signs the transaction header and includes the signing public key in the transaction header. The client then signs each batch and includes the batch signer public key in the batch header. The batch signer and transaction signer are usually, but do not have to be, the same. The public key of the batch signer is also in the transaction header to prevent repackaging of the transaction in another batch. The transaction and batch signer public keys in the transaction and batch headers, respectively, allow anyone to identify the signers and to verify the signatures.
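The header fields described above can be sketched like this. This is a simplified dictionary, not Sawtooth's real protobuf-encoded header, and the signing step itself is omitted.

```python
import hashlib
import os

def make_transaction_header(payload, signer_public_key, batcher_public_key):
    """Sketch of the anti-tamper fields discussed above."""
    return {
        # SHA-512 digest: payload cannot be altered without detection.
        "payload_sha512": hashlib.sha512(payload).hexdigest(),
        # Random nonce: makes the transaction unique, prevents replay.
        "nonce": os.urandom(16).hex(),
        # Identifies who signed the transaction.
        "signer_public_key": signer_public_key,
        # Binding the batcher's key prevents repackaging in another batch.
        "batcher_public_key": batcher_public_key,
    }
```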

All Sawtooth signatures, including client-signed transactions and batches, use ECDSA curve secp256k1. This is the same algorithm and curve used by Bitcoin and Ethereum and allows for signature compatibility with these platforms. The 64-byte signature is the concatenation of the “raw” (unencoded) R and S values of a standard ECDSA signature.

Sawtooth Block Signing and Validation

The Sawtooth validator node creates proposed blocks from transactions it receives. These proposed blocks are signed by the validator and transmitted to the peer nodes on the Sawtooth network. The validator node signs blocks with ECDSA curve secp256k1, the same algorithm used for transaction and batch signatures. Each peer node’s validator validates candidate blocks proposed by a node, including verifying the block, batch, and transaction signatures. The digests and signatures not only prevent altering the payload data, but also prevent deleting, reordering, or duplicating transactions within a block or blocks within a blockchain.

Sawtooth Communication Encryption

Sawtooth encrypts data in motion—that is, communications between Sawtooth nodes and between components within a Sawtooth node (such as the Validator, REST API, and Transaction Processor processes). Sawtooth uses ZeroMQ (ZMQ or 0MQ) for communications. ZMQ encryption and authentication is implemented with CurveZMQ, which uses a 256-bit ECC key with the elliptic curve Curve25519.

Transaction Input/Output lists

All Sawtooth transactions (ledger entries) have a list of input addresses and output addresses in the transaction header. These are optional but highly recommended for two reasons:

  • It allows transactions that do not conflict to be processed in parallel
  • It provides a measure of security by preventing the transaction processor from modifying state addresses that are not listed in the transaction header.
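The parallelism benefit comes from a simple overlap test: two transactions conflict only when one writes an address the other reads or writes. An illustrative sketch follows; Sawtooth's real parallel scheduler is more involved than this.

```python
def can_run_in_parallel(tx_a, tx_b):
    """tx_a, tx_b: dicts with 'inputs' and 'outputs' sets of state addresses.
    Safe to run concurrently only if neither writes what the other touches."""
    a_writes_what_b_touches = tx_a["outputs"] & (tx_b["inputs"] | tx_b["outputs"])
    b_writes_what_a_reads = tx_b["outputs"] & tx_a["inputs"]
    return not a_writes_what_b_touches and not b_writes_what_a_reads
```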


Observability

Observability is the ability to see what the software is doing. This is important not only for debugging code, but also for security analysis. One can see during a breach, or with post-mortem forensics, exactly what went wrong. Sawtooth is observable in that its components log time-stamped entries at various verbosity levels. The -v flag means log warning messages, -vv means log info and warning messages, and -vvv means log debug, info, and warning messages.

Additionally, Sawtooth has event subscriptions. The Sawtooth Events API allows an application to subscribe to “block-commit” events (triggered when a block is committed) and “state-delta” events (triggered when data in the blockchain state changes). Events are extensible in that application-defined events may be created and subscribed to by an application. An event handler could look for anomalies (such as too-frequent or over-limit transactions) and take further action to block or warn on these events.
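A hypothetical subscriber-side handler for too-frequent transactions might look like the sketch below. The event shape and field names are invented for illustration; they are not Sawtooth's actual event protobufs or subscription API:

```python
from collections import defaultdict, deque

class FrequencyMonitor:
    """Flag a signer that submits more than `limit` transactions
    within a sliding `window` of seconds (illustrative anomaly check)."""

    def __init__(self, limit: int = 3, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.seen = defaultdict(deque)  # signer -> recent timestamps

    def handle(self, event: dict) -> bool:
        ts, signer = event["timestamp"], event["signer"]
        q = self.seen[signer]
        q.append(ts)
        # Drop timestamps that have fallen outside the window.
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit  # True means "anomalous"

mon = FrequencyMonitor(limit=2, window=10.0)
events = [{"timestamp": t, "signer": "abc123"} for t in (0, 1, 2, 3)]
flags = [mon.handle(e) for e in events]  # third and fourth events exceed the limit
```

A real handler would receive deserialized block-commit or application-defined events from the Events API and could respond by alerting or rejecting further activity from the flagged signer.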

Internal Security Mechanisms

Some security mechanisms are “under the hood” and are not always visible, but they are still important to mention.


This concludes part two of my blog on Hyperledger Sawtooth Security, where I discussed Sawtooth node and transaction processor security. This provides a toolbox to tighten down Sawtooth nodes as your needs require—tightening allowed transaction signers, transaction families, and peer nodes. I also discussed other security mechanisms including node authorization, encryption, observability, and internal security processes. Part three will conclude this series with a discussion on Sawtooth client application security and network security.


Tackling Diversity, Interoperability & Developers at Hyperledger Global Forum

By | Blog, Events

Guest post: Alissa Worley, Accenture

As the business blockchain community convenes in Basel in a few weeks for the first Hyperledger Global Forum, the blockchain team at Accenture would like to share three things that really excite us about this technology and market.

1. Diversity and inclusion

To achieve its breakthrough potential, blockchain desperately needs more talent and more diversity. We’ve been talking about this for a long time and working hard to make blockchain a diverse and inclusive career path.

If you share our passion for and commitment to this cause, please join us on the evening of December 11 for a fun and inspiring Diversity and Inclusion in Blockchain reception. Find all details and register here.

2. Interoperability

Interoperability has emerged as a hot topic across the industry. Multiple DLT platforms are gaining traction in the market, and different verticals and ecosystems are gravitating towards different platforms. Concerns about picking the “wrong” platform, and about the perceived challenges of connecting blockchain-based ecosystems built on different platforms, have hindered the technology's forward momentum.

Accenture has developed and tested two technology solutions that enable two or more blockchain-enabled ecosystems to integrate.

The Accenture solutions show that blockchain platforms from Digital Asset (DA Platform) and R3 (R3 Corda) as well as Hyperledger Fabric and Quorum can integrate to securely orchestrate business processes.

Accenture’s business and technical experts will be at Hyperledger Global Forum to discuss interoperability and many other topics. You will find us in our booth and giving demos in the Demo Theater.

Accenture’s Dave Treat will keynote on the topic of DLT Path to Production at Scale. Dave will discuss how enterprises are pushing the boundaries of the technology and proving that it can meet their demands. He’ll also cover technical advancements like interoperability and business models like multi-stakeholder governance.

Schedule a meeting with any of our experts by sending an email to blockchain at Accenture dot com. Please specify the nature of your interest so we can try to connect you with the most appropriate member of our team.

3. Developers! Developers! Developers!

The Hyperledger community is becoming more robust every day in terms of the number and backgrounds of developers actively contributing. At Accenture, we think that is vital to a thriving technology ecosystem. But, as with any open source project, it can be hard to know exactly who is doing what with the technology. There are so many interesting projects and developments under way!

In fact, to further explore what can be built with Hyperledger technologies, we’ll be hosting our second Blockchain for Good hackathon on Saturday and Sunday, December 15 and 16, immediately following Hyperledger Global Forum. This year, we are focusing on Sustainable Supply Chain. Mentors and judges from Accenture, Hyperledger, and leading sustainability organizations will work with hackers during this two-day sprint. The hackathon is open to in-person and online participation, but spots are limited. Find out more and register here.

In conclusion, Accenture has its sights set on being the preeminent independent advisor, implementer and operator of enterprise blockchain systems. Greater diversity, breakthrough innovation, and practical solutions to global challenges like sustainable supply chain are key pillars to achieving this vision. Our team of experts will descend on Basel to roll up our sleeves and work with the entire community on these and other important topics.

We look forward to connecting with you there!