Hyperledger Sawtooth: Improving the Devops CI Workflow with Kubernetes

By | Blog, Hyperledger Sawtooth

Every devops engineer knows the importance of continuous integration (CI) testing. It’s vital to prevent regressions, as well as maintain performance, security, and supportability. At Bitwise IO, we are experimenting with Kubernetes as an automated CI deployment tool. We like the simplicity of extending tests with deployments on Kubernetes. We think Kubernetes has compelling potential for use in the CI workflow.

Figure 1: The main tools in our CI workflow

This blog post explains how Kubernetes fits into our CI workflow for Hyperledger Sawtooth. We don’t go into detail, but we provide links so you can learn more about Kubernetes and the other tools we use.

Building Sawtooth

Hyperledger Sawtooth uses Jenkins to automate builds of around 20 GitHub repositories. Each new or merged pull request in the master branch initiates a build that contains project-specific tests. The next logical step is to deploy into a test environment.

We have two deployment methods: Debian packages and Docker images. We install the Debian packages inside the Docker deployment image to ensure both that the binaries are tested and that the packages are installable.

Using Docker’s multi-stage build capability, an intermediate build container makes the deployment image smaller and narrows its exposed attack surface (possible vulnerabilities).

Handing off Docker Images with Docker Registry

Jenkins is a great place for build artifacts, but Docker has its own way to easily retrieve images. Docker Registry allows you to push your newly created images and easily retrieve them with Docker, Kubernetes, or anything else that uses the Docker Registry model.

This example shows how to tag an image with the URL of an internal registry, then upload the image to that registry.

$ docker build -f Dockerfile -t registry.url/repo/image_name:${tag} .
$ docker push registry.url/repo/image_name:${tag}

We also use Portus, because Docker Registry does not provide user and access management on its own. The Portus project makes it simple to place an authentication layer over Docker Registry. Now, any authenticated user can pull and deploy the same images that are being deployed into the test environment.
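
For example, once a user has credentials in Portus, pulling the same image that CI deployed is just an authenticated login and pull (using the placeholder registry and image names from the example above):

$ docker login registry.url
$ docker pull registry.url/repo/image_name:${tag}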

Kubernetes: Simulating Scaled Deployments

Kubernetes excels at creating deployments within and across abstracted infrastructures. We have done our experiments on local (“on-prem”) hardware with a cluster of small Kubernetes nodes dedicated to Sawtooth deployments. Each deployment consists of several pods partitioned in a namespace, which allows us to run multiple networks based on the same deployment file. (Without namespaces, Kubernetes would think we are updating the deployment.) A pod represents a Sawtooth node and contains several containers, each running a specific Sawtooth component: validator, transaction processor, REST API, and so on. Each namespace can have independent quotas for resources such as CPU time, memory, and storage, which prevents a misbehaving network from impacting another network.

Figure 2: Containerized services grouped together in pods.
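
As a rough sketch (the namespace name, quota limits, and deployment file name are all hypothetical), creating an isolated namespace with a resource quota and inspecting its pods looks something like this:

$ kubectl create namespace sawtooth-ci-1234
$ kubectl create quota ci-quota --hard=cpu=8,memory=16Gi --namespace sawtooth-ci-1234
$ kubectl apply -f sawtooth-network.yaml --namespace sawtooth-ci-1234
$ kubectl get pods --namespace sawtooth-ci-1234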

Because we use Kubernetes, these deployments are portable. We can use them on any cloud infrastructure that supports Kubernetes.

Kubernetes also allows us to scale the number of CI test deployments. With an elastic cloud infrastructure, Kubernetes provides effortless testing on a large number of virtual systems (limited only by the cost of cloud hosting). This solves the problem of limited local hardware, where deployments that share a node’s resources increasingly stress one another as more are added.

Workload Generation: Deploying Changes Under Load

Deploying Sawtooth is the first step, but you need to give it something to do—better yet, lots to do. Sawtooth includes several workload generators and corresponding transaction processors. In our Kubernetes environment, we deploy intkey_workload and smallbank_workload at rates slightly above what we think the hardware can handle for shorter runs.

Modifying workload rates is as simple as editing the deployment file, changing the rate settings, and reapplying with kubectl. When Kubernetes detects that the pod’s configuration has changed, it terminates the existing workload pod and creates a new one with the changed settings.

This example shows a container definition for an intkey workload pod.

containers:
  - name: sawtooth-intkey-workload
    image: registry.url/repo/sawtooth-intkey-workload:latest
    resources:
      limits:
        memory: "1Gi"
    command:
      - bash
    args:
      - -c
      - |
         intkey-workload \
           --rate 10 \
           --urls ...
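
After editing the rate in a deployment file like the one above, reapplying it is a single command (the file and namespace names here are hypothetical):

$ kubectl apply -f intkey-workload.yaml --namespace sawtooth-ci-1234
$ kubectl get pods --namespace sawtooth-ci-1234 --watch

Kubernetes then terminates the old workload pod and creates a new one with the changed rate, as described above.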

Retaining Data with Kubernetes: Logging and Graphing

All this testing isn’t much use if you can’t troubleshoot issues when they arise. Kubernetes can streamline deployments, but it can also frustrate your attempts to gather logs and data embedded inside Docker containers after a pod has failed or stopped. Luckily, Sawtooth provides real-time metrics (which we view with Grafana) and remote logging through syslog. We actively collect logs and metrics from the Sawtooth networks, even down to syslog running on the hardware, then carefully match the logging and metrics artifacts to the testing instance. In the end, we can provide a comprehensive set of log data and system metrics for each code change.

Try It Out!

The Sawtooth documentation can help you get started: See Using Kubernetes for Your Development Environment and Kubernetes: Start a Multiple-node Sawtooth Network.

To configure Grafana, see Using Grafana to Display Sawtooth Metrics.

For more information about the tools in our CI workflow, see the documentation for Jenkins, Docker, Docker Registry, Portus, Kubernetes, and Grafana.

About the Authors

Richard Berg is a Senior Systems Engineer at Bitwise IO. He has several years’ experience in sysadmin and devops roles. When not behind a terminal, Richard can be found in the woods with his snow-loving adventure cat.

Ben Betts is a Senior Systems Engineer at Bitwise IO. Ben has lots of experience with deploying, monitoring, coordinating, and supporting large systems and with writing long lists of experiences. He only uses Oxford commas because he has to.


DAML smart contracts coming to Hyperledger Sawtooth

By | Blog, Hyperledger Sawtooth

It has been a busy two weeks at Digital Asset… first, we announced that we have open-sourced DAML under the Apache 2.0 license and that the DAML SDK is available to all. Five days later, ISDA (the standards body for the derivatives market) announced that DAML is the exclusive smart contract language for their Common Domain Model, and we open-sourced a reference library and application. Next up, we announced that we’ve been working with the team at VMware to integrate DAML with their enterprise blockchain platform, VMware Blockchain.

Today, we’re delighted to share that we have been working with fellow Hyperledger members, Blockchain Technology Partners (BTP), to integrate the DAML runtime with Hyperledger Sawtooth! In this blog post, I’ll describe why we believe it’s important to architect a DLT application independently of the platform, why a new language is needed for smart contracts, and why we are working with BTP to integrate it with Hyperledger Sawtooth.

“Following the recent announcement that DAML has been open-sourced, we are delighted that work is already underway to integrate the DAML runtime with Hyperledger Sawtooth. This demonstrates the power of the open source community to enable collaboration and give developers the freedom required to truly move the industry forward.”

Brian Behlendorf, Executive Director of Hyperledger

One language for multiple platforms

As you all know, the enterprise blockchain space is fairly nascent and highly competitive. There are multiple platforms and protocols battling it out to be the “one true blockchain,” each with their own version of maximalists. Hyperledger alone has six distinct frameworks, each tailored to different needs, making necessary trade-offs to solve different problems. The field is rapidly evolving and we are all learning from the contributions of others to better the industry as a whole. One thing all these platforms have in common: Their purpose is to execute multi-party business processes. The differences arise in how a given platform deals with data representation and privacy, transaction authorization, progressing the state of an agreement, and so on.

And so each platform has its own patterns for writing distributed ledger applications, typically in a general-purpose language such as Java, JavaScript, Kotlin, Go, Python, and C++. The result of this is that developers must pick which framework they want to use and then develop their application specifically for that platform. Their application is now tightly coupled to the underlying architecture of that ledger and if a better alternative arises for their needs, that likely results in a wholesale rewrite.

One of the primary goals of DAML was to decouple smart contracts, the business logic itself, from the ledger by defining an abstraction over implementation details such as data distribution, cryptography, notifications, and the underlying shared store. This provides a clean ledger model accessible via a well specified API. With a mapping between this abstraction layer and the specifics of a given platform, as BTP is developing for Hyperledger Sawtooth, DAML applications can be ported from platform to platform without complex rewrites.

Why do smart contracts need a new language?

DAML’s deep abstraction doesn’t just enable the portability of applications—it greatly improves the productivity of the developer by delivering language-level constructs that deal with boilerplate concerns like signatures, data schemas, and privacy. Blockchain applications are notoriously difficult to get right. Libraries and packages can help improve productivity in some cases, but the application will remain bound to a given platform. Even Solidity, the language of choice for writing to an Ethereum Virtual Machine (EVM), exposes elements of the Ethereum platform directly to the developer. And we’ve seen several examples of how damaging a bug in a smart contract, or even the language itself, can be.

Abstracting away the underlying complexities of blockchains allows you to focus only on the business logic of your project and leave lower-level issues to the platform.

For example, when a contract involves many parties and data types it can be extremely difficult to define fine-grained data permissions in a general-purpose language. DAML allows you to define explicitly in code who is able to see which parts of your model, and who is allowed to perform which updates to it.

As a very simple illustration, consider the model for a cash transfer. DAML’s powerful type system makes it easy to model data schemas—even far more complex schemas than this—directly in the application.

DAML data model

Built-in language elements simplify the specification of which party or parties need to sign a given contract, who can see it, and who is allowed to perform actions on it. These permissions can be specified on a very fine-grained, sub-transaction basis. For example, the issuer of cash does not need to know who owns that currency or what they do with it.

DAML permissions

DAML provides a very clean syntax for describing the actions available on a contract, together with their parameters, assertions, and precise consequences.

DAML business logic

What you won’t find in DAML are low-level, platform-specific concerns like hashing, cryptography, and consensus protocols. You define the rules in DAML, and the runtime enforces the rules that you set out.

If you refer to the examples in the DAML SDK documentation, or the open source code for the set of complete sample applications we’ve provided, you’ll really come to appreciate the full richness of DAML and the simplifying effect it can have on complicated workflows.

Why Hyperledger Sawtooth?

Digital Asset has a long history with Hyperledger, as a founding premier member that has chaired both the Governing Board and the Marketing Committee. In fact, we donated code to the initial implementation of Hyperledger Fabric and the trademark “Hyperledger” itself! I personally worked with Jim Zemlin and the team at the Linux Foundation to establish the project, and I co-founded the company Hyperledger with my colleague Daniel Feichtinger back in early 2014.

We have clearly always believed in the need for an organization such as Hyperledger to exist, to create an open source foundation of common components that can serve as the underlying plumbing for the future of global commerce.

Hyperledger Sawtooth has quickly been emerging as an enterprise-grade platform that exemplifies the umbrella strategy that Brian laid out in his first blog post after joining as executive director. It has an extremely modular architecture that lends itself well to the plug-and-play composability that Hyperledger set out to achieve.

An example of this is that Hyperledger Sawtooth originally only offered support for the Proof of Elapsed Time, or PoET, consensus algorithm; consensus is now a pluggable feature. This modularity is accompanied by a very clean separation of business logic from platform logic, offering developers a degree of ‘future-proofing’ by limiting the amount of code that needs to be changed should a core component such as consensus be replaced.

Modularity also makes Hyperledger Sawtooth very amenable to plugging in new language runtimes. We’ve already seen this in action with Hyperledger Burrow, which integrates an Ethereum Virtual Machine into Hyperledger Sawtooth to support contracts written in Solidity. Incorporating the DAML runtime into Hyperledger Sawtooth similarly enables support for contracts written in DAML as an enterprise-grade alternative to Solidity.

Finally, from a ledger model point of view, many of the Hyperledger Sawtooth characteristics already map well to what DAML expects. Hyperledger Sawtooth’s Transaction Processor has a very flexible approach towards roles and permissions, for example, and is based on a very natural DLT network topology of fully distributed peers. DAML is based on a permissioned architecture and Hyperledger Sawtooth can be configured to be permissioned without requiring special nodes.

What comes next?

Digital Asset and BTP will soon be submitting the DAML Integration to the upstream Hyperledger Sawtooth framework, fully open sourcing our work.

The integration will also be commercially supported by BTP’s blockchain management platform, Sextant, which provides application developers with a cloud-ready instance of Hyperledger Sawtooth. Sextant is already available on the AWS Marketplace for Containers, and DAML support for Sextant will be added in July. BTP expects to support Sextant on other cloud providers soon thereafter.

BTP is one of Digital Asset’s first partners to use the DAML Integration Toolkit, a new tool designed to enable developers and partners to easily integrate our open source DAML runtime with their own products, immediately offering the benefits of the best in class smart contract language to their end customers. We look forward to any collaboration that brings DAML to even more platforms, including the other frameworks in the Hyperledger family!

To learn more, download the DAML SDK today and start building your applications for Hyperledger Sawtooth!

Hyperledger Indy Graduates To Active Status; Joins Fabric And Sawtooth As “Production Ready” Hyperledger Projects

By | Blog, Hyperledger Fabric, Hyperledger Indy, Hyperledger Sawtooth

By Steven Gubler, Hyperledger Indy contributor and Sovrin infrastructure and pipeline engineer

The Hyperledger Technical Steering Committee (TSC) just approved Indy to be the third of Hyperledger’s twelve projects to graduate from incubation to active status.

This is a major milestone as it shows that Hyperledger’s technical leadership recognizes the maturity of the Indy project. The TSC applies rigorous standards to active projects including code quality, security best practices, open source governance, and a diverse pool of contributors. Becoming an active Hyperledger project is a sign that Indy is ready for prime time and is a big step forward for the project and the digital identity community.

Hyperledger Indy is a distributed ledger purpose-built for decentralized identity. This ledger leverages blockchain technology to enable privacy-preserving digital identity. It provides a decentralized platform for issuing, storing, and verifying credentials that are transferable, private, and secure.

Hyperledger Indy grew out of the need for an identity solution that could address the issues that plague our digital lives, such as identity theft, lack of privacy, and the centralization of user data. Pioneers in self-sovereign identity realized we could fix many of these issues by creating verifiable credentials that are anchored to a blockchain with strong cryptography and privacy-preserving protocols. To this end, the private company Evernym and the non-profit Sovrin Foundation teamed up with Hyperledger to contribute the source code that became Hyperledger Indy. The project has advanced significantly due to the efforts of these two organizations and many teams and individuals from around the world.

A diverse ecosystem of people and organizations are already building real-world solutions using Indy. The Sovrin Foundation has organized the largest production network powered by Indy. The Province of British Columbia was the first to deploy a production use case to the Sovrin Network with its pioneering work on Verifiable Organizations Network, a promising platform for managing trust at an institutional level. Evernym, IBM, and others are bringing to market robust commercial solutions for managing credentials. Many other institutions, researchers, and enthusiasts are also actively engaged in improving the protocols, building tools, contributing applications, and bringing solutions to production.

The team behind the project is excited about current efforts that will lead to increased scalability, better performance, easier development tools, and greater security. User agents for managing Indy credentials are under active development, making it easy to adopt Indy as an identity solution for diverse use cases.

If you’d like to support Indy, join our community and contribute! Your contributions will help to fix digital identity for everyone. You can participate in the discussions or help write the code powering Indy. Together, we will build a better platform for digital identity.

Hyperledger Sawtooth Goes Mobile

By | Blog, Hyperledger Sawtooth

As interest in Hyperledger Sawtooth grows, robust SDKs continue to be important for helping developers innovate on this blockchain platform. Since mobile is one of the most popular application platforms, it is crucial to extend Sawtooth to support native iOS and Android application development.

Additionally, the introduction of Hyperledger Grid has expanded the possibility of supply chain products for the Sawtooth platform. Many of these uses are well suited to mobile clients, allowing on-site professionals at manufacturing or logistics facilities to interact with a Sawtooth application.

This blog post describes the first native mobile client applications for Hyperledger Sawtooth, which we have developed using the Sawtooth Java SDK as well as a new Sawtooth Swift SDK. These first example applications showcase a client for the XO transaction processor that, although simple, opens up possibilities for Sawtooth mobile applications moving forward.

Sawtooth Java and Swift SDKs

The Sawtooth Java SDK is already familiar to many Hyperledger Sawtooth developers; no changes are needed to make it compatible with an Android project. The SDK works similarly in Java-native projects or in Android projects in Java or Kotlin. The Sawtooth Java SDK documentation describes how to import the SDK into an Android project and includes example code for writing a client application in Kotlin.

The new Sawtooth Swift SDK supports iOS applications on the Hyperledger Sawtooth platform and provides a way for an application to sign transactions before submitting them to the blockchain. This SDK implements the same functionality present in other Sawtooth SDKs. Cocoa/Cocoa Touch projects can import the SawtoothSigning framework via Carthage, an iOS dependency manager. The Sawtooth Swift documentation includes instructions for using the SawtoothSigning framework.
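
As a minimal sketch, assuming the framework is published in the Sawtooth Swift SDK repository (check the Sawtooth Swift documentation for the exact Cartfile entry), pulling it in with Carthage looks roughly like this:

$ cat Cartfile
github "hyperledger/sawtooth-sdk-swift"
$ carthage update --platform iOS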

Example XO Mobile Apps

Our example mobile applications use the XO transaction processor, which allows two users to play tic-tac-toe on the blockchain. (For the details of XO, see the XO CLI tutorial and transaction family specification.)

Both applications implement most of the functionality from the XO transaction processor. The applications open to a landing screen with tabs to navigate between games to play, games to watch, and finished games.

The Play tab lists the current games that a user is able to join. In the following figures, the iOS example (using Swift) is on the left, and the Android example (using Kotlin) is on the right.


Figure 1. Play screen

The Create Game screen lets the user enter a unique name for a new game. The new game will appear on the Play screen once this transaction has been successfully submitted.


Figure 2. Create Game screen

The Game Board screen allows the user to submit their moves, with Player 1 as ‘X’ and Player 2 as ‘O’.


Figure 3. New game board (iOS) and finished game board (Android).

Running the Sawtooth Mobile Apps

If you want to try out these apps, you can find them in the Sawtooth Java SDK and Sawtooth Swift SDK repositories.

  1. Check out the repositories and open the project in Android Studio or Xcode to run the apps on a simulator or on your device.
  2. The application needs to communicate with a Sawtooth validator connected to an XO transaction processor. To start a local Docker network with the validator, XO transaction processor, and Sawtooth REST API, run docker-compose up in the examples/xo_ios folder in the Sawtooth Swift SDK project or examples/xo_android_client in the Sawtooth Java SDK. This will allow your application to send transactions to the validator.
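
For example, starting the network for the iOS client might look like this (directory names depend on where you cloned the Sawtooth Swift SDK repository):

$ cd sawtooth-sdk-swift/examples/xo_ios
$ docker-compose up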

Further Development

We have proposed a new Sawtooth Swift SDK using the Hyperledger Sawtooth Request for Comments (RFC) process. Once that process is completed, a new Hyperledger GitHub repository will host the Sawtooth Swift SDK. We have already implemented part of that SDK and will contribute the code to the Hyperledger repository once the RFC has been approved. At the moment, the only framework available with the SDK is SawtoothSigning, which can be used to create private/public key pairs and sign transactions. In the future, we plan to implement a framework to help generate Sawtooth protobuf messages.

Summary

Native mobile applications are a logical step for Hyperledger Sawtooth development. With projects such as Hyperledger Grid encouraging a larger suite of technologies to be used for supply chain solutions, flexibility in the interactions with the Hyperledger Sawtooth platform is critical. Native mobile applications make interactions with Hyperledger Sawtooth more convenient and comfortable.

The Sawtooth Java and Swift SDKs, along with the example XO client application, provide a solid foundation for taking Sawtooth mobile. Using the Sawtooth Java SDK and the new Sawtooth Swift SDK, native mobile applications allow developers and users to unlock the full potential of Sawtooth applications wherever they go.

Want to Learn More?

To learn more about mobile development in Sawtooth, see the source code and the documentation in the Sawtooth Java SDK and Sawtooth Swift SDK git repositories. These repositories include the source code for the example XO client and documentation with details on how to import the modules and how to use the SDKs to write clients.

You can also join the Sawtooth SDK development community in the #sawtooth-sdk-dev channel on chat.hyperledger.org.

About the Authors

Darian Plumb (Senior Software Engineer), Eloá Verona (Software Engineer) and Shannyn Telander (Software Engineer) work on Hyperledger Sawtooth projects at Bitwise IO. They recently worked together on the first venture into native mobile applications for Hyperledger Sawtooth.

Hyperledger Sawtooth events in Go

By | Blog, Hyperledger Sawtooth

This blog post shows how to use the Sawtooth Go SDK for event handling: creating events in a transaction processor and writing a client that subscribes to events via ZMQ. (Sawtooth also offers a WebSocket-based event subscription feature, which is out of scope here.) The article is useful for developers, technical readers, architects, Sawtooth enthusiasts, and anybody interested in blockchain.

Please refer to the Sawtooth Go SDK reference for setting up the development environment. It includes information on how to write a client in Go.

Events and Event Subscription In Sawtooth

Thanks to the many open source contributors, the Hyperledger Sawtooth documentation describes events and event subscriptions in detail; this article supplements that documentation. Sawtooth has SDK support in many languages, giving flexibility to application developers. We will discuss the Go SDK usage in detail.

Sawtooth defines two core events, sawtooth/block-commit and sawtooth/state-delta. Both occur when a block is committed: the former carries information about the block, and the latter carries information about all state changes at the given address. In addition to these predefined events, Sawtooth allows application developers to define new events. The SDK and protobuf definitions abstract away the complex network communication. The SDK exposes APIs to add new application-specific events and provides the mechanism for clients to subscribe to available events.

Figure 1: Representation of the Validator connection with Transaction Processor and Client via ZMQ.

Events: A savior for real-world use cases

  1. Notification events such as state change and block commit support the design of interactive clients and save clients from having to poll the validator for the status of submitted batches.
  2. Events can be used as a debugging aid for transaction processors: a transaction processor can register application-specific events, allowing clients to debug the flow remotely.
  3. In addition to debugging, the asynchronous behavior allows for stateless application designs.
  4. Event processing can be deferred to a later point in time. Sawtooth Supply Chain is a well-known example that makes use of this feature.

A word of caution for beginners: do not get carried away with application-specific use cases, since events are also used for the internal operation of Sawtooth. That said, the horizon is open and our imagination is the only limit.

Adding an Application-Specific Event in Go

Creating a custom event involves defining three parts:

  1. Define attributes: A list of key-value pairs that is later used for filtering during event subscription. There can be many values per attribute key.
import (
    "github.com/hyperledger/sawtooth-sdk-go/protobuf/events_pb2"
)

// Create a list of key-value attributes for the event
attributes := []events_pb2.Event_Attribute{
    {Key: "MyOwnKey", Value: "ValueIWantToPass"},
}

2. Define data payload: Byte information for a specific event type.

payload := []byte{'f', 'u', 'n', '-', 'l', 'e', 'a', 'r', 'n', 'i', 'n', 'g'}

3. Define event type: Used as an identifier when subscribing to events.

After defining these parameters, we can use the available API in the SDK to create an event and add it to the context. Listing 1 shows an example of adding an event. Please note that any change to the context’s events will affect the way subscribed clients work.

import("github.com/hyperledger/sawtooth-sdk-go/processor")

func (self *OneOfMyTPHandler) Apply(

request *processor_pb2.TpProcessRequest,

context *processor.Context)

error {    // -- snip --

context.AddEvent(

"MyEventIdentifier",

attributes,

payload)

// -- snip --
}

Listing 1: Adding an event to the context.

Event-Subscription In Go

Subscribing to an event involves establishing a ZMQ connection with the validator. The event types specify which events are subscribed to. An optional filter attribute can be passed for each event when establishing the subscription. The Sawtooth Go SDK uses protobuf definitions for serializing messages exchanged between the client and the validator. The following four steps show the sample code snippets.

  1. Establish a ZMQ connection: Establish a ZMQ connection from the client as a DEALER. (A detailed description of the ROUTER-DEALER mechanism can be found in the ZMQ guide.) The Sawtooth SDK provides an API for establishing a client connection with the validator.
import (
"github.com/hyperledger/sawtooth-sdk-go/messaging"
"github.com/pebbe/zmq4"
)
zmq_context, err := zmq4.NewContext()
// Error creating a ZMQ context
if err != nil {
   return err
}

// Remember to replace <VALIDATOR-IP> with hostname
// or the IP where validator is listening on port 4004
zmq_connection, err := messaging.NewConnection(
   zmq_context,
   zmq4.DEALER,
   "tcp://<VALIDATOR-IP>:4004",
   // The final argument is false for a client connection
   // that connects to the server; a server connection
   // would instead bind to the address
   false,
)
// ZMQ connection couldn't be established
if err != nil {
   return err
}
// Remember to close the connection when done processing
// events or when an error occurs
defer zmq_connection.Close()
// -- snip --

2. Construct EventFilter, EventSubscription, and ClientEventsSubscribeRequest: An event is subscribed to when both the event_type and all the EventFilters in the EventSubscription match. EventFilters can be applied to the attributes defined earlier; FilterType determines the rule for comparing the match string.

import("github.com/hyperledger/sawtooth-sdk-go/protobuf/events_pb2") 

// Define filters over attributes to be triggered when
// a string matches a particular filter type
filters := []*events_pb2.EventFilter{&events_pb2.EventFilter{   
   Key: "MyOwnKey", 
   MatchString: "MyUniqueString", 
   FilterType:  events_pb2.EventFilter_REGEX_ANY,
}} 
my_identifier_subscription := events_pb2.EventSubscription{   
   EventType: "MyEventIdentifier", 
   Filters: filters,
} 
// -- snip --
// Construct subscription request for the validator
request := client_event_pb2.ClientEventsSubscribeRequest{
   Subscriptions: []*events_pb2.EventSubscription{
     &my_identifier_subscription, 
     &my_another_identifier_subscription,
   },
}
// -- snip --

3. Send the request over the ZMQ connection: The client’s event subscription request can be sent through the established ZMQ connection. Note that the correlation id returned by SendNewMsg() can be used to match the response messages from the validator (which acts as a ROUTER). Many events can be subscribed to at once.

import (
    "errors"
    "github.com/golang/protobuf/proto"
    "github.com/hyperledger/sawtooth-sdk-go/protobuf/client_event_pb2"
    "github.com/hyperledger/sawtooth-sdk-go/protobuf/validator_pb2"
)
// Get serialized request using protobuf libraries
serialized_subscribe_request, err :=
    proto.Marshal(&request)
if err != nil {
  return err
}
 // Send the subscription request, get a correlation id
// from the SDK
corrId, err := zmq_connection.SendNewMsg(
    validator_pb2.Message_CLIENT_EVENTS_SUBSCRIBE_REQUEST,
    serialized_subscribe_request,
)
// Error requesting validator, optionally based on
// error type may apply retry mechanism here
if err != nil {
   return err
}
// Wait for the subscription status: wait for the response
// message with the specific correlation id
_, response, err := zmq_connection.RecvMsgWithId(corrId)
if err != nil {
  return err
}
// Deserialize the received protobuf message as the
// response for the subscription request
events_subscribe_response :=
    client_event_pb2.ClientEventsSubscribeResponse{}

err = proto.Unmarshal(
    response.Content,
    &events_subscribe_response)
if err != nil {
  return err
}
 // Client subscription is not successful, optional
// retries can be done later for subscription based on
// response cause
if events_subscribe_response.Status !=
  client_event_pb2.ClientEventsSubscribeResponse_OK {
  return errors.New("Client subscription failed")
}
 // Client event subscription is successful, remember to
// unsubscribe when either not required anymore or
// error occurs. Similar approach as followed for
// subscribing events can be used here.
defer func() {
    // Unsubscribe from events
    events_unsubscribe_request :=
      client_event_pb2.ClientEventsUnsubscribeRequest{}
    serialized_unsubscribe_request, err :=
      proto.Marshal(&events_unsubscribe_request)
    if err != nil {
      return
    }
    corrId, err = zmq_connection.SendNewMsg(
      validator_pb2.Message_CLIENT_EVENTS_UNSUBSCRIBE_REQUEST,
      serialized_unsubscribe_request,
    )
    if err != nil {
      return
    }
    // Wait for status
    _, unsubscribe_response, err :=
      zmq_connection.RecvMsgWithId(corrId)
    // Optional retries can be done depending on error
    // status
    if err != nil {
      return
    }
    events_unsubscribe_response :=
      client_event_pb2.ClientEventsUnsubscribeResponse{}
    err = proto.Unmarshal(unsubscribe_response.Content,
      &events_unsubscribe_response)
    if err != nil {
      return
    }
    // Handle an unsuccessful unsubscribe here; note that
    // a deferred function cannot return an error to the
    // caller
    if events_unsubscribe_response.Status !=
      client_event_pb2.ClientEventsUnsubscribeResponse_OK {
        return
    }
}()
// -- snip --

4. Event handling: The validator delivers protobuf messages for the subscribed events over the established ZMQ connection.

import (
    "errors"
    "fmt"
    "github.com/golang/protobuf/proto"
    "github.com/hyperledger/sawtooth-sdk-go/protobuf/events_pb2"
    "github.com/hyperledger/sawtooth-sdk-go/protobuf/validator_pb2"
)
// Listen for events in an infinite loop
fmt.Println("Listening to events.")
for {
  // Wait for a message on connection 
  _, message, err := zmq_connection.RecvMsg()
  if err != nil {
    return err
  }
  // Check that the received message is a client event message
  if message.MessageType !=
    validator_pb2.Message_CLIENT_EVENTS {
    return errors.New("received a message that was not requested")
  }
  event_list := events_pb2.EventList{}
  err = proto.Unmarshal(message.Content, &event_list)
  if err != nil {
    return err
  }
  // Received following events from validator   
  for _, event := range event_list.Events {
    // handle event here
    fmt.Printf("Event received: %v\n", *event)
  }
}
// -- snip --

Try it out!

References:

  1. Subscribing to Events, Using the Go SDK, from the Hyperledger Sawtooth website.
  2. Commits by arsulegai in the sawtooth-cookiejar example.
  3. The Sawtooth Go SDK.
  4. The chapter on Router-Dealer in the ZMQ protocol guide.

Introduction to Sawtooth PBFT

By | Blog, Hyperledger Sawtooth

As of release 1.1, Hyperledger Sawtooth supports dynamic consensus through its consensus API and SDKs. These tools, which were covered in a previous blog post, are the building blocks that make it easy to implement different consensus algorithms as consensus engines for Sawtooth. We chose to implement the Raft algorithm as our first consensus engine, which we describe in another blog post. While our Raft implementation is an excellent proof of concept, it is not Byzantine-fault-tolerant, which makes it unsuitable for consortium-style networks with adversarial trust characteristics.

To fill this gap, we chose the Practical Byzantine Fault Tolerance (PBFT) consensus algorithm. We started work on the Sawtooth PBFT consensus engine in the summer of 2018 and continue to develop and improve on it as we work towards its first stable release. This blog post summarizes the PBFT algorithm and describes how it works in Sawtooth.

What Is PBFT?

PBFT dates back to a 1999 paper written by Miguel Castro and Barbara Liskov at MIT. Unlike other algorithms at the time, PBFT was the first Byzantine fault tolerant algorithm designed to work in practical, asynchronous environments. PBFT is thoughtfully defined, well established, and widely understood, which makes it an excellent choice for Hyperledger Sawtooth.

PBFT is similar to Raft in some general ways:

  • It is leader-based and non-forking (unlike lottery-style algorithms)
  • It does not support open-enrollment, but nodes can be added and removed by an administrator
  • It requires full peering (all nodes must be connected to all other nodes)

PBFT provides Byzantine fault tolerance, whereas Raft only supports crash fault tolerance. Byzantine fault tolerance means that liveness and safety are guaranteed even when some portion of the network is faulty or malicious. As long as a minimum percentage of nodes in the PBFT network are connected, working properly, and behaving honestly, the network will always make progress and will not allow any of the nodes to manipulate the network.

How Does PBFT Work?

The original PBFT paper has a detailed and rigorous explanation of the consensus algorithm. What follows is a summary of the algorithm’s key points in the context of Hyperledger Sawtooth. The original definition is broadly applicable to any kind of replicated system; by keeping this information blockchain-specific, we can more easily describe the functionality of the Sawtooth PBFT consensus engine.

Network Overview

A PBFT network consists of a series of nodes that are ordered from 0 to n-1, where n is the number of nodes in the network. As mentioned earlier, there is a maximum number of “bad” nodes that the PBFT network can tolerate. As long as this number of bad nodes—referred to as the constant f—is not exceeded, the network will work properly. For PBFT, f = ⌊(n − 1) / 3⌋: no more than roughly a third of the network can be “out of order” or dishonest at any given time for the algorithm to work. The values of n and f are very important; you’ll see them later as we discuss how the algorithm operates.

Figure 1 — n and f in the PBFT algorithm

As the network progresses, the nodes move through a series of “views”. A view is a period of time that a given node is the primary (leader) of the network. In simple terms, each node takes turns being the primary in a never-ending cycle, starting with the first node. For a four-node network, node 0 is the primary at view 0, node 1 is the primary at view 1, and so on. When the network gets to view 4, it will “wrap back around” so that node 0 is the primary again.

In more technical terms, the primary (p) for each view is determined based on the view number (v) and the ordering of the nodes. The formula for determining the primary for any view on a given network is p = v mod n. For instance, on a four-node network at view 7, the formula p = 7 mod 4 means that node 3 will be the primary (7 mod 4 = 3).

In addition to moving through a series of views, the network moves through a series of “sequence numbers.” In the context of a Sawtooth blockchain, a sequence number is equivalent to a block number; thus, saying that a node is on sequence number 10 is the same as saying that the node is performing consensus on block 10 in the chain.

Each node maintains a few key pieces of information as part of its state:

  • The list of nodes that belong to the network
  • Its current view number
  • Its current sequence number (the block it is working on)
  • The phase of the algorithm it is currently in (see “Normal-Case Operation”)
  • A log of the blocks it has received
  • A log of all valid messages it has received from the other nodes

Normal-Case Operation

Figure 2 — Messages sent during normal operation of PBFT (Node 3 is faulty)

To commit a block and make progress, the nodes in a PBFT network go through three phases:

  1. Pre-preparing
  2. Preparing
  3. Committing

Figure 2 shows these phases for a simple four-node network. In this example, node 0 is the primary and node 3 is a faulty node (so it does not send any messages). Because there are four nodes in the network (n = 4), the value of f for the network is ⌊(4 − 1) / 3⌋ = 1. This means the example network can tolerate only one faulty node.

Pre-preparing

To kick things off, the primary for the current view will create a block and publish it to the network; each of the nodes will receive this block and perform some preliminary verification to make sure that the block is valid.

After the primary has published a block to the network, it broadcasts a pre-prepare message to all of the nodes. Pre-prepare messages contain four key pieces of information: the ID of the block the primary just published, the block’s number, the primary’s view number, and the primary’s ID. When a node receives a pre-prepare message from the primary, it will validate the message and add the message to its internal log. Message validation includes verifying the digital signature of the message, checking that the message’s view number matches the node’s current view number, and ensuring that the message is from the primary for the current view.

The pre-prepare message serves as a way for the primary node to publicly endorse a given block and for the network to agree about which block to perform consensus on for this sequence number. To ensure that only one block is considered at a time, nodes do not allow more than one pre-prepare message at a given view and sequence number.

Preparing

Once a node has received a block and a pre-prepare message for the block, and both the block and message have been added to the node’s log, the node will move on to the preparing phase. In the preparing phase, the node will broadcast a prepare message to the rest of the network (including itself). Prepare messages, like pre-prepare messages, contain the ID and number of the block they are for, as well as the node’s view number and ID.

In order to move onto the next phase, the node must wait until it has received 2f + 1 prepare messages that have the same block ID, block number, and view number, and are from different nodes. By waiting for 2f + 1 matching prepare messages, the node can be sure that all properly functioning nodes (those that are non-faulty and non-malicious) are in agreement at this stage. Once the node has accepted the required 2f + 1 matching prepare messages and added them to its log, it is ready to move onto the committing phase.

Committing

When a node enters the committing phase, it broadcasts a commit message to the whole network (including itself). Like the other message types, commit messages contain the ID and number of the block they are for, along with the node’s view number and ID. As with the preparing phase, a node cannot complete the committing phase until it has received 2f + 1 matching commit messages from different nodes. Again, this guarantees that all non-faulty nodes in the network have agreed to commit this block, which means that the node can safely commit the block knowing that it will not need to be reverted. With the required 2f + 1 commit messages accepted and in its log, the node can safely commit the block.

Once the primary node has finished the committing phase and has committed the block, it will start the whole process over again by creating a block, publishing it, and broadcasting a pre-prepare message for it.

View Changing

In order to be Byzantine fault tolerant, a consensus algorithm must prevent nodes from improperly altering the network (to guarantee safety) or indefinitely halting progress (to ensure liveness). PBFT guarantees safety by requiring all non-faulty nodes to agree in order to move beyond the preparing and committing phases. To guarantee liveness, though, there must be a mechanism to determine if the leader is behaving improperly (such as producing invalid messages or simply not doing anything). PBFT provides the liveness guarantee with view changes.

Figure 3 — Messages sent for a view change in PBFT (Node 0 is the faulty primary, Node 1 is the new primary)

When a node has determined that the primary of view v is faulty (perhaps because the primary sent an invalid message or did not produce a valid block in time), it will broadcast a view change message for view v + 1 to the network. If the primary is indeed faulty, all non-faulty nodes will broadcast view change messages. When the primary for the new view (v + 1) receives 2f + 1 view change messages from different nodes, it will broadcast a new view message for view v + 1 to all the nodes. When the other nodes receive the new view message, they will switch to the new view, and the new primary will start publishing blocks and sending pre-prepare messages.

View changes guarantee that the network can move on to a new primary if the current one is faulty. This PBFT feature allows the network to continue to make progress and not be stalled by a bad primary node.

Want to Learn More?

This blog post only scratches the surface of the PBFT consensus algorithm. Stay tuned to the Hyperledger blog for more information on PBFT, including a future post about our extensions and additional features for Sawtooth PBFT.

In the meantime, learn more about PBFT in the original PBFT paper, read the Sawtooth PBFT RFC, and check out the Sawtooth PBFT source code on GitHub.

About the Author

Logan Seeley is a Software Engineer at Bitwise IO. He has been involved in a variety of Hyperledger Sawtooth projects, including the development of the consensus API, Sawtooth Raft, and Sawtooth PBFT.

Assembling the Future of Smart Contracts with Sawtooth Sabre

By | Blog, Hyperledger Sawtooth

Is WebAssembly the future of smart contracts? We think so. In this post, we will talk about Sawtooth Sabre, a WebAssembly smart contract engine for Hyperledger Sawtooth.

We first learned about WebAssembly a couple of years ago at Midwest JS, a JavaScript conference in Minneapolis. The lecture focused on using WebAssembly inside a web browser, which had nothing to do with blockchain or distributed ledgers. Nonetheless, as we left the conference, we were excitedly discussing the possibilities for the future of smart contracts. WebAssembly is a stack-based virtual machine, newly implemented in major browsers, that provides a sandboxed approach to fast code execution. While that sounds like a perfect way to run smart contracts, what really excited us was the potential for WebAssembly to grow a large ecosystem of libraries and tools because of its association with the browser community.

A smart contract is software that encapsulates the business logic for modifying a database by processing a transaction. In Hyperledger Sawtooth, this database is called “global state”. A smart contract engine is software that can execute a smart contract. By developing Sawtooth Sabre, we hope to leverage the WebAssembly ecosystem for the benefit of application developers writing the business logic for distributed ledger systems. We expect an ever-growing list of WebAssembly programming languages and development environments.

Unblocking Contract Deployment

The primary mechanism for smart contract development in Hyperledger Sawtooth is a transaction processor, which takes a transaction as input and updates global state. Sound like a smart contract? It is! If you implement business logic in the transaction processor, then you are creating a smart contract. If you instead implement support for smart contracts with a virtual machine (or interpreter) like WebAssembly, then you have created a smart contract engine.

If we can implement smart contracts as transaction processors, why bother with a WebAssembly model like Sabre? Well, it is really about deployment strategy. There are three deployment models for smart contracts:

  • Off-chain push: Smart contracts are deployed by pushing them to all nodes from a central authority on the network.
  • Off-chain pull: Smart contracts are deployed by network administrators pulling the code from a centralized location. Network administrators operate independently.
  • On-chain: Smart contracts are submitted to the network and inserted into state. Later, as transactions are submitted, the smart contracts are read from state and executed (generally in a sandboxed environment).

We won’t discuss off-chain push, other than to note that this strategy—having a centralized authority push code to everyone in the network—isn’t consistent with distributed ledgers and blockchain’s promise of distributing trust.

Off-chain pull is an opt-in strategy for updating software, and is widely used for Linux distribution updates. We use this model to distribute Sawtooth, including the transaction processors. By adding the Sawtooth apt repository on an Ubuntu system, you pull the software and install it via the apt-get command. Each software repository is centrally managed, though it is possible to have multiple software repositories configured and managed independently. This model has a practical problem—it requires administrators across organizations to coordinate software updates—which makes business logic updates more complicated than we would like.
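
As an illustration of the off-chain pull model (the repository line below is a placeholder; see the Sawtooth installation documentation for the current repository and distribution):

$ sudo add-apt-repository '<sawtooth-apt-repository-line>'
$ sudo apt-get update
$ sudo apt-get install -y sawtooth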

On-chain smart contracts are installed on the blockchain with a transaction that stores the contract into an address in global state. The smart contract can later be executed with another transaction. The execution of the smart contract starts by loading it from global state and continues by executing the smart contract in a virtual machine (or interpreter). On-chain smart contracts have a big advantage over off-chain contracts: because the blockchain is immutable and the smart contract itself is now on the chain, we can guarantee that the same smart contract code was used to create the original block and during replay. Specifically, the transaction will always be executed using the same global state, including the stored smart contract. Because contracts are deployed by submitting transactions onto the network, we can define the process that controls the smart contract creation and deletion with other smart contracts! Yes, this is a little meta, but isn’t it great?

The on-chain approach seems superior, so why did we implement Hyperledger Sawtooth transaction processors with the off-chain model? Because our long-term vision—and a main focus for Sawtooth—has been smart contract engines that run on-chain smart contracts. Smart contract engines are more suitable for off-chain distribution, because they do not contain business logic, and are likely to be upgraded at the same time as the rest of the software.

Our initial transaction processor design reflected our goal for several types of smart contract engines. We later implemented one of them: Sawtooth Seth, a smart contract engine that runs Ethereum Virtual Machine (EVM) smart contracts. For us, Seth was a validation that our transaction processor design was flexible enough to implement radically different approaches for smart contracts. Like Ethereum, Seth uses on-chain smart contracts, so Seth is great if you want Ethereum characteristics and compatibility with tools such as Truffle. However, Seth is limited by Ethereum’s design and ecosystem, and does not expose all the features in our blockchain platform. We knew that we needed an additional approach for smart contracts in Hyperledger Sawtooth.

Crafting a Compatible Path Forward

Sawtooth Sabre, our WebAssembly smart contract engine, is our solution for native, on-chain smart contracts.  

The programming model for Sabre smart contracts is the same as that for transaction processors. A transaction processor has full control of data representation, both in global state and in transaction payloads (within certain determinism requirements). Hyperledger Sawtooth uses a global state Merkle-Radix tree, and the transaction processors handle addressing within the tree. A transaction processor can use different approaches for addressing, ranging from calculating an address with a simple field hash to organizing data within the tree in a complex way (to optimize for parallel execution, for example). Multiple transaction processors can access the same global state if they agree on the conventions used in that portion of state.

Sawtooth Sabre smart contracts use this same method for data storage, which means they can access global state in the same way that transaction processors do. In fact, smart contracts and transaction processors can comfortably coexist on the same blockchain.

The other major feature is SDK compatibility. The Sawtooth Sabre SDK API is compatible with the Hyperledger Sawtooth transaction processor API, which means that smart contracts written in Rust can switch between the Sawtooth SDK and the Sabre SDK with a simple compile-time flag. (Currently, Rust is the only supported Sabre SDK.) The details of running within a WebAssembly interpreter are hidden from the smart contract author. Because Sabre smart contracts use the same API as transaction processors, porting a transaction processor to Sabre is relatively easy—just change a few import statements to refer to the Sabre SDK instead of the Hyperledger Sawtooth SDK.

Now the choice between off-chain and on-chain smart contracts is a compile-time option. We use this approach regularly, because we can separate our deployment decisions from the decisions for smart contract development. Most of the transaction-processor-based smart contracts included in Hyperledger Sawtooth are now compatible with Sawtooth Sabre.

A Stately Approach to Permissioning

Hyperledger Sawtooth provides several ways to control which transaction processors can participate on a network. As explained above, transaction processors are deployed with the off-chain pull method. This method lets administrators verify the transaction processors before adding them to the network. Note that Hyperledger Sawtooth requires the same set of transaction processors for every node in the network, which prevents a single node from adding a malicious transaction processor. Additional controls can limit the accepted transactions (by setting the allowed transaction types) and specify each transaction processor’s read and write access to global state (by restricting namespaces).

These permissions, however, are not granular enough for Sawtooth Sabre, which is itself a transaction processor. Sabre is therefore subject to the same restrictions, which would then apply to all smart contracts. Using the same permission control has several problems:

  • Sabre smart contracts are transaction-based, which means that a smart contract is created by submitting a transaction. This removes the chance to review a contract before it is deployed.
  • Sabre transactions must be accepted by the network to run smart contracts, but we cannot limit which smart contracts these transactions are for, because this information is not available to the validator.
  • Sabre must be allowed to access the same areas of global state that the smart contracts can access.

An “uncontrolled” version of Sabre would make it too easy to deploy smart contracts that are not inherently restricted to the permissions that the publisher of the smart contract selects.

Our solution in Sawtooth Sabre is to assign owners for both contracts and namespaces (a subset of global state). A contract has a set of owners and a list of namespaces that it expects to read from and write to. Each namespace also has an owner. The namespace owner can choose which contracts have read and write access to that owner’s area of state. If a contract does not have the namespace permissions it needs, a transaction run against the smart contract will fail. So, while the namespace owner and contract owner are not necessarily the same, there is an implied degree of trust and coordination between them.

Also, contracts are versioned. Only the owners of a contract are able to submit new versions to Sabre, which removes the chance that a malicious smart contract change could be accepted.

A Final Note About WebAssembly

On-chain WebAssembly isn’t limited to just smart contracts. For example, in Hyperledger Grid, we are using on-chain WebAssembly to execute smart permissions for organization-specific permissioning. Another example is smart consensus, which allows consensus algorithm updates to be submitted as a transaction. There are several more possibilities for on-chain WebAssembly as well.

In short, we think WebAssembly is awesome! Sawtooth Sabre combines WebAssembly with existing Hyperledger Sawtooth transaction processors to provide flexible smart contracts with all the benefits of both a normal transaction processor and on-chain smart-contract execution. Sabre also takes advantage of WebAssembly’s ability to maintain dual-target smart contracts, where the contract can be run as either a native transaction processor or a Sabre contract. And the permission control in Sawtooth Sabre allows fine-grained control over both contract changes and access to global state.

We are incredibly grateful for Cargill’s sponsorship of Sawtooth Sabre and Hyperledger Grid (a supply chain platform built with Sawtooth Sabre). We would also like to thank the following people who help make our blog posts a success: Anne Chenette, Mark Ford, David Huseby, and Jessica Rampen.

About the Authors

Andi Gunderson is a Software Engineer at Bitwise IO and maintainer on Hyperledger Sawtooth and Sawtooth Sabre.

Shawn Amundson is Chief Technology Officer at Bitwise IO, a Hyperledger technical ambassador, and a maintainer and architect on Hyperledger Sawtooth and Hyperledger Grid.

Hyperledger Sawtooth Blockchain Performance Metrics with Grafana

By | Blog, Hyperledger Sawtooth

This blog post shows how to set up Grafana to display Sawtooth and system statistics.

Overview

Grafana is a useful tool for displaying Sawtooth performance statistics. Hyperledger Sawtooth optionally generates performance metrics from the validator and REST API components for each node. Sawtooth sends the metrics to InfluxDB, a database optimized for fast access to time series data. Telegraf, a metrics reporting agent, gathers supplemental system information from the Linux kernel and also sends it to InfluxDB. Finally, Grafana reads from InfluxDB and displays an assortment of statistics on several graphical charts in your web browser. Figure 1 illustrates the flow of data.

Figure 1. Metrics gathering data flow.

 

Grafana can display many validator, REST API, and system statistics. The following lists all supported metrics:

Sawtooth Validator Metrics

  • Block number
  • Committed transactions
  • Blocks published
  • Blocks considered
  • Chain head moved to fork
  • Pending batches: number of batches waiting to be processed
  • Batches rejected (back-pressure): number of batches rejected due to back-pressure tests
  • Transaction execution rate, in batches per second
  • Transactions in process
  • Transaction processing duration (99th percentile), in milliseconds
  • Valid transaction response rate
  • Invalid transaction response rate
  • Internal error response rate
  • Message round trip times, by message type (95th percentile), in seconds
  • Messages sent, per second, by message type
  • Messages received, per second, by message type

Sawtooth REST API Metrics

  • REST API validator response time (75th percentile), in seconds
  • REST API batch submission rate, in batches per second

System Metrics

  • User and system host CPU usage
  • Disk I/O, in kilobytes per second
  • I/O wait percentage
  • RAM usage, in megabytes
  • Context switches
  • Read and write I/O ops
  • Thread pool task run time and task queue times
  • Executing thread pool workers in use
  • Dispatcher server thread queue size

The screenshot in Figure 2 gives you an idea of the metrics that Grafana can show.

Figure 2. Example Grafana graph display.

Setting Up InfluxDB and Grafana

By default, Hyperledger Sawtooth does not gather performance metrics. The rest of this post explains the steps for enabling this feature. The overall order of steps is listed below with in-depth explanations of each step following.

  1. Meeting the prerequisites: Sawtooth blockchain software running on Ubuntu and Docker CE installed
  2. Installing and configuring InfluxDB to store performance metrics
  3. Building and installing Grafana
  4. Configuring Grafana to display the performance metrics
  5. Configuring Sawtooth to generate performance metrics
  6. Installing and configuring Telegraf to collect metrics

1. Prerequisites: Sawtooth and Docker

Install the Hyperledger Sawtooth software and Docker CE. I recommend Sawtooth 1.1 on Ubuntu 16.04 LTS (Xenial). Sawtooth installation instructions are here: https://sawtooth.hyperledger.org/docs/core/releases/latest/app_developers_guide/ubuntu.html

The Sawtooth blockchain software must be up and running before you proceed.
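A quick way to confirm that the node is up (assuming the REST API is listening on its default port, 8008) is to list the current blocks:

$ sawtooth block list --url http://localhost:8008
$ curl -s http://localhost:8008/blocks | head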

Docker CE installation instructions are here: https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-using-the-repository

ProTip: These instructions assume a Sawtooth node is running directly on Ubuntu, not in Docker containers. To use Grafana with Sawtooth on Docker containers, additional steps (not described here) are required to allow the Sawtooth validator and REST API containers to communicate with the InfluxDB daemon at TCP port 8086.
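One possible starting point, shown only as a sketch, is to attach the InfluxDB container and your Sawtooth containers to a shared Docker network so that the validator and REST API can reach influxd by container name (for example, http://sawtooth-stats-influxdb:8086). The validator container name below is a placeholder that depends on how you started Sawtooth.

# Sketch: shared network so Sawtooth containers can reach InfluxDB by name.
sudo docker network create sawtooth-metrics
sudo docker network connect sawtooth-metrics sawtooth-stats-influxdb
sudo docker network connect sawtooth-metrics <your-validator-container>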

2. Installing and Configuring the InfluxDB Container

InfluxDB stores the Sawtooth metrics used in the analysis and graphing. Listing 1 shows the commands to download the InfluxDB Docker container, create a database directory, start the Docker container, and verify that it is running.

sudo docker pull influxdb
sudo mkdir -p /var/lib/influx-data
sudo docker run -d -p 8086:8086 \
    -v /var/lib/influx-data:/var/lib/influxdb \
    -e INFLUXDB_DB=metrics \
    -e INFLUXDB_HTTP_AUTH_ENABLED=true \
    -e INFLUXDB_ADMIN_USER="admin" \
    -e INFLUXDB_ADMIN_PASSWORD="pwadmin" \
    -e INFLUXDB_USER="lrdata" \
    -e INFLUXDB_USER_PASSWORD="pwlrdata" \
    --name sawtooth-stats-influxdb influxdb
sudo docker ps --filter name=sawtooth-stats-influxdb

Listing 1. Commands to set up InfluxDB.

ProTip: You can change the sample passwords here, pwadmin and pwlrdata, to anything you like. If you do, you must use your passwords in all the steps below. Avoid or escape special characters in your password such as “,@!$” or you will not be able to connect to InfluxDB.
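Before moving on, you can confirm that the container is up and that authentication works by pinging InfluxDB’s HTTP API and running a simple authenticated query (these are standard InfluxDB 1.x endpoints). The first command should return a “204 No Content” response; the second should list the “metrics” database.

$ curl -s -I http://localhost:8086/ping
$ curl -s -G http://localhost:8086/query \
   -u admin:pwadmin --data-urlencode "q=SHOW DATABASES"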

3. Building and Installing the Grafana Container

Grafana displays the Sawtooth metrics in a web browser. Listing 2 shows the commands to clone the Sawtooth repository, build the Grafana Docker container, start the container, and verify that it is running.

git clone https://github.com/hyperledger/sawtooth-core
cd sawtooth-core/docker
sudo docker build . -f grafana/sawtooth-stats-grafana \
    -t sawtooth-stats-grafana
sudo docker run -d -p 3000:3000 --name sawtooth-stats-grafana \
    sawtooth-stats-grafana
sudo docker ps --filter name=sawtooth-stats-grafana

Listing 2. Commands to set up Grafana.

Building the Grafana Docker container takes several steps and downloads several packages into the container. It ends with “successfully built” and “successfully tagged” messages.

4. Configuring Grafana

Configure Grafana from your web browser. Navigate to http://localhost:3000/ (replace “localhost” with the hostname or IP address of the system where you started the Grafana container in the previous step).

  1. Log in as user “admin” with password “admin”.
  2. (Optional) To change the Grafana “admin” password, click the orange spiral icon on the top left, select “admin” in the pull-down menu, click “Profile” and then “Change Password”, enter the old password (admin) and your new password, and click “Change Password”. This Grafana password is not related to the InfluxDB passwords used in the previous step.
  3. Click the orange spiral icon again on the top left, then click on “Data Sources” in the drop-down menu.
  4. Click on the “metrics” data source.
  5. Under “URL”, change “influxdb” in the URL to the hostname or IP address of the system where InfluxDB is running. (Use the same hostname that you used for the Grafana web page, since the Grafana and InfluxDB containers run on the same host.) This is the address Grafana uses to reach InfluxDB.
  6. Under “Access”, change “proxy” to “direct” (unless you are going through a proxy to access the remote host running InfluxDB).
  7. Under “InfluxDB Details”, set “User” to “lrdata” and “Password” to “pwlrdata”.
  8. Click “Save & Test” to save the configuration in the Grafana container.
  9. If the test succeeds, the green messages “Data source updated” and “Data source is working” appear, as shown in Figure 3. Otherwise, you get a red error message that you must fix before proceeding. An error at this point is usually a network problem, such as a firewall or proxy issue, or a wrong hostname or IP address.

Figure 3. Test success messages in Grafana.
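If you prefer to script this configuration instead of clicking through the UI, Grafana also exposes data sources over its HTTP API. The sketch below inspects and then updates the “metrics” data source; the payload fields and the trailing data source id are assumptions to adapt from the first command’s output, and older Grafana versions may require additional fields.

# Sketch: read the "metrics" data source, then update it
# (replace <id> with the id returned by the first command).
$ curl -s -u admin:admin http://localhost:3000/api/datasources/name/metrics
$ curl -s -u admin:admin -X PUT -H "Content-Type: application/json" \
   -d '{"name": "metrics", "type": "influxdb", "access": "direct",
        "url": "http://influxdb-host:8086", "database": "metrics",
        "user": "lrdata", "password": "pwlrdata"}' \
   http://localhost:3000/api/datasources/<id>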

 

For the older Sawtooth 1.0 release, follow these additional steps to add the Sawtooth 1.0 dashboard to Grafana (skip these steps for Sawtooth 1.1):

  1. In your terminal, copy the file sawtooth_performance.json from the sawtooth-core repository you cloned earlier to your current directory by issuing the commands in Listing 3.
$ cp \
    sawtooth-core/docker/grafana/dashboards/sawtooth_performance.json .

Or download this file:

$ wget \
    https://raw.githubusercontent.com/hyperledger/sawtooth-core/1-0/docker/grafana/dashboards/sawtooth_performance.json

Listing 3. Commands for getting the Sawtooth 1.0 dashboard file.

  2. In your web browser, click the orange spiral icon again on the top left, select “Dashboards” in the drop-down menu, then click on “Import” and “Upload .json file”.
  3. Navigate to the directory where you saved sawtooth_performance.json.
  4. Select “metrics” in the drop-down menu and click on “Import”.

5. Configuring Sawtooth

The Sawtooth validator and REST API components each report their own set of metrics, so you must configure the login credentials and destination for InfluxDB. In your terminal window, run the shell commands in Listing 4 to create or update the Sawtooth configuration files validator.toml and rest_api.toml:

for i in /etc/sawtooth/validator.toml /etc/sawtooth/rest_api.toml
do
    [[ -f $i ]] || sudo -u sawtooth cp $i.example $i
    echo 'opentsdb_url = "http://localhost:8086"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_db = "metrics"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_username = "lrdata"' \
        | sudo -u sawtooth tee -a $i
    echo 'opentsdb_password = "pwlrdata"' \
        | sudo -u sawtooth tee -a $i
done

Listing 4. Commands to create or update the Sawtooth configuration.
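A quick grep confirms that both files picked up the new lines:

$ grep opentsdb_ /etc/sawtooth/validator.toml /etc/sawtooth/rest_api.toml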

After verifying that the files validator.toml and rest_api.toml each have the four new opentsdb_* configuration lines, restart the sawtooth-validator and sawtooth-rest-api processes using the commands in Listing 5.

sudo -v
sudo -u sawtooth pkill sawtooth-rest-api
sudo -u sawtooth pkill sawtooth-validator
sudo -u sawtooth sawtooth-validator -vvv &
sudo -u sawtooth sawtooth-rest-api -vv &

Listing 5. Manual restart commands.

Add any command-line options you normally use to the commands above.

If you use systemctl, Listing 6 shows the commands needed to restart the services:

systemctl restart sawtooth-rest-api
systemctl restart sawtooth-validator

Listing 6. Systemctl restart commands.

ProTip: The InfluxDB daemon, influxd, listens on TCP port 8086, so this port must be accessible over the local network from the validator and REST API components. By default, influxd only listens to localhost.

6. Installing and Configuring Telegraf

Telegraf, InfluxDB’s metrics reporting agent, gathers metrics information from the Linux kernel to supplement the metrics sent by Sawtooth. Telegraf needs the login credentials and destination for InfluxDB. Install Telegraf using the commands in Listing 7.

curl -sL https://repos.influxdata.com/influxdb.key \
   | sudo apt-key add -
sudo apt-add-repository \
   "deb https://repos.influxdata.com/ubuntu xenial stable"
sudo apt-get update
sudo apt-get install telegraf

Listing 7. Commands for installing Telegraf.

The commands in Listing 8 set up the Telegraf configuration file correctly.

echo '[[outputs.influxdb]]' \
    | sudo tee /etc/telegraf/telegraf.d/sawtooth.conf
echo 'urls = ["http://localhost:8086"]' \
    | sudo tee -a /etc/telegraf/telegraf.d/sawtooth.conf
echo 'database = "metrics"' \
    | sudo tee -a /etc/telegraf/telegraf.d/sawtooth.conf
echo 'username = "lrdata"' \
    | sudo tee -a /etc/telegraf/telegraf.d/sawtooth.conf
echo 'password = "pwlrdata"' \
    | sudo tee -a /etc/telegraf/telegraf.d/sawtooth.conf

Listing 8. Create the Telegraf configuration file.
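Before restarting the service, you can optionally run Telegraf once in test mode to confirm that the configuration parses and that metrics are gathered; the --test flag prints the collected metrics to stdout instead of sending them to InfluxDB.

$ sudo telegraf --config /etc/telegraf/telegraf.conf \
   --config-directory /etc/telegraf/telegraf.d --test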

Finally, restart Telegraf with the command in Listing 9.

sudo systemctl restart telegraf

Listing 9. Restart Telegraf.

Try it out!

After completing all the previous steps, Sawtooth and system statistics should appear on the Grafana dashboard webpage. To see them, click the orange spiral icon on the top left, click on “Dashboards” in the drop-down menu, then click on “Home” next to the spiral icon and select the dashboard. This is the Grafana dashboard.

Generate some transactions so you can see activity on the Grafana dashboard. For example, run the intkey workload generator by issuing the Listing 10 commands in a terminal window to create test transactions at the rate of 1 batch per second.

intkey-tp-python -v &
intkey workload --rate 1 -d 5

Listing 10. Start the workload generator to get some statistics.

I recommend changing the time interval in the dashboard from 24 hours to something like 30 minutes so you can see new statistics. Do that by clicking on the clock icon in the upper right of the dashboard. Then click on the refresh icon, ♻, to update the page. Individual graphs can be enlarged or shrunk by moving the dotted triangle tab in the lower right of each graph.

Troubleshooting Tips

    • If the Grafana webpage is not accessible, the Grafana container is not running or is not accessible over the network. To verify that it is running and start it:

 

$ sudo docker ps --filter name=sawtooth-stats-grafana
$ sudo docker start sawtooth-stats-grafana

 

    • If the container is running, the Docker host may not be accessible over the network.
    • If no system statistics appear at the bottom of the dashboard, either Telegraf is not configured or the InfluxDB container is not running or is not accessible over the network. To verify that InfluxDB is running and start it:

 

$ sudo docker ps --filter name=sawtooth-stats-influxdb
$ sudo docker start sawtooth-stats-influxdb

 

    • Check that the InfluxDB server, influxd, is reachable from the local network. Use the InfluxDB client (package influxdb-client) or curl (or both) to test. The InfluxDB client should show a “Connected to” message, and the curl command should show a “204 No Content” message.

 

$ influx -username lrdata -password pwlrdata -port 8086 \
   -host localhost
$ curl -s -I localhost:8086/ping

 

  • Check that the interval range (shown next to the clock on the upper right of the dashboard) is low enough (such as 1 hour).
  • Check that the validator and REST API .toml files have the opentsdb_* configuration lines and that the Telegraf sawtooth.conf file has the matching outputs.influxdb settings. Make sure that the passwords and URLs are correct, that they match each other, and that they match the passwords you set when you started the InfluxDB container.
  • Click the refresh icon, ♻, on the upper right of the dashboard.

Further Information

Safety, Performance and Innovation: Rust in Hyperledger Sawtooth

By | Blog, Hyperledger Sawtooth

Hello, fellow Rustaceans and those curious about Rust. The Hyperledger Sawtooth team is using Rust for new development, so these are exciting times for both Rust and Hyperledger Sawtooth. Rust is a new language that is quickly growing in popularity. The Hyperledger Sawtooth community is using Rust to build components to give application developers and administrators more control, more flexibility, and greater security for their blockchain networks. This blog post will give an overview of some of the new components being built in Rust.

Hyperledger Sawtooth was originally written in Python, which was a good choice for initial research and design. In 2018, the Sawtooth team chose the Rust language for new development. A key benefit is that Rust supports concurrency while also emphasizing memory safety. Several new core components, transaction processors, and consensus engines have already been written in Rust.

Compared to Python, Rust’s most noticeable feature is its expressive type system, along with its compile-time checks. Rust’s ownership and borrowing rules guarantee at compile time that an object has either a single mutable reference or any number of immutable references. These features force the developer to account for all possible error and edge cases, making our interfaces more robust as we design them.

The validator’s block validation and publishing components are a good example of our recent interface changes. Before release 1.1, these components were heavily tied to PoET, the original consensus algorithm in Hyperledger Sawtooth. In addition, they were largely synchronous, where committing a block started the process of building a new block to publish. As we implemented the consensus engine interface, we took the opportunity to rewrite these components in Rust, which helped us to separate them more cleanly. Now there are three separate asynchronous tasks—block validation, block commit, and block publishing—that share a small amount of information. For example, the block publishing component is informed when batches are committed so that it can take them out of a pending queue, but none of the tasks starts either of the other tasks. For more information, see the block validation and block publishing components in the sawtooth-core repository.

This clean separation of tasks allows the new consensus interface to function correctly and makes it easier to develop new consensus engines. The Sawtooth team has already written two new engines in Rust: Sawtooth PBFT and Sawtooth Raft (which uses the PingCAP raft library, raft-rs). The Sawtooth team is proud of the work we have done on these consensus engines and the flexibility they provide to Sawtooth community members who are building blockchain applications.

Rust also excels in its support for compiling to WASM, which can be used for smart contracts. Hyperledger Sawtooth already had Seth, which supports running Ethereum Solidity smart contracts using a transaction processor, but it now also has Sawtooth Sabre, a transaction processor that runs WASM smart contracts, such as those compiled from Rust to the WASM target. Sawtooth Sabre includes an innovative feature: using registries for namespaces and contracts. The namespace registry lets administrators control what information a contract can access. The contract registry lists versions of the contract, along with a SHA-512 hash of the contract, giving application developers confidence that the correct contract is registered. Sabre supports API compatibility with the Sawtooth Rust SDK, so developers can write a smart contract that can run either within Sabre or natively as a transaction processor, depending on the deployment methodology.

Rust has also influenced how changes to Hyperledger Sawtooth are handled. Our new RFC process is modeled after Rust’s RFC process, which provides a community-oriented forum for proposing and designing large changes. The Hyperledger Sawtooth team has put effort into a community-oriented design process at sawtooth-rfcs. The consensus API RFC is a good example: The guide-level explanation clearly lays out the purpose and reasoning behind the new component, then has a reference-level explanation of the technical details needed to guide implementation. The Sawtooth RFC process has been a good way to involve the larger Sawtooth community in driving the design and implementation of Sawtooth.

What’s next for Rust in Sawtooth? In 2019, the Sawtooth team is rewriting the remaining Sawtooth validator components in Rust. That means the networking and transaction processing components will be getting an overhaul. Expect that the networking components will be redesigned. The transaction processing components will have minor changes internally, while keeping a stable API. In both cases, there will be an increase in performance and stability thanks to Rust.

Come join the Hyperledger Sawtooth community in 2019 by writing your own transaction processor in Rust or even a consensus engine. Get in touch on the #sawtooth channel on RocketChat.

To learn more about Rust in Hyperledger Sawtooth, check out our recent changes:

 

About the Author:

Boyd Johnson is a Software Engineer at Bitwise IO who has worked on many core components of Hyperledger Sawtooth, including transaction processing components in Python and block validation and block publishing components in Rust. While originally a committed Pythonista, he has oxidized into a Rustacean.

Floating the Sawtooth Raft: Implementing a Consensus Algorithm in Rust

By | Blog, Hyperledger Sawtooth

The 1.1 release of Hyperledger Sawtooth includes official support for a new consensus API and SDKs. These tools, covered in an earlier blog post, open up new possibilities for Sawtooth developers, giving them the power to choose a consensus algorithm that best suits their needs. With support for Proof of Elapsed Time (PoET) and Dev mode consensus engines already available, we decided to expand the platform’s repertoire to include a wider variety of engines and support a broader array of features and use cases. The first of these new engines implements the Raft consensus algorithm. This blog post gives a brief overview of the Raft algorithm, explains our decision to implement it, and takes a quick look at the development of the Raft consensus engine.

What Is Raft?

Originally developed by Diego Ongaro and John Ousterhout at Stanford University in 2013, Raft is designed to be an easy-to-understand, crash fault tolerant consensus algorithm for managing a replicated log. Its primary goal is understandability, since most deterministic consensus algorithms previously developed were convoluted and difficult to grasp. Raft provides crash fault tolerance, allowing a network to continue to make progress as long as a majority of the nodes are available.

Raft has the following key characteristics that set it apart from many other consensus algorithms:

  • Strong leadership: Networks elect a leader that is responsible for making progress
  • Non-forking: Unlike lottery-based algorithms, Raft does not produce forks
  • Closed membership: Raft does not support open-enrollment, but nodes can be added and removed by an administrator
  • Fully peered: All nodes must be peered with all other nodes
  • Crash fault tolerant: Raft does not provide Byzantine fault tolerance, only crash fault tolerance

Raft’s leader-follower model is a direct result of the emphasis placed on simplicity and understandability. With a single node controlling the progress of the log, no forks arise so no extra logic is needed to choose between forks. The leadership model has important implications for other aspects of the algorithm. Because a majority of nodes must agree on the elected leader and on all network progress, membership must be semi-fixed to prevent disjoint majorities. This means that Raft networks do not support open enrollment; membership in the network is restricted and can only be modified by a privileged user.

Raft consensus networks must also be fully peered—with each node connected to all other nodes—because messages need to be passed between all nodes. Furthermore, because a large volume of messages is required for the algorithm to work, larger Raft networks perform slower than smaller networks. If high performance is important, Raft would be best used for smaller networks—usually 10 nodes or fewer.

Lastly, Raft is limited to just guaranteeing crash fault tolerance, not Byzantine fault tolerance. This makes the Raft algorithm ill-suited for networks that are subject to Byzantine faults such as groups of malicious nodes. For more information about the Raft algorithm, please see the original Raft paper and the Raft website.

Why Raft?

Raft was our choice for the first algorithm with the new consensus API for several reasons. First, it is very different from PoET. Where PoET is a forking, lottery-style algorithm, Raft is leader-based and non-forking. This allowed us not only to demonstrate the flexibility of the Sawtooth consensus API, but also to make available an algorithm that is well suited for situations where an algorithm like PoET is not a good fit.

Also, Raft is an inherently simple and easy-to-understand algorithm. This made it straightforward to adapt to Sawtooth and also made it an excellent example for developing other engines. Furthermore, we took advantage of an existing high-quality implementation of Raft in the Rust programming language called raft-rs.

However, Raft lacks Byzantine fault tolerance. Therefore, we are also working on a PBFT consensus engine that is suitable for consortium-style networks with adversarial trust characteristics.

The Implementation

The raft-rs library, developed by PingCAP, provides almost everything we needed to implement a consensus engine based on the Raft algorithm; it provides a struct representing a Raft “node” with a handful of straightforward methods for “driving” the algorithm. The folks at PingCAP wrote an excellent blog post explaining how they implemented this library, so we will not duplicate their efforts here.

Our only major extension to the raft-rs library is a stable storage mechanism, since the library only provided in-memory storage. This extension is required to ensure that Sawtooth nodes can restart in the event of a crash or arbitrary shutdown. If you would like to see the end results, all of the code that follows can be found in the Sawtooth Raft GitHub repository and the Rust SDK.

Defining the Engine

The first step in creating a consensus engine with the Rust SDK is to implement the Engine trait:

pub trait Engine {
    /// Called after the engine is initialized, when a connection
    /// to the validator has been established. Notifications from
    /// the validator are sent along `updates`. `service` is used
    /// to send requests to the validator.
    fn start(
        &mut self,
        updates: Receiver<Update>,
        service: Box<Service>,
        startup_state: StartupState,
    ) -> Result<(), Error>;

    /// Get the version of this engine
    fn version(&self) -> String;

    /// Get the name of the engine, typically the algorithm being
    /// implemented
    fn name(&self) -> String;
}

Raft’s Engine implementation is in engine.rs. The start method is the main entry point. In Raft—as in most consensus engines—three main tasks need to be performed here: loading configuration, creating the struct(s) that contain the core logic, and entering a main loop.

Loading Configuration

For Raft, loading configuration consists primarily of reading a few settings that are stored on-chain. We do this by making a call to the load_raft_config function in config.rs:

// Create the configuration for the Raft node.
let cfg = config::load_raft_config(
    &local_peer_info.peer_id,
    chain_head.block_id,
    &mut service
);
info!("Raft Engine Config Loaded: {:?}", cfg);

let RaftEngineConfig {
    peers,
    period,
    raft: raft_config,
    storage: raft_storage
} = cfg;

The settings are loaded by calling the get_settings method in the consensus service, with the chain head provided in the startup_state:

let settings_keys = vec![
    "sawtooth.consensus.raft.peers",
    "sawtooth.consensus.raft.heartbeat_tick",
    "sawtooth.consensus.raft.election_tick",
    "sawtooth.consensus.raft.period",
];

let settings: HashMap<String, String> = service
    .get_settings(block_id,
        settings_keys.into_iter().map(String::from).collect())
    .expect("Failed to get settings keys");

Some of these settings are optional, so defaults are used if they’re unset.

Creating the Raft Node

Once the configuration is loaded, we create the Raft node that contains the main logic of the algorithm:

// Create the Raft node.
let raft_peers: Vec<RaftPeer> = raft_config.peers
    .iter()
    .map(|id| RaftPeer { id: *id, context: None })
    .collect();

let raw_node = RawNode::new(
    &raft_config,
    raft_storage,
    raft_peers
).expect("Failed to create new RawNode");

let mut node = SawtoothRaftNode::new(
    local_peer_info.peer_id,
    raw_node,
    service,
    peers,
    period
);

The RawNode struct is provided by the raft-rs library; it contains the logic for the Raft algorithm itself and provides methods for SawtoothRaftNode to direct it. The SawtoothRaftNode, found in node.rs, defines six methods that are called by the consensus engine:

  • on_block_new is called when the validator notifies the engine that it has received a new block
  • on_block_valid is called when the validator notifies the engine that it has validated a block
  • on_block_commit is called when the validator notifies the engine that it has committed a block
  • on_peer_message is called when one node’s consensus engine sends a message to another
  • tick is used to move the Raft algorithm forward by one “tick”
  • process_ready contains much of the logic that changes the state of Raft

The first four methods (on_block_new, on_block_valid, on_block_commit, and on_peer_message) will be defined for the majority of consensus engines since they handle important messages that are delivered by the validator. The last two methods (tick and process_ready) are specific to Raft; other consensus engines will likely have different methods to handle the logic of the engine.

Entering the Main Loop

With a Raft node created and ready to handle updates, we enter the main loop of our consensus engine:

let mut raft_ticker = ticker::Ticker::new(RAFT_TIMEOUT);
let mut timeout = RAFT_TIMEOUT;

// Loop forever to drive the Raft.
loop {
    match updates.recv_timeout(timeout) {
        Err(RecvTimeoutError::Timeout) => (),
        Err(RecvTimeoutError::Disconnected) => break,
        Ok(update) => {
            debug!("Update: {:?}", update);
            if !handle_update(&mut node, update) {
                break;
            }
        }
    }

    timeout = raft_ticker.tick(|| {
        node.tick();
    });

    if let ReadyStatus::Shutdown = node.process_ready() {
        break;
    }
}

Raft’s main loop performs three main tasks. First, check if there are any updates that have been sent to the engine by the validator. If there is an update, handle it by calling the appropriate method of the SawtoothRaftNode:

fn handle_update<S: StorageExt>(
    node: &mut SawtoothRaftNode<S>,
    update: Update,
) -> bool {
    match update {
        Update::BlockNew(block) => node.on_block_new(block),
        Update::BlockValid(block_id) => node.on_block_valid(block_id),
        Update::BlockCommit(block_id) => node.on_block_commit(&block_id),
        Update::PeerMessage(message, _id) => node.on_peer_message(&message),
        Update::Shutdown => {
            warn!("Shutting down");
            return false
        },
        update => warn!("Unhandled update: {:?}", update),
    }
    true
}

Second, move the Raft algorithm forward by one “tick” at a regular interval, using the Ticker object defined in ticker.rs and a call to the node’s tick method. This “tick” roughly corresponds to progress in the Raft algorithm itself.

Finally, call the node’s process_ready method, which checks the state of the Raft algorithm to determine if it needs to take any actions as a result of the last “tick”.

Starting the Engine

Once the consensus engine itself has been defined, starting it up and connecting it to the validator is easy. In the main function of main.rs, all we need to do is determine the validator’s endpoint (using a command-line argument in Raft), instantiate the engine, and start it using the SDK’s ZmqDriver:

let raft_engine = engine::RaftEngine::new();

let (driver, _stop) = ZmqDriver::new();

info!("Raft Node connecting to '{}'", &args.endpoint);
driver.start(&args.endpoint, raft_engine).unwrap_or_else(|err| {
    error!("{}", err);
    process::exit(1);
});

See for Yourself!

Want to try running a Sawtooth network with Raft consensus? Check out the Raft source code on GitHub as well as the Sawtooth Raft documentation for all you need to get started.

For more on the consensus API and developing your own consensus engine for Hyperledger Sawtooth, take a look at our previous blog post.

 

About the Author

 

Logan Seeley is a Software Engineer at Bitwise IO. He has been involved in a variety of Hyperledger Sawtooth projects, including the development of the consensus API, Sawtooth Raft, and Sawtooth PBFT.