Prologue

Draft

The Catalyst-Core Documentation is currently undergoing review and re-write.

This documentation is published AS-IS. There is no guarantee that it is correct with regard to the current implementation; the source of truth for the implementation is the source code.

Patches to improve the documentation are very welcome. See Contributing.

Introduction

What is Catalyst-Core

Core Ledger

Welcome to the Jörmungandr User Guide.

Jörmungandr is a node implementation, written in Rust, with the initial aim to support the Ouroboros family of consensus protocols.

A node is a participant of a blockchain network, continuously making, sending, receiving, and validating blocks. Each node is responsible for making sure that all the rules of the protocol are followed.

Mythology

Jörmungandr refers to the Midgard Serpent in Norse mythology. It is a nod to Ouroboros, the ancient Egyptian symbol of a serpent eating its own tail, as well as to the IOHK paper on the Ouroboros proof-of-stake protocol.

General Concepts

This chapter covers the general concepts of the blockchain and their application in the node; it is followed by the node organisation and the user interaction with it.

Blockchain concepts

Time

Slots represent the basic unit of time in the blockchain, and at each slot a block may be present.

Consecutive slots are grouped into epochs, whose size is a protocol parameter that can be updated.
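The slot/epoch arithmetic is plain integer division. A quick sketch (the epoch size used here is purely illustrative, not a constant of any real network):

```shell
# Illustration only: map an absolute slot number to (epoch, slot-in-epoch).
# 43200 is a hypothetical epoch size, not taken from any real network.
slots_per_epoch=43200
absolute_slot=90000

epoch=$(( absolute_slot / slots_per_epoch ))
slot_in_epoch=$(( absolute_slot % slots_per_epoch ))

echo "epoch=${epoch} slot_in_epoch=${slot_in_epoch}"   # epoch=2 slot_in_epoch=3600
```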

Fragments

Fragments are the pieces of data carried by the blockchain. They represent all the possible events related to the blockchain's health (e.g. protocol updates), but also, and mainly, the general recording of information such as transactions and certificates.

Blocks

Blocks form the spine of the blockchain: each block securely links to its predecessor, forming a chain, while grouping valid fragments together.

Blocks are composed of 2 parts:

  • The header
  • The content

The header securely links the content to the previous block, while the content is effectively a sequence of fragments.

Blockchain

The blockchain is the general set of rules plus the blocks that are periodically created. Some of the rules and settings can be changed dynamically in the system by updates, while others are hardcoded in the genesis block (the first block of the blockchain).

    +-------+      +-------+
    |Genesis+<-----+Block 1+<--- ....
    |Header |      |Header |
    +---+---+      +---+---+
        |              |
    +---v---+      +---v---+
    |Genesis|      |Block 1|
    |Content|      |Content|
    +-------+      +-------+

Consensus

The node currently supports the following consensus protocols:

  • Ouroboros BFT (OBFT)
  • Ouroboros Genesis-Praos

Ouroboros BFT is a simple Byzantine Fault Tolerant (BFT) protocol in which block creation rotates among a known list of leaders, each in turn creating a block and broadcasting it on the network.

Ouroboros Genesis-Praos is a proof of stake (PoS) protocol in which block makers are chosen by a lottery: each stake pool's chance of being elected to create a block is proportional to its stake. Each lottery draw is private to each stake pool, so the overall network doesn't know in advance who can or cannot create blocks.

In Genesis-Praos the slot duration is constant, but blocks are not created at a fixed frequency: block creation is probabilistic, governed by the pool's stake and the consensus_genesis_praos_active_slot_coeff setting.

Note: In Genesis-Praos, if there is no stake in the system, no blocks will be created anymore starting with the next epoch.

Leadership

The leadership represents, in abstract terms, who the overall leaders of the system are; it allows each individual node to check that specific blocks are lawfully created in the system.

The leadership is re-evaluated at each new epoch and is constant for the duration of an epoch.

Leader

Leaders are an abstraction for the specific actor that has the ability to create blocks. In OBFT mode, the leader is just the owner of a cryptographic key, whereas in Genesis-Praos mode, the leader is a stake pool.

Transaction

Transactions form the cornerstone of the blockchain; they are one type of fragment, and also the most frequent one.

A transaction is composed of inputs and outputs: the inputs represent coins being spent, and the outputs represent coins being received.

    Inputs         Alice (80$)        Bob (20$)
                        \             /
                         \           /
                          -----------
                                100$
                             ---------
                            /         \
    Outputs            Charlie (50$)  Dan (50$)

Transactions have fees, defined by the blockchain settings, and the following invariant holds:

\( \sum Inputs = \sum Outputs + fees \)

A transaction needs to be authorized by a witness for each of its inputs. In the most basic case, a witness is a cryptographic signature, but the type of witness varies depending on the type of input.
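To make the invariant concrete, here is a minimal balance check with made-up numbers (a variant of the figure above in which 2 coins go to fees instead of 0):

```shell
# Hypothetical amounts: Alice and Bob spend 80 + 20, Charlie and Dan
# receive 50 + 48, and the remaining 2 coins are the transaction fee.
inputs=$(( 80 + 20 ))
outputs=$(( 50 + 48 ))
fees=2

# The invariant: sum(inputs) = sum(outputs) + fees
[ "$inputs" -eq $(( outputs + fees )) ] && echo "transaction balances"
```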

Accounting

The blockchain has two methods of accounting which are interoperable:

  • Unspent Transaction Output (UTXO)
  • Accounts

UTXOs behave like cash/notes: they work like fixed-denomination tickets that are accumulated. This is the accounting model found in Bitcoin. A UTXO is uniquely referenced by its transaction ID and its index.

Accounts behave like bank accounts, and are simpler to use since exact amounts can be spent. This is the accounting model found in Ethereum. An account is uniquely identified by its public key.

Each input can refer to either an account or a UTXO, and similarly each output can refer to an account or represent a new UTXO.

Network overview

Jörmungandr network capabilities are split into:

  1. the REST API, used for informational queries or control of the node;
  2. the gRPC API for blockchain protocol exchange and participation;

Here we will only review the gRPC API, as the REST API is described in another chapter: go to the REST documentation

The protocol

The protocol is based on gRPC, which combines commonly used standards like HTTP/2 and RPC.

This choice was made because gRPC is already widely supported around the world: its use of the standard HTTP/2 protocol makes it much easier for proxies and firewalls to recognise the traffic and permit it.

Type of queries

The protocol allows you to send multiple types of messages between nodes:

  • sync blocks up to the remote peer's last block (tip);
  • propose new fragments (new transactions, certificates, …): this is for the fragment propagation.
  • propose new blocks: for block propagation.

There are other commands that optimise the communication and synchronization between nodes that will be documented here in the future.

Another type of message is the Gossip message. Gossip messages allow nodes to exchange information (gossips) about other nodes on the network, enabling peer discovery.

Peer to peer

Peer-to-peer connections are established utilising multiple components:

  • A multilayered topology (e.g. Poldercast);
  • Gossiping for node discoverability;
  • Subscription mechanism for event propagation;
  • Security and countermeasures: (such as Topology Policy for scoring and/or blacklisting nodes);

Multilayered topology

As described in the Poldercast paper, our network topology is built on multiple layers that allow for granular control of its behavior. In practice this means a node will have different groups of nodes that it connects to based on different algorithms; each of these groups is a subset of the whole known list of nodes.

In short we have:

  • The Rings layer selects predecessor(s) and successor(s) for each topic (fragments or blocks);
  • The Vicinity layer will select nodes that have similar interests;
  • The Cyclon layer, will select nodes randomly.

However, we keep the option open to remove some of these layers or to add new ones, such as:

  • A layer to allow privileged connections between stake pools;
  • A layer for the user's whitelist, a list of nodes the user considers trustworthy, which could be used to check the current state of the network and verify that the user's node is not on a long-running fork;

Gossiping

Gossiping is the process used for peer discovery. It allows two things:

  1. Any node can advertise itself as discoverable;
  2. New nodes can be discovered by exchanging lists of known nodes (gossips);

The gossips are selected by the different layers of the multilayered topology. For the Poldercast modules, the gossips are selected just as in the paper. Additional modules may select new nodes in the gossip list or may decide to not add any new information.

Subscription mechanism

Based on the multilayered topology, the node will open multiplexed and bi-directional connections (thanks to industry standard gRPC, this comes for free). These bi-directional connections are used to propagate events such as:

  • Gossiping events, when 2 nodes exchange gossips for peer discovery;
  • Fragment events, when a node wants to propagate a new fragment to other nodes;
  • Block events, when a node wants to propagate a new block creation event

Security and countermeasures

In order to facilitate the handling of unreachable or misbehaving nodes, we have built node policy tooling. Currently, we collect connectivity statuses for each node. The policy can then be tuned over the collected data to apply certain parameters when connecting to a given node, or to ban nodes from our topology.

For each node, the following data is collected:

Connection statuses:

  • The failed connection attempts and when they happened;
  • Latency;
  • Last message per topic item (last time a fragment was received from that node, last time a block was received from that node, …)

In the future, we may expand the policy to include data collected at the blockchain level, like:

  • Faults (e.g. trying to send an invalid block)
  • Contributions in the network
  • Their blockchain status (e.g. tips)

Policy

The p2p policy provides finer control on how to handle nodes flagged as not behaving as expected (see the list of data collected).

It currently works with 4 levels: trusted, possible contact, quarantined, forgotten. Each gossip about a new node creates a new entry at the possible contact level. The policy, based on the logged data associated with that node, may then decide to put the node in quarantine for a certain amount of time.

Trusted nodes are the ones to which we were able to connect successfully. A connectivity report against those nodes will make them transition to the possible contact level, while a successful connection attempt will promote them again to trusted.

Transitions from one level to another are best effort only. Applying the policy may be costly, so the node applies it only to the nodes it is interested in (on a gossip update, or when reporting an issue against a node). This guarantees that the node does not spend too much time policing its database, and it also makes sure that only the nodes of interest are kept up to date. However, it is possible for the node to choose, at a convenient time, to apply the policy to the whole p2p database; this is not enforced by the protocol.

The possible dispositions are:

  • available: the node is available for the p2p topology for view selection and gossips.
  • quarantined: the node is not available for the p2p topology for view selection or gossips. After a certain amount of time, if the node is still being gossiped about, it will be moved back to available.
  • forgotten: the node is simply removed from the whole p2p database. However, if the node is still being gossiped about, it will be added back as available and the process will start again.

Node organization

Secure Enclave

The secure enclave is the component containing the secret cryptographic material, and offering safe and secret high level interfaces to the rest of the node.

Network

The node's network is composed of 3 components:

  • Intercommunication API (GRPC)
  • Public client API (REST)
  • Control client API (REST)

More detailed information here

Intercommunication API (GRPC)

This interface is a binary, efficient interface using the protobuf format and the gRPC standard. The protobuf files of types and interfaces are available in the source code.

The interface is responsible for communicating with the other nodes in the network:

  • block sending and receiving
  • fragments (transaction, certificates) broadcast
  • peer-to-peer gossip

Public API REST

This interface is for simple queries for clients like:

  • Wallet Client & Middleware
  • Analytics & Debugging tools
  • Explorer

It's recommended that this interface not be opened to the public.

TODO: Add a high level overview of what it does

Control API REST

This interface is not finished; it will be a restricted interface with an ACL, making it possible to perform maintenance tasks on the process:

  • Shutdown
  • Load/Retire cryptographic material

TODO: Detail the ACL/Security measure

Stake

In proof of stake, participants are issued stake equivalent to the amount of coins they own. The stake is then used to allow participation in the protocol, which can be simply explained as:

The more stake one has, the more likely one will participate in the good health of the network.

When using the BFT consensus, stake doesn't influence how the system runs, but stake can still be manipulated in preparation for a later transition of the chain to another consensus mode.

Stake in the Account Model

Accounts are represented by one type of address and are composed of just a public key. An account accumulates funds, and its stake power is directly represented by the amount it contains.

For example:


    A - Account with 30$ => Account A has stake of 30
    B - Account with 0$ => Account B has no stake

An account might have a bigger stake than what it actually contains, since it could also have associated UTXOs; this case is covered in the next section.

Stake in the UTXO Model

UTXOs are represented by two kinds of addresses:

  • single address: this type of address has no stake associated;
  • group address: this type of address has an associated account, which receives the stake power of the UTXO's value.

For example with the following utxos:

    UTXO1 60$ (single address) => has stake of 0

    UTXO2 50$ (group address A) \
                                 ->- A - Account with 10$ => Account A has stake of 100
    UTXO3 40$ (group address A) /

    UTXO4 20$ (group address B) -->- B - Account with 5$ => Account B has stake of 25
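The stake figures in the diagram can be recomputed directly: a group address adds its UTXO value to the stake of the associated account, while a single address contributes nothing.

```shell
# Recomputing the stake from the example above.
# Account A: own balance 10, plus grouped UTXO2 (50) and UTXO3 (40).
stake_a=$(( 10 + 50 + 40 ))
# Account B: own balance 5, plus grouped UTXO4 (20).
stake_b=$(( 5 + 20 ))
# UTXO1 uses a single address, so its 60 carries no stake.
echo "A=${stake_a} B=${stake_b}"   # A=100 B=25
```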

Stake pool

Stake pools are the trusted block creators in the Genesis-Praos system. A pool is declared on the network explicitly by its owners and contains metadata and cryptographic material.

Stake pools have no stake power of their own; participants in the network delegate their stake to a pool to run the operation.

Stake Delegation

Stake can, and needs to, be delegated to stake pools in the system. Delegations can change over time by publishing a new delegation certificate.

A delegation certificate is a simple declaration statement of the form:

    Account 'A' delegates to Stake Pool 'Z'

Effectively, it assigns the stake of the account and of its associated UTXOs to the pool it delegates to, until another delegation certificate is made.

Quickstart

The Rust node comes with tools and helpers to quickly start a node and connect to the blockchain.

It is compatible with most platforms and it is pre-packaged for some of them.

Here we will see how to install jormungandr and its helper jcli and how to connect quickly to a given blockchain.

There are three possible ways you can start jormungandr.

As a passive node in an existing network

As described here.

The passive node is the most common type of node on the network. It can be used to download blocks and broadcast transactions to peers. However, it doesn't hold any cryptographic material or any means to create blocks. This type of node is mostly used for wallets, explorers or relays.

As a node generating blocks in an existing network

The network could be running either the bft or genesis consensus. In the former case, the node must have the private key of a registered slot leader. For the latter, the private keys of a registered stake pool are needed.

More information here

Creating your own network

This is similar to the previous case, but configuring a genesis file is needed. Consult the Advanced section for more information on this procedure.

Command line tools

The software is bundled with 2 different command line tools:

  1. jormungandr: the node;
  2. jcli: Jörmungandr Command Line Interface, the helpers and primitives to run and interact with the node.

Installation

From a release

This is the recommended method. Releases are all available here.

From source

Jörmungandr's source code is available on github. Follow the instructions to build the software from source.

Help and auto completion

All commands come with usage help with the option --help or -h.

For jcli, it is possible to generate the auto completion with:

jcli auto-completion bash ${HOME}/.bash_completion.d

Supported shells are:

  • bash
  • fish
  • zsh
  • powershell
  • elvish

Note: Make sure the ${HOME}/.bash_completion.d directory already exists on your disk. In order to use auto completion you still need to:

source ${HOME}/.bash_completion.d/jcli.bash

You can also put it in your ${HOME}/.bashrc.

Starting a passive node

In order to start the node, you first need to gather the blockchain information you need to connect to.

  1. the hash of the genesis block of the blockchain, which will be the source of truth of the blockchain; it is 64 hexadecimal characters.
  2. the trusted peers identifiers and access points.

This information is essential to start your node in a secure way.

The genesis block is the first block of the blockchain. It contains the static parameters of the blockchain as well as the initial funds. Your node will use this hash to retrieve the genesis block from the other peers, and to verify the integrity of the downloaded block.

The trusted peers are the nodes in the public network that your node will trust in order to initialise the peer-to-peer network.

The node configuration

Your node configuration file may look like the following:

Note

This config won't work as it is: the IP address and port of the trusted peer should be those of an already running node, and the public_address ('u.x.v.t') should be a valid address (you can use an internal one, e.g. 127.0.0.1). Furthermore, you need permission to write to the path specified by the storage config.

storage: "/mnt/cardano/storage"

rest:
  listen: "127.0.0.1:8443"

p2p:
  trusted_peers:
    - address: "/ip4/104.24.28.11/tcp/8299"
      id: ad24537cb009bedaebae3d247fecee9e14c57fe942e9bb0d

Description of the fields:

  • storage: (optional) Path to the storage. If omitted, the blockchain is stored in memory only.
  • log: (optional) Logging configuration:
    • level: log messages minimum severity. If not configured anywhere, defaults to “info”. Possible values:
      • “off”
      • “critical”
      • “error”
      • “warn”
      • “info”
      • “debug”
      • “trace”
    • format: Log output format, plain or json.
    • output: Log output destination. Possible values are:
      • stdout: standard output
      • stderr: standard error
      • syslog: syslog (only available on Unix systems)
      • syslogudp: remote syslog (only available on Unix systems)
        • host: address and port of a syslog server
        • hostname: hostname to attach to syslog messages
      • journald: journald service (only available on Linux with systemd, and only if jormungandr is built with the systemd feature)
      • gelf: Configuration fields for GELF (Graylog) network logging protocol (if jormungandr is built with the gelf feature):
        • backend: hostname:port of a GELF server
        • log_id: identifier of the source of the log, for the host field in the messages.
      • file: path to the log file.
  • rest: (optional) Configuration of the REST endpoint.
    • listen: address:port to listen for requests
    • tls: (optional) enables TLS and disables plain HTTP if provided
      • cert_file: path to server X.509 certificate chain file, must be PEM-encoded and contain at least 1 item
      • priv_key_file: path to server private key file, must be PKCS8 with single PEM-encoded, unencrypted key
    • cors: (optional) CORS configuration, if not provided, CORS is disabled
      • allowed_origins: (optional) allowed origins, if none provided, echos request origin
      • max_age_secs: (optional) maximum CORS caching time in seconds, if none provided, caching is disabled
  • p2p: P2P network settings
    • trusted_peers: (optional) the list of nodes' multiaddrs with their associated public_id to connect to in order to bootstrap the P2P topology (and bootstrap our local blockchain);
    • public_id: (optional) the node’s public ID that will be used to identify this node to the network.
    • public_address: multiaddr string specifying the address of the P2P service. This is the public address that will be distributed to other peers of the network that may be interested in participating in blockchain dissemination with this node.
    • listen: (optional) address:port specifying where the node will listen for incoming P2P connections. If left empty, the node will listen on whatever value was given to public_address.
    • topics_of_interest: The dissemination topics this node is interested in:
      • messages: Transactions and other ledger entries. Typical setting for a non-mining node: low. For a stakepool: high;
      • blocks: Notifications about new blocks. Typical setting for a non-mining node: normal. For a stakepool: high.
    • max_connections: The maximum number of simultaneous P2P connections this node should maintain.
  • explorer: (optional) Explorer settings
    • enabled: True or false
  • no_blockchain_updates_warning_interval: (optional, seconds) if no new blocks were received after this period of time, the node will start sending you warnings in the logs.
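As a concrete illustration of the topics_of_interest setting, a stake pool operator would typically raise both topics, matching the recommendations above (fragment only; merge it into your full config):

```yaml
p2p:
  topics_of_interest:
    messages: high   # stake pool: prioritise transaction dissemination
    blocks: high     # stake pool: prioritise block dissemination
```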

Starting the node

jormungandr --config config.yaml --genesis-block-hash 'abcdef987654321....'

The 'abcdef987654321….' part refers to the hash of the genesis block. This should be given to you by one of the peers in the network you are connecting to.

In case you have the genesis file (for example block-0.bin, because you are creating the network) you can get this hash with jcli.

jcli genesis hash --input block-0.bin

or, in case you only have the yaml file

jcli genesis encode --input genesis.yaml | jcli genesis hash

REST Api

It is possible to query the node via its REST Interface.

In the node configuration, you have set something like:

# ...

rest:
  listen: "127.0.0.1:8443"

#...

This is the REST endpoint used to talk to the node, to query blocks or send transactions.

It is possible to query the node's stats with the following endpoint:

curl http://127.0.0.1:8443/api/v0/node/stats

The result may be:

{"blockRecvCnt":120,"txRecvCnt":92,"uptime":245}
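If you only need one field from the response, you can pipe the JSON through a small parser. This sketch assumes python3 is available and reuses the sample output shown above:

```shell
# Sample response from /api/v0/node/stats (copied from above).
stats='{"blockRecvCnt":120,"txRecvCnt":92,"uptime":245}'

# Print just the uptime field.
echo "$stats" | python3 -c 'import json, sys; print(json.load(sys.stdin)["uptime"])'   # 245
```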

THE REST API IS STILL UNDER DEVELOPMENT

Please note that the end points and the results may change in the future.

To see the whole Node API documentation:

Explorer mode

The node can be configured to work as an explorer. This consumes more resources, but makes it possible to query data otherwise not available.

Configuration

There are two ways of enabling the explorer API: either pass the --enable-explorer flag in the start arguments, or set it in the config file:

explorer:
    enabled: true

CORS

CORS for the explorer API is configured in the REST section of the config, as documented here.

API

A GraphQL interface can be used to query the explorer data. When enabled, two endpoints are available in the REST interface:

  • /explorer/graphql
  • /explorer/playground

The first is the one that queries are made against, for example:

    curl \
        -X POST \
        -H "Content-Type: application/json" \
        --data '{"query": "{ status { latestBlock { chainLength id previousBlock { id } } } }"}' \
        http://127.0.0.1:8443/explorer/graphql

The second serves an in-browser GraphQL IDE that can be used to try queries interactively.

How to start a node as a leader candidate

Gathering data

Like in the passive node case, two things are needed to connect to an existing network:

  1. the hash of the genesis block of the blockchain, which will be the source of truth of the blockchain; it is 64 hexadecimal characters.
  2. the trusted peers identifiers and access points.

The node configuration could be the same as that for running a passive node.

There are some differences depending on whether you are connecting to a network running the genesis or bft consensus protocol.

Connecting to a genesis blockchain

Registering a stake pool

In order to be able to generate blocks in an existing genesis network, a registered stake pool is needed.

Creating the secrets file

Put the node id and private keys in a yaml file in the following way:

Example

filename: node_secret.yaml

genesis:
  sig_key: Content of stake_pool_kes.prv file
  vrf_key: Content of stake_pool_vrf.prv file
  node_id: Content of stake_pool.id file

Starting the Genesis node

jormungandr --genesis-block-hash asdf1234... --config config.yaml --secret node_secret.yaml

The ‘asdf1234…’ part should be the actual block0 hash of the network.

Connecting to a BFT blockchain

In order to generate blocks, the node should be registered as a slot leader in the network and started in the following way.

The secret file

Put secret key in a yaml file, e.g. node_secret.yaml as follows:

bft:
 signing_key: ed25519_sk1kppercsk06k03yk4qgea....

where signing_key is a private key associated with the public id of a slot leader.

Starting the BFT node

jormungandr --genesis-block-hash asdf1234... --config node.config --secret node_secret.yaml

The ‘asdf1234…’ part should be the actual block0 hash of the network.

Configuration

This chapter covers the node documentation, necessary to have a working system. It covers the network, logging and storage parameters.

Node Configuration

This is a common example of a Jörmungandr node configuration file, typically named node-config.yaml. Yours will vary depending on your needs. Additionally, this configuration has been tested on a specific Jörmungandr version and may change with newer versions. Keep in mind that the trusted_peers portion of this configuration will be different for each Cardano blockchain network. If you're trying to connect this node to a specific network, you need to know:

  • its genesis block hash
  • its associated list of trusted peers.

Example Configuration - 1:

---
log:
  output: stderr
  level:  info
  format: plain

http_fetch_block0_service:
  - https://url/jormungandr-block0/raw/master/data

skip_bootstrap: false # If set to true - will skip the bootstrapping phase

bootstrap_from_trusted_peers: false

p2p:
  public_address: "/ip4/X.X.X.X/tcp/Y" # This should match your public IP address (X) and port number (Y)
  #listen: 0.0.0.0:Y
  topics_of_interest:
    blocks: normal # Default is normal - set to high for stakepool
    messages: low  # Default is low    - set to high for stakepool
  allow_private_addresses: false
  max_connections: 256
  max_client_connections: 192
  gossip_interval: 10s
  max_bootstrap_attempts: # Default is not set
  trusted_peers:
    - address: "/ip4/13.230.137.72/tcp/3000"
      id: e4fda5a674f0838b64cacf6d22bbae38594d7903aba2226f
    - address: "/ip4/13.230.48.191/tcp/3000"
      id: c32e4e7b9e6541ce124a4bd7a990753df4183ed65ac59e34
    - address: "/ip4/18.196.168.220/tcp/3000"
      id: 74a9949645cdb06d0358da127e897cbb0a7b92a1d9db8e70
    - address: "/ip4/3.124.132.123/tcp/3000"
      id: 431214988b71f3da55a342977fea1f3d8cba460d031a839c
    - address: "/ip4/18.184.181.30/tcp/3000"
      id: e9cf7b29019e30d01a658abd32403db85269fe907819949d
    - address: "/ip4/184.169.162.15/tcp/3000"
      id: acaba9c8c4d8ca68ac8bad5fe9bd3a1ae8de13816f40697c
    - address: "/ip4/13.56.87.134/tcp/3000"
      id: bcfc82c9660e28d4dcb4d1c8a390350b18d04496c2ac8474
  policy:
    quarantine_duration: 30m
    quarantine_whitelist:
      - "/ip4/13.230.137.72/tcp/3000"
      - "/ip4/13.230.48.191/tcp/3000"
      - "/ip4/18.196.168.220/tcp/3000"
  layers:
    preferred_list:
      view_max: 20
      peers:
        - address: "/ip4/13.230.137.72/tcp/3000"
          id: e4fda5a674f0838b64cacf6d22bbae38594d7903aba2226f
        - address: "/ip4/13.230.48.191/tcp/3000"
          id: c32e4e7b9e6541ce124a4bd7a990753df4183ed65ac59e34
        - address: "/ip4/18.196.168.220/tcp/3000"
          id: 74a9949645cdb06d0358da127e897cbb0a7b92a1d9db8e70

rest:
  listen: 127.0.0.1:3100

storage: "./storage"

explorer:
  enabled: false

mempool:
    pool_max_entries: 100000
    log_max_entries: 100000

leadership:
    logs_capacity: 1024

no_blockchain_updates_warning_interval: 15m

Note: The node configuration uses the YAML format.

Advanced

Rewards report

Starting jormungandr with the command line option --rewards-report-all will collect a thorough report of the reward distribution. It can then be accessed via the REST endpoints /api/v0/rewards/history/1 or /api/v0/rewards/epoch/10.

This is not a recommended setting, as it consumes extra memory and may introduce some latency.

Handling of time-consuming transactions

By default we allow a single transaction to delay a block by 50 slots. This can be changed by adjusting the block_hard_deadline setting.

The following is deprecated and will be removed

If you want to record the reward distributions in a directory it is possible to set the environment variable: JORMUNGANDR_REWARD_DUMP_DIRECTORY=/PATH/TO/DIR/TO/WRITE/REWARD.

If an error occurs while dumping the reward, the node will panic with an appropriate error message.

Leadership

The leadership field in your node config file is not mandatory; by default it is set as follows:

leadership:
    logs_capacity: 1024
  • logs_capacity: the maximum number of logs to keep in memory. Once the capacity is reached, older logs will be removed in order to leave more space for new ones [default: 1024]

Logging

The following options are available in the log section:

  • level: log messages minimum severity. If not configured anywhere, defaults to info. Possible values: off, critical, error, warn, info, debug, trace

  • format: Log output format, plain or json

  • output: Log output destination (multiple destinations are supported). Possible values are:

    • stdout: standard output
    • stderr: standard error
    • journald: journald service (only available on Linux with systemd, and only if jormungandr is built with the systemd feature)
    • gelf: Configuration fields for GELF (Graylog) network logging protocol (if jormungandr is built with the gelf feature):
      • backend: hostname:port of a GELF server
      • log_id: identifier of the source of the log, for the host field in the messages
    • file: path to the log file

Example

A single configurable backend is supported.

Output to stdout

log:
  output: stdout
  level:  trace
  format: plain

Output to a file

log:
  output:
    file: example.log
  level: info
  format: json

Mempool

When running an active node (BFT leader or stake pool), it is interesting to be able to make choices on how to manage pending transactions: how long to keep them, how to prioritize them, etc.

The mempool field in your node config file is not mandatory; by default it is set as follows:

mempool:
    pool_max_entries: 10000
    log_max_entries: 100000
  • pool_max_entries: (optional, default is 10000). Set a maximum size of the mempool
  • log_max_entries: (optional, default is 100000). Set a maximum size of fragment logs
  • persistent_log: (optional, disabled by default) log all incoming fragments to log files, rotated on an hourly basis. The value is an object, with the dir field specifying the directory name where log files are stored.
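
For example, to enable the persistent log alongside the defaults above (the directory path is a placeholder):

```yaml
mempool:
    pool_max_entries: 10000
    log_max_entries: 100000
    persistent_log:
        dir: /var/log/jormungandr/fragments
```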

Persistent logs

A persistent log is a collection of records, each consisting of a UNIX timestamp of when a fragment was registered by the mempool followed by the hex-encoded fragment body. The log is a line-delimited JSON stream.

Keep in mind that enabling persistent logs could impair the performance of the node if disk operations are slow. Consider using a reasonably fast SSD for best results.
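
Since the log is a line-delimited JSON stream, it can be consumed with any JSON tooling. A minimal Python sketch (the exact record field names are not specified here, so any field access in your code is an assumption to verify against real log files):

```python
import json

def read_fragment_log(path):
    """Yield one record per line from a line-delimited JSON fragment log."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Each record carries a UNIX timestamp of when the fragment was registered
# by the mempool, followed by the hex-encoded fragment body.
```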

Node network

There are 2 different network interfaces, each covered by its respective section:

rest:
   ...
p2p:
   ...

REST interface configuration

  • listen: listen address
  • tls: (optional) enables TLS and disables plain HTTP if provided
    • cert_file: path to server X.509 certificate chain file, must be PEM-encoded and contain at least 1 item
    • priv_key_file: path to server private key file, must be PKCS8 with single PEM-encoded, unencrypted key
  • cors: (optional) CORS configuration, if not provided, CORS is disabled
    • allowed_origins: (optional) allowed origins; if none provided, echoes the request origin. Note that an origin should include a scheme, for example: http://127.0.0.1:8080.
    • max_age_secs: (optional) maximum CORS caching time in seconds, if none provided, caching is disabled
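
Putting these together, a rest section with TLS and CORS enabled could look like this (the address, paths, and origin are placeholders):

```yaml
rest:
  listen: 127.0.0.1:8443
  tls:
    cert_file: ./server.crt
    priv_key_file: ./server.key
  cors:
    allowed_origins:
      - http://127.0.0.1:8080
    max_age_secs: 3600
```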

Configuring TLS

To enable TLS, certificate and private key files must be provided.

jcli TLS requirements

Note that jormungandr itself does not place any specific requirements on TLS certificates, and you may use whatever you want, including self-signed certificates, as long as you do not intend to use jcli.

The cryptography standards used by jcli as well as by all modern browsers and many http clients place the following requirements on certificates:

  • A certificate should adhere to X.509 v3 with appropriate key usage settings and subject alternative name.
  • A certificate must not be self-signed.

Given that, your options are to either get a certificate from a well-known CA (Let’s Encrypt will do, jcli uses Mozilla’s CA bundle for verification) or create your own local CA and provide the root certificate to jcli via the --tls-cert-path option.

Creating a local CA using OpenSSL and EasyRSA

EasyRSA is a set of scripts that use OpenSSL and give you an easier experience with setting up your local CA. You can download them here.

  1. Go to easy-rsa/easyrsa3.

  2. Configure your CA. To do that, create the configuration file (cp vars.example vars); open it with the text editor of your choice (for example, vim vars); uncomment and edit the fields you need to change. Each CA needs to edit these lines (find them in your vars file) according to their organization structure:

    #set_var EASYRSA_REQ_COUNTRY    "US"
    #set_var EASYRSA_REQ_PROVINCE   "California"
    #set_var EASYRSA_REQ_CITY       "San Francisco"
    #set_var EASYRSA_REQ_ORG        "Copyleft Certificate Co"
    #set_var EASYRSA_REQ_EMAIL      "me@example.net"
    #set_var EASYRSA_REQ_OU         "My Organizational Unit"

  3. When your configuration is ready, run ./easyrsa init-pki and ./easyrsa build-ca nopass. You will be prompted to set the name of your CA.

  4. Run ./easyrsa gen-req server nopass to create a new private key and a certificate signing request. You will be prompted to enter the host name (localhost for local testing).

  5. Run ./easyrsa sign-req server server to sign the request.

To use the generated certificate, use it and the corresponding key in your jormungandr config:

rest:
  tls:
    cert_file: <path to server.crt>
    priv_key_file: <path to server.key>

Use the CA certificate with jcli.

P2P configuration

  • trusted_peers: (optional) the list of nodes’ multiaddr to connect to in order to bootstrap the p2p topology (and bootstrap our local blockchain). Note that you can use a DNS name in the following format: /dns4/node.example.com/tcp/3000. Use dns6 instead of dns4 if you want the peer to connect with IPv6.
  • public_address: the multiaddr address to listen on and accept connections from. This is the public address that will be distributed to other peers of the network that may be interested in participating in blockchain dissemination with this node. Currently only TCP is supported.
  • node_key_file: (optional) Path to a file containing a bech32-encoded ed25519 secret key. The keys are used to advertise the node in network gossip and to authenticate a connection to the node if the node is used as a trusted peer. Most users don’t need to set this value as the key will be randomly generated if the option is not present.
  • listen: (optional) socket address (IP address and port separated by a colon); specifies the interface address and port the node will listen on to receive p2p connections. Can be left empty, in which case the node will listen on whatever value was given to public_address.
  • topics_of_interest: (optional) the different topics we are interested to hear about:
    • messages: notify other peers this node is interested in Transactions. Typical setting for a non mining node: "low". For a stakepool: "high";
    • blocks: notify other peers this node is interested in new Blocks. Typical setting for a non mining node: "normal". For a stakepool: "high".
  • max_connections: the maximum number of P2P connections this node should maintain. If not specified, an internal limit is used by default [default: 256]
  • max_client_connections: the maximum number of client P2P connections this node should keep open. [default: 192]
  • policy: (optional) set the setting for the policy module
    • quarantine_duration set the time to leave a node in quarantine before allowing it back (or not) into the fold. It is recommended to leave the default value [default: 30min].
    • quarantine_whitelist set a trusted list of peers that will not be quarantined in any circumstance. It should be a list of valid addresses, for example: ["/ip4/127.0.0.1/tcp/3000"]. By default this list is empty, [default: []].
  • layers: (optional) set the settings for some of the poldercast custom layers (see below)
  • gossip_interval: (optional) interval between gossips with new nodes; changing the value will affect the bandwidth. The more often the node gossips, the more bandwidth it will need; the less often it gossips, the worse its resilience to node churn. [default: 10s]
  • network_stuck_check: (optional) if no gossip has been received in the last interval, try to connect to nodes that were previously known to this node. This helps to rejoin the protocol in case there is a network outage and the node cannot reach any other peer. [default: 5min]
  • max_bootstrap_attempts: (optional) number of times to retry bootstrapping from trusted peers. If not set (the default behavior), the bootstrap process will keep retrying indefinitely until completed successfully. If set to 0 (zero), the node will skip bootstrap altogether, even if trusted peers are defined. If the node fails to bootstrap from any of the trusted peers and the number of bootstrap retry attempts is exceeded, the node will continue to run without completing the bootstrap process. This allows the node to act as the first node in the p2p network (i.e. the genesis node), or to immediately begin gossip with the trusted peers if any are defined.
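
A sketch of a p2p section combining several of the fields above (addresses and values are placeholders, not recommended settings):

```yaml
p2p:
  trusted_peers:
    - address: '/dns4/node.example.com/tcp/3000'
  public_address: '/ip4/0.0.0.0/tcp/3000'
  topics_of_interest:
    messages: high
    blocks: high
  max_connections: 256
  gossip_interval: 10s
```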

The trusted peers

Trusted peers are a concept that is not fully implemented yet. One key element for now is that these are the first nodes any node tries to connect to in order to meet new nodes. Right now, as far as we know, only one of them is needed; IOHK provides a few others for redundancy.

Layers

Jörmungandr provides multiple additional layers to the poldercast default ones: the preferred list or the bottle in the sea.

Preferred list

This is a special list that allows connecting multiple nodes together without relying on auto peer discovery. All entries in the preferred list are also whitelisted automatically, so they cannot be quarantined.

configuration
  • view_max: the number of entries to show in the view each round; the layer will randomly select up to view_max entries from the whole preferred_list.peers list of entries. [default: 20]
  • peers: the list of peers to keep in the preferred list [default: EMPTY]

Also, the preferred list will never be quarantined or blacklisted; the node will attempt to connect to (up to view_max of) these nodes every time, even if some are down, unreachable or no longer operated.

COMPATIBILITY NOTE: in the near future the peer list will be only a list of addresses and the ID part will not be necessary.

Example
p2p:
  layers:
    preferred_list:
      view_max: 20
      peers:
        - address: '/ip4/127.0.0.1/tcp/2029'
          id: 019abc...
        - ...

Setting the public_id

This is needed to advertise your node as a trusted peer. If not set, the node will generate a random ID, which is fine for a regular user. You can generate a public id with openssl, for example: openssl rand -hex 24

topics_of_interest

This is an optional value to set. The default is:

messages: low
blocks: normal

These values make sense for most users who are not running a stake pool and are not even publicly reachable.

However for a publicly reachable node, the recommended settings would be:

messages: normal
blocks: normal

and for a stake pool:

messages: high
blocks: high

Prometheus

Prerequisites

To use Prometheus you need Jormungandr compiled with the prometheus-metrics feature enabled.

Usage

To enable the Prometheus endpoint, turn it on in the configuration file:

prometheus:
  enabled: true

Alternatively, you can use the --prometheus-metrics flag.

When enabled, the Prometheus endpoint is exposed as http(s)://<API_ADDR>:<API_PORT>/prometheus.

jcli

This is the node command line helper, mostly meant for developers and stake pool operators. It allows offline operations:

  • generating cryptographic materials for the wallets and stake pools;
  • creating addresses, transactions and certificates;
  • preparing a new blockchain

and it allows simple interactions with the node:

  • query stats;
  • send transactions and certificates;
  • get raw blocks and UTxOs.

Address

Jormungandr comes with a separate CLI to create and manipulate addresses.

This is useful for creating addresses from their components in the CLI, for debugging addresses and for testing.

Display address info

To display an address and verify it is in a valid format you can utilise:

$ jcli address info ta1svy0mwwm7mdwcuj308aapjw6ra4c3e6cygd0f333nvtjzxg8ahdvxlswdf0
discrimination: testing
public key: ed25519e_pk1pr7mnklkmtk8y5tel0gvnksldwywwkpzrt6vvvvmzus3jpldmtpsx9rnmx

or for example:

$ jcli address \
    info \
    ca1qsy0mwwm7mdwcuj308aapjw6ra4c3e6cygd0f333nvtjzxg8ahdvxz8ah8dldkhvwfghn77se8dp76uguavzyxh5cccek9epryr7mkkr8n7kgx
discrimination: production
public key: ed25519_pk1pr7mnklkmtk8y5tel0gvnksldwywwkpzrt6vvvvmzus3jpldmtpsx9rnmx
group key:  ed25519_pk1pr7mnklkmtk8y5tel0gvnksldwywwkpzrt6vvvvmzus3jpldmtpsx9rnmx

Creating an address

Each of the following commands allows creating addresses for production and testing chains. For chains where the discrimination is testing, you need to use the --testing flag.

There are 3 types of addresses:

  • Single address : A simple spending key. This doesn’t have any stake in the system
  • Grouped address : A spending key attached to an account key. The stake is automatically delegated to the associated account
  • Account address : An account key. The account is its own stake

Address for UTxO

You can create a single (non-staked) address from the spending public key using the following command:

$ jcli address \
    single ed25519e_pk1jnlhwdgzv3c9frknyv7twsv82su26qm30yfpdmvkzyjsdgw80mfqduaean
ca1qw207ae4qfj8q4yw6v3ned6psa2r3tgrw9u3y9hdjcgj2p4pcaldyukyka8

To add the staking information and make a group address, simply add the account public key as a second parameter of the command:

$ jcli address \
    single \
    ed25519_pk1fxvudq6j7mfxvgk986t5f3f258sdtw89v4n3kr0fm6mpe4apxl4q0vhp3k \
    ed25519_pk1as03wxmy2426ceh8nurplvjmauwpwlcz7ycwj7xtl9gmx9u5gkqscc5ylx
ca1q3yen35r2tmdye3zc5lfw3x992s7p4dcu4jkwxcda80tv8xh5ym74mqlzudkg42443nw08cxr7e9hmcuzals9ufsa9uvh723kvteg3vpvrcxcq

Address for Account

To create an account address you need the account public key and run:

$ jcli address \
    account ed25519_pk1c4yq3hflulynn8fef0hdq92579n3c49qxljasrl9dnuvcksk84gs9sqvc2
ca1qhz5szxa8lnujwva8997a5q42nckw8z55qm7tkq0u4k03nz6zc74ze780qe

changing the address prefix

You can decide to change the address prefix, allowing you to provide more enriched data to the user. However, this prefix is not forwarded to the node; it is only for UI/UX purposes.

$ jcli address \
    account \
    --prefix=address_ \
    ed25519_pk1yx6q8rsndawfx8hjzwntfs2h2c37v5g6edv67hmcxvrmxfjdz9wqeejchg
address_1q5smgquwzdh4eyc77gf6ddxp2atz8ej3rt94nt6l0qes0vexf5g4cw68kdx

Certificate

Tooling for offline transaction creation

Building stake pool registration certificate

Builds a stake pool registration certificate.

jcli certificate new stake-pool-registration \
    --vrf-key <vrf-public-key> \
    --kes-key <kes-public-key> \
    --start-validity <seconds-since-start> \
    --management-threshold <THRESHOLD> \
    --owner <owner-public-key> \
    [--operator <operator-public-key>] \
    [<output-file>]

Where:

  • --operator <operator-public-key> - optional, public key of the operator(s) of the pool.
  • output-file - optional, write the output to the given file or print it to the standard output if not defined

Retiring a stake pool

It is possible to retire a stake pool from the blockchain. By doing so the stake delegated to the stake pool will become dangling and will need to be re-delegated.

Remember though that the action won’t be applied until the next epoch. That is, the certificate will take a whole epoch before being applied; this should leave time for stakers to redistribute their stake to other pools before their stake becomes dangling.

It might be valuable for a stake pool operator to keep the stake pool running until the stake pool retirement certificate is fully applied in order to not miss any potential rewards.

example:

jcli certificate new stake-pool-retirement \
    --pool-id <STAKE_POOL_ID> \
    --retirement-time <seconds-since-start> \
    [<output-file>]

where:

  • output-file - optional, write the output to the given file or print it to the standard output if not defined.
  • --retirement-time - the number of seconds since the start after which the stake pool will retire. 0 means as soon as possible.
  • --pool-id - hex-encoded stake pool ID. Can be retrieved using jcli certificate get-stake-pool-id command. See here for more details.

Building stake pool delegation certificate

Builds a stake pool delegation certificate.

jcli certificate new stake-delegation <STAKE_KEY> <STAKE_POOL_IDS> [--output <output-file>]

Where:

  • -o, --output <output-file> - optional, write the output to the given file or print it to the standard output if not defined
  • <STAKE_KEY> - the public key used in the stake key registration
  • <STAKE_POOL_IDS>... - hex-encoded stake pool IDs and their numeric weights in format “pool_id:weight”. If weight is not provided, it defaults to 1.
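
The weights act as shares of the delegated stake across the listed pools. A small illustrative sketch of that interpretation (the helper below is an assumption for illustration, not jcli behavior; pool IDs are hypothetical placeholders):

```python
def split_stake(total, weighted_pools):
    """Split `total` stake across (pool_id, weight) pairs proportionally to weight."""
    weight_sum = sum(w for _, w in weighted_pools)
    return {pool: total * w // weight_sum for pool, w in weighted_pools}

# Delegating 900 with "pool_a:2 pool_b:1" would give pool_a a 2/3 share
# and pool_b a 1/3 share of the stake.
```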

Building update proposal certificate

Builds an update proposal certificate.

jcli certificate new update-proposal \
    <PROPOSER_ID> \
    <CONFIG_FILE> \
    [<output-file>]

Where:

  • <PROPOSER_ID> - the proposer ID, public key of the one who will sign this certificate
  • <CONFIG_FILE> - optional, the file path to the config file defining the config param changes. If omitted it will be read from the standard input.
  • output-file - optional, write the output to the given file or print it to the standard output if not defined

For example your config file may look like:


  # The block0-date defines the date the blockchain starts
  # expected value in seconds since UNIX_EPOCH
  #
  # By default the value will be the current date and time. Or you can
  # add a specific time by entering the number of seconds since UNIX
  # Epoch
- Block0Date: 17

  # This is the type of discrimination of the blockchain
  # if this blockchain is meant for production then
  # use 'production' otherwise use 'test'.
- Discrimination: test

  # The initial consensus version:
  #
  # * BFT consensus: bft
  # * Genesis Praos consensus: genesis
- ConsensusVersion: bft

  # Number of slots in each epoch.
- SlotsPerEpoch: 42

  # The slot duration, in seconds, is the time between the creation
  # of 2 blocks
- SlotDuration: 79

  # Epoch stability depth
- EpochStabilityDepth: 12

  # Genesis praos active slot coefficient
  # Determines minimum stake required to try becoming slot leader, must be in range (0,1]
- ConsensusGenesisPraosActiveSlotsCoeff: "0.004"

  # This is the size, in bytes, of all the contents of the block (excluding the
  # block header).
- BlockContentMaxSize: 96

  # Add a new bft leader
- AddBftLeader: ed25519_pk1g53asm6l4gcwk2pm5ylr092umaur5yes47rqv7ng5yl525x8g8mq5nk7x7

  # Remove a bft leader
- RemoveBftLeader: ed25519_pk1a3sjcg6gt4d05k5u6uqyzmsap8cjw37ul9cgztz8m697lvkz26uqg49nm3

  # The fee calculations settings
  #
  # total fees: constant + (num_inputs + num_outputs) * coefficient [+ certificate]
- LinearFee:
    # this is the minimum value to pay for every transaction
    constant: 57
    # the additional fee to pay for every inputs and outputs
    coefficient: 14
    # the additional fee to pay if the transaction embeds a certificate
    certificate: 95
    # (optional) fees for different types of certificates, to override the one
    # given in `certificate` just above
    #
    # here: all certificate fees are set to `4` except for pool registration
    # and stake delegation which are respectively `5` and `2`.
    per_certificate_fees:
      # (optional) if not specified, the pool registration certificate fee will be
      # the one set by linear_fees.certificate
      certificate_pool_registration: 5
      # (optional) if not specified, the delegation certificate fee will be
      # the one set by linear_fees.certificate
      certificate_stake_delegation: 2
      # (optional) if not specified, the owner delegation certificate fee will be
      # the one set by linear_fees.certificate. Uncomment to set the owner stake
      # delegation to `1` instead of default `4`:
      certificate_owner_stake_delegation: 1

  # Proposal expiration in epochs
- ProposalExpiration: 68

  # The speed to update the KES Key in seconds
- KesUpdateSpeed: 120

  # Increase the treasury amount
- TreasuryAdd: 10000

  # Set the total reward supply available for monetary creation
- RewardPot: 100000000000000

  # Set the treasury parameters, this is the tax type, just as in stake pool
  # registration certificate parameters.
  #
  # When distributing the rewards, the treasury will be served first, as per
  # the incentive specification document
- TreasuryParams:
    # the fixed value the treasury will take from the total reward pot of the epoch
    fixed: 1000
    # the extra percentage that the treasury will take from the reward pot of the epoch
    ratio: "1/10"
    # it is possible to add a max bound to the total value the treasury takes
    # at each reward distribution. For example, one could cap the treasury tax
    # to 10000. Uncomment the following line to apply a max limit:
    max_limit: 10000

  # set the reward supply consumption. These parameters will define how the
  # total_reward_supply is consumed for the stake pool reward
  #
  # There are fundamentally many potential choices for how rewards are contributed back; here are two potential valid examples:
  #
  # Linear formula: constant - ratio * (#epoch after epoch_start / epoch_rate)
  # Halving formula: constant * ratio ^ (#epoch after epoch_start / epoch_rate)
- RewardParams:
    halving: # or use "linear" for the linear formula
      # In the linear formula, it represents the starting point of the contribution
      # at #epoch=0, whereas in the halving formula it is used as the starting
      # constant for the calculation.
      constant: 2

      # In the halving formula, an effective value between 0.0 and 1.0 indicates a
      # reducing contribution, whereas above 1.0 it indicates an acceleration of contribution.
      #
      # However in the linear formula the meaning is just a scaling factor for the epoch zone
      # (current_epoch - start_epoch / epoch_rate). A further requirement is that this ratio
      # is expressed in fractional form (e.g. 1/2), which allows calculation in integer form.
      ratio: 3/68

      # indicates when this contribution starts. Note that if the epoch is not
      # the same as or after the epoch_start, the overall contribution is zero.
      epoch_start: 89

      # the rate at which the contribution is tweaked related to epoch.
      epoch_rate: 20

  # Fees for different types of certificates, to override the one
  # given in `certificate` just above.
- PerCertificateFees:
    # (optional) if not specified, the pool registration certificate fee will be
    # the one set by linear_fees.certificate
    certificate_pool_registration: 5
    # (optional) if not specified, the delegation certificate fee will be
    # the one set by linear_fees.certificate
    certificate_stake_delegation: 2
    # (optional) if not specified, the owner delegation certificate fee will be
    # the one set by linear_fees.certificate. Uncomment to set the owner stake
    # delegation to `1` instead of default `4`:
    certificate_owner_stake_delegation: 1

  # Set where to send the fees generated by transactions activity.
  #
  # It is possible to send all the generated fees to the "treasury"
- FeesInTreasury: rewards

- RewardLimitNone

  # Limit the epoch total reward drawing limit to a portion of the total
  # active stake of the system.
  #
  # for example, if set to 10%, the reward drawn will be bounded by the
  # 10% of the total active stake.
- RewardLimitByAbsoluteStake: 22/72

  # Settings to incentivize the numbers of stake pool to be registered
  # on the blockchain.
  #
  # These settings do not prevent more stake pools from being added. For example
  # if there are already 1000 stake pools, someone can still register a new
  # stake pool and affect the rewards of everyone else too.
  #
  # if the threshold is reached, the pool doesn't really have incentive to
  # create more blocks than 1 / set-value-of-pools % of stake.
- PoolRewardParticipationCapping:
    min: 8
    max: 52

  # Add a new committee id
- AddCommitteeId: 8103973beaa56f4e9440004ee8e8f8359ea18499d4199c1b018c072e7f503ea0

  # Remove a committee id
- RemoveCommitteeId: 6375dcdd714e69c197e99c32486ec28f166a50da7a1e3694807cd8a76f1c8175

- PerVoteCertificateFees:
    certificate_vote_plan: 52
    certificate_vote_cast: 57

  # The transaction max expiry epochs
- TransactionMaxExpiryEpochs: 91
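
The LinearFee settings above can be sanity-checked numerically. A minimal sketch of the fee formula stated in the config comments (an illustration of the formula only, not the node's actual implementation):

```python
def linear_fee(num_inputs, num_outputs, constant, coefficient, certificate=0):
    """total fees: constant + (num_inputs + num_outputs) * coefficient [+ certificate]"""
    return constant + (num_inputs + num_outputs) * coefficient + certificate

# With the example values (constant=57, coefficient=14, certificate=95),
# a 1-input, 1-output transaction pays 57 + 2 * 14 = 85,
# or 85 + 95 = 180 if it embeds a certificate.
```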

Building vote cast certificate

Builds a vote cast certificate.

Public vote cast

jcli certificate new vote-cast public \
    --choice <choice> \
    --proposal-index <proposal-index> \
    --vote-plan-id <vote-plan-id> \
    --output <output-file>

Where:

  • <choice> - the number of the choice within the proposal you vote for
  • <proposal-index> - the number of the proposal in the vote plan you vote for
  • <vote-plan-id> - the vote plan identifier on the blockchain
  • <output-file> - optional, write the output to the given file or print it to the standard output if not defined

Private vote cast

jcli certificate new vote-cast private \
    --choice <choice> \
    --options-size <options> \
    --proposal-index <proposal-index> \
    --vote-plan-id <vote-plan-id> \
    --key-path <secret-key> \
    --output <output-file>

Where:

  • <choice> - the number of the choice within the proposal you vote for
  • <options> - size of voting options
  • <proposal-index> - the number of the proposal in the vote plan you vote for
  • <vote-plan-id> - the vote plan identifier on the blockchain
  • <secret-key> - optional key to encrypt the vote with; if not provided, the secret key is read from stdin
  • <output-file> - optional, write the output to the given file or print it to the standard output if not defined

Genesis

Tooling for working with a genesis file

Usage

jcli genesis [subcommand]

Subcommands

  • decode: Print the YAML file corresponding to an encoded genesis block.
  • encode: Create the genesis block of the blockchain from a given yaml file.
  • hash: Print the block hash of the genesis
  • init: Create a default Genesis file with appropriate documentation to help creating the YAML file
  • help

Examples

Encode a genesis file

jcli genesis encode --input genesis.yaml --output block-0.bin

or equivalently

cat genesis.yaml | jcli genesis encode > block-0.bin

Get the hash of an encoded genesis file

jcli genesis hash --input block-0.bin

cryptographic keys

There are multiple types of keys for multiple use cases.

type                        usage
ed25519                     Signing algorithm for Ed25519
ed25519-bip32               Related to the HDWallet, Ed25519 Extended with chain code for derivation
ed25519-extended            Related to Ed25519Bip32 without the chain code
sum-ed25519-12              For stake pool, necessary for the KES
ristretto-group2-hash-dh    For stake pool, necessary for the VRF

There is a command line parameter to generate these keys:

$ jcli key generate --type=Ed25519
ed25519_sk1cvac48ddf2rpk9na94nv2zqhj74j0j8a99q33gsqdvalkrz6ar9srnhvmt

and to extract the associated public key:

$ echo ed25519_sk1cvac48ddf2rpk9na94nv2zqhj74j0j8a99q33gsqdvalkrz6ar9srnhvmt | jcli key to-public
ed25519_pk1z2ffur59cq7t806nc9y2g64wa60pg5m6e9cmrhxz9phppaxk5d4sn8nsqg

Signing data

Sign data with private key. Supported key formats are: ed25519, ed25519-bip32, ed25519-extended and sumed25519_12.

jcli key sign <options> <data>

The options are

  • --secret-key <secret_key> - path to file with bech32-encoded secret key
  • -o, --output <output> - path to file to write signature into, if no value is passed, standard output will be used

<data> - path to file with data to sign, if no value is passed, standard input will be used

Verifying signed data

Verify signed data with public key. Supported key formats are: ed25519, ed25519bip32 and sumed25519_12.

jcli key verify <options> <data>

The options are

  • --public-key <public_key> - path to file with bech32-encoded public key
  • --signature <signature> - path to file with signature

<data> - path to file with signed data to verify, if no value is passed, standard input will be used

REST

Jormungandr comes with a CLI client for manual communication with nodes over HTTP.

Conventions

Many CLI commands have common arguments:

  • -h <addr> or --host <addr> - Node API address. Must always have http:// or https:// prefix and always ends with the /api. E.g. -h http://127.0.0.1:8081/api, --host https://node.com:8443/cardano/api.
  • --debug - Print additional debug information to stderr. The output format is intentionally undocumented and unstable
  • --output-format <format> - Format of output data. Possible values: json, yaml, default yaml. Any other value is treated as a custom format using values from output data structure. Syntax is Go text template: https://golang.org/pkg/text/template/.

Node stats

Fetches node stats

jcli rest v0 node stats get <options>

The options are the common arguments described in Conventions.

YAML printed on success

---
# Number of blocks received by node
blockRecvCnt: 1102
# Size in bytes of all transactions in last block
lastBlockContentSize: 484
# The Epoch and slot Number of the block (optional)
lastBlockDate: "20.29"
# Sum of all fee values in all transactions in last block
lastBlockFees: 534
# The block hash, its unique identifier in the blockchain (optional)
lastBlockHash: b9597b45a402451540e6aabb58f2ee4d65c67953b338e04c52c00aa0886bd1f0
# The block number, in order, since the block0 (optional)
lastBlockHeight: 202901
# Sum of all input values in all transactions in last block
lastBlockSum: 51604
# The time slot of the tip block
lastBlockTime: "2020-01-30T22:37:46+00:00"
# Number of transactions in last block
lastBlockTx: 2
# The time at which we received the last block, not necessarily the current tip block (optional)
lastReceivedBlockTime: "2020-01-30T22:37:59+00:00"
# 24 bytes encoded in hexadecimal Node ID
nodeId: "ad24537cb009bedaebae3d247fecee9e14c57fe942e9bb0d"
# Number of nodes that are available for p2p discovery and events propagation
peerAvailableCnt: 321
# Number of nodes that have been quarantined by our node
peerQuarantinedCnt: 123
# Total number of nodes
peerTotalCnt: 444
# Number of nodes that are connected to ours but that are not publicly reachable
peerUnreachableCnt: 0
# State of the node
state: Running
# Number of transactions received by node
txRecvCnt: 5440
# Node uptime in seconds
uptime: 20032
# Node app version
version: jormungandr 0.8.9-30d20d2e

Get UTxO

Fetches UTxO details

jcli rest v0 utxo <fragment-id> <output-index> get <options>
  • <fragment-id> - hex-encoded ID of the transaction fragment
  • <output-index> - index of the transaction output

The options are the common arguments described in Conventions.

YAML printed on success

---
# UTxO owner address
address: ca1svs0mwkfky9htpam576mc93mee5709khre8dgnqslj6y3p5f77s5gpgv02w
# UTxO value
value: 10000

Post transaction

Posts a signed, hex-encoded transaction

jcli rest v0 message post <options>

The options are

  • -h <node_addr> - see conventions
  • --debug - see conventions
  • -f, --file <file_path> - File containing hex-encoded transaction. If not provided, transaction will be read from stdin.

Fragment Id is printed on success (which can help finding transaction status using get message log command)

50f21ac6bd3f57f231c4bf9c5fff7c45e2529c4dffed68f92410dbf7647541f1

Get message log

Get the node’s logs on the message pool. This provides information on pending transactions, rejected transactions, and transactions that have been added to a block.

jcli rest v0 message logs <options>

The options are the common arguments described in Conventions.

YAML printed on success

---
- fragment_id: 7db6f91f3c92c0aef7b3dd497e9ea275229d2ab4dba6a1b30ce6b32db9c9c3b2 # hex-encoded fragment ID
  last_updated_at: 2019-06-02T16:20:26.201000000Z                               # RFC3339 timestamp of last fragment status change
  received_at: 2019-06-02T16:20:26.201000000Z                                   # RFC3339 timestamp of when the fragment was received
  received_from: Network,                                                       # how fragment was received
  status: Pending,                                                              # fragment status

received_from can be one of:

received_from: Rest     # fragment was received from node's REST API
received_from: Network  # fragment was received from the network

status can be one of:

status: Pending                 # fragment is pending
status:
  Rejected:                     # fragment was rejected
    reason: reason of rejection # cause
status:                         # fragment was included in a block
  InABlock:
    date: "6637.3"            # block epoch and slot ID formed as <epoch>.<slot_id>
    block: "d9040ca57e513a36ecd3bb54207dfcd10682200929cad6ada46b521417964174"

Blockchain tip

Retrieves a hex-encoded ID of the blockchain tip

jcli rest v0 tip get <options>

The options are the common arguments described in Conventions.

Get block

Retrieves a hex-encoded block with given ID

jcli rest v0 block <block_id> get <options>
  • <block_id> - hex-encoded block ID

The options are the common arguments described in Conventions.

Get next block ID

Retrieves a list of hex-encoded IDs of descendants of the block with the given ID. Every list element is on a separate line. The IDs are sorted from closest to farthest.

jcli rest v0 block <block_id> next-id get <options>
  • <block_id> - hex-encoded block ID

The options are

  • -h <node_addr> - see conventions
  • --debug - see conventions
  • -c, --count <count> - maximum number of IDs, must be between 1 and 100, default 1

Get account state

Fetches the state of an account

jcli rest v0 account get <account-id> <options>
  • <account-id> - ID of an account, bech32-encoded

The options are

YAML printed on success

---
counter: 1
delegation: c780f14f9782770014d8bcd514b1bc664653d15f73a7158254730c6e1aa9f356
value: 990
  • value is the current balance of the account;
  • counter is the number of transactions performed using this account; it is useful to know when signing new transactions;
  • delegation is the stake pool identifier the account is delegating to; it may be unset if no delegation certificate associated with this account has been sent.

Node settings

Fetches node settings

jcli rest v0 settings get <options>

The options are

YAML printed on success

---
block0Hash: 8d94ecfcc9a566f492e6335858db645691f628b012bed4ac2b1338b5690355a7  # hash of block 0
block0Time: "2019-07-09T12:32:51+00:00"         # creation time of block 0
blockContentMaxSize: 102400                     # the block content's max size in bytes
consensusVersion: bft                           # currently used consensus
currSlotStartTime: "2019-07-09T12:55:11+00:00"  # current slot start time
epochStabilityDepth: 102400                     # the depth, number of blocks, to which we consider the blockchain to
                                                # be stable and prevent rollback beyond that depth
fees:                                           # transaction fee configuration
  certificate: 4                                # fee per certificate
  coefficient: 1                                # fee per every input and output
  constant: 2                                   # fee per transaction
  per_certificate_fees:                         # fee per certificate operations, all zero if this object absent (optional)
    certificate_pool_registration: 5            # fee per pool registration, zero if absent (optional)
    certificate_stake_delegation: 15            # fee per stake delegation, zero if absent (optional)
    certificate_owner_stake_delegation: 2       # fee per pool owner stake delegation, zero if absent (optional)
rewardParams:                                   # parameters for rewards calculation
  compoundingRatio:                             # speed at which reward is reduced. Expressed as numerator/denominator
    denominator: 1024
    numerator: 1
  compoundingType: Linear                       # reward reduction algorithm. Possible values: "Linear" and "Halvening"
  epochRate: 100                                # number of epochs between reward reductions
  epochStart: 0                                 # epoch when rewarding starts
  initialValue: 10000                           # initial reward
slotDuration: 5                                 # slot duration in seconds
slotsPerEpoch: 720                              # number of slots per epoch
treasuryTax:                                    # tax from reward that goes to pot
  fixed: 5                                      # what gets subtracted as a fixed value
  ratio:                                        # ratio of tax after fixed amount is subtracted. Expressed as numerator/denominator
    numerator: 1
    denominator: 10000
  max: 100                                      # limit of tax (optional)
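As a worked example, the slotDuration and slotsPerEpoch values in the sample settings above determine the wall-clock length of an epoch:

```shell
# Epoch length implied by the sample settings above
slot_duration=5        # seconds per slot
slots_per_epoch=720    # slots per epoch
epoch_seconds=$(( slot_duration * slots_per_epoch ))
echo "$epoch_seconds"  # 3600 seconds, i.e. one hour per epoch
```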

Node shutdown

Shuts down the node

jcli rest v0 shutdown get <options>

The options are

Get leaders

Fetches list of leader IDs

jcli rest v0 leaders get <options>

The options are

YAML printed on success

---
- 1 # list of leader IDs
- 2

Register leader

Register new leader and get its ID

jcli rest v0 leaders post <options>

The options are

  • -h <node_addr> - see conventions
  • --debug - see conventions
  • --output-format <format> - see conventions
  • -f, --file <file> - File containing YAML with the leader secret. It must have the same format as the secret YAML passed to Jormungandr as --secret. If not provided, the YAML will be read from stdin.

On success, the created leader ID is printed

3

Delete leader

Delete leader with given ID

jcli rest v0 leaders delete <id> <options>
  • <id> - ID of deleted leader

The options are

Get leadership logs

Fetches leadership logs

jcli rest v0 leaders logs get <options>

The options are

YAML printed on success

---
- created_at_time: "2019-08-19T12:25:00.417263555+00:00"
  enclave_leader_id: 1
  finished_at_time: "2019-08-19T23:19:05.010113333+00:00"
  scheduled_at_date: "0.3923"
  scheduled_at_time: "2019-08-19T23:18:35+00:00"
  wake_at_time: "2019-08-19T23:18:35.001254555+00:00"
  status:
    Block:
      chain_length: 201018
      block: d9040ca57e513a36ecd3bb54207dfcd10682200929cad6ada46b521417964174
      parent: cc72d4ca957b03d7c795596b7fd7b1ff09c649c3e2877c508c0466abc8604832

Possible values for the status:

# the action is still pending
status: Pending
# the action successfully created the given block with the given hash and parent
status:
  Block:
    chain_length: 201018
    block: d9040ca57e513a36ecd3bb54207dfcd10682200929cad6ada46b521417964174
    parent: cc72d4ca957b03d7c795596b7fd7b1ff09c649c3e2877c508c0466abc8604832
# the event failed for some reason
status:
  Rejected:
    reason: "Missed the deadline to compute the schedule"

Get stake pools

Fetches list of stake pool IDs

jcli rest v0 stake-pools get <options>

The options are

YAML printed on success

---
- 5cf03f333f37eb7b987dbc9017b8a928287a3d77d086cd93cd9ad05bcba7e60f # list of stake pool IDs
- 3815602c096fcbb91072f419c296c3dfe1f730e0f446a9bd2553145688e75615

Get stake distribution

Fetches stake information

jcli rest v0 stake get <options> [<epoch>]
  • <epoch> - Epoch to get the stake distribution from. (optional)

The options are

YAML printed on success

  • jcli rest v0 stake get <options> - stake distribution from the current epoch
---
epoch: 228      # Epoch of last block
stake:
  dangling: 0 # Total value stored in accounts, but assigned to nonexistent pools
  pools:
    - - 5cf03f333f37eb7b987dbc9017b8a928287a3d77d086cd93cd9ad05bcba7e60f # stake pool ID
      - 1000000000000                                                    # staked value
    - - 3815602c096fcbb91072f419c296c3dfe1f730e0f446a9bd2553145688e75615 # stake pool ID
      - 1000000000000                                                    # staked value
  unassigned: 0 # Total value stored in accounts, but not assigned to any pool
  • jcli rest v0 stake get <options> 10 - stake distribution from a specific epoch (epoch 10 in this example)
---
epoch: 10      # Epoch specified in the request
stake:
  dangling: 0 # Total value stored in accounts, but assigned to nonexistent pools
  pools:
    - - 5cf03f333f37eb7b987dbc9017b8a928287a3d77d086cd93cd9ad05bcba7e60f # stake pool ID
      - 1000000000000                                                    # staked value
    - - 3815602c096fcbb91072f419c296c3dfe1f730e0f446a9bd2553145688e75615 # stake pool ID
      - 1000000000000                                                    # staked value
  unassigned: 0 # Total value stored in accounts, but not assigned to any pool
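A quick sanity check over the sample distribution above: the per-pool stakes plus the dangling and unassigned values add up to the total stake tracked by the distribution. (The variable names below are illustrative, not part of jcli.)

```shell
# Sum the staked values from the sample distribution above
pool_1=1000000000000
pool_2=1000000000000
dangling=0
unassigned=0
total=$(( pool_1 + pool_2 + dangling + unassigned ))
echo "$total"
```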

Network stats

Fetches network stats

jcli rest v0 network stats get <options>

The options are

YAML printed on success

---
- # node address (optional)
  addr: "3.124.55.91:3000"
  # hex-encoded node ID
  nodeId: 0102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20
  # timestamp of when the connection was established
  establishedAt: "2019-10-14T06:24:12.010231281+00:00"
  # timestamp of last time block was received from node if ever (optional)
  lastBlockReceived: "2019-10-14T00:45:57.419496113+00:00"
  # timestamp of last time fragment was received from node if ever (optional)
  lastFragmentReceived: "2019-10-14T00:45:58.419496150+00:00"
  # timestamp of last time gossip was received from node if ever (optional)
  lastGossipReceived: "2019-10-14T00:45:59.419496188+00:00"

Get stake pool details

Fetches stake pool details

jcli rest v0 stake-pool get <pool-id> <options>
  • <pool-id> - hex-encoded pool ID

The options are

YAML printed on success

---
tax:                        # pool reward
  fixed: 5                  # what gets subtracted as a fixed value
  ratio:                    # ratio of tax after fixed amount is subtracted. Expressed as numerator/denominator
    numerator: 1
    denominator: 10000
  max: 100                  # limit of tax (optional)
total_stake: 2000000000000  # total stake pool value
# bech32-encoded stake pool KES key
kesPublicKey: kes25519-12-pk1q7susucqwje0lpetqzjgzncgcrjzx7e2guh900qszdjskkeyqpusf3p39r
# bech32-encoded stake pool VRF key
vrfPublicKey: vrf_pk1rcm4qm3q9dtwq22x9a4avnan7a3k987zvepuxwekzj3uyu6a8v0s6sdy0l

Get rewards history for a specific epoch

Get the rewards history of a given epoch.

jcli rest v0 rewards epoch get <epoch> <options>
  • <epoch> - epoch number to get the rewards history for.

The options are

jcli rest v0 rewards epoch get 82 -h <node_addr>
[
  {
    "epoch": 82,              // the epoch number to collect rewards info from (rewards are from epoch 81)
    "drawn": 3835616440000,   // Total Drawn from reward escrow pot for the epoch
    "fees": 1828810000,       // Fees contributed into the pot during the epoch
    "treasury": 462179124139, // Value added to the treasury
    "stake_pools": {
      "0087011b9c626759f19d9d0315a9b42492ba497438c12efc026d664c9f324ecb": [
        1683091391, // pool's owned rewards from taxes
        32665712521 // distributed rewards to delegators
      ],
      "014bb0d84f40900f6dd85835395bc38da3ab81435d1e6ee27d419d6eeaf7d16a": [
        47706672,
        906426770
      ],
    },
    "accounts": {
      "ed25519_pk1qqq6r7r7medu2kdpvdra5kwh8uz9frvftm9lf25shm7ygx9ayvss0nqke9": 427549785, // Amount added to each account
      "ed25519_pk1qqymlwehsztpzhy2k4szkp7j0xk0ra35jyxcpgr9p9q4ngvzzc5q4sh2gm": 24399360,
      "ed25519_pk1qq9h62jv6a0mz36xgecjrz9tm8z6ay3vj4d64ashxkgxcyhjewwsvgvelj": 22449169,
      "ed25519_pk1qq9l2qrqazk5fp4kt2kvjtsjc32g0ud888um8k2pvms0cw2r0uzsute83u": 1787992,
      "ed25519_pk1qqx6h559ee7pa67dm255d0meekt6dmq6857x302wdwrhzv47z9hqucdnt2": 369024,
    }
  }
]
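Assuming, as the field comments suggest, that the reward pot available for the epoch is the drawn amount plus the collected fees, the sample numbers above give:

```shell
# Reward pot implied by the sample output above (an assumption based on the
# field comments: pot = drawn + fees; treasury and pool rewards are paid from it)
drawn=3835616440000
fees=1828810000
pot=$(( drawn + fees ))
echo "$pot"
```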

Get rewards history for some epochs

Get the rewards history of the last <length> epoch(s), counting back from the tip.

jcli rest v0 rewards history get <length> <options>
  • <length> - number of epochs, starting from the last epoch from tip, to get the reward history for.

The options are

jcli rest v0 rewards history get 2 -h <node_addr>
[
  {
    "epoch": 93,
    "drawn": 3835616440000,
    "fees": 641300000,
    "treasury": 467151470296,
    "stake_pools": {
      "0087011b9c626759f19d9d0315a9b42492ba497438c12efc026d664c9f324ecb": [
        1121750881,
        21771124247
      ],
      "014bb0d84f40900f6dd85835395bc38da3ab81435d1e6ee27d419d6eeaf7d16a": [
        429241408,
        8155586765
      ],
      "01bd272cede02d0b0c9cd47b16e5356ab3fb2330dd9d1e972ab5494365309d2a": [
        1691506850,
        32829041110
      ],
    },
    "accounts": {
      "ed25519_pk1002kje4l8j7kvsseyauusk3s7nzef4wcvvafltjmg0rkzr6qccyqg064kz": 33311805,
      "ed25519_pk100549kxqn8tnzfzr5ndu0wx7pp2y2ck28mnykq03m2z5qcwkvazqx9fp0h": 15809,
      "ed25519_pk10054y058qfn5wnazalnkax0mthg06ucq87nn9320rphtye5ca0xszjcelk": 10007789,
      "ed25519_pk10069dsunppwttl4qtsfnyhjnqwkunuwxjxlandl2fnpwpuznf5pqmg3twe": 545094806,
      "ed25519_pk1009sfpljfgx30z70l3n63gj7w9vp3epugmd3vn62fyr07ut9pfwqjp7f8h": 4208232,
    },
  },
  {
    "epoch": 92,
    "drawn": 3835616440000,
    "fees": 620400000,
    "treasury": 480849578351,
    "stake_pools": {
      "0087011b9c626759f19d9d0315a9b42492ba497438c12efc026d664c9f324ecb": [
        979164601,
        19003786459
      ],
      "0105449dd66524111349ef677d1ebc25247a5ba2d094913f52aa4db265eac03a": [
        26977274,
        972170279
      ],
      "014bb0d84f40900f6dd85835395bc38da3ab81435d1e6ee27d419d6eeaf7d16a": [
        299744265,
        5695141053
      ],
    },
    "accounts": {
      "ed25519_pk1002kje4l8j7kvsseyauusk3s7nzef4wcvvafltjmg0rkzr6qccyqg064kz": 40581616,
      "ed25519_pk100549kxqn8tnzfzr5ndu0wx7pp2y2ck28mnykq03m2z5qcwkvazqx9fp0h": 49156,
      "ed25519_pk10054y058qfn5wnazalnkax0mthg06ucq87nn9320rphtye5ca0xszjcelk": 12306084,
      "ed25519_pk10069dsunppwttl4qtsfnyhjnqwkunuwxjxlandl2fnpwpuznf5pqmg3twe": 142737175,
      "ed25519_pk1009sfpljfgx30z70l3n63gj7w9vp3epugmd3vn62fyr07ut9pfwqjp7f8h": 3932910,
    },
  }
]

Get voting committee members

Get the list of voting committee members.

jcli rest v0 vote active committees get <options>

The options are

YAML printed on success

---
- 7ef044ba437057d6d944ace679b7f811335639a689064cd969dffc8b55a7cc19 # list of members
- f5285eeead8b5885a1420800de14b0d1960db1a990a6c2f7b517125bedc000db

Get active voting plans and proposals

Get the list of active voting plans and proposals.

jcli rest v0 vote active plans get <options>

The options are

YAML printed on success

---
- committee_end:
    epoch: 10
    slot_id: 0
  proposals:
    - external_id: adb92757155d09e7f92c9f100866a92dddd35abd2a789a44ae19ab9a1dbc3280
      options:
        OneOf:
          max_value: 3
    - external_id: 6778d37161c3962fe62c9fa8a31a55bccf6ec2d1ea254a467d8cd994709fc404
      options:
        OneOf:
          max_value: 3
  vote_end:
    epoch: 5
    slot_id: 0
  vote_start:
    epoch: 1
    slot_id: 0

Transaction

Tooling for offline transaction creation and signing.

jcli transaction

Those familiar with the cardano-cli transaction builder will see a resemblance in jcli transaction.

There are a couple of commands that can be used to:

  1. prepare a transaction:
    • new create a new empty transaction;
    • add-input
    • add-account
    • add-output
  2. finalize the transaction for signing:
  3. create witnesses and add the witnesses:
    • make-witness
    • add-witness
  4. seal the transaction, ready to send to the blockchain
  5. auth the transaction, if it contains a certificate

There are also functions to help decode and display the content information of a transaction:

  • info displays a summary of the transaction being constructed
  • data-for-witness gets the data to sign from a given transaction
  • fragment-id gets the Fragment ID from a transaction in the sealed state
  • to-message gets the hexadecimal-encoded message, ready to send with jcli rest message

DEPRECATED:

  • id get the data to sign from a given transaction (use data-for-witness instead)

Transaction info

At every stage of building a transaction, the user can display its summary

jcli transaction info <options>

The options are:

  • --prefix <address-prefix> - set the address prefix to use when displaying the addresses (default: ca)

  • --fee-certificate <certificate> - fee per certificate (default: 0)

  • --fee-coefficient <coefficient> - fee per every input and output (default: 0)

  • --fee-constant <constant> - fee per transaction (default: 0)

  • --fee-owner-stake-delegation <certificate-owner-stake-delegation> - fee per owner stake delegation (default: fee-certificate)

  • --fee-pool-registration <certificate-pool-registration> - fee per pool registration (default: fee-certificate)

  • --fee-stake-delegation <certificate-stake-delegation> - fee per stake delegation (default: fee-certificate)

  • --fee-vote-cast <certificate-vote-cast> - fee per vote cast

  • --fee-vote-plan <certificate-vote-plan> - fee per vote plan

  • --output-format <format> - Format of output data. Possible values: json, yaml. Any other value is treated as a custom format using values from output data structure. Syntax is Go text template: https://golang.org/pkg/text/template/. (default: yaml)

  • --output <output> - write the info in the given file or print it to the standard output

  • --staging <staging-file> - place where the transaction is going to be saved during its staging phase. If a file is given, the transaction will be read from this file and modification will be written into this same file. If no file is given, the transaction will be read from the standard input and will be rendered in the standard output.

YAML printed on success

---
balance: 40         # transaction balance or how much input is not spent
fee: 60             # total fee for transaction
input: 200          # total input of transaction
inputs:             # list of transaction inputs, each can be of either "utxo" or "account" kind
  - index: 4        # index of transaction output
    kind: utxo      # constant value, signals that UTxO is used
                    # hex-encoded ID of transaction
    txid: 543326b2739356ab6d14624a536ca696f1020498b36456b7fdfe8344c084bfcf
    value: 130      # value of transaction output
  -                 # hex-encoded account address
    account: 3fd45a64ae5a3b9c35e37114baa099b8b01285f7d74b371597af22d5ff393d9f
    kind: account   # constant value, signals that account is used
    value: 70       # value taken from account
num_inputs: 1       # total number of inputs of transaction
num_outputs: 1      # total number of outputs of transaction
num_witnesses: 1    # total number of witnesses of transaction
output: 100         # total output of transaction
outputs:            # list of transaction outputs
  -                 # bech32-encoded address
    address: ca1swedukl830v26m8hl7e5dzrjp77yctuz79a68r8jl2l79qnpu3uwz0kg8az
    value: 100      # value sent to address
                    # hex-encoded transaction hash, when transaction is complete, it's also its ID
sign_data_hash: 26be0b8bd7e34efffb769864f00d7c4aab968760f663a7e0b3ce213c4b21651b
status: sealed      # transaction status, can be "balancing", "finalizing", "sealed" or "authed"
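The balance field in the summary above is simply the part of the inputs not consumed by outputs or fees:

```shell
# Balance check for the sample summary above: balance = input - output - fee
input=200
output=100
fee=60
balance=$(( input - output - fee ))
echo "$balance"
```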

Examples

The following example focuses on using a utxo as input; the few differences when transferring from an account will be pointed out when necessary. Also, the simplified make-transaction command in jcli covers this whole process. For more information run:

jcli transaction make-transaction --help

Let’s use the following utxo as input and transfer 50 lovelaces to the destination address

Input utxo

Field                    Value
UTXO’s transaction ID    55762218e5737603e6d27d36c8aacf8fcd16406e820361a8ac65c7dc663f6d1c
UTXO’s output index      0
associated address       ca1q09u0nxmnfg7af8ycuygx57p5xgzmnmgtaeer9xun7hly6mlgt3pjyknplu
associated value         100

Destination address

address: ca1qvnr5pvt9e5p009strshxndrsx5etcentslp2rwj6csm8sfk24a2wlqtdj6

Create a staging area

jcli transaction new --staging tx

Add input

For the input, we need to reference the utxo with the UTXO’s transaction ID and UTXO’s output index fields. We also need to specify how many coins it holds with the associated value field.

Example - UTXO address as Input

jcli transaction add-input 55762218e5737603e6d27d36c8aacf8fcd16406e820361a8ac65c7dc663f6d1c 0 100 --staging tx

Example - Account address as Input

If the input is an account, the command is slightly different

jcli transaction add-account account_address account_funds --staging tx

Add output

For the output, we need the address we want to transfer to, and the amount.

jcli transaction add-output ca1qvnr5pvt9e5p009strshxndrsx5etcentslp2rwj6csm8sfk24a2wlqtdj6 50 --staging tx

Add fee and change address

We want to get the change in the same address that we are sending from (the associated address of the utxo). We also specify how to compute the fees. You can leave out the --fee-constant 5 --fee-coefficient 2 part if those are both 0.

jcli transaction finalize ca1q09u0nxmnfg7af8ycuygx57p5xgzmnmgtaeer9xun7hly6mlgt3pjyknplu --fee-constant 5 \
  --fee-coefficient 2 --staging tx

Now, if you run

jcli transaction info --fee-constant 5 --fee-coefficient 2 --staging tx

You should see something like this

---
balance: 0
fee: 11
input: 100
inputs:
  - index: 0
    kind: utxo
    txid: 55762218e5737603e6d27d36c8aacf8fcd16406e820361a8ac65c7dc663f6d1c
    value: 100
num_inputs: 1
num_outputs: 2
num_witnesses: 0
output: 89
outputs:
  - address: ca1qvnr5pvt9e5p009strshxndrsx5etcentslp2rwj6csm8sfk24a2wlqtdj6
    value: 50
  - address: ca1q09u0nxmnfg7af8ycuygx57p5xgzmnmgtaeer9xun7hly6mlgt3pjyknplu
    value: 39
sign_data_hash: 0df39a87d3f18a188b40ba8c203f85f37af665df229fb4821e477f6998864273
status: finalizing
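The fee and change values above follow from the linear fee parameters passed to finalize; a sketch of the arithmetic:

```shell
# Linear fee: constant + coefficient * (inputs + outputs); no certificate here
fee_constant=5
fee_coefficient=2
num_inputs=1
num_outputs=2          # destination output plus the change output
fee=$(( fee_constant + fee_coefficient * (num_inputs + num_outputs) ))
change=$(( 100 - 50 - fee ))   # input value minus transferred amount minus fee
echo "fee=$fee change=$change"  # fee=11 change=39
```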

Sign the transaction

Make witness

For signing the transaction, you need:

  • the hash of the genesis block of the network you are connected to.
  • the private key associated with the input address (the one that’s in the utxos).
  • the hash of the transaction, that can be retrieved in two ways:
    1. sign_data_hash value from jcli transaction info --staging tx or
    2. jcli transaction data-for-witness --staging tx

The genesis hash is needed to ensure that the transaction:

  • cannot be re-used in another blockchain
  • is signed for the specific blockchain started by this block0 hash, which matters for the security of offline transaction signing.

First we need to get the hash of the transaction we are going to sign.

jcli transaction data-for-witness --staging tx

You should see something like this (the value may be different since it depends on the input/output data)

0df39a87d3f18a188b40ba8c203f85f37af665df229fb4821e477f6998864273

The following command takes the private key in the key.prv file and creates a witness in a file named witness in the current directory.

jcli transaction make-witness --genesis-block-hash abcdef987654321... \
  --type utxo 0df39a87d3f18a188b40ba8c203f85f37af665df229fb4821e477f6998864273 witness key.prv

Account input

When using an account as input, the command takes account as the type and an additional parameter: --account-spending-counter, that should be increased every time the account is used as input.

e.g.

jcli transaction make-witness --genesis-block-hash abcdef987654321... --type account --account-spending-counter 0 \
  0df39a87d3f18a188b40ba8c203f85f37af665df229fb4821e477f6998864273 witness key.prv

Add witness

jcli transaction add-witness witness --staging tx

Send the transaction

jcli transaction seal --staging tx
jcli transaction to-message --staging tx > txmsg

Send it using the rest api

jcli rest v0 message post -f txmsg --host http://127.0.0.1:8443/api

You should get some data back referring to the TransactionID (also known as FragmentID)

d6ef0b2148a51ed64531efc17978a527fd2d2584da1e344a35ad12bf5460a7e2

Checking if the transaction was accepted

You can check if the transaction was accepted by checking the node logs, for example, if the transaction is accepted

jcli rest v0 message logs -h http://127.0.0.1:8443/api

---
- fragment_id: d6ef0b2148a51ed64531efc17978a527fd2d2584da1e344a35ad12bf5460a7e2
  last_updated_at: "2019-06-11T15:38:17.070162114Z"
  received_at: "2019-06-11T15:37:09.469101162Z"
  received_from: Rest
  status:
    InABlock:
      date: "4.707"
      block: "d9040ca57e513a36ecd3bb54207dfcd10682200929cad6ada46b521417964174"

Where the InABlock status means that the transaction was accepted at date “4.707” in block d9040ca57e513a36ecd3bb54207dfcd10682200929cad6ada46b521417964174.

The status here could also be:

Pending: the transaction has been received and is awaiting inclusion in the blockchain (or rejection).

or

Rejected: with an attached message giving the reason the transaction was rejected.

Voting

Jormungandr supports decentralized voting with privacy features.

The voting process is controlled by a committee whose private keys can be used to decrypt and certify the tally.

Creating committee keys

Private

Please refer to jcli votes committee --help for help with the committee related cli operations and specification of arguments.

In this example we will be using 3 kinds of keys for the private vote and tallying.

In order:

Committee communication key

jcli votes committee communication-key generate > ./comm.sk

We can get its public representation with:

jcli votes committee communication-key to-public --input ./comm.sk > ./comm.pk

Committee member key

jcli votes committee member-key generate --threshold 3 --crs "$crs" --index 0 --keys pk1 pk2 pk3 > ./member.sk

Where pkX are each of the committee communication public keys in bech32 format. The order of the keys shall be the same for every member invoking the command, and the --index parameter provides the 0-based index of the member this key is generated for. Note that all committee members shall use the same CRS.

We can also easily get its public representation as before:

jcli votes committee member-key to-public --input ./member.sk > ./member.pk

Election public key

This key is the public key every vote should be encrypted with.

jcli votes election-key --keys mpk1 mpk2 mpk3 > ./vote.pk

Notice that we can always rebuild this key with the committee member public keys found within the voteplan certificate.

jcli rest v0 vote active plans > voteplan.json

Creating a vote plan

We need to provide a vote plan definition file to generate a new voteplan certificate. That file should be a yaml (or json) with the following format:

{
  "payload_type": "private",
  "vote_start": {
    "epoch": 1,
    "slot_id": 0
  },
  "vote_end": {
    "epoch": 3,
    "slot_id": 0
  },
  "committee_end": {
    "epoch": 6,
    "slot_id": 0
  },
  "proposals": [
    {
      "external_id": "d7fa4e00e408751319c3bdb84e95fd0dcffb81107a2561e691c33c1ae635c2cd",
      "options": 3,
      "action": "off_chain"
    },
    ...
  ],
  "committee_member_public_keys": [
    "pk....",
  ]
}

Where:

  • payload_type is either public or private
  • committee_member_public_keys is only needed for private voting and can be left empty for public votes.

Then, we can generate the voteplan certificate with:

jcli certificate new vote-plan voteplan_def.json --output voteplan.certificate

Casting votes

To generate a vote cast transaction:

  1. first, generate a vote-cast certificate following these instructions;
  2. store it into the vote-cast.certificate file;
  3. then generate a transaction following these instructions.

Note that a valid vote cast transaction MUST have only:

  • one input with the corresponding account of the voter
  • zero outputs
  • 1 corresponding witness.

Example (voter.sk contains a private key of the voter):

genesis_block_hash=$(jcli genesis hash < block0.bin)
vote_plan_id=$(jcli rest v0 vote active plans get --output-format json|jq -r '.[0].id')
voter_addr=$(jcli address account $(jcli key to-public < voter.sk))
voter_addr_counter=$(jcli rest v0 account get "$voter_addr" --output-format json|jq .counter)
jcli certificate new vote-cast public --choice 0 --proposal-index 0 --vote-plan-id "$vote_plan_id" --output vote-cast.certificate
jcli transaction new --staging vote-cast.staging
jcli transaction add-account "$voter_addr" 0 --staging vote-cast.staging
jcli transaction add-certificate $(< vote-cast.certificate) --staging vote-cast.staging
jcli transaction finalize --staging vote-cast.staging
jcli transaction data-for-witness --staging vote-cast.staging > vote-cast.witness-data
jcli transaction make-witness --genesis-block-hash "$genesis_block_hash" --type account --account-spending-counter \
  "$voter_addr_counter" $(< vote-cast.witness-data) vote-cast.witness voter.sk
jcli transaction seal --staging vote-cast.staging
jcli transaction to-message --staging vote-cast.staging > vote-cast.fragment
jcli rest v0 message post --file vote-cast.fragment

Tallying

Public vote plan

To tally public votes, a single committee member is sufficient. In the example below, the file committee.sk contains the committee member’s private key in bech32 format, and block0.bin contains the genesis block of the voting chain.

genesis_block_hash=$(jcli genesis hash < block0.bin)
vote_plan_id=$(jcli rest v0 vote active plans get --output-format json|jq -r '.[0].id')
committee_addr=$(jcli address account $(jcli key to-public < committee.sk))
committee_addr_counter=$(jcli rest v0 account get "$committee_addr" --output-format json|jq .counter)
jcli certificate new vote-tally --vote-plan-id "$vote_plan_id" --output vote-tally.certificate
jcli transaction new --staging vote-tally.staging
jcli transaction add-account "$committee_addr" 0 --staging vote-tally.staging
jcli transaction add-certificate $(< vote-tally.certificate) --staging vote-tally.staging
jcli transaction finalize --staging vote-tally.staging
jcli transaction data-for-witness --staging vote-tally.staging > vote-tally.witness-data
jcli transaction make-witness --genesis-block-hash "$genesis_block_hash" --type account --account-spending-counter \
  "$committee_addr_counter" $(< vote-tally.witness-data) vote-tally.witness committee.sk
jcli transaction add-witness --staging vote-tally.staging vote-tally.witness
jcli transaction seal --staging vote-tally.staging
jcli transaction auth --staging vote-tally.staging --key committee.sk
jcli transaction to-message --staging vote-tally.staging > vote-tally.fragment
jcli rest v0 message post --file vote-tally.fragment

Private vote plan

To tally private votes, all committee members are needed. The process is similar to the public one, but we need to issue different certificates.

First, we need to retrieve vote plans info:

jcli rest v0 vote active plans > active_plans.json

If there is more than one vote plan in the file, we also need to provide the id of the vote plan we are interested in. We can get the id of the first vote plan with:

...
vote_plan_id=$(jq -r '.[0].id' active_plans.json)
...

Each committee member needs to generate their shares for the vote plan, which we will use later to decrypt the tally.

jcli votes tally decryption-shares --vote-plan active_plans.json --vote-plan-id "$vote_plan_id" --key member.sk --output-format json

Then, the committee members need to exchange their shares (only one full set of shares is needed). Once all shares are available, we need to merge them in a single file with the following command (needed even if there is only one set of shares):

jcli votes tally merge-shares share_file1 share_file2 ... > merged_shares.json

With the merged shares file, we are finally able to process the final tally result as follows:

jcli votes tally decrypt-results \
--vote-plan active_plans.json \
--vote-plan-id "$vote_plan_id" \
--shares merged_shares.json \
--threshold number_of_committee_members \
--output-format json > result.json

Staking with Jörmungandr

Here we will describe how to:

  • delegate your stake to a stake pool - so that you can participate in the consensus and possibly collect rewards for that.
  • register a stake pool
  • retire a stake pool

Delegating your stake

how to create the delegation certificate

Stake is concentrated in accounts, and you will need the account public key to delegate its associated stake.

for own account

You will need:

  • the Stake Pool ID: a hexadecimal string identifying the stake pool you want to delegate your stake to.
jcli certificate new owner-stake-delegation STAKE_POOL_ID --output stake_delegation.cert

Note that the certificate is created blank: no account key is used for its creation. In order for the delegation to work it must be submitted to a node inside a very specific transaction:

  • Transaction must have exactly 1 input
  • The input must be from account
  • The input value must be strictly equal to the fee of the transaction
  • Transaction must have 0 outputs

The account used for input will have its stake delegated to the stake pool
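Since the input value must exactly cover the fee, it can be precomputed from the fee parameters. A sketch assuming the same linear fee model as in the transaction examples (here with the illustrative values constant 5, coefficient 2, certificate fee 2, for 1 input, 0 outputs, and 1 certificate):

```shell
# Input value required for an owner-stake-delegation transaction, assuming the
# linear fee model: constant + coefficient * (inputs + outputs) + certificate fee
fee_constant=5
fee_coefficient=2
fee_certificate=2
num_inputs=1
num_outputs=0
input_value=$(( fee_constant + fee_coefficient * (num_inputs + num_outputs) + fee_certificate ))
echo "$input_value"
```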

for any account

You will need:

  • account public key: a bech32 string of a public key
  • the Stake Pool ID: a hexadecimal string identifying the stake pool you want to delegate your stake to.
jcli certificate new stake-delegation ACCOUNT_PUBLIC_KEY STAKE_POOL_ID --output stake_delegation.cert

submitting to a node

The jcli transaction add-certificate command should be used to add a certificate before finalizing the transaction.

For example:


...

jcli transaction add-certificate $(cat stake_delegation.cert) --staging tx
jcli transaction finalize CHANGE_ADDRESS --fee-constant 5 --fee-coefficient 2 --fee-certificate 2 --staging tx

...
jcli transaction seal --staging tx
jcli transaction auth --key account_key.prv --staging tx
...

The --fee-certificate flag indicates the cost of adding a certificate, which is used when computing the fees. It can be omitted if it is zero.

See here for more documentation on transaction creation.

how to sign your delegation certificate

This procedure is needed only for certificates that are to be included in the genesis config file.

We need to make sure that the owner of the account is authorizing this delegation to happen, and for that we need a cryptographic signature.

We will need the account secret key to create the signature:

jcli certificate sign --certificate stake_delegation.cert --key account_key.prv --output stake_delegation.signedcert

The content of stake_delegation.signedcert will be something like:

signedcert1q9uxkxptz3zx7akmugkmt4ecjjd3nmzween2qfr5enhzkt37tdt4uqt0j0039z5048mu9ayv3ujep5sl28q2cpdnx9fkvpq30lmjrrgtmqqctzczvu6e3v65m40n40c3y2pnu4vhd888dygkrtnfm0ts92fe50jy0h0ugh6wlvgy4xvr3lz4uuqzg2xgu6vv8tr24jrwhg0l09klp5wvwzl5

and can now be added in the genesis config file.

Registering a stake pool

There are multiple components to be aware of when running a stake pool:

  • your NodeId: the identifier within the blockchain protocol (wallets will delegate to your stake pool via this NodeId);
  • your VRF key pair: the cryptographic material we will use to participate in the leader election;
  • your KES key pair: the cryptographic material we will use to sign blocks;
  • the stake pool Tax: the value the stake pool will take from the total reward due to the stake pool before distributing the remaining rewards (if any) to the delegators.

So in order to start your stake pool you will need to generate these objects.

The primitives

VRF key pair

To generate your VRF key pair, we will utilise jcli as described here:

jcli key generate --type=RistrettoGroup2HashDhH stake_pool_vrf.prv

stake_pool_vrf.prv file now contains the VRF private key.

jcli key to-public --input stake_pool_vrf.prv stake_pool_vrf.pub

stake_pool_vrf.pub file now contains the VRF public key.

KES key pair

Similar to above:

jcli key generate --type=SumEd25519_12 stake_pool_kes.prv

stake_pool_kes.prv file now contains the KES private key

jcli key to-public --input stake_pool_kes.prv stake_pool_kes.pub

stake_pool_kes.pub file now contains the KES public key

Choosing the Tax parameters

There are 3 values you can set to configure the stake pool’s Tax:

  • tax-fixed: this is the fixed cut the stake pool will take from the total reward due to the stake pool;
  • tax-ratio: this is the percentage of the remaining value that will be taken from the total due
  • tax-limit: a value that can be set to limit the pool’s Tax.

All of these values are optional; if not set, they default to 0. This means no tax for the stake pool: rewards are all distributed to the delegators.

So how does this work?

Let's say you control a stake pool SP with 2 owners (O1 and O2). During epoch 1, SP created some blocks and is entitled to receive 10_000.

Before distributing the 10_000 among the delegators, SP will take its Tax.

  1. we extract the tax-fixed. If this is greater than or equal to the total (10_000), we stop there; there are no more rewards to distribute.
  2. from what remains, SP extracts its tax-ratio and checks that the tax from the ratio is not greater than the tax-limit.
  3. the total SP rewards are then distributed equally to the owners (O1 and O2). Note that if --reward-account is set, the rewards for SP are distributed to that account instead, and nothing goes to O1 and O2.

For example:

|                        | total | fixed | ratio | limit | SP    | O1   | O2   | for delegators |
|------------------------|-------|-------|-------|-------|-------|------|------|----------------|
| takes 100%             | 10000 | 0     | 1/1   | 0     | 10000 | 5000 | 5000 | 0              |
| fixed of 1000          | 10000 | 1000  | 0/1   | 0     | 1000  | 500  | 500  | 9000           |
| fixed + 10%            | 2000  | 1000  | 1/10  | 0     | 1100  | 550  | 550  | 900            |
| fixed + 20% up to 150  | 2000  | 1000  | 1/5   | 150   | 1150  | 575  | 575  | 850            |
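The last row of the table can be reproduced with a small arithmetic sketch (a hypothetical illustration, not `jcli` output; the variable names are made up):

```shell
# "fixed + 20% up to 150": total 2000, tax-fixed 1000, tax-ratio 1/5, tax-limit 150
total=2000; fixed=1000; ratio_num=1; ratio_den=5; limit=150

# 1. extract the fixed part (capped at the total)
rem=$(( total - fixed )); [ "$rem" -lt 0 ] && rem=0
# 2. extract the ratio from the remainder, bounded by the limit
cut=$(( rem * ratio_num / ratio_den ))
[ "$limit" -gt 0 ] && [ "$cut" -gt "$limit" ] && cut=$limit
# 3. the pool keeps fixed + cut, split equally between the 2 owners
sp=$(( fixed + cut ))
delegators=$(( total - sp ))
echo "SP=$sp O1=$(( sp / 2 )) O2=$(( sp / 2 )) delegators=$delegators"
```

Running it prints `SP=1150 O1=575 O2=575 delegators=850`, matching the table.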

The options to set

--tax-limit <TAX_LIMIT>
    The maximum tax value the stake pool will take.

    This sets the maximum value the stake pool will reserve for itself from the `--tax-ratio` (excluding `--tax-fixed`).
--tax-ratio <TAX_RATIO>
    The percentage take of the stake pool.

    Once the `tax-fixed` has been taken, this is the percentage the stake pool will take for itself. [default: 0/1]
--tax-fixed <TAX_VALUE>
    Set the fixed value tax the stake pool will reserve from the reward.

    For example, a stake pool may set this value to cover their fixed operation costs. [default: 0]

creating a stake pool certificate

The certificate is what will be sent to the blockchain in order to announce to the other participants of the blockchain that you are a stake pool.

jcli certificate new stake-pool-registration \
    --kes-key $(cat stake_pool_kes.pub) \
    --vrf-key $(cat stake_pool_vrf.pub) \
    --start-validity 0 \
    --management-threshold 1 \
    --tax-fixed 1000000 \
    --tax-limit 1000000000 \
    --tax-ratio "1/10" \
    --owner $(cat owner_key.pub) > stake_pool.cert

The --operator flag is optional.

And now you can retrieve your stake pool id (NodeId):

jcli certificate get-stake-pool-id stake_pool.cert
ea830e5d9647af89a5e9a4d4089e6e855891a533316adf4a42b7bf1372389b74

submitting to a node

The jcli transaction add-certificate command should be used to add a certificate before finalizing the transaction.

For example:

...

jcli transaction add-certificate $(cat stake_pool.cert) --staging tx
jcli transaction finalize CHANGE_ADDRESS --fee-constant 5 --fee-coefficient 2 --fee-certificate 2 --staging tx

...
jcli transaction seal --staging tx
jcli transaction auth --key owner_key.prv --staging tx
...

The --fee-certificate flag indicates the cost of adding a certificate, which is used when computing the fees. It can be omitted if it is zero.

See here for more documentation on transaction creation.

Retiring a stake pool

A stake pool can be retired by sending a transaction with a retirement certificate. From a technical standpoint, it is very similar to the stake pool registration operation. Before starting, be sure that:

  • you have a sufficient amount of ada to pay the fee for the transaction with the retirement certificate;
  • you know your stake pool id.

Retrieve stake pool id

To retrieve your stake pool id:

jcli certificate get-stake-pool-id stake_pool.cert
ea830e5d9647af89a5e9a4d4089e6e855891a533316adf4a42b7bf1372389b74

creating a retirement certificate

The certificate is what will be sent to the blockchain in order to retire your stake pool.

jcli certificate new stake-pool-retirement \
    --pool-id ea830e5d9647af89a5e9a4d4089e6e855891a533316adf4a42b7bf1372389b74 \
    --retirement-time 0 \
    retirement.cert

where:

  • retirement.cert - the file the certificate output is written to
  • --retirement-time 0 - 0 means as soon as possible, that is, at the start of the next epoch
  • --pool-id ea830e5d9647af89a5e9a4d4089e6e855891a533316adf4a42b7bf1372389b74 - the hex-encoded stake pool ID

submitting to a node

The jcli transaction add-certificate command should be used to add a certificate before finalizing the transaction.

For example:

...

jcli transaction add-certificate $(cat retirement.cert) --staging tx
jcli transaction finalize CHANGE_ADDRESS --fee-constant 5 --fee-coefficient 2 --fee-certificate 2 --staging tx

...
jcli transaction seal --staging tx
jcli transaction auth --key owner_key.prv --staging tx
...

The --fee-certificate flag indicates the cost of adding a certificate, which is used when computing the fees. It can be omitted if it is zero.

Important! Please be sure that you have a sufficient number of owner signatures in order to retire the stake pool. At least half of the owner signatures (which were provided when registering the stake pool) are required to sign the retirement certificate.

See here for more documentation on transaction creation.

Advanced

This section is meant for advanced users and developers of the node, or if you wish to learn more about the node.

At the moment, it only covers details on how to create your own blockchain genesis configuration; in the normal case, the blockchain configuration should be available with the specific blockchain system.

genesis file

The genesis file is the file that allows you to create a new blockchain from block 0. It lays out the different parameters of your blockchain: the initial utxo, the start time, the slot duration time, etc…

Example of a BFT genesis file with an initial address UTxO and an account UTxO. More info regarding starting a BFT blockchain here and regarding addresses there. You could also find information regarding the jcli genesis tooling.

You can generate a documented pre-generated genesis file:

jcli genesis init

For example your genesis file may look like:

# The Blockchain Configuration defines the settings of the blockchain.
blockchain_configuration:

  # The block0-date defines the date the blockchain starts
  # expected value in seconds since UNIX_EPOCH
  #
  # By default the value will be the current date and time. Or you can
  # add a specific time by entering the number of seconds since UNIX
  # Epoch
  block0_date: {default_block0_date}

  # This is the type of discrimination of the blockchain
  # if this blockchain is meant for production then
  # use 'production' instead.
  #
  # otherwise leave as this
  discrimination: {discrimination}

  # The initial consensus version:
  #
  # * BFT consensus: bft
  # * Genesis Praos consensus: genesis
  block0_consensus: bft

  # Number of slots in each epoch.
  #
  # default value is {default_slots_per_epoch}
  slots_per_epoch: {default_slots_per_epoch}

  # The slot duration, in seconds, is the time between the creation
  # of 2 blocks
  #
  # default value is {default_slot_duration}
  slot_duration: {default_slot_duration}

  # set the block content max size
  #
  # This is the size, in bytes, of all the contents of the block (excluding the
  # block header).
  #
  # default value is {default_block_content_max_size}
  block_content_max_size: {default_block_content_max_size}

  # A list of Ed25519 PublicKey that represents the
  # BFT leaders encoded as bech32. The order in the list matters.
  consensus_leader_ids:
    - {leader_1}
    - {leader_2}

  # Epoch stability depth
  #
  # Optional: default value {default_epoch_stability_depth}
  epoch_stability_depth: {default_epoch_stability_depth}

  # Genesis praos active slot coefficient
  # Determines minimum stake required to try becoming slot leader, must be in range (0,1]
  #
  # default value: {default_consensus_genesis_praos_active_slot_coeff}
  consensus_genesis_praos_active_slot_coeff: {default_consensus_genesis_praos_active_slot_coeff}

  # The fee calculations settings
  #
  # total fees: constant + (num_inputs + num_outputs) * coefficient [+ certificate]
  linear_fees:
    # this is the minimum value to pay for every transaction
    constant: 2
    # the additional fee to pay for every inputs and outputs
    coefficient: 1
    # the additional fee to pay if the transaction embeds a certificate
    certificate: 4
    # (optional) fees for different types of certificates, to override the one
    # given in `certificate` just above
    #
    # here: all certificate fees are set to `4` except for pool registration
    # and stake delegation which are respectively `5` and `2`.
    per_certificate_fees:
      # (optional) if not specified, the pool registration certificate fee will be
      # the one set by linear_fees.certificate
      certificate_pool_registration: 5
      # (optional) if not specified, the delegation certificate fee will be
      # the one set by linear_fees.certificate
      certificate_stake_delegation: 2
      # (optional) if not specified, the owner delegation certificate fee will be
      # the one set by linear_fees.certificate. Uncomment to set the owner stake
      # delegation to `1` instead of default `4`:
      # certificate_owner_stake_delegation: 1

  # Proposal expiration in epochs
  #
  # default value: {default_proposal_expiration}
  proposal_expiration: {default_proposal_expiration}

  # The speed to update the KES Key in seconds
  #
  # default value: {default_kes_update_speed}
  kes_update_speed: {default_kes_update_speed}

  # Set where to send the fees generated by transactions activity.
  #
  # by default it is send to the "rewards" pot of the epoch which is then
  # distributed to the different stake pools who created blocks that given
  # epoch.
  #
  # It is possible to send all the generated fees to the "treasury".
  #
  # Optional, default is "rewards"
  # fees_go_to: "rewards"

  # initial value the treasury will start with, if not set the treasury
  # starts at 0
  treasury: 1000000000000

  # set the treasury parameters, this is the tax type, just as in stake pool
  # registration certificate parameters.
  #
  # When distributing the rewards, the treasury will be first serve as per
  # the incentive specification document
  #
  # if not set, the treasury will not grow
  treasury_parameters:
    # the fixed value the treasury will take from the total reward pot of the epoch
    fixed: 1000
    # the extra percentage the treasury will take from the reward pot of the epoch
    ratio: "1/10"
    # It is possible to add a max bound to the total value the treasury takes
    # at each reward distribution. For example, one could cap the treasury tax
    # to 10000. Uncomment the following line to apply a max limit:
    # max_limit: 10000

  # Set the total reward supply available for monetary creation
  #
  # if not set there is no monetary creation
  # once emptied, there is no more monetary creation
  total_reward_supply: 100000000000000

  # set the reward supply consumption. These parameters will define how the
  # total_reward_supply is consumed for the stake pool reward
  #
  # There's fundamentally many potential choices for how rewards are contributed back, and here's two potential valid examples:
  #
  # Linear formula: constant - ratio * (#epoch after epoch_start / epoch_rate)
  # Halving formula: constant * ratio ^ (#epoch after epoch_start / epoch_rate)
  #
  reward_parameters:
    halving: # or use "linear" for the linear formula
      # In the linear formula, it represents the starting point of the contribution
      # at #epoch=0, whereas in halving formula is used as starting constant for
      # the calculation.
      constant: 100

      # In the halving formula, an effective value between 0.0 to 1.0 indicates a
      # reducing contribution, whereas above 1.0 it indicates an acceleration of the contribution.
      #
      # However, in the linear formula the meaning is just a scaling factor for the epoch zone
      # ((current_epoch - start_epoch) / epoch_rate). A further requirement is that this ratio
      # is expressed in fractional form (e.g. 1/2), which allows calculation in integer form.
      ratio: "13/19"

      # indicates when this contribution starts. Note that if the epoch is not
      # the same or after the epoch_start, the overall contribution is zero.
      epoch_start: 1

      # the rate at which the contribution is tweaked related to epoch.
      epoch_rate: 3

  # set some reward constraints and limits
  #
  # this value is optional, the default is no constraints at all. The settings
  # are commented below:
  #
  #reward_constraints:
  #  # limit the epoch total reward drawing limit to a portion of the total
  #  # active stake of the system.
  #  #
  #  # for example, if set to 10%, the reward drawn will be bounded by the
  #  # 10% of the total active stake.
  #  #
  #  # this value is optional, the default is no reward drawing limit
  #  reward_drawing_limit_max: "10/100"
  #
  #  # settings to incentivize the numbers of stake pool to be registered
  #  # on the blockchain.
  #  #
  #  # These settings do not prevent more stake pools from being added. For example,
  #  # if there are already 1000 stake pools, someone can still register a new
  #  # stake pool and affect the rewards of everyone else too.
  #  #
  #  # if the threshold is reached, the pool doesn't really have incentive to
  #  # create more blocks than 1 / set-value-of-pools % of stake.
  #  #
  #  # this value is optional, the default is no pool participation capping
  #  pool_participation_capping:
  #    min: 300
  #    max: 1000

  # list of the committee members, they will be used to guarantee the initial
  # valid operation of the vote as well as privacy.
  committees:
    - "7ef044ba437057d6d944ace679b7f811335639a689064cd969dffc8b55a7cc19"
    - "f5285eeead8b5885a1420800de14b0d1960db1a990a6c2f7b517125bedc000db"

# Initial state of the ledger. Each item is applied in order of this list
initial:
  # Initial deposits present in the blockchain
  - fund:
      # UTxO addresses or account
      - address: {initial_funds_address_1}
        value: 10000
      - address: {initial_funds_address_2}
        value: 10000
  # Initial token distribution
  - token:
      token_id: 00000000000000000000000000000000000000000000000000000000.7e5d6abc
      to:
        - address: {initial_funds_address_1}
          value: 150
        - address: {initial_funds_address_2}
          value: 255
  - token:
      token_id: 00000000000000000000000000000000000000000000000000000000.6c1e8abc
      to:
        - address: {initial_funds_address_1}
          value: 22
        - address: {initial_funds_address_2}
          value: 66

  # Initial certificates
  #- cert: ..

  # Initial deposits present in the blockchain
  #- legacy_fund:
  #    # Legacy Cardano address
  #    - address: 48mDfYyQn21iyEPzCfkATEHTwZBcZJqXhRJezmswfvc6Ne89u1axXsiazmgd7SwT8VbafbVnCvyXhBSMhSkPiCezMkqHC4dmxRahRC86SknFu6JF6hwSg8
  #      value: 123
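The halving formula in the reward_parameters comments above can be illustrated with integer arithmetic. This is a hypothetical sketch: for round numbers it assumes a ratio of 1/2 rather than the 13/19 used in the example file.

```shell
# halving: constant * ratio ^ ((epoch - epoch_start) / epoch_rate), integer division
constant=100; num=1; den=2; epoch_start=1; epoch_rate=3
epoch=7
k=$(( (epoch - epoch_start) / epoch_rate ))   # k = (7 - 1) / 3 = 2
reward=$constant
i=0
while [ "$i" -lt "$k" ]; do
  reward=$(( reward * num / den ))            # apply the ratio k times
  i=$(( i + 1 ))
done
echo "epoch $epoch contribution: $reward"     # 100 * (1/2)^2 = 25
```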

There are multiple parts in the genesis file:

  • blockchain_configuration: this is a list of configuration parameters of the blockchain, some of which can be changed later via the update protocol;
  • initial: the list of steps that create the initial state of the ledger

blockchain_configuration options

| option | format | description |
|--------|--------|-------------|
| block0_date | number | the official start time of the blockchain, in seconds since UNIX EPOCH |
| discrimination | string | `production` or `test` |
| block0_consensus | string | `bft` |
| slot_duration | number | the number of seconds between the creation of 2 blocks |
| epoch_stability_depth | number | allowed size of a fork (in number of blocks) |
| consensus_leader_ids | array | the list of the BFT leaders at the beginning of the blockchain |
| block_content_max_size | number | the maximum size of the block content (excluding the block header), in bytes |
| linear_fees | object | linear fee settings; set the fee for transaction and certificate publishing |
| consensus_genesis_praos_active_slot_coeff | number | genesis praos active slot coefficient; determines the minimum stake required to try becoming slot leader, must be in range (0,1] |
| kes_update_speed | number | the speed to update the KES key, in seconds |
| slots_per_epoch | number | number of slots in each epoch |
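As an illustration of the `linear_fees` formula (total fees: constant + (num_inputs + num_outputs) * coefficient [+ certificate]), here is a hypothetical computation using the values from the example genesis file (constant 2, coefficient 1, certificate 4):

```shell
# fee for a transaction with 1 input, 2 outputs and an embedded certificate
constant=2; coefficient=1; certificate=4
num_inputs=1; num_outputs=2
fee=$(( constant + (num_inputs + num_outputs) * coefficient + certificate ))
echo "total fee: $fee"   # 2 + 3*1 + 4 = 9
```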

for more information about the BFT leaders in the genesis file, see Starting a BFT Blockchain

initial options

Each entry can be one of 3 variants:

| variant | format | description |
|---------|--------|-------------|
| fund | sequence | initial deposits present in the blockchain (up to 255 outputs per entry) |
| cert | string | initial certificate |
| legacy_fund | sequence | same as `fund`, but with legacy Cardano address format |

Example:

initial:
  - fund:
      - address: <address>
        value: 10000
      - address: <address2>
        value: 20000
      - address: <address3>
        value: 30000
  - cert: <certificate>
  - legacy_fund:
      - address: <legacy address>
        value: 123
  - fund:
      - address: <another address>
        value: 1001

fund and legacy_fund format

| field | format | description |
|-------|--------|-------------|
| address | string | can be a single address or an account address |
| value | number | assigned value |

legacy_fund differs only in the address format, which is legacy Cardano.

starting a bft node

BFT stands for Byzantine Fault Tolerance (read the paper).

Jormungandr allows you to start a BFT blockchain fairly easily. The main downside is that it is centralized, only a handful of nodes will ever have the right to create blocks.

How does it work

It is fairly simple. A given number of nodes (N) will each generate a key pair of type Ed25519 (see JCLI’s Keys).

They all share their public keys and add them to the genesis.yaml file. It is the source of truth: the file that will generate the first block of the blockchain, the Block 0.

Then, one after the other, each node will be allowed to create a block, following a round-robin algorithm.

Example of genesis file

blockchain_configuration:
  block0_date: 1550822014
  discrimination: test
  block0_consensus: bft
  slots_per_epoch: 5
  slot_duration: 15
  epoch_stability_depth: 10
  consensus_leader_ids:
    - ed25519e_pk1k3wjgdcdcn23k6dwr0cyh88ad7a4ayenyxaherfazwy363pyy8wqppn7j3
    - ed25519e_pk13talprd9grgaqzs42mkm0x2xek5wf9mdf0eefdy8a6dk5grka2gstrp3en
  consensus_genesis_praos_active_slot_coeff: 0.22
  linear_fees:
    constant: 2
    coefficient: 1
    certificate: 4
  kes_update_speed: 43200
initial:
  - fund:
      - address: ta1svy0mwwm7mdwcuj308aapjw6ra4c3e6cygd0f333nvtjzxg8ahdvxlswdf0
        value: 10000
  - cert: cert1qgqqqqqqqqqqqqqqqqqqq0p5avfqqmgurpe7s9k7933q0wj420jl5xqvx8lywcu5jcr7fwqa9qmdn93q4nm7c4fsay3mzeqgq3c0slnut9kns08yn2qn80famup7nvgtfuyszqzqrd4lxlt5ylplfu76p8f6ks0ggprzatp2c8rn6ev3hn9dgr38tzful4h0udlwa0536vyrrug7af9ujmrr869afs0yw9gj5x7z24l8sps3zzcmv
  - legacy_fund:
      - address: 48mDfYyQn21iyEPzCfkATEHTwZBcZJqXhRJezmswfvc6Ne89u1axXsiazmgd7SwT8VbafbVnCvyXhBSMhSkPiCezMkqHC4dmxRahRC86SknFu6JF6hwSg8
        value: 123

In order to start your blockchain in BFT mode you need to be sure that:

  • consensus_leader_ids is non-empty;

More information regarding the genesis file is available here.

Creating the block 0

jcli genesis encode --input genesis.yaml --output block-0.bin

This command will create (or replace) the Block 0 of the blockchain from the given genesis configuration file (genesis.yaml).

Starting the node

Now that the blockchain is initialized, you need to start your node.

Write your private key in a file on your disk:

$ cat node_secret.yaml
bft:
  signing_key: ed25519_sk1hpvne...

Configure your Node (config.yml) and run the following command:

$ jormungandr --genesis-block block-0.bin \
    --config example.config \
    --secret node_secret.yaml

It’s possible to use the flag --secret multiple times to run a node with multiple leaders.

Step by step to start the BFT node

  1. Generate initial config jcli genesis init > genesis.yaml

  2. Generate secret key, e.g. jcli key generate --type=Ed25519 > key.prv

  3. Put secret key in a file, e.g. node_secret.yaml as follows:

    bft:
      signing_key: ed25519_sk1kppercsk06k03yk4qgea....
    
  4. Generate the public key out of the previously generated key: cat key.prv | jcli key to-public

  5. Put the generated public key in genesis.yaml under consensus_leader_ids:

  6. Generate the block: jcli genesis encode --input genesis.yaml --output block-0.bin

  7. Create a config file and store it on your disk as node.config, e.g.:

    ---
    log:
      level: trace
      format: json
    rest:
      listen: "127.0.0.1:8607"
    p2p:
      public_address: /ip4/127.0.0.1/tcp/8606
      topics_of_interest:
        messages: low
        blocks: normal
    
  8. Start the Jörmungandr node:

    jormungandr --genesis-block block-0.bin --config node.config --secret node_secret.yaml
    

Script

Additionally, there is a script here that can be used to bootstrap a test node with bft consensus protocol.

starting a genesis blockchain

When starting a genesis praos blockchain there is an element to take into consideration while constructing the block 0: the stake distribution.

In the context of Genesis/Praos the network is fully decentralized and it is necessary to think ahead about initial stake pools and to make sure there is stake delegated to these stake pools.

In your genesis yaml file, make sure to set the following values to the appropriate values/desired values:

# The Blockchain Configuration defines the settings of the blockchain.
blockchain_configuration:
  block0_consensus: genesis_praos
  consensus_genesis_praos_active_slot_coeff: 0.1
  kes_update_speed: 43200 # 12hours

block0_consensus set to genesis_praos means you want to start a blockchain with genesis praos as the consensus layer.

consensus_genesis_praos_active_slot_coeff determines the minimum stake required to try becoming a slot leader; it must be in the range (0,1] (0 exclusive, 1 inclusive).

The initial certificates

In the initial_certs field you will set the initial certificates. It is important to declare the stake pools and delegate stake to them, otherwise no block will ever be created.

Remember that in this array the order matters:

In order to delegate your stake, you need a stake pool to already exist, so the stake pool registration certificate should go first.

Stake pool registration

Now you can register a stake pool. Follow the instructions in registering stake pool guide.

The owner key (the key with which you sign the stake pool registration certificate) is the secret key associated with a previously registered stake key.

Delegating stake

Now that your stake key exists and there are stake pools available in the block 0, you need to delegate to one of the stake pools. Follow the instructions in delegating stake.

And in the initial funds, start adding the addresses. To create an address with delegation, follow the instructions in JCLI’s address guide. Utilise the previously registered stake key as the group address:

jcli address single $(cat wallet_key.pub) $(cat stake_key.pub)
ta1sjx4j3jwel94g0cgwzq9au7h6m8f5q3qnyh0gfnryl3xan6qnmjse3k2uv062mzj34eacjnxthxqv8fvdcn6f4xhxwa7ms729ak3gsl4qrq2mm

You will notice that addresses with delegation are longer (about twice as long) than addresses without delegation.

For example, the most minimal setting you may have is:

initial_certs:
  # register a stake pool (P), owner of the stake pool is the stake key (K)
  - cert1qsqqqqqqqqqqqqqqqqqqq0p5avfqp9tzusr26chayeddkkmdlap6tl23ceca8unsghc22tap8clhrzslkehdycufa4ywvqvs4u36zctw4ydtg7xagprfgz0vuujh3lgtxgfszqzqj4xk4sxxyg392p5nqz8s7ev5wna7eqz7ycsuas05mrupmdsfk0fqqudanew6c0nckf5tsp0lgnk8e8j0dpnxvjk2usn52vs8umr3qrccegxaz

  # delegate stake associated to stake key (K) to stake pool (P)
  - cert1q0rv4ccl54k99rtnm39xvhwvqcwjcm385n2dwvamahpu5tmdz3plt65rpewev3a03xj7nfx5pz0xap2cjxjnxvt2ma9y9dalzder3xm5qyqyq0lx05ggrws0ghuffqrg7scqzdsd665v4m7087eam5zvw4f26v2tsea3ujrxly243sgqkn42uttk5juvq78ajvfx9ttcmj05lfuwtq9qhdxzr0

initial_funds:
  # address without delegation
  - address: ta1swx4j3jwel94g0cgwzq9au7h6m8f5q3qnyh0gfnryl3xan6qnmjsczt057x
    value: 10000
  # address delegating to stake key (K)
  - address: ta1sjx4j3jwel94g0cgwzq9au7h6m8f5q3qnyh0gfnryl3xan6qnmjse3k2uv062mzj34eacjnxthxqv8fvdcn6f4xhxwa7ms729ak3gsl4qrq2mm
    value: 1000000

Starting the node

Now, to start the node and be able to generate new blocks, you have to put your pool’s private keys and id in a file. Then start the node with the --secret filename parameter.


For example, if you follow the examples of the registering stake pool guide

You could create a file called poolsecret.yaml with the following content.

genesis:
  sig_key: Content of stake_pool_kes.prv file
  vrf_key: Content of stake_pool_vrf.prv file
  node_id: Content of stake_pool.id file

And you could start the node with this command

jormungandr --genesis-block block-0.bin --config config.yaml --secret poolsecret.yaml

Test script

There is a script here that can be used to bootstrap a test node with a pre-set faucet and stake pool and can be used as an example.

How Vote plans, Vote Fragments and the blockchain transaction work and inter-relate

Please just brain dump everything you know about the above topics, or anything related to them, either individually or interrelated. This process is not intended to consume an excessive amount of your time, so focus more on getting the information you have to contribute down in the quickest way possible.

Don’t be overly concerned with format or correctness; it’s not a test. If you think things work in a particular way, describe it. Obviously, different people will know different things; don’t second-guess info and leave it out because you think someone else might say it.

If you have technical details, like the format of a data entity that can be explained, please include it. This is intended to become a deep dive, to the byte level. If you want to, feel free to x-ref the code as well.

Add what you know (if anything) in the section below your name and submit a PR to the DOCS branch (not main) with Steven Johnson for review. I will both review and merge these. I will also start collating the data once this process is complete, and we can then iterate until the picture is fully formed and accurate. Feel free to include other .md files if there is a big piece of information, such as the format of a vote transaction, or the vote plan section of block 0, etc. Or refer to other documentation we may already have (in any form, e.g. Confluence, a Jira issue, Miro, the old repos, or anywhere else).

For Jormungandr, we are particularly interested in:

  1. How the vote plan is set up, what the various fields of the vote plan are and how they are specified.
  2. How individual votes relate to vote-plans.
  3. How votes are prevented from being cast twice by the same voter.
  4. The format of the entire vote transaction, both public and private.
  5. How is the tally conducted? (Is it done in Jormungandr, or with the jcli tool, for example?)
  6. Anything else which is not listed but is necessary to fully understand the votes cast in Jormungandr.

Don’t feel limited by this list, if there is anything else the list doesn’t cover but you want to describe it, please do.

Sasha Prokhorenko

Nicolo Padovani

Felipe Rosa

Joaquin Rosales

Proposal.chain_proposal_id

This field is not very well documented, except for a line in book/src/core-vitss-doc/api/v0.yaml that describes it as:

    > Identifier of the proposal on the blockchain.

Internally, the identifier is of type ExternalProposalId (src/chain-libs/chain-impl-mockchain/src/certificate/vote_plan.rs). This is an alias type for DigestOf<Blake2b256, _>, from the chain_crypto crate. This is undocumented.

The ExternalProposalId is sent through the wire and csv files as a 64-character hex-encoded string.

The catalyst-toolbox binary decodes this hex string and converts it into a valid ExternalProposalId, so that the underlying [u8; 32] can be extracted, hashed, and used in logic related to rewards thresholds, votes, and dreps.

There is an arbitrary snapshot generator used in vit-servicing-station-tests. It creates valid ExternalProposalId values from randomized [u8; 32] arrays, which are used in integration tests found in vit-testing.

Stefano Cunego

Conor Gannon

Alex Pozhylenkov

Spending Counters

A spending counter is associated with each account. Every time the owner spends from the account, the counter is incremented. This feature is similar to the Ethereum nonce field and prevents replay attacks.

#![allow(unused)]
fn main() {
pub struct SpendingCounter(pub(crate) u32);
}

As said before, every account is associated with a current state of the spending counter, or, to be more precise, with an array of 8 spending counters.

#![allow(unused)]
fn main() {
pub struct SpendingCounterIncreasing {
    nexts: Vec<SpendingCounter>,
}
}

Each spending counter differs from the others: the lane is encoded in the first 3 bits of the spending counter value. Spending counter structure:

(001)[lane] (00000 00000000 00000000 00000001){counter}
(00100000 00000000 00000000 00000001){whole Spending Counter}
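A minimal sketch of that bit layout, assuming the lane occupies the top 3 bits of the 32-bit value as shown in the diagram (shell arithmetic, not the actual Rust implementation):

```shell
# pack lane 1, counter 1 into a single 32-bit spending counter
lane=1; counter=1
sc=$(( (lane << 29) | counter ))
printf 'spending counter: 0x%08x\n' "$sc"    # 0x20000001, as in the diagram

# unpack: top 3 bits are the lane, low 29 bits the per-lane counter
got_lane=$(( sc >> 29 ))
got_counter=$(( sc & ((1 << 29) - 1) ))
```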

With such an approach a user can:

  • generate up to 8 transactions on different lanes, with the corresponding counters;
  • submit them to the blockchain regardless of the transaction processing order.

So the counter is incremented "in parallel" for each lane. That is the only difference from the original Ethereum approach with the nonce (a counter in our implementation), where for each transaction you must specify an exact value and submit transactions in exact order.

Cameron Mcloughlin

Dariusz Kijania

Ognjen Dokmanovic

Stefan Rasevic

Jormungandr Specifications

This directory contains Jormungandr’s specifications.

file    | content
network | the node to node communication and the peer to peer topology

MIGRATION

This is the migration plan from the current cardano blockchain (henceforth referred to as legacy) to jormungandr-style state and formats.

Vocabulary

  • Block Zero: first/genesis block of the blockchain.

Description

It’s paramount for all users of the legacy chain to find their precious data after the migration. As a secondary consideration, users should not need to be aware of the transition, as much as possible, apart from requiring new or updated software capable of handling the new formats and processes. Lastly, it would be useful to provide some kind of cryptographic continuity between the chains, increasing assurances during the transition.

The first thing that needs consideration is the legacy utxos. We need the ability to take the latest known state of coin distribution and transfer it as-is to the new state order.

The settings of the legacy chain are automatically superseded by the new settings, mandatory in block zero, so there’s no need to keep any related data.

The heavy/light delegation certificates are also superseded by either the BFT leaders or the Genesis-Praos stake pools defined explicitly in block zero.

From a user-experience standpoint, and to offer continuity of history, it would be preferable to start the new chain’s initial date at the end of the legacy one. This way the user can still refer to historical transactions in the legacy era of the chain without seeing similar block dates in two different eras.

Finally, it’s important to provide as many guarantees as possible about the transition; hence, knowing the hash of the last block of the legacy chain on “the other side” would allow some validation mechanism. Since the content of block 0 is a trusted-data assumption, having the previous hash embedded directly inside it cheaply reconstructs the inherent chain of trust of a blockchain.

Mechanisms

To support this, the following continuity mechanisms are available:

  • blockchain continuity: the ability to embed inside block zero of the chain an arbitrary hash of data, representing the last block of the legacy chain.
  • user experience: block zero choice of start of epoch (e.g. starting the new chain at epoch 129).
  • legacy funds: a sequence of legacy addresses and their associated values

Note: for blockchain continuity, we decided to store the hash as an opaque blob of data in the content, instead of using the normal blockchain construction of the previous hash. Using the previous hash would have made the start condition of the blockchain harder to detect compared to the sentinel 0 value currently in place, and would have forced an identical hash size by construction.

The legacy funds are automatically assigned a new transaction-id / index in the new system, replacing whatever transaction-id / index was computed in the legacy chain. This new transaction-id is computed in the same way as for normal transactions in the new system; no special case has been added to support this. However, the legacy address is stable across this transition, allowing users to find their funds at whichever address they were left, at the value they were left.

Transaction

To clearly break from the past, the old funds are only allowed to be consumed, so the old state is monotonically decreasing. This also prevents the old legacy address construction from appearing in witnesses or outputs.

The transaction-id/index system is the same as for normal funds, so the inputs don’t require any modification; however, we need to distinguish the witness, since the witness on the old chain is different. This provides a clear mechanism to distinguish the type of input (fund or old-fund).

The witness construction is similar to what is found on the old chain, an extended public key followed by a signature.

Mainnet-Testnet tests

Considering the risks involved in such a migration, we can repeatedly test mock migrations at arbitrary points (preferably at the end of an epoch).

The migration itself will be fully automated, repeatedly tested for solidity and accuracy, and can be done with mostly off-the-shelf code that we already have.

The migration will capture the latest known state and create the equivalent genesis.yaml file, mingled with the settings for the new blockchain, and subsequently compiled into a working block0. The task itself should be completable in under a second, leading to a very small transition window. Note, though, that the block0 size is proportional to the number of state points being kept; for approximately ~200000 utxos, a block zero of about 13 MB will be created.
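As a back-of-envelope check, the quoted figures work out to roughly 65 bytes of block zero per legacy utxo (the constant below is derived from the text’s own numbers, not from the actual serialization format):

```rust
// Rough block-zero size estimator derived from the figures above
// (~13 MB for ~200,000 legacy utxos, i.e. about 65 bytes per entry).
// The per-entry constant is back-of-envelope, not the real format.
const BYTES_PER_LEGACY_UTXO: u64 = 13_000_000 / 200_000; // = 65

fn estimated_block0_bytes(utxo_count: u64) -> u64 {
    utxo_count * BYTES_PER_LEGACY_UTXO
}

fn main() {
    assert_eq!(BYTES_PER_LEGACY_UTXO, 65);
    println!("{} bytes for 200k utxos", estimated_block0_bytes(200_000));
}
```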

rust-cardano’s chain-state is already capable of capturing the latest known state, but there is currently no genesis-generation tool for this task, although the task remains fairly simple.

Advantages

The net benefit is the total removal of all legacy constructs; new users or software have no need to handle any of the legacy data.

This also provides an implicit net chain “compression”:

what happened in Byron, stays in Byron.

The legacy addresses are particularly problematic for many reasons not described here, but not providing full usage of them is nonetheless advantageous, since it ensures their numbers can never go up after the transition.

Historical data

For historical purposes and bip44 wallets, we need to provide the legacy blocks.

The legacy blocks can be made available from a 3rd-party service as a one-off 2.0 GB download (approximately all the mainnet data), for example using a service like cardano-http-bridge, which has caching and CDN capability, leading to a very small cost for the provider of such a service.

It’s also possible to provide the historical data as part of the node, supplementing the current interface with an interface to download old data à la cardano-http-bridge. The first option is strongly favored, to cleanly break the legacy data from the new data.

Legacy randomized wallets (e.g. Ddz addresses) will not need to download the full history, since the legacy address contains metadata sufficient for recovery, so only block zero is necessary for them to know their own funds.

On the other hand, legacy BIP44 wallets will need to download the full history to be able to recover their BIP44 state at the transition.

For wallet history of legacy wallets, the historical data will have to be downloaded too.

For new wallets created after the transition, this historical data will not be needed whatsoever, saving 2.0 GB of download for new users.

Network

Bringing Ouroboros to the people


Introduction

This document highlights the requirements we wish to apply to a decentralised network for the cardano blockchain. Then we discuss the possible solutions we can provide in a timely manner and the tradeoffs we will need to make.

Design decisions guidelines

This is a list of general guidelines for the design decisions in this document, and for judging the merit of solutions:

  • Efficiency: the communication between nodes needs to be succinct and to the point, avoiding unnecessary redundancy. The protocol needs to stabilise quickly into a well-distributed network, guaranteeing fast propagation of the important events;
  • Security: limit the ability of other nodes to trigger behavior that would prevent a peer from working (e.g. unbounded resource usage);
  • Simplicity: we need to be able to easily implement the protocol for any platform or environment that matters for our users.

Node-to-Node communication

This section describes the communication between 2 different peers on the network. It involves synchronous queries with the context of the local state and remote state.

General Functionality

This is a general high level list of what information will need to be exchanged:

  • Bootstrap local state from nothing
  • Update local state from an arbitrary point
  • Answer Synchronous queries: RPC style
  • Asynchronous messages for state propagation (transactions, blocks, ..)
  • P2P messages (See P2P Communication)

Design

User Stories

  • Alice wants to synchronise her local state from Bob, starting from Alice’s Tip:

    • Alice downloads Block Headers from Bob (starting from Alice’s Tip);

      • Bob does not know this Tip:
        • Error: unknown block
        • Alice starts again with a previous Tip;
      • Bob does know this Tip:
        • Bob streams back the block headers
    • Alice downloads blocks

      • Since Alice knows the list of Block Headers and the number of blocks to download, Alice can download from multiple peers, requesting block streams from different Hashes in this list of Blocks;
    • State: tip_hash, storage

    • Pseudocode (executed by Alice):

      #![allow(unused)]
      fn main() {
      sync():
        bob.get_headers(alice.tip)
      }
  • Alice wants to propagate a transaction to Bob

    • Alice sends the transaction hash to Bob
    • Bob replies whether it wants to hear more
    • Alice sends the transaction to Bob if Bob agrees
  • Alice wants to submit a Block to Bob

    • Alice sends the Header to Bob;
    • Bob replies whether it wants to hear more
    • Alice sends the Block to Bob if Bob agrees
  • Alice wants to exchange peers with Bob

High Level Messages

We model everything so that we don’t need any network state machine; everything is stateless:

  • Handshake: () -> (Version, Hash)
    • This should be the first request performed by the client after connecting. The server responds with the protocol version and the hash of the genesis block.
    • The handshake is used to establish that the remote node has a compatible protocol implementation and serves the right block chain.
  • Tip: () -> Header:
    • Return the header of the latest block known by the peer (also known as at the tip of the blockchain).
    • DD?: Block vs hash: a block is large but contains extra useful metadata (slotid, prevhash), whereas a hash is small.
  • GetHeaders: ([Hash]) -> [Header]:
    • Fetch the headers (cryptographically verifiable metadata summaries) of the blocks identified by hashes.
  • GetBlocks: ([Hash]) -> [Block]:
    • Like GetHeaders, but returns full blocks.
  • PullBlocksToTip: ([Hash]) -> Stream<Block>:
    • Retrieve a stream of blocks descending from one of the given hashes, up to the remote’s current tip.
    • This is an easy way to pull blockchain state from a single peer, for clients that don’t have a need to fiddle with batched GetBlocks requests and traffic distribution among multiple peers.
  • BlockSubscription: (Stream<Header>) -> Stream<BlockEvent>
    • Establish a bidirectional subscription to send and receive announcements of new blocks and (in the client role) receive solicitations to upload blocks or push the chain of headers.
    • The stream item is a tagged enumeration: BlockEvent: Announce(Header)|Solicit([Hash])|Missing([Hash], Hash)
      • Announce propagates header information of a newly minted block.
      • Solicit requests the client to upload blocks identified by the given hashes using the UploadBlocks request.
      • Missing requests the client to stream the chain of block headers using the given range parameters. The meaning of the parameters is the same as in the PullHeaders request.
    • The client does not need to stream solicitations upwards, as it can request blocks directly with GetBlocks or PullHeaders.
    • The announcements sent in either direction are used both for announcing new blocks minted by this node in the leadership role, and for propagating blocks received from other nodes on the p2p network.
  • PullHeaders: ([Hash], Hash) -> Stream<Header>
    • Retrieve a stream of headers for blocks descending from one of the hashes given in the first parameter, up to the hash given in the second parameter. The starting point that is latest in the chain is selected.
    • The client sends this request after receiving an announcement of a new block via the BlockSubscription stream, when the parent of the new block is not present in its local storage. The proposed starting points are selected from locally known blocks with exponentially receding depth.
  • PushHeaders: (Stream<Header>)
    • Streams the chain of headers in response to a Missing event received via the BlockSubscription stream.
  • UploadBlocks: (Stream<Block>)
    • Uploads blocks in response to a Solicit event received via the BlockSubscription stream.
  • ContentSubscription: (Stream<Fragment>) -> Stream<Fragment>
    • Establish a bidirectional subscription to send and receive new content for the block under construction.
    • Used for submission of new fragments submitted to the node by application clients, and for relaying of fragment gossip on the network.
  • P2P Messages: see P2P messages section.

The protobuf files describing these methods are available in the proto directory of chain-network crate in the chain-libs project repository.
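For illustration, the BlockEvent enumeration could be rendered in Rust as follows (names and types are assumptions; the authoritative definitions are the protobuf files mentioned above):

```rust
// Illustrative Rust rendering of the BlockEvent enumeration described
// above. The authoritative definitions live in the protobuf files of
// the chain-network crate; all names here are assumptions.
#[allow(dead_code)]
struct Header {
    hash: [u8; 32],   // block id
    parent: [u8; 32], // previous block id
}

enum BlockEvent {
    // Propagates header information of a newly minted block.
    Announce(Header),
    // Asks the client to upload the identified blocks via UploadBlocks.
    Solicit(Vec<[u8; 32]>),
    // Asks the client to stream headers via PushHeaders; the parameters
    // mirror PullHeaders: candidate starting hashes and an ending hash.
    Missing(Vec<[u8; 32]>, [u8; 32]),
}

fn describe(ev: &BlockEvent) -> String {
    match ev {
        BlockEvent::Announce(_) => "announce".to_string(),
        BlockEvent::Solicit(hashes) => format!("solicit {} block(s)", hashes.len()),
        BlockEvent::Missing(from, _to) => format!("missing, {} candidate(s)", from.len()),
    }
}

fn main() {
    let ev = BlockEvent::Solicit(vec![[0u8; 32]]);
    println!("{}", describe(&ev));
}
```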

Pseudocode chain sync algorithm

#![allow(unused)]
fn main() {
struct State {
  ChainState chain_state,
  HashMap<Hash, Block> blocks
}

struct ChainState {
  Hash tip,
  HashSet<Hash> ancestors,
  Utxos ...,
  ...
}

impl ChainState {
  Fn is_ancestor(hash) -> bool {
    self.ancestors.exists(hash)
  }
}

// Fetch `dest_tip` from `server` and make it our tip, if it's better.
sync(state, server, dest_tip, dest_tip_length) {
  if is_ancestor(dest_tip, state.chain_state.tip) {
    return; // nothing to do
  }

  // find a common ancestor of `dest_tip` and our tip.
  // FIXME: do binary search to find exact most recent ancestor
  n = 0;
  loop {
    hashes = server.get_chain_hashes(dest_tip, 2^n, 1);
    if hashes == [] {
      ancestor = genesis;
      break;
    }
    ancestor = hashes[0];
    if state.chain_state.has_ancestor(ancestor): { break }
    n++;
  }

  // fetch blocks from ancestor to dest_tip, in batches of 1000
  // blocks, forwards
  // FIXME: integer arithmetic is probably off a bit here, but you get the idea.
  nr_blocks_to_fetch = 2^n;
  batch_size = 1000;
  batches = nr_blocks_to_fetch / batch_size;
  new_chain_state = reconstruct_chain_state_at(ancestor);
  for (i = batches; i > 0; i--) {
    // validate the headers ahead of downloading the blocks, to reject
    // cryptographically invalid blocks early. This is worth doing
    // ahead of time because of the small size of a BlockHeader
    new_hashes = server.get_chain_hashes(dest_tip, (i - 1) * batch_size, batch_size);
    new_headers = server.get_headers(new_hashes);
    if new_headers are invalid { stop; }
    new_blocks = server.get_blocks(new_hashes).reverse();
    for block in new_blocks {
      new_chain_state.validate_block(block)?;
      write_block_to_storage(block);
    }
  }

  if new_chain_state.chain_quality() > state.chain_state.chain_quality() {
    state.chain_state = new_chain_state
  }
}
}

Choice of wire Technology

We don’t rely on any specific wire protocol, and only require that the wire protocol allow the transfer of the high level messages in a bidirectional way.

We chose to use GRPC/Protobuf as initial technology choice:

  • Efficiency: Using Protobuf, HTTP2, binary protocol
  • Bidirectional: through HTTP2, allowing streams of data and data push over a single established connection.
  • Potential Authentication: Security / Stream atomicity towards malicious MITM
  • Simplicity: Many languages supported (code generation, wide support)
  • Language/Architecture Independent: works on everything
  • Protobuf file acts as documentation and are relatively easy to version

Connections and bidirectional subscription channels can be left open (especially for clients behind NAT), although we can cycle connections with a simple RCU-like system.

Node-to-Client communication

Clients are different from nodes, in the sense that they may not be reachable by other peers directly.

However, we might allow non-reachable clients to keep an open connection to a node to receive events. TBD

  • ReceiveNext : () -> Event

Another solution would be to use libp2p, which also implements NAT traversal and already has solutions for this.

Peer-to-Peer network

This section describes the construction of the network topology between the nodes participating in the protocol. It describes the requirements necessary to propagate the Communication Messages to the nodes of the topology as efficiently as possible.

Definitions

  • Communication Messages: the messages that need to be sent through the network (node-to-node and node-to-client) as defined above;
  • Topology: defines how the peers are linked to each other;
  • Node or Peer: an instance running the protocol;
  • Link: a connection between 2 peers in the topology;

Functionalities

  • A node can join the network at any moment;
  • A node can leave the network at any moment;
  • Nodes will discover new nodes to connect to via gossiping: nodes will exchange information regarding other nodes;
  • Nodes will relay information to their linked nodes (neighbors);
  • A node can challenge another node utilising the VRF in order to verify that the remote node is a specific stake owner/gatherer.

Messages

  • RingGossip: NodeProfileDetails * RING_GOSSIP_MAX_SIZE;
  • VicinityGossip: NodeProfileDetails * VICINITY_GOSSIP_MAX_SIZE;
  • CyclonGossip: NodeProfileDetails * CYCLON_GOSSIP_MAX_SIZE;

A node profile contains:

  • Node’s id;
  • Node’s IP Address;
  • Node’s topics (set of what the node is known to be interested into);
  • Node’s connected IDs
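A gossiped profile carrying those four pieces of information might look like this (field names and types are assumptions, not the actual poldercast types):

```rust
// Hypothetical shape of a gossiped node profile, directly mirroring
// the four bullet points above. Field names and types are assumptions,
// not the poldercast library's definitions.
use std::net::SocketAddr;

#[allow(dead_code)]
struct NodeProfile {
    id: [u8; 24],                // node's id
    address: SocketAddr,         // node's IP address
    topics: Vec<String>,         // what the node is known to be interested in
    connected_ids: Vec<[u8; 24]>, // ids of the nodes it is connected to
}

fn main() {
    let profile = NodeProfile {
        id: [0; 24],
        address: "127.0.0.1:3000".parse().unwrap(),
        topics: vec!["blocks".into(), "messages".into()],
        connected_ids: vec![],
    };
    println!("{} topics", profile.topics.len());
}
```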

Communications Design

The requirements to join and leave the network at any moment, to discover and change the links, and to relay messages are all handled by PolderCast. Implementing PolderCast provides good support for handling churn, fast relaying, and quick stabilisation of the network. The paper proposes 3 modules: Rings, Vicinity and Cyclon.

Our addition: The preferred nodes

We propose to extend the number of modules with a 4th one. This module is static and entirely defined in the config file.

This 4th module will provide the following features:

  • Connect to specific dedicated nodes that we know we can trust (we may use a VRF challenge to validate they are known stakeholders – they participated in numerous block creations);
    • This will add a static, known inter-node communications. Allowing users to build a one to one trusted topology;
    • A direct application for this will be to build an inter-stake-pool communication layer;
  • Static / configured list of trusted parties (automatically whitelisted for quarantine)
  • Metrics measurement related to stability TBD

Reports and Quarantine

In order to facilitate the handling of unreachable or misbehaving nodes, we have a system of reports that tracks the state of known peers.

Following such reports, at the moment based only on connectivity status, peers may be moved into quarantine or other less restrictive impairments.

In the current system, a peer can be in any of these 4 states:

  • Available: the peer is known to the current node and can be picked up by poldercast layers for gossip and propagation of messages. This is the state in which new peers joining the topology via gossip end up.

  • Trusted: the last handshake between the peer and this node was successful. For these kinds of nodes we are a little more forgiving with reports and failures.

  • Quarantined: the peer has (possibly several) failed handshake attempts. We will not attempt to contact it again for some time even if we receive new gossip.

  • Unknown: the peer is not known to the current node.

    Actually, due to limitations of the poldercast library, this may mean that there are some traces of the peer in the profiles maintained by the current node but it cannot be picked up by poldercast layers for gossip or propagation. For all purposes but last resort connection attempts (see next paragraph), these two cases are essentially the same.

Since a diagram is often easier to understand than a bunch of sentences, these are the transitions between states in the current implementation, with details about avoiding network partition removed (see next paragraph).

Quarantine
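As a complement to the diagram, the four states and the handshake-driven transitions can be approximated in code (a simplification; the real implementation also tracks report expiry and poldercast profile bookkeeping):

```rust
// Simplified sketch of the four peer states and the transitions the
// text describes. The leniency rule for trusted peers is an
// approximation of "a little more forgiving with reports and failures".
#[derive(Debug, PartialEq, Clone, Copy)]
enum PeerState {
    Available,
    Trusted,
    Quarantined,
    Unknown,
}

fn on_handshake_ok(_current: PeerState) -> PeerState {
    // A successful handshake promotes the peer.
    PeerState::Trusted
}

fn on_handshake_failed(current: PeerState) -> PeerState {
    match current {
        // Trusted peers get leeway: one failure only demotes them.
        PeerState::Trusted => PeerState::Available,
        // Others go to quarantine and are not contacted for a while.
        _ => PeerState::Quarantined,
    }
}

fn main() {
    let s = on_handshake_ok(PeerState::Available);
    assert_eq!(s, PeerState::Trusted);
    assert_eq!(on_handshake_failed(s), PeerState::Available);
    assert_eq!(on_handshake_failed(PeerState::Available), PeerState::Quarantined);
    println!("{:?}", PeerState::Quarantined);
}
```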

Avoid network partitions

An important property of the p2p network is resilience to outages. We must avoid creating partitions in the network as much as possible.

For this reason, we send a manual (i.e. not part of the poldercast protocol) optimistic gossip message to all nodes that were reported, after the report expires. If this message fails to be delivered, no further action is taken against that peer, to avoid cycling it in and out of quarantine indefinitely. If instead the message is delivered correctly, we have successfully prevented a possible partition.

Another measure in place is a sort of a last resort attempt: if the node did not receive any incoming gossip for the last X minutes (tweakable in the config file), we try to contact again any node that is not quarantined and for which we have any trace left in the system (this is where nodes that were artificially forgotten by the system come into play).

Part to look into

Privacy. Possibly Dandelion tech

Adversarial models considered

Adversarial forks

We consider an adversary whose goal is to isolate stake-holding nodes from the network. A successful attack would prevent block creation. Such an adversarial node would propose a block that may look like a fork of the blockchain. Ouroboros Genesis allows forks up to an undetermined number of blocks in the past. The targeted node would then have to do a large amount of block synchronisation and validation.

  • If the fork pretends to be in an epoch known to us, we can perform some cryptographic verifications (check the VRF);
  • If the fork pretends to be in an epoch long past, we may perform a small, controlled verification of up to N blocks from the forking point to verify the validity of these blocks;
  • Once the validity is verified, we can then check the local aliveness of the fork and apply the consensus algorithm to decide whether such a fork is worth considering.
  • However, such an attack can be repeated ad nauseam by any adversary that happened to have been elected once by the protocol to create blocks. Once elected by its stake, the node may turn adversarial, create as many invalid blocks as it wishes, and propose them to the attacked node indefinitely. How do we keep track of the rejected blocks? How do we keep track of the blacklisted stakeholder keys or pools that are known to have proposed too many invalid blocks or attempted this attack?
    • Rejected blocks have a given block hash that is unlikely to collide with valid blocks; a node can keep a bloom filter of known rejected block hashes, or of known rejected VRF keys;
    • The limitation of maintaining a bloom filter is that we may need to keep an ever-growing bloom filter. However, it is reasonable to assume that the consensus protocol will organise itself into a collection of stakepools that have the resources (and the incentive) to keep such a bloom filter.
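A minimal bloom filter for remembering rejected block hashes, as suggested above, could look like this (a sketch using the stdlib hasher with k seeds; a production filter would size m and k from the expected element count and target false-positive rate):

```rust
// Minimal bloom-filter sketch for remembering rejected block hashes.
// Uses the stdlib hasher seeded k different ways; parameters m and k
// are arbitrary here, not tuned values.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct BloomFilter {
    bits: Vec<bool>,
    hashes: u64,
}

impl BloomFilter {
    fn new(m: usize, k: u64) -> Self {
        BloomFilter { bits: vec![false; m], hashes: k }
    }

    // k bit positions for an item, one per hash seed.
    fn indexes<'a>(&'a self, item: &'a [u8]) -> impl Iterator<Item = usize> + 'a {
        let m = self.bits.len();
        (0..self.hashes).map(move |seed| {
            let mut h = DefaultHasher::new();
            seed.hash(&mut h);
            item.hash(&mut h);
            (h.finish() as usize) % m
        })
    }

    fn insert(&mut self, item: &[u8]) {
        let idx: Vec<usize> = self.indexes(item).collect();
        for i in idx { self.bits[i] = true; }
    }

    // May return false positives, never false negatives.
    fn might_contain(&self, item: &[u8]) -> bool {
        self.indexes(item).all(|i| self.bits[i])
    }
}

fn main() {
    let mut rejected = BloomFilter::new(1 << 16, 4);
    rejected.insert(b"bad-block-hash");
    assert!(rejected.might_contain(b"bad-block-hash"));
    println!("filter ready");
}
```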

Flooding attack

We consider an adversary whose goal is to disrupt or interrupt p2p message propagation. The event-propagation mechanism of the pub/sub part of the p2p network can be leveraged to continuously send invalid or undesired transactions to the network. For example, in a blockchain network protocol, transactions are meant to be sent quickly between nodes of the topology so they may be quickly added to the ledger.

  • While it is true that one can create an arbitrary number of valid transactions, it is also possible to apply a certain amount of validation and policy to prevent transaction-message forwarding from flooding the network:
    • The protocol already requires the nodes to validate the signatures and that the inputs are unspent;
    • We can add a policy not to accept transactions that may imply a double spend, i.e. in our pool of pending transactions we can check that there are no duplicate inputs.
  • The p2p gossiping protocol is an active action, where a node decides to contact another node to exchange gossip with. It is not possible to flood the network with gossiping messages, as they do not require instant propagation of the gossips.
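The duplicate-input policy described above can be sketched with a set of pending inputs (types and names are illustrative, not the node's mempool API):

```rust
// Sketch of the duplicate-input policy: before relaying a pending
// transaction, check that none of its inputs is already spent by a
// transaction waiting in the pool. Types are illustrative.
use std::collections::HashSet;

type Input = ([u8; 32], u8); // (transaction id, output index)

struct PendingPool {
    spent_inputs: HashSet<Input>,
}

impl PendingPool {
    fn new() -> Self {
        PendingPool { spent_inputs: HashSet::new() }
    }

    // Accepts the transaction only if none of its inputs is already
    // used by a pending transaction (a potential double spend).
    fn try_accept(&mut self, inputs: &[Input]) -> bool {
        if inputs.iter().any(|i| self.spent_inputs.contains(i)) {
            return false; // would double spend: do not relay
        }
        self.spent_inputs.extend(inputs.iter().copied());
        true
    }
}

fn main() {
    let mut pool = PendingPool::new();
    let input = ([1u8; 32], 0);
    assert!(pool.try_accept(&[input]));
    assert!(!pool.try_accept(&[input])); // duplicate input rejected
    println!("pool size: {}", pool.spent_inputs.len());
}
```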

Anonymity Against distributed adversaries

We consider an adversary whose goal is to deanonymize users by linking their transactions to their IP addresses. This model is analysed in Dandelion. PolderCast already allows us to provide some reasonable guarantees against this adversary model.

  • Nodes do not share their links; they share a limited number of gossips based on what a node believes the recipient node might be interested in;
  • While some links can be guessed (those of the Rings module, for example), some are too arbitrary (Vicinity or Cyclon) to determine the original sender of a transaction;

Man in the middle

We consider an adversary that could intercept the communication between two nodes. Such an adversary could:

  • Escalate acquired knowledge to break the node privacy (e.g. user’s public keys);
  • Disrupt the communication between the two nodes;

Potentially we might use SSL/TLS with dynamic certificate generation. A node would introduce itself to the network with its certificate. The certificate is then associated with this node and would be propagated via gossiping to the network.

In relation to Ouroboros Genesis

Each participant in the protocol needs:

  • Key Evolving signature (KES) secret key
  • Verifiable Random Function (VRF) secret key

Apart from the common block deserialization and hashing verification, each block requires:

  • 2 VRF verifications
  • 1 KES verification.

Assuming a perfect network, this allows calculating an upper bound on the number of sequential hops a block can hope to traverse.

testing

This section describes tools and libraries used to test catalyst-core components.

The Jormungandr test libraries include the following projects:

  • jormungandr-automation - sets of apis for automating all node calls and node sub-components (REST, GRPC, logging etc.),
  • hersir - api & cli for bootstrapping an entire network of nodes with some predefined configuration. The project takes care of proper settings for all nodes as well as block0,
  • thor - testing api & cli for all wallet operations,
  • mjolnir - load tool (api & cli) for all kind of jormungandr transactions,
  • loki - api & cli for sending invalid/adversarial load as well as bootstrapping an adversary node.

jormungandr-automation

An incubator of all testing apis for the node and jcli:

build

In order to build jormungandr-automation, run in the main project folder:

cd testing/jormungandr-automation
cargo build

jcli testing api

An api that runs the jcli executable underneath and is capable of asserting the outcome of a command. It can work with an already installed jcli (using the PATH variable) or a custom path. For example:

#![allow(unused)]
fn main() {
    let jcli: JCli = Default::default();
    let private_key = jcli.key().generate("ed25519-extended");
    let public_key = jcli.key().convert_to_public_string(&private_key);
}

jormungandr testing api

Collection of automation modules for node interaction and configuration:

  • configuration - allows configuring node & blockchain settings,
  • explorer - explorer configuration/bootstrap & interaction module,
  • grpc - module handling the grpc internode connection library, capable of sending some RPC calls as well as bootstrapping a receiver instance,
  • legacy - module for loosely typed configuration. This allows bootstrapping an older version of the node, for example to satisfy the need for cross-version testing,
  • rest - module for jormungandr REST api testing,
  • starter - module for bootstrapping node,
  • verifier - node state verifier
  • logger - api for jormungandr log handling/assertion
  • process - api for handling jormungandr process

testing

A bunch of loosely coupled utility modules, mostly for additional configuration capabilities or benchmarking:

  • benchmark - measurement framework for various purposes, for example bootstrap time or how many transactions were successfully handled by the node,
  • vit - additional helpers for voting capabilities,
  • asserts - asserts extensions, tailored for node needs,
  • block0 - block0 extensions, like easier access to blockchain setting or function to download block0,
  • collector - input collector utils,
  • configuration - test configuration helper (apps paths etc.),
  • keys - create default keys,
  • observer - simple observer framework,
  • panic - panic error reporting in test code,
  • process - process extensions,
  • resource - resources manager, mostly for tls certificates used for testing,
  • storage - node storage generators,
  • time - time utils, mostly for waiting for particular block date,
  • verify - substitute for asserts in case we don’t want to panic eagerly when an assertion fails.

Hersir

Hersir is a cli & api project capable of bootstrapping a local jormungandr network which can be exercised by various tools.

build & install

In order to build and install hersir, run in the main project folder:

cd testing/hersir
cargo build
cargo install --path . --force

quick start

The simplest configuration is available by using the command:

hersir --config res\example.yaml

It results in a small network with all data dumped to the current folder.

configuration

Simple example:

nodes:
    - spawn_params:
        alias: passive
        leadership_mode: passive
        persistence_mode: inmemory
      trusted_peers:
        - leader
    - spawn_params:
        alias: leader
        leadership_mode: leader
        persistence_mode: inmemory

blockchain:
    discrimination: test
    consensus: bft
  • The nodes section defines each network node. We can define an alias, which is then used to express relations between the nodes, whether we keep everything in memory, and whether the node can mint blocks or not.

  • The blockchain section defines blockchain parameters, like what the consensus is and whether we are using test or production address discrimination.

full list of available parameters

nodes

  • spawn_params

    • alias: string (mandatory) - reference name of the node. Example: “alias”,

    • bootstrap_from_peers: bool (optional) - whether the node should bootstrap from trusted peers. By default it is auto-evaluated: if the node doesn’t have any trusted peers it won’t bootstrap from peers,

    • faketime: custom (optional) - inject fake time settings. For example:

        faketime:  {
            /// Clock drift (1 = no drift, 2 = double speed)
            drift: 1,
            /// Offset from the real clock in seconds
            offset: 2,
        }
      
    • gossip_interval: time (optional) - node gossip interval with the rest of the network. Format: number unit. For example: 10 s,

    • jormungandr: path (optional) - path to jormungandr node executable,

    • leadership_mode: enum (optional) - node leadership mode. Possible values:

      • passive - node won’t be able to produce blocks,
      • leader - node will be able to mint blocks,
    • listen_address: string (optional) - override listen address for node. Example: /ip4/127.0.0.1/tcp/10005,

    • log_level: enum (optional) - log level. Possible values: info/warn/error/debug/trace,

    • max_bootstrap_attempts: number (optional) - maximum number of bootstrap attempts before abandoning,

    • max_connections: number (optional) - max connections the node will create with other nodes,

    • max_inbound_connections: number (optional) - max inbound connections the node will accept,

    • mempool: custom (optional) - mempool configuration. Example:

      mempool:
          pool_max_entries: 100000
          log_max_entries: 100000
      
    • network_stuck_check: time (optional) - interval at which the node checks that the blockchain has advanced. Format: number unit. For example: 10 s,

    • node_key_file: path (optional) - path to node network key,

    • persistence_mode: enum (optional) - set persistence mode. Possible values:

      • inmemory - everything is kept in node memory. If node restarts, all history is gone,
      • persistence - node uses local storage to preserve current state,
    • persistent_fragment_log: path (optional) - path to a persistent fragment log, which serializes every fragment the node receives via the REST API,

    • policy: custom (optional) - defines the node’s quarantine configuration. Example:

        policy:
          quarantine_duration: 30m
          quarantine_whitelist:
            - "/ip4/13.230.137.72/tcp/3000"
            - "/ip4/13.230.48.191/tcp/3000"
            - "/ip4/18.196.168.220/tcp/3000"
      
    • preferred_layer: custom (optional) - defines preferences in gossiping. Example:

        layers:
          preferred_list:
            view_max: 20
            peers:
              - address: "/ip4/13.230.137.72/tcp/3000"
                id: e4fda5a674f0838b64cacf6d22bbae38594d7903aba2226f
              - address: "/ip4/13.230.48.191/tcp/3000"
                id: c32e4e7b9e6541ce124a4bd7a990753df4183ed65ac59e34
              - address: "/ip4/18.196.168.220/tcp/3000"
                id: 74a9949645cdb06d0358da127e897cbb0a7b92a1d9db8e70
      
    • public_address: string (optional) - override public address for node. Example: /ip4/127.0.0.1/tcp/10005,

    • skip_bootstrap: bool (optional) - skips node bootstrap step,

    • topics_of_interest: custom (optional) - topics of interest describe how eagerly the node will fetch blocks or transactions:

      topics_of_interest:
        blocks: normal # Default is normal - set to high for stakepool
        messages: low  # Default is low    - set to high for stakepool
      
    • verbose: bool (optional) - enable verbose mode, which prints additional information,

  • trusted_peers: List (optional) - list of trusted peers. Example:

        trusted_peers:
          - leader
          - leader_1
    

blockchain

  • block0_date: date (optional) - block0 date; if not provided, the current date is used,
  • block_content_max_size: number (optional) - maximum block content size in bytes,
  • committees: list (optional) - list of wallet aliases which will be committees (capable of tallying the vote),
  • consensus: enum (optional) - blockchain consensus, possible values: Bft, GenesisPraos,
  • consensus_genesis_praos_active_slot_coeff: float (optional) - determines the minimum stake required to try becoming a slot leader, must be in range (0,1],
  • discrimination: enum (optional) - type of discrimination of the blockchain:
    • production, if this blockchain is meant for production,
    • test, otherwise,
  • external_committees: list (optional) - list of committees to be included in block0,
  • external_consensus_leader_ids: list (optional) - list of external leader ids (apart from already defined nodes),
  • external_wallets: list (optional) - list of external wallets. Example:

      external_wallets:
          - alias: Alice
            address: ca1q47vz09320mx2qcs0gspwm47lsm8sh40af305x759vvhm7qyjyluulja80r
            value: 1000000000
            tokens: {}
  • kes_update_speed: number (optional) - the speed to update the KES Key in seconds,
  • linear_fee: custom (optional) - fee calculations settings,
  • slot_duration: number (optional) - The slot duration, in seconds, is the time between the creation of 2 blocks,
  • slots_per_epoch: number (optional) - number of slots in each epoch,
  • tx_max_expiry_epochs: number (optional) - transaction ttl (expressed in number of epochs).
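Taken together, block0_date, slot_duration and slots_per_epoch fix the wall-clock schedule of the chain: every slot starts at a fixed offset from block0_date. A minimal sketch of that arithmetic (the helper name is ours, not part of any Catalyst crate):

```rust
// Sketch only: computes how many seconds after block0_date a given
// (epoch, slot) begins, assuming the slot_duration / slots_per_epoch
// semantics described above.
fn slot_start_offset_secs(
    epoch: u64,
    slot: u64,
    slot_duration_secs: u64,
    slots_per_epoch: u64,
) -> u64 {
    let absolute_slot = epoch * slots_per_epoch + slot;
    absolute_slot * slot_duration_secs
}

fn main() {
    // With 2-second slots and 60 slots per epoch, slot 5 of epoch 3
    // starts (3 * 60 + 5) * 2 = 370 seconds after block0_date.
    println!("{}", slot_start_offset_secs(3, 5, 2, 60)); // prints 370
}
```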

session

  • jormungandr: path (optional) - override path to jormungandr. By default it’s taken from the PATH variable,
  • root: path (optional) - override path to the local storage folder. By default all related data is dumped into a TEMP folder,
  • generate_documentation: bool (optional) - generate documentation files into the local storage folder,
  • mode: enum (optional) - set hersir working mode. By default it’s “standard”, which just prints information about correct node bootstrap. Possible values:
    • monitor - prints current node status as a progress bar,
    • standard - just prints information about correct node bootstrap,
    • interactive - spawns a helper CLI which allows interacting with the nodes,
  • log: enum (optional) - log level. Possible values: (info/warn/error/debug/trace),
  • title: string (optional) - use the given name for the local storage folder instead of a random one.
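Putting the three sections together, a minimal hersir configuration could look like the sketch below. The field names follow the lists above; the aliases and values are purely illustrative, not a tested, authoritative example:

```yaml
# Illustrative hersir config sketch - aliases and values are made up.
nodes:
  - spawn_params:
      alias: leader
      leadership_mode: leader
      persistence_mode: inmemory
  - spawn_params:
      alias: passive
      leadership_mode: passive
      persistence_mode: inmemory
    trusted_peers:
      - leader
blockchain:
  consensus: GenesisPraos
  discrimination: test
  slot_duration: 2
  slots_per_epoch: 60
session:
  mode: standard
  title: local_network
```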

full list of available commands

The full list of commands is available via the hersir --help command.

hersir 0.1.0

USAGE:
    hersir [FLAGS] --config <config>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information
    -v, --verbose

OPTIONS:
    -c, --config <config>

jormungandr-integration-tests

jormungandr-integration-tests is a container project for all jormungandr & jcli tests. The tests validate node correctness, stability, and interaction with the database/REST API. There are also non-functional tests which verify node durability and reliability.

Architecture of tests

The Jormungandr test architecture relies on the test pyramid approach: most of the effort is put into the unit and API levels, with a small number of tests at the E2E level. Thanks to that we can keep the tests fast and reliable.

Testing architecture

Before approaching Jormungandr testing, let us first recall the simplified architecture diagram for jcli & jormungandr.

Simplified architecture

Quick start

Prerequisites

In order to run the tests, jormungandr & jcli need to be installed or prebuilt.

Start tests

In order to run the tests, from the main project folder run:

cd testing
cargo test

Tests categories

Tests are categorized based on the application/layer and the property under test (functional or non-functional: load, performance, etc.). The diagram below gives a good overview:

Test categories

How to run all functional tests

cd testing/jormungandr-integration-tests
cargo test jormungandr --features network

How to run jcli only functional tests

cd testing/jormungandr-integration-tests
cargo test jcli

How to run single node functional tests

cd testing/jormungandr-integration-tests
cargo test jormungandr

How to run single node performance tests

cd testing/jormungandr-integration-tests
cargo test jormungandr::non_functional --features sanity,non-functional

How to run single node endurance tests

cd testing/jormungandr-integration-tests
cargo test jormungandr::non_functional --features soak,non-functional

How to run network functional tests

cd testing/jormungandr-integration-tests
cargo test jormungandr::network --features network

How to run network performance tests

cd testing/jormungandr-integration-tests
cargo test jormungandr::non_functional::network --features sanity,non-functional

How to run network endurance tests

cd testing/jormungandr-integration-tests
cargo test jormungandr::non_functional::network --features soak,non-functional

Frequency

Functional tests are run on each PR. Performance and testnet integration tests are run nightly.

Loki

Loki is an adversary node implementation and API which operates on the jormungandr network.

Build & Install

In order to build loki, from the main project folder run:

cd testing/loki
cargo build
cargo install --path . --force

Quick Start

Loki can be bootstrapped using the CLI:

loki --genesis-block block0.bin --listen-address 127.0.0.1:8080 -s secret.yaml

where:

  • genesis-block - Path to the genesis block (the block0) of the blockchain,
  • listen-address - Specifies the address the node will listen on,
  • secret - Sets the secret node config (in YAML format). Example:

---
bft:
    signing_key: ed25519_sk1w2tyr7e2w26w5vxv65xf36kpvcsach8rcdmlmrhg3rjzeumjnzyqvdvwfa

Then, using the REST interface of the loki node, one can send invalid GRPC messages to the rest of the network:

curl --location --request POST 'http://127.0.0.1:8080/invalid_fragment' \
--header 'Content-Type: application/json' \
--data-raw '{
    "address": "127.0.0.1:1000",
    "parent": "tip"
}'

where:

  • address - address of the “victim” node,
  • parent - parent block. Possible values:

  • tip - current tip of “victim” node,
  • block0 - block0,
  • {Hash} - arbitrary parent block which hash is provided in request

Other possible operations

  • /invalid_hash - Sends a block with an invalid hash,
  • /invalid_signature - Sends a block whose signature is invalidated by the wrong leader,
  • /nonexistent_leader - Sends a block with a non-existent leader,
  • /wrong_leader - Sends a block signed by an invalid leader,

API

Loki also provides API for performing adversary operations, like sending invalid fragments:

#![allow(unused)]
fn main() {
    use loki::{AdversaryFragmentSender, AdversaryFragmentSenderSetup};

    let mut sender = ...
    let receiver = ...

    // node initialization
    let jormungandr = ...

    let adversary_sender = AdversaryFragmentSender::new(
        jormungandr.genesis_block_hash(),
        jormungandr.fees(),
        BlockDate::first().next_epoch().into(),
        AdversaryFragmentSenderSetup::no_verify(),
    );

    adversary_sender
        .send_faulty_transactions_with_iteration_delay(
            10,
            &mut sender,
            &receiver,
            &jormungandr,
            Duration::from_secs(5),
        )
        .unwrap();
}

Mjolnir

Mjolnir is a load CLI & API project which operates on a jormungandr node.

Build & Install

In order to build mjolnir, from the main project folder run:

cd testing/mjolnir
cargo build
cargo install --path . --force

Quick Start

CLI

Mjolnir can be used as a CLI. It is capable of putting various kinds of load on a jormungandr node. It has a couple of different load types:

  • explorer - Explorer load
  • fragment - Fragment load
  • passive - Passive Nodes bootstrap
  • rest - Rest load

The simplest load configuration is the rest load with the parameters below:

Rest load

USAGE:
    mjolnir.exe rest [FLAGS] [OPTIONS] --duration <duration> --endpoint <endpoint>

FLAGS:
    -h, --help       Prints help information
    -m, --measure    Prints post load measurements
    -V, --version    Prints version information

OPTIONS:
    -c, --count <count>                            Number of threads [default: 3]
        --delay <delay>                            Amount of delay [milliseconds] between sync attempts [default: 50]
    -d, --duration <duration>                      Amount of delay [seconds] between sync attempts
    -e, --endpoint <endpoint>                      Address in format: http://127.0.0.1:8002/api/
    -b, --progress-bar-mode <progress-bar-mode>    Show progress bar [default: Monitor]

API

Mjolnir’s main purpose is to serve as a load API:

#![allow(unused)]
fn main() {
use jortestkit::load::{self, ConfigurationBuilder as LoadConfigurationBuilder, Monitor};
use std::time::Duration;

    //node initialization
    let mut jormungandr = ...

    let rest_client = jormungandr.rest();

    // create request generator for rest calls
    let request = mjolnir::generators::RestRequestGen::new(rest_client);

    // duration based load run (40 seconds)
    let config = LoadConfigurationBuilder::duration(Duration::from_secs(40))
        // with 5 threads
        .thread_no(5)
        // with delay between each request 0.01 s
        .step_delay(Duration::from_millis(10))
        // with monitor thread monitor status of load run each 0.1 s
        .monitor(Monitor::Progress(100))
        // with status printer which prints out status of load run each 1 s
        .status_pace(Duration::from_secs(1))
        .build();

    // initialize load in sync manner
    // (duration of each request is calculated by time difference between receiving response and sending request )
    let stats = load::start_sync(request, config, "Jormungandr rest load test");

    // finally some way to assert expected correctness, like percentage of successful requests
    assert!((stats.calculate_passrate() as u32) > 95);
}
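The measurement model used above (each request’s duration is the send-to-response interval, and the final check is a pass rate) can be illustrated with plain std types. The passrate helper below is our own stand-in, not mjolnir’s API:

```rust
use std::time::Instant;

// Toy stand-in for the pass-rate check: the percentage of requests
// that succeeded. This is an illustration, not mjolnir's API.
fn passrate(passed: usize, total: usize) -> usize {
    passed * 100 / total
}

fn main() {
    let outcomes = [true, true, true, false, true]; // simulated request results
    let mut passed = 0;

    for ok in outcomes {
        let sent_at = Instant::now();
        // ... a real run would perform the REST call here ...
        let _duration = sent_at.elapsed(); // send-to-response interval
        if ok {
            passed += 1;
        }
    }

    // 4 of 5 simulated requests succeeded -> 80% pass rate
    assert!(passrate(passed, outcomes.len()) > 75);
}
```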

full list of available commands

The full list of commands is available via the mjolnir --help command.

mjolnir 0.1.0
Jormungandr Load CLI toolkit

USAGE:
    mjolnir.exe [FLAGS] [SUBCOMMAND]

FLAGS:
        --full-version      display full version details (software version, source version, targets and compiler used)
    -h, --help              Prints help information
        --source-version    display the sources version, allowing to check the source's hash used to compile this
                            executable. this option is useful for scripting retrieving the logs of the version of this
                            application
    -V, --version           Prints version information

SUBCOMMANDS:
    explorer    Explorer load
    fragment    Fragment load
    help        Prints this message or the help of the given subcommand(s)
    passive     Passive Nodes bootstrap
    rest        Rest load

Thor

Thor is a wallet CLI & wallet API project which operates on the jormungandr network.

WARNING: the main purpose of this wallet is testing. Do NOT use it in production.

Build & Install

In order to build thor, from the main project folder run:

cd testing/thor
cargo build
cargo install --path . --force

Quick Start

CLI

Thor can be used as a wallet CLI. It is capable of sending transactions or pulling data from the node. The simplest usage example uses the following commands:

  • register new wallet based on secret key: thor wallets import --alias darek --password 1234 secret.file

  • connect to node rest API: thor connect https://jormungandr.iohk.io/api

  • use the recently created wallet for the rest of the commands: thor wallets use darek

  • sync with the node regarding wallet data: thor wallets refresh

  • send transaction: thor send tx --ada 5 --address ca1q5srhkdfuxqdm6h57mj45acxcdr57cr5lhddzkrjqyl8mmw62v9qczh78cu --pin 1234

API

Thor can also be used as an API to perform wallet operations from code:

#![allow(unused)]
fn main() {
    use thor::{Wallet, FragmentSender, FragmentSenderSetup, FragmentVerifier};

    let receiver = thor::Wallet::default();
    let mut sender = thor::Wallet::default();

    // node bootstrap
    let jormungandr = ...

    let fragment_sender = FragmentSender::from_with_setup(
        jormungandr.block0_configuration(),
        FragmentSenderSetup::no_verify(),
    );

    fragment_sender
        .send_transaction(&mut sender, &receiver, &jormungandr, 1.into())
        .unwrap();

}

Configuration

The Thor API doesn’t use any configuration files. However, the CLI uses a small cache folder on the filesystem (located in ~/.thor). Its purpose is to store the wallet list as well as secret keys guarded by a pass phrase.

full list of available commands

The full list of commands is available via the thor --help command.

thor 0.1.0
Command line wallet for testing Jormungandr

USAGE:
    thor <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    address                 Gets address of wallet in bech32 format
    clear-tx                Clears pending transactions to confirm. In case if expiration occurred
    confirm-tx              Confirms successful transaction
    connect                 Sets node rest API address. Verifies connection on set
    help                    Prints this message or the help of the given subcommand(s)
    logs                    Prints entire fragment logs from the node
    pending-transactions    Prints pending transactions (not confirmed)
    refresh                 Pulls wallet data from the node
    send                    Sends fragments to nodes
    status                  Prints wallet status (balance/spending counters/tokens)
    statuses                Prints pending or already sent fragments statuses
    wallets                 Allows to manage wallets: add/remove/select operations

Internal Design

Glossary:

  • blockchains: the current blockchain and possibly different known forks.
  • clock: general time tracking to know the time in blockchain units (epoch/slot)
  • tip: the current fork that is considered the correct one, related to consensus algorithm.

Tasks

Each node runs several tasks. A task is a process with a clearly defined interface that abstracts a particular job.

General tasks:

  • Network task: handles new connections and performs low-level queries. It parses queries and routes them to the other tasks: the block, client, or transaction tasks.

  • Block task: handles the reception of blocks from other nodes and from the leadership thread. Blocks can be external or internal. External block (…), and internal block (…). When the task receives an external block, it validates the block. If validation succeeds, the task appends the block to the blockchain and checks if the tip needs any changes. When the task receives an internal block, it performs the same actions, except for block validation. It then broadcasts the change of the tip to the network thread.

  • Leadership task: waits for each new slot and evaluates whether this node is a slot leader. If it is, the task creates a new block (with a set of known transactions) referencing the latest known and agreed block in the blockchain, then sends it to the block thread for processing.

  • Client task: receives block header/body queries. This task is in charge of replying to the client in accordance with the blockchain state.

  • Transaction task: receives new transactions from the network, validates them, and handles duplicates. It also broadcasts newly received (valid) transactions to other nodes.

Internal Architecture

Maintaining the blockchain’s state

The blockchain module is responsible for maintaining the blockchain, i.e. the blocks, the current working branches (we will come back to these in a bit), the different states associated with every block, and the epoch’s data (the parameters, the active stake distribution and the leadership schedule).

It is fairly easy to maintain the blocks of a blockchain. They all have the identifier of the parent block. Storing them is another story though and is not covered here.

Blockchain Data structure

  • block0 or blockx.y are blocks of the blockchain. They link to their parent block, except for the block0, which may not have a parent here (there is a special case where we could set a parent pointing to a block of a previously known state of the blockchain);
  • ledger0 or ledgerx.y are states of the blockchain at a given block;
  • epoch x parameters are the blockchain parameters that are valid for all the epoch x;
  • epoch N stake distribution are the stake distribution as extracted from epoch N;
  • epoch x leadership is the leadership schedule for the epoch x.

This may seem a bit overwhelming. Let’s follow the flow of block creations and validation on this blockchain:

From the block 0

Let’s start with first initializing the blockchain from the block0.

Blockchain Data structure From block0

The first block, the block0, is the block that contains the initial data of the blockchain. From the block0 we can construct the first ledger: the ledger0.

From the ledger0 we can extract two objects:

  • epoch 1 parameters which will contain the fee settings to apply during epoch 1;
  • epoch 0 stake distribution. This is the stake distribution at the end of the epoch 0 (and before the following epoch starts);

And now from the epoch 0 stake distribution we can determine the leadership schedules for the epoch 1 and the epoch 2.

for a block

The view from the point of view of a block k at an epoch N (block N.k) looks like the following:

Blockchain Data structure From blockk

It links to the parent block: block N.(k - 1). This is important because it allows us to retrieve the ledger state at the block N.(k - 1). In order to accept the block N.k in the blockchain we need to validate a couple of things:

  1. the block N.k is correctly referring to the block N.(k - 1):
    • the block date is increasing;
    • the block number is strictly monotonically increasing;
  2. the schedule is correct: the block has been created by the right stake pool at the right time;
  3. the block N.k is updating the parent’s ledger state (ledger N.(k - 1)) and is producing a valid new ledger: ledger N.k
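The ordering checks in step 1 can be sketched with simplified stand-in types (these are not jormungandr’s real header structures):

```rust
// Simplified stand-ins for a block header, used only to illustrate
// the parent-reference checks of step 1 above.
#[derive(PartialEq, PartialOrd)]
struct BlockDate {
    epoch: u32,
    slot: u32,
}

struct Header {
    date: BlockDate,
    chain_length: u64,
}

fn refers_correctly(parent: &Header, child: &Header) -> bool {
    // the block date is increasing...
    child.date > parent.date
        // ...and the block number is strictly monotonically increasing
        && child.chain_length == parent.chain_length + 1
}

fn main() {
    let parent = Header { date: BlockDate { epoch: 1, slot: 10 }, chain_length: 41 };
    let good = Header { date: BlockDate { epoch: 1, slot: 12 }, chain_length: 42 };
    let bad = Header { date: BlockDate { epoch: 1, slot: 9 }, chain_length: 42 };
    println!("{} {}", refers_correctly(&parent, &good), refers_correctly(&parent, &bad));
}
```

The derived PartialOrd compares (epoch, slot) lexicographically, so a block in a later epoch always counts as later even if its slot number is smaller.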

epoch transition

Epoch transitions happen when we switch from one epoch to the following one.

Blockchain Data structure Transition

Automatic deployment of the voting blockchain

Originally the voting blockchain was designed to be manually started and required a full block 0 and a configuration file to be created and distributed to nodes before it could commence.

This made automated deployment difficult and introduced unnecessary manual steps into the process of running the voting system.

To resolve this, the voting system is modified to allow the blockchain and parts of the configuration to be automatically created based solely on the parameters of the next election.

Overview

There are two sources of data required to start the blockchain: Block 0 and the config YAML file. To ease deployment, Block 0 will be created dynamically based on data held in our distributed object storage (which is currently a PostgreSQL database), as are certain parameters currently required for the configuration file.

The blockchain would still need to retain the current method for starting, in addition to the new “auto” mode.

In essence, automatic configuration entails:

  1. Minimizing manual config items to only those that unavoidably need to be defined.
  2. Generating configuration for other items where possible from known local state, and only having configuration items for these to override the defaults.
  3. Sharing data in a central repository of local configuration items that other nodes would require.
  4. Reading other data directly from their source of truth (Such as the schedule of the election, voting power snapshot data and proposals/vote plan information.)

Configuration

The node is configured by a YAML file which contains the following data. In the code, every config parameter should be accompanied by a detailed documentation comment.

  • secret_file: - Optional Path (to what, used for what?)
  • storage: - Optional Path (to what, used for what?)
  • log: - Optional Logger settings.
    • level: - Optional Logger level, can be "Trace", "Debug", "Info", "Warn" and "Error". Should default to "Info" if not set.
    • format: - Format of the logs, can be "plain" and "json". Should default to "json" if not set.
    • output: - Optional destination of the log output. Options need to be fully documented. Should default to stdout if not defined.
    • trace_collector_endpoint: - Optional Options need to be fully documented. Should default to None (ie, no external logging) if not defined.
  • mempool: Optional configuration of the mempool. Should default as specified here.
    • pool_max_entries: - Optional - maximum number of entries in the mempool. Should default to 1,000,000 if not set.
    • log_max_entries: - Optional - maximum number of entries in the fragment logs. Should default to ???? if not set.
    • persistent_log: - Optional - path to the persistent log of all incoming fragments. A decision needs to be made if persistent logging is normally desired. If it is, it should default to a location in /var. If not, it should default to None and be disabled.
  • leadership: - Optional - the number of entries allowed in the leadership logs.
    • logs_capacity: - Optional - Should default to ???? if not set.
  • rest: - Optional - Enables REST API.
    • listen: - Optional - Address to listen to rest api requests on. Should default to “0.0.0.0:12080”. This default is open to suggestions.
    • tls: - Optional - Define inbuilt tls support for the listening socket. If not specified, TLS is disabled. The default is TLS Disabled.
      • cert_file: - Path to server X.509 certificate chain file, must be PEM-encoded and contain at least 1 item
      • priv_key_file: - Path to server private key file, must be PKCS8 with single PEM-encoded, unencrypted key
    • cors: - Optional - Defines CORS settings. Default should be as shown in the individual entries.
      • allowed_origins - Origin domains we accept connections from. Defaults to “*”.
      • max_ages_secs - How long in seconds to cache CORS responses. Defaults to 60.
      • allowed_headers - A list of allowed headers in the preflight check. If the provided list is empty, all preflight requests with a request header will be rejected. Default should be a value which allows cors to work without requiring extra config under normal circumstances.
      • allowed_methods - A list of allowed methods in the preflight check. If the provided list is empty, all preflight requests will be rejected. Default should be a value which allows cors to work without requiring extra config under normal circumstances.
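Assuming the field names above, a configuration that relies mostly on defaults could look like the sketch below (the values are illustrative, and several of the defaults are still marked as open questions above):

```yaml
# Illustrative node configuration sketch - values and defaults are
# not authoritative.
storage: /var/lib/voting-node
log:
  level: "Info"
  format: "json"
mempool:
  pool_max_entries: 1000000
rest:
  listen: "0.0.0.0:12080"
  cors:
    allowed_origins:
      - "*"
    max_ages_secs: 60
```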

STILL TODO

  * `jrpc` - Optional. No idea what this does yet???  TODO: Document it and defaults.
  * `p2p` - Peer to Peer config.

    #[serde(default)]
    pub p2p: P2pConfig,

    #[serde(default)]
    pub http_fetch_block0_service: Vec<String>,

    #[cfg(feature = "prometheus-metrics")]
    pub prometheus: Option<Prometheus>,

    /// the time interval with no blockchain updates after which alerts are thrown
    #[serde(default)]
    pub no_blockchain_updates_warning_interval: Option<Duration>,

    #[serde(default)]
    pub bootstrap_from_trusted_peers: bool,

    #[serde(default)]
    pub skip_bootstrap: bool,

    pub block_hard_deadline: Option<u32>,

Permissionless Auth

sequenceDiagram
    actor U as User
    participant B as Cardano Block Chain
    participant Br as Cardano-Catalyst Bridge
    participant C as Catalyst Backend

    U->>B: Registration Txn
    Note right of U: Type/Public Key/Reward Address
    Note over B: Block Minted
    B->>Br: Reads Chain Tip, detects Registration
    Br->>C: Records Latest Registration

    U->>C: Requests Privileged Operation
    Note over C: Generates Random Challenge
    C->>U: Challenge Sent
    Note over U: Signs Challenge with Private Key
    U->>C: Challenge Response
    Note right of U: Public Key/Challenge Signature
    Note over C: Validates Response
    alt Public Key Registered & Signature Valid
      C->>U: Authorized
      Note left of C: Authorized<br>Session Established
      loop Authorized
        U->>C: Privileged Operation
        C->>U: Privileged Response
      end
    else Unauthorized
      C->>U: Unauthorized
    end
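The sequence above is a classic challenge-response flow: the backend issues a fresh random challenge, the user proves possession of the registered key by signing it, and the backend verifies the signature against the registration. The toy sketch below shows only the message flow; a keyed hash from the standard library stands in for the real ed25519 signature, and nothing here is production cryptography:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for "sign the challenge": a keyed hash over
// (secret, challenge). A real deployment signs with the private half
// of the registered ed25519 key pair; this is illustration only.
fn sign(secret: u64, challenge: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (secret, challenge).hash(&mut h);
    h.finish()
}

fn main() {
    let secret = 0xC0FFEE; // stands in for the user's registered key

    // Backend: generate a fresh challenge and send it to the user.
    let challenge = 123_456_789;

    // User: sign the challenge and send back the response.
    let response = sign(secret, challenge);

    // Backend: validate the response; authorize only on a match.
    let authorized = response == sign(secret, challenge);
    println!("{authorized}"); // prints true
}
```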

Catalyst testing User Guide

Welcome to the Catalyst testing User Guide.

Vit testing is a family of projects aiming to support all quality assurance activities in Catalyst. Here one can find:

  • catalyst backend deployment tool,
  • catalyst backend mock,
  • integration tests,
  • registration service and registration verify service,
  • snapshot service,
  • snapshot wormhole,
  • custom proxy for catalyst backend,
  • cli voting app implementation,
  • load driver imitating voting app users.

Iapyx

Iapyx is a wallet CLI & wallet API project which operates on the catalyst-jormungandr network.

WARNING: the main purpose of this wallet is testing. Do NOT use it in production.

Build & Install

In order to build iapyx, from the main project folder run:

cd iapyx
cargo build
cargo install --path . --force

Quick Start

CLI

Iapyx can be used as a wallet CLI. It is capable of sending votes or pulling data from the node. The simplest usage example uses the following commands:

  • register new wallet based on qr code: iapyx wallets import qr qr_file.png --pin 1234

  • connect to node rest API: iapyx connect https://catalyst-backend.io/api

  • use the recently created wallet for the rest of the commands: iapyx wallets use darek

  • sync with the node and get wallet data: iapyx wallets refresh

  • send vote: iapyx vote single --choice yes --pin --id {proposal_id}

API

Iapyx can be used as an API in order to perform voting operations from code:

#![allow(unused)]

fn main() {
    let wallet_proxy = spawn_network(...);
    let secret_file_path = Path::new("wallet_alice");


    let mut alice = iapyx::ControllerBuilder::default()
        .with_backend_from_client(wallet_proxy.client())?
        .with_wallet_from_secret_file(secret_file_path.as_ref())?
        .build();

    let proposals = alice.proposals().unwrap();
    let votes_data = proposals
        .iter()
        .take(batch_size)
        .map(|proposal| (proposal, Choice::new(0)))
        .collect();

    let fragment_ids = alice
        .votes_batch(votes_data)
        .unwrap()
        .iter()
        .map(|item| item.to_string())
        .collect();
}

Configuration

The Iapyx API doesn’t use any configuration files. However, the CLI uses a small cache folder on the filesystem (located in ~/.iapyx). Its purpose is to store the wallet list as well as secret keys guarded by a pass phrase.

full list of available commands

The full list of commands is available via the iapyx --help command.

iapyx 0.0.1
Command line wallet for testing Catalyst

USAGE:
    iapyx.exe <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    address                 Gets address of wallet in bech32 format
    clear-tx                Clears pending transactions to confirm. In case if expiration occurred
    confirm-tx              Confirms successful transaction
    connect                 Sets node rest API address. Verifies connection on set
    funds                   Prints information about voting funds
    help                    Prints this message or the help of the given subcommand(s)
    logs                    Prints entire fragment logs from the node
    pending-transactions    Prints pending transactions (not confirmed)
    proposals               Prints proposals available to vote on
    refresh                 Pulls wallet data from the catalyst backend
    status                  Prints wallet status (balance/spending counters/tokens)
    statuses                Prints pending or already sent fragments statuses
    vote                    Sends votes to backend
    votes                   Prints history of votes
    wallets                 Allows to manage wallets: add/remove/select operations

Iapyx Load

Iapyx-load is a load CLI & API project which operates on the catalyst backend.

Build & Install

In order to build iapyx-load, from the main project folder run:

cd testing/iapyx
cargo build
cargo install --path . --force

Quick Start

CLI

Iapyx-load can be used as a CLI. It is capable of putting various kinds of load on the catalyst backend. Available load types:

  • node-only - Load which targets blockchain calls only
  • static-only - Load which targets static data only
  • simulation - Load which simulates a real user scenario (both blockchain and static data in some proportion)

The node-only load also provides two load characteristics:

  • bursts - Bursts mode. Sends votes in batches and then waits x seconds
  • const - Constant load. Sends votes at a speed of x votes per second

And two scenario types:

  • duration - Duration based load. Defines how long the load should run
  • count - Request count based load. Defines how many requests the load should send in total

The simplest load configuration is the node-only load with the parameters below:

iapyx-load node-only const count --help

USAGE:
    iapyx-load.exe node-only const count [FLAGS] [OPTIONS] --requests-per-thread <count>

FLAGS:
        --debug                   Print additional information
        --help                    Prints help information
        --read-from-filename      Read pin from filename of each qr code
        --reuse-accounts-early    Update all accounts state before sending any vote
        --reuse-accounts-lazy     Update account state just before sending vote
    -h, --https                   Use https for sending fragments
    -V, --version                 Prints version information

OPTIONS:
    -a, --address <address>                        Address in format: 127.0.0.1:8000 [default: 127.0.0.1:8000]
    -n, --requests-per-thread <count>              How many requests per thread should be sent
    -c, --criterion <criterion>                    Pass criteria
    -d, --delay <delay>                            Amount of delay [miliseconds] between requests [default: 10000]
        --global-pin <global-pin>                  Global pin for all qr codes [default: 1234]
    -b, --progress-bar-mode <progress-bar-mode>
            Show progress. Available are (Monitor,Standard,None) [default: Monitor]

    -q, --qr-codes-folder <qr-codes-folder>        Qr codes source folder
    -s, --secrets-folder <secrets-folder>          Secrets source folder
        --status-pace <status-pace>                How frequent (in seconds) to print status [default: 1]
    -t, --threads <threads>                        Prints nodes related data, like stats,fragments etc [default: 3]

API

Iapyx-load's main purpose is to serve as a load API:

use iapyx::{NodeLoad, NodeLoadConfig};
use jortestkit::{
    load::{ConfigurationBuilder, Monitor},
    measurement::Status,
};
use std::path::Path;
use std::time::Duration;

...

    let no_of_threads = 10;
    let no_of_wallets = 40_000;
    let delay = 10_000;

    let qr_codes_folder = Path::new("qr-codes");

    let config = ConfigurationBuilder::duration(parameters.calculate_vote_duration())
        .thread_no(no_of_threads)
        .step_delay(Duration::from_millis(delay))
        .fetch_limit(250)
        .monitor(Monitor::Progress(100))
        .shutdown_grace_period(Duration::from_secs(60))
        .build();

    let load_config = NodeLoadConfig {
        batch_size,
        use_v1: false,
        config,
        criterion: Some(100),
        address: "127.0.0.1:8080".to_string(),
        qr_codes_folder: Some(qr_codes_folder),
        secrets_folder: None,
        global_pin: "".to_string(),
        reuse_accounts_lazy: false,
        reuse_accounts_early: false,
        read_pin_from_filename: true,
        use_https: false,
        debug: false,
    };

    let iapyx_load = NodeLoad::new(load_config);
    if let Some(benchmark) = iapyx_load.start().unwrap() {
        assert!(benchmark.status() == Status::Green, "too low efficiency");
    }

full list of available commands

The full list of commands is available via the mjolnir --help command.

mjolnir 0.1.0
Jormungandr Load CLI toolkit

USAGE:
    mjolnir.exe [FLAGS] [SUBCOMMAND]

FLAGS:
        --full-version      display full version details (software version, source version, targets and compiler used)
    -h, --help              Prints help information
        --source-version    display the sources version, allowing to check the source's hash used to compile this
                            executable. this option is useful for scripting retrieving the logs of the version of this
                            application
    -V, --version           Prints version information

SUBCOMMANDS:
    explorer    Explorer load
    fragment    Fragment load
    help        Prints this message or the help of the given subcommand(s)
    passive     Passive Nodes bootstrap
    rest        Rest load

integration-tests

Integration-tests is a container project for all catalyst e2e and integration tests. The tests validate network correctness and stability. There are also non-functional tests which verify node durability and reliability.

Architecture of tests

The integration tests architecture relies on the test pyramid approach, where most of the effort is put into the component and integration levels and only a small number of tests are E2E. Thanks to that we can keep the tests fast and reliable.

Testing architecture

Before approaching Jormungandr testing, we first need to recall the simplified architecture diagram for jcli & jormungandr.

Simplified architecture

Quick start

Prerequisites

In order to run the integration tests, the components below need to be installed or prebuilt:

  • [vit-servicing-station-server](https://github.com/input-output-hk/vit-servicing-station/tree/master/vit-servicing-station-server)
  • [jormungandr](https://github.com/input-output-hk/jormungandr/tree/master/jormungandr)
  • [valgrind](https://github.com/input-output-hk/vit-testing/tree/master/valgrdin)

Start tests

In order to run the tests, run the following in the main project folder:

cd testing
cargo test

Tests categories

Tests are categorized based on application/layer and the property under test (functional or non-functional: load, performance etc.). The diagram below gives a good overview:

Test categories

How to run all functional tests

cd integration-tests
cargo test

How to run testnet functional tests

cd integration-tests
cargo test --features testnet-tests

How to run load tests

cd integration-tests
cargo test non_functional::load --features load-tests

How to run network endurance tests

cd testing/jormungandr-integration-tests
cargo test non_functional::soak  --features soak-tests

Frequency

Functional tests are run on each PR. Performance and testnet integration tests are run nightly.

Registration service

Registration service is a REST service purely for test purposes, capable of interacting with the voter registration cli, cardano cli and vit-kedqr.

build

In order to build the registration service, run the following in the main project folder:

cd registration-service
cargo build
cargo install --path . --force

quick start

The simplest configuration is available by using command:

registration-service --config config.yaml

See config for more details.

clients

cli

Registration CLI is a utility tool which helps to interact with the registration service without manually constructing requests.

See cli for more details.

api

Example:

#![allow(unused)]
fn main() {
    use registration_service::{
        client::rest::RegistrationRestClient, context::State, request::Request,
    };

    ...

    let payment_skey = Path::new("payment.skey");
    let payment_vkey = Path::new("payment.vkey");
    let stake_skey = Path::new("stake.skey");
    let stake_vkey = Path::new("stake.vkey");
    let vote_skey = Path::new("vote.skey");

    let registration_client =
        RegistrationRestClient::new_with_token(registration_token, registration_address);

    let registration_request = Request {
        payment_skey,
        payment_vkey,
        stake_skey,
        stake_vkey,
        vote_skey,
    };

    let registration_job_id = registration_client.job_new(registration_request).unwrap();

    let wait = WaitBuilder::new().tries(10).sleep_between_tries(10).build();
    println!("waiting for registration job");
    let registration_jobs_status = registration_client
        .wait_for_job_finish(registration_job_id.clone(), wait)
        .unwrap();
    println!("{:?}", registration_jobs_status);

    let qr_code_path = temp_dir.child("qr_code");
    std::fs::create_dir_all(qr_code_path.path()).unwrap();

    let qr_code = registration_client
        .download_qr(registration_job_id.clone(), qr_code_path.path())
        .unwrap();
    let voting_key_sk = registration_client
        .get_catalyst_sk(registration_job_id)
        .unwrap();
}

NOTE: see the cardano cli guide, which details how to create the payment and stake files.
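The WaitBuilder-based polling above boils down to a bounded retry loop. Below is a minimal std-only sketch of that pattern, with an illustrative `JobStatus` and poll closure standing in for the real client API:

```rust
use std::time::Duration;

// Hypothetical stand-in for the service's job state.
#[derive(Debug, PartialEq)]
enum JobStatus {
    Running,
    Finished,
}

// Poll up to `tries` times, sleeping between tries, like
// WaitBuilder::new().tries(10).sleep_between_tries(10).build().
fn wait_for_finish<F>(mut poll: F, tries: u32, sleep: Duration) -> Result<JobStatus, String>
where
    F: FnMut() -> JobStatus,
{
    for attempt in 0..tries {
        if poll() == JobStatus::Finished {
            return Ok(JobStatus::Finished);
        }
        // don't sleep after the final attempt
        if attempt + 1 < tries {
            std::thread::sleep(sleep);
        }
    }
    Err("job did not finish in time".to_string())
}

fn main() {
    // a fake job that finishes on the third poll
    let mut calls = 0;
    let status = wait_for_finish(
        || {
            calls += 1;
            if calls >= 3 { JobStatus::Finished } else { JobStatus::Running }
        },
        10,
        Duration::from_millis(1),
    );
    assert_eq!(status, Ok(JobStatus::Finished));
    println!("finished after {} polls", calls);
}
```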

Registration CLI

Registration CLI is a utility tool which helps to interact with the registration service.

Build & Install

In order to build the registration CLI, run the following in the main project folder:

cd registration-service
cargo build
cargo install --path . --force

Quick Start

The simplest usage example is available by using commands:

  • register new job:
registration-cli job new   --payment-skey payment.skey --payment-vkey payment.vkey \
--stake-skey stake.skey --stake-vkey stake.vkey --endpoint https://{ADDRESS}

NOTE: the response of the above call returns a job-id which should be used in the next call.

NOTE: see the cardano cli guide, which details how to create the payment and stake files.

  • check job id: registration-cli job status --job-id {job-id} --endpoint https://{ADDRESS}

full list of available commands

The full list of commands is available via the registration-cli --help command.

registration-service 0.1.0

USAGE:
    registration-cli.exe [OPTIONS] --endpoint <endpoint> <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -e, --endpoint <endpoint>    registration service endpoint [env: REGISTRATION_ENDPOINT=]
    -t, --token <token>          access token [env: REGISTRATION_TOKEN=]

SUBCOMMANDS:
    files     download jobs artifacts
    health    check if registration service is up
    help      Prints this message or the help of the given subcommand(s)
    job       jobs related operations

Configuration

This section describes the configuration file which can be passed as an argument to the registration service:

  • port: port on which registration-service will be exposed,
  • jcli: path to jcli executable,
  • result-dir: path to the folder into which artifacts will be dumped (qr-code etc.),
  • cardano-cli: path to the cardano-cli executable,
  • voter-registration: path to the voter-registration executable,
  • vit-kedqr: path to the vit-kedqr executable,
  • network: network type. Possible values:
    • mainnet
    • { "testnet": 1097911063 },
  • token: token limiting access to environment. Must be provided in header API-Token for each request

Example:

{
  "port": 8080,
  "jcli": "jcli",
  "result-dir": "/persist",
  "cardano-cli": "./cardano-cli",
  "voter-registration": "./voter-registration",
  "vit-kedqr": "./vit-kedqr",
  "network": "mainnet",
  "token": "..."
}

Registration verify service

Registration verify service is a REST service purely for test purposes. It is capable of interacting with the voter registration cli, cardano cli and vit-kedqr.

build

In order to build registration-verify-service, run the following in the main project folder:

cd registration-verify-service
cargo build
cargo install --path . --force

quick start

The simplest configuration is available by using command:

registration-verify-service --config config.yaml

See config for more details.

clients

cli

Registration verify CLI is a utility tool which helps to interact with the registration verify service without manually constructing requests.

See cli for more details.

api

Example:

#![allow(unused)]
fn main() {
    use registration_verify_service::client::rest::RegistrationVerifyRestClient;

    ...

    let registration_verify_client =
        RegistrationVerifyRestClient::new_with_token(registration_token, registration_address);

    let form = Form::new()
        .text("pin", "1234")
        .text("funds", "500")
        .text("threshold", "500")
        .file("qr", PathBuf::from("my_q.png"))
        .unwrap();

    let registration_job_id = registration_verify_client.job_new(form).unwrap();

    let wait = WaitBuilder::new().tries(10).sleep_between_tries(10).build();
    println!("waiting for registration job");
    let registration_jobs_status = registration_verify_client
        .wait_for_job_finish(registration_job_id.clone(), wait)
        .unwrap();
    println!("{:?}", registration_jobs_status);
}

Registration Verify CLI

Registration Verify CLI is a utility tool which helps to interact with the registration verify service.

Build & Install

In order to build the registration verify project, run the following in the main project folder:

cd registration-verify-service
cargo build
cargo install --path . --force

Quick Start

The simplest usage example is available by using commands:

  • register new job:
registration-verify-cli job new --payment-skey payment.skey --payment-vkey payment.vkey \
 --stake-skey stake.skey --stake-vkey stake.vkey --endpoint https://{ADDRESS}

NOTE: the response of the above call returns a job-id which should be used in the next call.

NOTE: see the cardano cli guide, which details how to create the payment and stake files.

  • check job id: registration-verify-cli job status --job-id {job-id} --endpoint https://{ADDRESS}

full list of available commands

The full list of commands is available via the registration-cli --help command.

registration-service 0.1.0

USAGE:
    registration-cli.exe [OPTIONS] --endpoint <endpoint> <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -e, --endpoint <endpoint>    registration service endpoint [env: REGISTRATION_ENDPOINT=]
    -t, --token <token>          access token [env: REGISTRATION_TOKEN=]

SUBCOMMANDS:
    files     download jobs artifacts
    health    check if registration service is up
    help      Prints this message or the help of the given subcommand(s)
    job       jobs related operations

Configuration

This section describes the configuration file which can be passed as an argument to the registration verify service:

  • port: port on which registration-verify-service will be exposed,
  • jcli: path to jcli executable,
  • snapshot-token: token required by snapshot-service,
  • snapshot-address: address of snapshot-service,
  • client-token: access token for client endpoints (verifying voting power etc.),
  • admin-token: access token for admin endpoints (updating snapshot etc.),
  • network: network type. Possible values:
    • mainnet
    • { "testnet": 1097911063 },
  • initial-snapshot-job-id: initial job id from the snapshot service that will be used when starting the service

Example:

{
  "port": 8080,
  "jcli": "jcli",
  "snapshot-token": "3568b599a65557b2a2e",
  "snapshot-address": "https://snapshot.address:8080",
  "client-token": "5e19639accf2d76bae",
  "admin-token": "605a7c515ec781fd39",
  "network": "mainnet",
  "initial-snapshot-job-id": "3b49a0ae-5536-454b-8f47-780d9e7da6a0"
}

Snapshot trigger service

A service which operates on top of the voting tools. It is an interface improvement which exposes the voting tools as a REST service.

build

In order to build snapshot-trigger-service, run the following in the main project folder:

cd snapshot-trigger-service
cargo build
cargo install --path . --force

quick start

The simplest configuration is available by using command:

snapshot-trigger-service --config config.yaml

See config for more details.

Usage

In order to start a new job, send a POST request like the one below:

curl --location --request POST 'https://snapshot.io/api/job/new' \
--header 'API-Token: ...' \
--header 'Content-Type: application/json' \
--data-raw '{
    "threshold": 2000000, // IN Lovelace
    "slot-no": 31842935
}'

The response will contain the job id: b0b7b774-7263-4dce-a97d-c167169c8f27

Then query for job status:

curl --location --request GET 'https://snapshot.io/api/job/status/b0b7b774-7263-4dce-a97d-c167169c8f27' \
--header 'API-Token: ...'

and finally fetch snapshot:

curl --location --request GET 'https://snapshot.io/api/job/files/get/b0b7b774-7263-4dce-a97d-c167169c8f27/snapshot.json' \
--header 'API-Token: ...'

which has form:

{
    "initial": [
        {
            "fund": [
                {
                    "address": "ca1q5yr504t56ruuwrp5zxpu469t9slk0uhkefc7admk7wqrs24q6nxwyhwjcf",
                    "value": 14463
                },
                {
                    "address": "ca1q5ynl2yqez8lmuaf3snvgcw885c9hxxq6uexeevd4al8pct7vx69sljvzxe",
                    "value": 9991
                },
....

clients

cli

Snapshot CLI is a utility tool which helps to interact with the snapshot trigger service without manually constructing requests.

See cli for more details.

api

Example:

#![allow(unused)]
fn main() {
    use snapshot_trigger_service::{
        client::rest::SnapshotRestClient,
        config::JobParameters,
        State,
    };

    let job_params = JobParameters {
        slot_no: Some(1234567),
        tag: Some("fund1".to_string()),
    };

    let snapshot_token = "...";
    let snapshot_address = "...";

    let snapshot_client =
        SnapshotRestClient::new_with_token(snapshot_token.into(), snapshot_address.into());

    let snapshot_job_id = snapshot_client.job_new(job_params).unwrap();
    let wait = WaitBuilder::new().tries(10).sleep_between_tries(10).build();

    let snapshot_jobs_status = snapshot_client
        .wait_for_job_finish(snapshot_job_id.clone(), wait)
        .unwrap();

    let snapshot = snapshot_client.get_snapshot(snapshot_job_id).unwrap();
}

Snapshot CLI

Snapshot CLI is a utility tool which helps to interact with the snapshot trigger service.

Build & Install

In order to build the snapshot CLI, run the following in the main project folder:

cd snapshot-trigger-service
cargo build
cargo install --path . --force

Quick Start

The simplest usage example is available by using commands:

  • register new job: snapshot-cli --endpoint https://snapshot.io job new --tag daily

NOTE: the response of the above call returns a job-id which should be used in the next call, e.g.:

b0b7b774-7263-4dce-a97d-c167169c8f27

  • check job id: snapshot-cli job status --job-id {job-id} --endpoint https://{ADDRESS}

full list of available commands

The full list of commands is available via the snapshot-cli --help command.

snapshot-trigger-service 0.1.0

USAGE:
    snapshot-cli.exe [OPTIONS] --endpoint <endpoint> <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -e, --endpoint <endpoint>    snapshot endpoint [env: SNAPSHOT_ENDPOINT=]
    -t, --token <token>          access token, which is necessary to perform client operations [env: SNAPSHOT_TOKEN=]

SUBCOMMANDS:
    files     retrieve files from snapshot (snapshot outcome etc.)
    health    check if snapshot service is up
    help      Prints this message or the help of the given subcommand(s)
    job       job related commands

Configuration

This section describes the configuration file which can be passed as an argument to the snapshot service:

  • port: port on which the snapshot-trigger-service will be exposed,

  • result-dir: path to the folder into which artifacts will be dumped (qr-code etc.),

  • voting-tools: voting tools internal parameters section:

    • bin: path to the voting-tools executable ("voting-tools"),
    • network: network type. Possible values:
      • mainnet
      • { "testnet": 1097911063 },
    • db: dbsync name,
    • db-user: dbsync user,
    • db-host: dbsync host,
    • scale: voting power multiplier. If 1, then Lovelace is used
  • voter-registration: path to the voter-registration executable,

  • vit-kedqr: path to the vit-kedqr executable,

  • token: token limiting access to environment. Must be provided in header API-Token for each request

Example:

{
  "port": 8080,
  "result-dir": "/persist/snapshot",
  "voting-tools": {
      "bin": "voting-tools",
      "network": "mainnet",
      "db": "dbsync",
      "db-user": "dbsync-admin",
      "db-host": "/alloc",
      "scale": 1000000
  },
  "token": "3568b599a65557b2a2e"
}

snapshot wormhole

Snapshot wormhole is a specialized REST client API project. It has a built-in scheduler for transferring the snapshot result file from snapshot-trigger-service to the vit-servicing-station service.

build

In main project folder run:

cd vit-testing/snapshot-wormhole
cargo build

and install:

cargo install --path .

run

quick start

The simplest run configuration is available by using command:

snapshot-wormhole --config snapshot-wormhole.config one-shot

which will perform a single transfer from snapshot-trigger-service to vit-servicing-station.

See config for configuration file details.

run modes

Two modes are available:

  • one-shot - ends the program after a single job is done,
  • schedule - runs the job continuously based on a cron string.

one-shot

This mode can be helpful for debugging or testing purposes, to verify that our configuration is correct and the services are available.

schedule

Starts the scheduler based on an input cron string. We use a custom cron format which allows programming the scheduler down to seconds.

The scheduling format is as follows:

| sec | min | hour | day of month | month | day of week | year |
|  *  |  *  |   *  |      *       |   *   |      *      |   *  |
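A minimal sketch of handling this 7-field format, e.g. splitting and validating the fields before handing them to a scheduler (illustrative only, not the wormhole's actual parser):

```rust
// Split a cron expression into the 7 fields described above:
// sec, min, hour, day-of-month, month, day-of-week, year.
fn cron_fields(expr: &str) -> Result<Vec<&str>, String> {
    let fields: Vec<&str> = expr.split_whitespace().collect();
    if fields.len() == 7 {
        Ok(fields)
    } else {
        Err(format!("expected 7 fields, got {}", fields.len()))
    }
}

fn main() {
    // every 15 minutes: fire at minutes 0, 15, 30 and 45 of every hour
    let every_15_min = "0 0/15 * * * * *";
    let fields = cron_fields(every_15_min).unwrap();
    assert_eq!(fields[1], "0/15"); // the minutes field
    println!("{:?}", fields);
}
```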

For example, to schedule a run every 15 minutes starting from now:

snapshot-wormhole --config wormhole-config.json schedule --cron "* 4/60 * * * *" --eagerly

full list of available commands

The full list of commands is available via the snapshot-wormhole --help command.

Configuration

This section describes the configuration file which can be passed as an argument when starting snapshot-wormhole:

snapshot service

This section describes the snapshot trigger service connection:

  • address: snapshot trigger REST api address,
  • token: optional access token,

servicing station service

This section describes the servicing station service connection:

  • address: servicing station service REST api address,

parameters

This section defines the snapshot import parameters used when applying a snapshot to the vit servicing station.

  • min_stake_threshold: minimum stake needed to participate in voting. Expressed in ada,
  • voting_power_cap: maximum voting power before capping, in order to ensure fairness in voting. Expressed as a fraction,
  • direct_voters_group: group name for direct voters (determines part of the REST path when accessing a particular group with a GET request),
  • representatives_group: group name for representatives (determines part of the REST path when accessing a particular group with a GET request)

Example:

{
    "snapshot_service": {
        "address": "http://127.0.0.1:9090",
        "token": "RBj0weJerr87A"
    },
    "servicing_station": {
        "address": "http://127.0.0.1:8080"
     },
    "parameters": {
        "min_stake_threshold": 500,
        "voting_power_cap": {
            "Rational": ["Plus", [1, 2]]
        },
        "direct_voters_group": "direct",
        "representatives_group": "rep"
    }
}
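As an illustration of the intent of voting_power_cap, a 1/2 cap bounds any single voter at half of the total voting power. The function below is a sketch of that idea, not the actual vit-servicing-station implementation:

```rust
// Cap a voter's power at num/den of the total voting power.
// Hypothetical helper; the real service applies this internally.
fn cap_power(power: u64, total: u64, num: u64, den: u64) -> u64 {
    let cap = total * num / den;
    power.min(cap)
}

fn main() {
    // with a 1/2 cap over a total of 10_000, a 7_000 whale is capped
    assert_eq!(cap_power(7_000, 10_000, 1, 2), 5_000);
    // smaller voters are unaffected
    assert_eq!(cap_power(300, 10_000, 1, 2), 300);
    println!("cap applied");
}
```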

valgrind

Valgrind is a REST API project which is a simplified proxy solution for the catalyst backend.

build

In main project folder run:

cd valgrind
cargo build

and install:

cargo install --path .

quick start

The simplest configuration is available by using command:

valgrind --block0_path block0.bin

By default valgrind will be exposed at 127.0.0.1:8000

client

The valgrind project also provides an API for interacting with the proxy server. Usage example:

#![allow(unused)]
fn main() {
    use valgrind::client::{ValgrindClient, ValgrindSettings};

    let settings = ValgrindSettings {
        enable_debug: false,
        use_https: false,
        certificate: None,
        cors: None,
    };

    let address = "0.0.0.0:8080".to_string();

    let client = ValgrindClient::new(address, settings);
    let fragment_logs = client.fragment_logs().unwrap();
}

Configuration

This section describes the configuration file which can be passed as an argument when starting valgrind:

  • address: address on which valgrind will be exposed. By default: 127.0.0.1:8000,
  • vit-address: vit servicing station address. By default: 127.0.0.1:3030,
  • node-address: node address. By default: 127.0.0.1:8080,
  • block0-path: path to the block0 file,
  • cert: path to the certificate (for enabling https). Optional,
  • key: path to the certificate key (for enabling https). Optional,

Example:

{
 "address": "127.0.0.1:8000",
 "vit-address": "127.0.0.1:3030",
 "node-address": "127.0.0.1:8080",
 "block0-path": "./block0.bin",
 "cert": "certificate.cert",
 "key": "certificate.key"
}

vitup

Vitup is a CLI project which is capable of bootstrapping the catalyst backend so it can be exercised by various tools. Its initial purpose is to provide a simple localhost backend for the catalyst voting app.

build

Before building vitup, all dependencies need to be installed:

  • valgrind
  • jormungandr
  • vit-servicing-station

Then, in order to build vitup, run in the main project folder: cargo build

and install:

cargo install --path vitup

quick start

The simplest configuration is available by using command:

vitup start quick

The default endpoint will be exposed at 0.0.0.0:80 and all data will be dumped to .\catalyst.

Configuration

A configuration file example is available under src/vit-testing/vitup/example/mock/config.yaml. This section describes the configuration file which can be passed as an argument to the vitup start mock command:

pub struct Configuration {
    pub ideascale: bool,
    pub protocol: valgrind::Protocol,
    #[serde(default)]
    pub local: bool,
}

  • port: port on which the mock service will be exposed,

  • token: token limiting access to environment. Must be provided in header API-Token for each request

  • working-dir: path to the folder into which artifacts will be dumped (qr-code etc.),

  • protocol: optional parameter controlling whether the service should be exposed through https. If so, two sub-parameters need to be defined, key_path and cert_path, like in the example below:

      "protocol": {
        "key_path": "./resources/tls/server.key",
        "cert_path": "./resources/tls/server.crt"
      }
    

    NOTE: certificates in resources folder are self-signed

Example:

{
  "port": 8080,
  "working-dir": "./mock",
  "protocol": {
    "key_path": "./resources/tls/server.key",
    "cert_path": "./resources/tls/server.crt"
  }
}

Data Generation

Configuration

This section describes the configuration file. It is passed as an argument when starting vitup. In some cases it can also be sent to already-running environments in order to restart them with new settings.

Initials

snapshot

Allows providing the initial voters and representatives which will be available in the initial snapshot.

see snapshot data creation guide for more details

block0

Allows providing the initial addresses/voters whose addresses will be put in block0. Supported syntax:

above threshold

Number of wallets which receive more than the value defined in the static_data.voting_power parameter.

Example:

{
 "above_threshold":30,
    "pin":"1234"
}

Pin would be set globally for all 30 addresses

below threshold

Number of wallets which receive less than the value defined in the static_data.voting_power parameter.

Example:

{
 "below_threshold":30,
    "pin":"1234"
}

Pin would be set globally for all 30 addresses

around level

Number of wallets which have funds around the defined level.

Example:

{
 "count":30,
    "level":1000,
    "pin":"1234"
}

zero funds

Number of wallets which won't have any funds in block0.

Example:

{
 "zero_funds":30,
    "pin":"1234"
}

named wallet

A wallet with a custom pin and an arbitrary amount of funds.

Example:

      {
        "name":"darek",
        "funds":8000,
        "pin":"1111"
      },

external wallet

A wallet with an address and pin, for users who have already generated an address outside vitup.

Example:

      {
        "address":"ca1qknqa67aflzndy0rvkmxhd3gvccme5637qch53kfh0slzkfgv5nwyq4hxu4",
        "funds":8000
      },

snapshot

Allows providing the initial addresses/voters whose addresses will be put in the initial snapshot. Supported syntax:

random

A number of random wallets which receive the specified amount of voting power.

Example:

  {
    "count": 2,
    "level": 5000
  },

external

A single entry with specified voting key and voting power

Example:

  {
    "key":"3877098d14e80c62c071a1d82e3df0eb9a6cd339a5f66e9ec338274fdcd9d0f4",
    "funds":300
  }

named

A single entry with the specified alias from block0 and an optional voting power. If the voting power is not defined, it is taken from the block0 section. If vitup cannot find the alias, it will produce an error.

Example:

  {
    "name": "darek",
    "funds": 100
  },

vote plan

vote time

The parameters below describe how long the vote will be active, for how long users can vote, and when the tally period begins.

In cardano, time is divided into epochs which consist of slots. There are 2 parameters that define how long an epoch lasts, slot_duration and slots_per_epoch, related by the equation: epoch_duration = slot_duration * slots_per_epoch.

For example, for given:

slot_duration = 2
slots_per_epoch = 10

each epoch will last 20 seconds.
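The equation can be checked directly:

```rust
// epoch_duration = slot_duration * slots_per_epoch, both in the
// units used by the configuration (seconds and slots).
fn epoch_duration_secs(slot_duration: u64, slots_per_epoch: u64) -> u64 {
    slot_duration * slots_per_epoch
}

fn main() {
    // the example above: slot_duration = 2, slots_per_epoch = 10
    assert_eq!(epoch_duration_secs(2, 10), 20); // 20 seconds
    println!("epoch lasts {} seconds", epoch_duration_secs(2, 10));
}
```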

vote_start, vote_tally, tally_end - describe 2 vote phases:

  • from vote_start to vote_tally : casting vote period, where we gather votes.
  • from vote_tally to tally_end: tallying vote period, where we gather voting results.

All above parameters are expressed in epochs. Be aware that slot_duration and slots_per_epoch influence the time at which each voting phase starts. For example, if we want to:

  • start the vote in 5 minutes,
  • allow users to cast votes for 20 minutes,
  • give 1 hour for the tally operation,

our setup would be like below:

"vote_start":1,
"vote_tally":4,
"tally_end":20,
"slots_per_epoch":60,

See jormungandr docs for more information.

NOTE: slot_duration is defined in blockchain section of configuration file
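Converting the epoch-based boundaries into wall-clock offsets is a single multiplication. The sketch below assumes slot_duration = 5 seconds (the actual value lives in the blockchain section of the configuration):

```rust
// Offset of an epoch boundary from block0, in seconds:
// epoch * slot_duration * slots_per_epoch.
fn phase_offset_secs(epoch: u64, slot_duration: u64, slots_per_epoch: u64) -> u64 {
    epoch * slot_duration * slots_per_epoch
}

fn main() {
    // assumed slot_duration = 5 s, slots_per_epoch = 60 -> epoch = 300 s
    let (slot_duration, slots_per_epoch) = (5, 60);
    // vote_start = 1 epoch after block0: voting opens in 5 minutes
    assert_eq!(phase_offset_secs(1, slot_duration, slots_per_epoch), 300);
    // vote_tally = 4: tallying starts 20 minutes after block0
    assert_eq!(phase_offset_secs(4, slot_duration, slots_per_epoch), 1_200);
    println!("ok");
}
```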

private

If true, voting is private, otherwise public. This parameter controls whether vote choices are encrypted or not.

representatives_vote_plan

TBD, currently not used

example

 "vote_plan": {
        "vote_time": {
            "vote_start": 13,
            "tally_start": 98,
            "tally_end": 140,
            "slots_per_epoch": 3600
        },
        "private": true,
        "representatives_vote_plan": false
    },

blockchain

Set of parameters which control blockchain-related configuration.

See jormungandr docs for more information.

slot_duration

Describes how frequently blocks are produced by the network. Slot duration is expressed in seconds and cannot be longer than 128.

block_content_max_size

Describes how big a single block can be. Larger blocks can hold more transactions, which results in faster transaction processing; however, this puts more requirements on space and network throughput.

block0_time

Optional parameter which defines the start time of block0. It is useful when one wants to define voting phases that start and end precisely at the required time. Otherwise, block0_time is equal to the current time when running vitup.

tx_max_expiry_epochs

Optional parameter which defines the maximum duration (expressed in epochs) of the transaction timeout. Usually it is equal to 1.

consensus_leader_ids

Allows overriding the randomly generated consensus leader ids. Useful when we have our own pre-generated leader keys for the nodes.

linear_fees

Transaction fees, which define the cost of a transaction or vote:

  • constant - constant fee added to each transaction
  • coefficient - coefficient of each transaction output
  • certificate - cost of sending certificate.

constant + transaction.output * coefficient + certificate

Example:

  "linear_fees": {
    "constant": 1,
    "coefficient": 1,
    "certificate": 2
  },

The above configuration results in:

For a transaction with 1 input and 1 output: 1 + 1 * 1 + 0 = 2

For a vote: 1 + 0 * 1 + 2 = 3
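The fee formula can be written down directly; the two assertions reproduce the worked examples:

```rust
// Linear fee: constant + outputs * coefficient + certificate
// (the certificate term applies only when one is present, e.g. a vote).
fn linear_fee(constant: u64, coefficient: u64, outputs: u64, certificate: u64) -> u64 {
    constant + outputs * coefficient + certificate
}

fn main() {
    // transaction with 1 input and 1 output: 1 + 1 * 1 + 0 = 2
    assert_eq!(linear_fee(1, 1, 1, 0), 2);
    // vote (certificate, no outputs): 1 + 0 * 1 + 2 = 3
    assert_eq!(linear_fee(1, 1, 0, 2), 3);
    println!("fees check out");
}
```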

committees

A committee is a wallet that is capable of tallying voting results. This setting allows using a predefined committee rather than one randomly generated by vitup.

data

This section describes static data used for voting. It mostly defines parameters for the servicing station.

current fund

Current fund related settings:

options

Defines the options available for voters. Should be expressed as comma-separated values. For example:

options: "yes,no"

proposals

Number of proposals available for voting

challenges

Number of challenges available for voting. A challenge is a container for proposals from the same domain.

reviews

Number of reviews for proposals

voting_power

Threshold for voting participation, expressed in ADA

fund_name

Name of fund

fund_id

Id of the fund. This parameter also controls the behavior of the catalyst voting app: if it changes between two funds, the voting app will refresh its state.

dates

proposal_submission_start

Date in RFC 3339 format. Defines the proposal submission start datetime.

insight_sharing_start

Date in RFC 3339 format. Defines the proposal insight sharing start datetime.

refine_proposals_start

Date in RFC 3339 format. Defines the proposal refinement start datetime.

finalize_proposals_start

Date in RFC 3339 format. Defines the proposal finalization start datetime.

proposal_assessment_start

Date in RFC 3339 format. Defines the proposal assessment start datetime.

assessment_qa_start

Date in RFC 3339 format. Defines the proposal assessment QA start datetime.

snapshot_time

Date in RFC 3339 format. Defines the snapshot datetime.

next_vote_start_time

Date in RFC 3339 format. Defines the date of the next voting. This date will be shown to users after the current voting ends.

next_snapshot_time

Date in RFC 3339 format. Defines the next snapshot datetime. This date will be shown to users after the current voting ends.

next funds

A limited subset of settings, compared to the current_fund section, for the next funds.

fund_name

Name of fund

fund_id

Id of the fund. This parameter also controls the behavior of the catalyst voting app: if it changes between two funds, the voting app will refresh its state.

dates

proposal_submission_start

Date in RFC 3339 format. Defines the proposal submission start datetime.

insight_sharing_start

Date in RFC 3339 format. Defines the proposal insight sharing start datetime.

refine_proposals_start

Date in RFC 3339 format. Defines the proposal refinement start datetime.

finalize_proposals_start

Date in RFC 3339 format. Defines the proposal finalization start datetime.

proposal_assessment_start

Date in RFC 3339 format. Defines the proposal assessment start datetime.

assessment_qa_start

Date in RFC 3339 format. Defines the proposal assessment QA start datetime.

snapshot_time

Date in RFC 3339 format. Defines the snapshot datetime.

next_vote_start_time

Date in RFC 3339 format. Defines the date of the next voting. This date will be shown to users after the current voting ends.

next_snapshot_time

Date in RFC 3339 format. Defines the next snapshot datetime.

service

Service related settings

NOTE: this section is ignored when only generating data using vitup.

version

Controls the version of the backend. By manipulating this parameter we can tell the voting app to force users to self-update the application.

https

Controls the protocol (HTTP or HTTPS) over which vitup is available to clients
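
A service section combining the two fields above might look like the following sketch (the field shapes are inferred from the descriptions above; check the source for the authoritative format):

```json
"service": {
   "version": "3.8",
   "https": true
}
```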

Full Example

{
   "initials":{
      "snapshot":{
         "tag":"daily",
         "content":[
            {
               "count":2,
               "level":1234
            },
            {
               "name":"alice"
            },
            {
               "name":"bob",
               "funds":10001
            }
         ]
      },
      "block0":[
         {
            "above_threshold":10,
            "pin":"1234"
         },
         {
            "name":"alice",
            "pin":"1234",
            "funds":10000
         },
         {
            "name":"bob",
            "pin":"1234",
            "funds":10000
         },
         {
            "zero_funds":10,
            "pin":"1234"
         }
      ]
   },
   "vote_plan":{
      "vote_time":{
         "vote_start":0,
         "tally_start":134,
         "tally_end":234,
         "slots_per_epoch":3600
      },
      "private":true
   },
   "blockchain":{
      "slot_duration":4,
      "block_content_max_size":20971520,
      "linear_fees":{
         "constant":0,
         "coefficient":0,
         "certificate":0
      }
   },
   "data":{
      "current_fund":{
         "options":"yes,no",
         "proposals":1134,
         "challenges":23,
         "reviews":7045,
         "voting_power":450,
         "fund_name":"Fund9",
         "fund_id":9,
         "dates":{
            "insight_sharing_start":"2022-05-01T12:00:00Z",
            "proposal_submission_start":"2022-05-02T12:00:00Z",
            "refine_proposals_start":"2022-05-03T12:00:00Z",
            "finalize_proposals_start":"2022-05-04T12:00:00Z",
            "proposal_assessment_start":"2022-05-04T12:00:00Z",
            "assessment_qa_start":"2022-05-05T12:00:00Z",
            "snapshot_time":"2022-05-07T12:00:00Z",
            "next_snapshot_time":"2023-05-07T12:00:00Z",
            "next_vote_start_time":"2022-07-14T12:00:00Z"
         }
      },
      "next_funds":[
         {
            "fund_name":"Fund10",
            "fund_id":10,
            "dates":{
               "insight_sharing_start":"2023-05-01T12:00:00Z",
               "proposal_submission_start":"2023-05-02T12:00:00Z",
               "refine_proposals_start":"2023-05-03T12:00:00Z",
               "finalize_proposals_start":"2023-05-04T12:00:00Z",
               "proposal_assessment_start":"2023-05-04T12:00:00Z",
               "assessment_qa_start":"2023-05-05T12:00:00Z",
               "snapshot_time":"2023-05-07T12:00:00Z",
               "voting_start":"2023-07-14T12:00:00Z",
               "voting_tally_end":"2023-07-14T12:00:00Z",
               "voting_tally_start":"2023-07-14T12:00:00Z",
               "next_snapshot_time":"2023-07-07T12:00:00Z",
               "next_vote_start_time":"2023-07-14T12:00:00Z"
            }
         }
      ]
   },
   "version":"3.8"
}

Configuration

This section describes the configuration which can be passed as an argument when starting vitup, or sent to already running environments in order to restart them through the REST API.

Example

{
    "parameters": {
        "tag": "latest"
    },
    "content": [
        {
            "rep_name": "alice",
            "ada": 1000
        },
        {
            "rep_name": "clarice",
            "ada": 1000
        },
        {
            "name": "bob",
            "registration": {
                "target": [
                    ["alice",1]
                ],
                "slotno": 0
            },
            "ada": 1000
        },
         {
            "name": "david",
            "registration": {
                "target": [
                    ["clarice",1]
                ],
                "slotno": 0
            },
            "ada": 1000
        }
    ]
}

Below is a more detailed explanation of each section element.

parameters

Snapshot parameters used when importing it to servicing station or mock.

  • tag - snapshot tag which will be used when importing snapshot
  • min_stake_threshold - Minimum lovelace which is required to participate in voting
  • voting_power_cap - Maximum percentage of voting power before capping
  • direct_voters_group - Name of direct registration holders
  • representatives_group - Name of delegated registrations holders (representatives)

content

Main content of snapshot

actor

For the user's convenience we allow an untagged definition of an actor. An actor can be a representative or a direct voter with some data. The role is determined dynamically from the fields that are present, so the user can focus only on the scenario description.

pre-generated representative

This variant will create a new unique wallet with the given amount of ADA

  • rep_name - alias
  • ada - voting power amount

external representative

A representative with just a voting key. Can be used for an already existing wallet

  • rep_name - alias
  • voting_key - voting key in hex

external delegator

A delegator with just an address. Can be used for an already existing wallet in the network

  • name - alias
  • address - address in hex
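
A sketch of the two external variants, assuming the field names listed above (the aliases are invented for illustration and the key/address values are placeholders in the document's hex style):

```json
[
   {
      "rep_name": "edgar",
      "voting_key": "241799302733178aca5c0beaa7a43d054cafa36ca5f929edd46313d49e6a0fd5"
   },
   {
      "name": "filip",
      "address": "0e3fe9b3e4098759df6f7b44bd9b962a53e4b7b821d50bb72cbcdf1ff7f669f8"
   }
]
```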

pre-generated delegator

This variant generates a delegator from scratch: a new mainnet wallet will be set up for it

  • name - alias
  • registration - registration definition describing to which representative the delegator delegates their voting power. The field needs to define the slot at which the delegation occurs and the distribution. Example:

...
  "registration": {
   "target": [ [ "clarice",1 ] ,[ "alice",2 ] ],
   "slotno": 0
  }
...

The above example divides the voting power into 3 parts and assigns 1/3 to clarice and 2/3 to alice.
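
The proportional split can be sketched as follows (the integer division here is an assumption for illustration; the actual implementation may distribute remainders differently):

```python
def split_voting_power(ada, targets):
    """Divide voting power among representatives in proportion to their weights."""
    total = sum(weight for _, weight in targets)
    return {name: ada * weight // total for name, weight in targets}

# "target": [["clarice", 1], ["alice", 2]] splits 900 ADA into thirds:
shares = split_voting_power(900, [("clarice", 1), ("alice", 2)])
```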

  • ada - ada amount

Data Generation

Mock

For developer convenience an in-memory backend is available. The idea is the same as above, but the environment is more lightweight and does not spawn jormungandr or vit-servicing-station. The mock is also capable of controlling more backend aspects than a normal deployment (cutting off connections, rejecting all fragments, etc.).

Configuration

Note: it is recommended to run the command from the vit-testing/vitup folder (then no explicit paths need to be provided). A configuration file example is available under vit-testing/vitup/example/mock/config.yaml

Start

vitup start mock --config example/mock/config.yaml

Admin rest commands

For postman collection please visit:

Requests collection

List Files

curl --location --request GET 'http://{mock_address}/api/control/files/list'

Get File

curl --location --request GET 'http://{mock_address}/api/control/files/get/{path_to_file}'

Health

curl --location --request GET 'http://{mock_address}/api/health'

Change Fund Id

curl --location --request POST 'http://{mock_address}/api/control/command/fund/id/{new_fund_id}'

Add new fund

curl --location --request PUT 'http://{mock_address}/api/control/command/fund/update' \
--header 'Content-Type: application/json' \
--data-raw '
{
  "id": 20,
  "fund_name": "fund_3",
  "fund_goal": "How will we encourage developers and entrepreneurs to build Dapps and businesses on top of Cardano in the next 6 months?",
  "voting_power_threshold": 8000000000,
  "fund_start_time": "2022-05-04T10:50:41Z",
  "fund_end_time": "2022-05-04T11:00:41Z",
  "next_fund_start_time": "2022-06-03T10:40:41Z",
  "registration_snapshot_time": "2022-05-04T07:40:41Z",
  "next_registration_snapshot_time": "2022-06-02T10:40:41Z",
  "chain_vote_plans": [
    {
      "id": 2136640212,
      "chain_voteplan_id": "ad6eaebafd2cca7e1829df26c57b340a98b9d513b7eddec8561883f1b99f3b9e",
      "chain_vote_start_time": "2022-05-04T10:50:41Z",
      "chain_vote_end_time": "2022-05-04T11:00:41Z",
      "chain_committee_end_time": "2022-05-04T11:10:41Z",
      "chain_voteplan_payload": "public",
      "chain_vote_encryption_key": "",
      "fund_id": 20
    }
  ],
  "challenges": [
    {
      "id": 1,
      "challenge_type": "community-choice",
      "title": "Universal even-keeled installation",
      "description": "Upgradable",
      "rewards_total": 7686,
      "proposers_rewards": 844,
      "fund_id": 20,
      "challenge_url": "http://schneider-group.info",
      "highlights": {
        "sponsor": "Kreiger and Wuckert and Sons"
      }
    }
  ]
}

'

Accept all Fragments

Makes the mock accept all further fragments sent to the environment

curl --location --request POST 'http://{mock_address}/api/control/command/fragments/accept'

Reject all Fragments

Makes the mock reject all further fragments sent to the environment

curl --location --request POST 'http://{mock_address}/api/control/command/fragments/reject'

Hold all Fragments

Makes the mock hold all further fragments sent to the environment

curl --location --request POST 'http://{mock_address}/api/control/command/fragments/pending'

Reset Fragment strategy

Makes the mock validate all further fragments sent to the environment

curl --location --request POST 'http://{mock_address}/api/control/command/fragments/reset'

Make backend unavailable

Mock will reject all connections (returns 500)

curl --location --request POST 'http://{mock_address}/api/control/command/available/false'

Make backend available

Mock will accept all connections

curl --location --request POST 'http://{mock_address}/api/control/command/available/true'

Make account endpoint unavailable

The mock will reject n calls to the account endpoint; as a result, the voting app won't receive voting power for some time. This endpoint assumes that whoever changes the block-account endpoint knows the frequency of calls from the client, so the number of rejected calls can be translated into a period of unavailability.

curl --location --request POST 'http://{mock_address}/api/control/command/block-account/{number_of_calls_to_reject}'

Make account endpoint available

The mock will reset the account endpoint's unavailability

curl --location --request POST 'http://{mock_address}/api/control/command/block-account/reset'

Add new voters snapshot for specific tag

Adds (or overwrites) the voters snapshot for this particular tag

curl --location --request POST 'http://{mock_address}/api/control/command/snapshot/add/{tag}' \
--header 'Content-Type: application/json' \
--data-raw '
  [{"voting_group":"direct","voting_key":"241799302733178aca5c0beaa7a43d054cafa36ca5f929edd46313d49e6a0fd5","voting_power":10131166116863755484},{"voting_group":"dreps","voting_key":"0e3fe9b3e4098759df6f7b44bd9b962a53e4b7b821d50bb72cbcdf1ff7f669f8","voting_power":9327154517439309883}]'

Create new voters snapshot for specific tag

Creates a snapshot JSON which can be uploaded to the mock using the ../snapshot/add command. See the mock configuration for more details. Example:

curl --location --request POST 'http://{mock_address}/api/control/command/snapshot/create' \
--header 'Content-Type: application/json' \
--data-raw '{
    "tag": "daily",
    "content": [
    {
        "count": 2,
        "level": 5000
    },
    {
        "name": "darek",
        "funds": 100
    },
    {
        "key":"318947a91d109da7109feaf4625c0cc4e83fe1636ed19408e43a1dabed4090a3",
        "funds":300
    }
]
}'

Reset environment

Resets environment data

curl --location --request POST 'http://{mock_address}/api/control/command/reset' \
--header 'Content-Type: application/json' \
--data-raw '{
  "initials": {
    "block0": [
      {
        "above_threshold": 10,
        "pin": "1234"
      },
      {
        "name": "darek",
        "pin": "1234",
        "funds": 10000
      }
    ]
  },
  "vote_plan": {
        "vote_time": {
            "vote_start": 0,
            "tally_start": 100,
            "tally_end": 140,
            "slots_per_epoch": 3600
        },
        "private": true
  },
  "blockchain": {
    "slot_duration": 2,
    "block_content_max_size": 20971520,
    "block0_time": "2022-03-17T05:00:00Z",
    "linear_fees": {
       "constant": 0,
       "coefficient": 0,
       "certificate": 0
    }
  },
  "data": {
    "options": "yes,no",
    "snapshot_time": "2022-01-06T11:00:00Z",
    "next_snapshot_time": "2022-04-07T11:00:00Z",
    "next_vote_start_time": "2022-04-11T11:00:00Z",
    "proposals": 936,
    "challenges": 25,
    "reviews": 5190,
    "voting_power": 450,
    "fund_name": "Fund7",
    "fund_id": 6
  },
  "version":"3.6"
}'

See the data generation guide for more details.

Control Health

Checks if mock is up

curl --location --request POST 'http://{mock_address}/api/control/health'

Logs

The mock stores a record of each request sent to it. This endpoint gets all logs from the mock

curl --location --request POST 'http://{mock_address}/api/control/logs/get'

Admin cli

The admin CLI is an alternative to all of the above calls, available under the vitup project.

example:

vitup-cli --endpoint {mock} disruption control health

Mock Farm

Mock farm is a simple extension of the mock service. It allows running more than one mock at once and gives the user more control in terms of starting and stopping particular mock instances.

Configuration

This section describes the configuration file which can be passed as an argument to the mock farm service:

  • port: port on which the mock farm service will be exposed,
  • working_directory: path to the folder into which config files will be dumped,
  • mocks-port-range: range of ports assigned for usage,
  • protocol: decides whether the mock farm should be exposed over http or https,
  • local: whether the service should be exposed on all network interfaces or only on 127.0.0.1,
  • token: token limiting access to the environment. Must be provided in the API-Token header for each request

Note: it is recommended to run the command from the vit-testing/vitup folder (then no explicit paths need to be provided). A configuration file example is available under vit-testing/vitup/example/mock-farm/config.yaml
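
A minimal configuration sketch using the field names listed above (all values are illustrative, and the `mocks-port-range` shape is an assumption; consult the example file for the authoritative format):

```yaml
port: 8080
working_directory: ./mock-farm-data
mocks-port-range: [10000, 10100]
protocol: http
local: true
token: some-api-token
```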

Start

vitup start mock-farm --config example/mock-farm/config.yaml

Documentation

Configuration modes

In order to take away the burden of providing the entire configuration, vitup has two configuration modes:

  • quick - runs on defaults and allows the user to override the most important parameters using CLI arguments:

vitup start quick

  • advanced - allows defining the full configuration as well as external static files for proposals and challenges:

vitup start advanced

Run Modes

There are 4 run modes available in vitup:

  • interactive - the user can push fragments or query the status of nodes
  • endless - [Default] simply runs until stopped by the user
  • service - an additional manager service is published at 0.0.0.0:3030. It allows controlling the environment (stop/start) and provides resources over HTTP (QR codes or secret keys)
  • mock - a lightweight version of the backend which does not spawn any jormungandr or vit-servicing-station services. The mock is also capable of controlling more backend aspects than a normal deployment (cutting off connections, rejecting all fragments, etc.)

Endless mode

There are two ways of starting vitup in endless mode: one with limited configuration and one giving full control.

vitup start quick --mode endless .. or

vitup start advanced --mode endless ..

Service mode

vitup start quick --mode service .. or

vitup start advanced --mode service ..

Once the environment is up, one can check the status of the existing environment or modify it:

Admin Operations

  • start - starts a new voting round

  • stop - stops the currently running vote backend (it usually takes about 1 minute to stop)

  • status - checks the status of the environment:

    1. Idle - environment is not started
    2. Starting - environment is starting; please wait until its status is Running
    3. Running - environment is running and should be accessible
    4. Stopping - environment is stopping; please wait until it is Idle before starting it with different parameters
  • files - in order to get QR codes or secret files from the environment, two operations are provided:

  1. List Files - lists all files in the data directory for the current run
  2. Get File - downloads a particular file visible in the List Files operation result

how to send operations

The voting backend admin console is a REST API accessible over HTTP or HTTPS on port 3030. Using the POST/GET HTTP methods, an admin can send operations to the environment. There are various apps capable of sending REST commands; the simplest is to download Postman and use its UI to fire up commands.

  1. Download postman: https://www.postman.com/downloads/
  2. Review quick guide, how to send dummy request: https://learning.postman.com/docs/getting-started/sending-the-first-request/
  3. Review guide, how to send different requests with arguments: https://learning.postman.com/docs/sending-requests/requests/

Available commands:

check environment status

  • Response Example:

    Running

start environment

Default parameters:

  • Response Example:

    start event received

Custom parameters:

  • Response Example:

    start event received

This request needs the environment configuration file to be passed in the body.

stop environment

  • Request Type: POST
  • Endpoint : http://{env_endpoint}:3030/api/control/command/stop
  • Response Example:
stop event received

list files

  • Request Type: GET
  • Endpoint : http://{env_endpoint}:3030/api/control/files/list
  • Response Example:
{
    "content": {
        "network": [
            "Leader4/node.log",
            "Leader4/node_config.yaml",
            "Leader4/node_secret.yaml",
            "vit_station/vit_config.yaml",
            "initial_setup.dot",
            "Leader1/node.log",
            "Leader1/node_config.yaml",
            "Leader1/node_secret.yaml",
            "Leader3/node.log",
            "Leader3/node_config.yaml",
            "Leader3/node_secret.yaml",
            "Leader2/node.log",
            "Leader2/node_config.yaml",
            "Leader2/node_secret.yaml",
            "Wallet_Node/node.log",
            "Wallet_Node/node_config.yaml",
            "Wallet_Node/node_secret.yaml"
        ],
        "qr-codes": [
            "qr-codes/zero_funds_12_0000.png",
            "qr-codes/wallet_25_above_8000_1234.png",
            "qr-codes/wallet_12_below_8000_9807.png",
            "qr-codes/wallet_30_below_8000_9807.png"
        ],
        "private_keys": [
            "wallet_13_below_8000",
            "wallet_26_below_8000",
            "wallet_23_above_8000",
            "wallet_26_above_8000"
        ],
        "private_data": [
            "fund_3_committees/ed25519_pk192pta739an4q6phkr4v7pxpgna5544mkkfh8ce6p0auxmk5j89xs0706fp/communication_key.sk",
            "fund_3_committees/ed25519_pk192pta739an4q6phkr4v7pxpgna5544mkkfh8ce6p0auxmk5j89xs0706fp/encrypting_vote_key.sk",
            "fund_3_committees/ed25519_pk192pta739an4q6phkr4v7pxpgna5544mkkfh8ce6p0auxmk5j89xs0706fp/member_secret_key.sk"
        ],
        "blockchain": [
            "block0.bin",
            "genesis.yaml"
        ]
    },
    "root": "./vit_backend",
    "blockchain_items": [
        "block0.bin",
        "genesis.yaml"
    ]
}

get files

A user can list or view files available for the current voting. To list all available files, the /api/control/files/list endpoint can be used. A relative path can then be provided to the /api/control/files/get/.. endpoint.
For example: http://{env_endpoint}:3030/api/control/files/get/qr-codes/zero_funds_12_0000.png

Interactive mode

TBD

Core VIT Servicing Station

vit-servicing-station-tests

The vit-servicing-station-tests project contains tests for vit-servicing-station. The tests validate server correctness, stability, and interaction with the database/REST API. There are also non-functional tests which verify durability and reliability.

Quick start

Prerequisites

In order to run the tests, vit-servicing-station-server needs to be installed or prebuilt.

Start tests

In order to run the test suite, from the main project folder run:

cd vit-servicing-station-tests
cargo test

Tests categories

Tests are categorized based on application/layer and the property under test (functional or non-functional: load, performance, etc.).

How to run all functional tests

cd vit-servicing-station-tests
cargo test

How to run performance tests

cd vit-servicing-station-tests
cargo test --features non-functional

How to run endurance tests

cd vit-servicing-station-tests
cargo test --features soak,non-functional

Frequency

Functional tests are run on each PR. Performance and testnet integration tests are run nightly

Unified Platform

Overview

The Catalyst-Cardano bridge is a custom bridge interface between Catalyst and a Cardano Node. It tracks data relevant to the unified Catalyst system, as it appears on the Cardano network, in real-time.

The bridge is not just a data logger, it also:

  • Acts as an event trigger for other Catalyst systems.
  • Acts as an information server for data pertinent to Catalyst operations.

Issues with the previous systems

Catalyst has used a tool called dbsync to acquire “snapshot” data. A “snapshot” is a record at a moment in time of all staked ADA in the network.

dbsync is a tool which captures a relational interpretation of the Cardano blockchain in an SQL database. This is useful for general-purpose queries of information contained on Cardano, but for querying bulk data it is slow and complex. The relational structure means that individual transactions need to be pieced together from multiple tables. Even with indexes, this exerts a heavy efficiency toll when a single transaction's state is queried. When bulk data is queried, it results in a large and complex query which takes a very long time to run (on the order of hours).

dbsync itself takes a very long time to sync to the blockchain, and gets progressively slower. As of mid-January 2023, one dbsync instance in a production environment took more than 5 days to sync with a local node.

It is supposed to be possible to recover a dbsync database from a backup; however, experience shows this is a time-consuming process itself. It took more than 12 hours just to load the backup image into the database, and then the node would not sync with mainnet. These issues cause excessive complexity, slow operation, and fragile environments.

Project Catalyst is also not in control of the dbsync database schema, and the schema can change between revisions. This could mean the entire database needs to be re-synced (taking days), or that a schema change breaks tools which rely on it.

The solution

The solution detailed here is a new bridge service, that has the following features:

  • Can sync from multiple redundant nodes.
  • Does not need to trust any single node (so it can sync from public nodes).
  • Focused on data and events required by Project Catalyst:
    • Registration Records at all points in the past.
    • Staked ADA at all points in the past.
    • Minimum necessary state to track staked ADA.
  • More efficient database schema.
  • Schema is not accessed directly but via a simple API Service.
    • Prevents downstream consumers from breaking if the DB Schema needs to change.
  • Does not need to snapshot:
    • Data is accumulated progressively, not at instants in time.
    • Data storage allows the state at any past time to be calculated simply and efficiently.
  • Is easy to independently deploy by the Catalyst Community, so they can independently validate data reported by Project Catalyst.
    • Distributed use does not rely on any Catalyst-supplied data, which improves auditability and trust.

Architecture Overview

The System has these components:

  • 1 or more Cardano Nodes (Preferably 2 or more)
  • A Pipeline which processes the data from the nodes:
    • Read blocks from multiple nodes
    • Validate blocks by independent reference (A valid block has n independent copies)
    • Queue valid blocks for processing.
    • Read valid blocks from the queue and process every transaction in the block.
      • Calculate the change in staked ADA caused by all transactions in the block.
      • Validate all Registration Records in the block:
        • Record all validated registrations.
        • Record all invalid registrations (including the reason the registration is invalid).
    • Queue the complete block of transactions, ledger state and registration updates for storing and alerting.
    • Lock the Databases for writing (Transactional)
    • Check if the block being recorded is new:
      • New:
        • Record the updated current ledger state.
        • Record the staked ADA for every stake address which changed in this block (time series record)
        • Record the registrations (time series record)
        • Send alerts to all upstream subscribers that subscribed events have changed.
        • Commit the transaction (unlocks the DB)
      • Already Recorded:
        • Abort the write transaction (release the DB)
        • Read the recorded data from the DB
        • Validate the DB data with the data calculated from the block.
        • If there is any discrepancy, LOG errors and send configured alerts.
  • A REST/HTTP service to report catalyst bridge data
    • Report current staked/unpaid rewards in ADA for any stake address.
    • Report staked/unpaid rewards in ADA for any stake address, at any past time.
    • Report staked/unpaid rewards over a period of previous time, with various processing:
      • Daily Averages
      • All records
      • other
    • Calculate voting power given a set of voting power options for a single address, or all registrations of a particular type.
      • Snapshot (instantaneous) voting power
      • Time window based voting power calculation
      • Linear vs functional voting power function of raw ADA.
      • Capped at a particular %
      • other parameters which can affect the voting power calculation.
  • Catalyst Event stream published via:
    • Kafka
    • other

Architectural Diagram


Integration to the Catalyst Unified Backend

The Cardano-Catalyst bridge is an essential and integral part of the Catalyst Unified backend. However, it is also a useful and capable tool in its own right.

It has a secondary use case of allowing the community to INDEPENDENTLY validate their registrations and voting power.

Accordingly, it is developed as a stand-alone service. This means it can be easily distributed and deployed INDEPENDENTLY of the rest of the catalyst unified backend services.

It has two internal long-running tasks:

  • Read, validate and record the latest registrations/delegations from the linked blockchain.
  • Read and record the running total balance and unclaimed rewards for every stake address.

It also exposes a Voting Power API: get voting power for a Stake Address or Voting Key as at a timestamp. This would respect the registrations valid at that time, so if you asked for your voting power while delegated, the API would return that you have X personal voting power, and Y..Z of your voting power has been delegated to keys A-B. Options:

  • Max registration age (so registrations before this date/time are not considered).
  • Must have a valid payment address. (So a valid payment address can later be made a necessity if required; this would also exclude using just the stake address.)
  • Voting power calculation type:
    • Absolute at the time of the snapshot.
    • Average.
    • Maximum daily value.
    • Parameter: length of time to average over (in days).
  • Voting power linearity:
    • Linear (1 ADA = X voting power), where X is a parameter.
    • Logarithmic (voting power is attenuated by a logarithmic function); would need parameters to define the curve.
    • Other?

Two further queries are planned:

  • Get registration/delegation information for a Stake Address/Voting Key as at a time. Similar to the above, but does not perform the voting power calculation.
  • Get all active registrations as at a time. Time and max age of registrations are parameters, as is whether stake addresses without a registration are included in the output.

Cardano Nodes

The bridge will need at least 1, and preferably more Cardano Nodes to read blocks from.

The Bridge will employ a local consensus model, in place of the absolute trust of a single node. Part of the configuration of the bridge will need to be:

  • the addresses of the available nodes that may be requested for new blocks.
  • the number of nodes which must send concurring blocks before a block is accepted.
  • the number of blocks to retrieve in advance of the current head.
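
As a purely illustrative sketch of those three settings (the bridge is a design proposal, and every field name and value below is invented):

```yaml
nodes:                  # addresses of the available nodes to request blocks from
  - relay1.example.com:3001
  - relay2.example.com:3001
  - relay3.example.com:3001
quorum: 2               # concurring copies required before a block is accepted
read_ahead: 10          # blocks to retrieve in advance of the current head
```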

Bridge Pipeline

Block Reader

REST HTTP Service

Event Stream

Database

The database is private to the catalyst-cardano bridge. Access to it is through the Catalyst-Cardano bridge service.

The schema for this database will be managed by the service and is expected to evolve. The concrete schema will be defined as the service is developed and will be responsive to the needs of the service, rather than defining an abstract “ideal” schema in advance and then being tied to an inefficient implementation.

Servers

Initially, the service will target and store data in Postgresql. However, it should be written in such a way that we can easily replace Postgresql with another DB, such as SurrealDB.

To achieve this, database interactions should be contained in a crate which abstracts them. For speed, the service should ALSO attempt to cache as much state internally as it can. However, it must be kept in mind that multiple services can and will update the backing database; accordingly, the internal state should be checked and refreshed/updated as required.

High-Level Data Design

There will initially be three logical databases, though they will NOT be in separate schemas and queries can join information between them.

Registrations Database

There will be a Registration Database. This is a time-series database, which means that updates do not replace old records; they are time-stamped instead. This allows the state “at a particular time” to be recovered without recreating it.

Data

The data that needs to be stored in each registration record is:

  • The time and date the registration was made.
    • Derived from the block date/time on Cardano, NOT the time it was detected.
  • The location of the transaction on the blockchain.
    • Allows the transaction to be verified against the blockchain.
  • The raw contents of the transaction.
    • The full raw transaction in binary.
    • Allows information not directly pertinent to Catalyst to be retained.
  • The Type of registration.
    • CIP-15
    • CIP-36
    • Others
      • Currently, there are only CIP-15 and CIP-36 voter registrations, however, there WILL be others.
  • Invalidity Report
    • Is the registration transaction Valid according to Catalyst transaction validity rules?
      • true - Then this field is NULL.
      • false - then this field contains a JSON formatted report detailing WHY it was invalid.
  • Registration specific fields
    • Fields which represent the meta-data of the registration itself.
    • These fields need to be able to be efficiently searched.

Queries

Examples of Common Queries:

Current Voter Registration

Given:

  • AsAt - The time the registration must be valid by.
  • Window - Optional, the maximum age of the registration before AsAt (Defaults to forever).
  • Stake Address - the Stake Address.
  • Valid - Must the registration be valid? (Tristate: True, False, None)
    • True - Only return valid registrations.
    • False - Only return invalid registrations IF there is no newer valid registration.
    • None - Return the most recent registration, Valid or not.

Return the MOST current registration.
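The selection rule above can be sketched in Rust over an in-memory time-series. This is a minimal illustration only, not the real schema or crate API: the `Registration` struct, the `current_registration` function, and the simplified integer timestamps are all assumptions made for the sketch.

```rust
// Illustrative sketch of the "Current Voter Registration" query.
// `valid` is the tristate filter: Some(true), Some(false), or None.
#[derive(Clone, Debug, PartialEq)]
struct Registration {
    stake_address: String,
    made_at: u64, // block date/time on Cardano (simplified to an integer)
    valid: bool,  // false => the invalidity report field would be populated
}

fn current_registration(
    records: &[Registration],
    stake_address: &str,
    as_at: u64,
    window: Option<u64>, // max age before `as_at`; None = forever
    valid: Option<bool>, // tristate validity filter
) -> Option<Registration> {
    let oldest = window.map(|w| as_at.saturating_sub(w)).unwrap_or(0);
    // Candidate registrations for this address inside [oldest, as_at].
    let mut in_window: Vec<&Registration> = records
        .iter()
        .filter(|r| {
            r.stake_address == stake_address && r.made_at <= as_at && r.made_at >= oldest
        })
        .collect();
    in_window.sort_by_key(|r| r.made_at);
    match valid {
        // Most recent registration, valid or not.
        None => in_window.last().cloned().cloned(),
        // Newest valid registration only.
        Some(true) => in_window.iter().rev().find(|r| r.valid).cloned().cloned(),
        // Invalid only, and only if no newer valid registration exists:
        // i.e. the most current record in the window is itself invalid.
        Some(false) => match in_window.last() {
            Some(r) if !r.valid => Some((*r).clone()),
            _ => None,
        },
    }
}

fn main() {
    let recs = vec![
        Registration { stake_address: "s1".into(), made_at: 10, valid: false },
        Registration { stake_address: "s1".into(), made_at: 20, valid: true },
        Registration { stake_address: "s1".into(), made_at: 30, valid: false },
    ];
    // Valid-only skips the newer invalid record.
    assert_eq!(current_registration(&recs, "s1", 35, None, Some(true)).unwrap().made_at, 20);
    // Tristate None returns the most recent record regardless of validity.
    assert_eq!(current_registration(&recs, "s1", 35, None, None).unwrap().made_at, 30);
    // Invalid-only returns nothing when the newest in-window record is valid.
    assert!(current_registration(&recs, "s1", 25, None, Some(false)).is_none());
}
```

The "All Current Voter Registrations" query is the same selection applied per unique stake address.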

All Current Voter Registrations

Given:

  • AsAt - The time the registration must be valid by.
  • Window - Optional, the maximum age of the registration before AsAt (Defaults to forever).
  • Valid - Must the registration be valid? (Tristate: True, False, None)
    • True - Only return valid registrations.
    • False - Only return invalid registrations IF there is no newer valid registration.
    • None - Return the most recent registration, Valid or not.

For each unique stake address: Return the MOST current registration.

Staked ADA Database

There will be a Staked ADA Database. This is a time-series database, which means that updates do not replace old records, they are time-stamped instead. This allows for the state “at a particular time” to be recovered without recreating it.

Data

The data that needs to be stored in each staked ADA record is:

  • The Stake address.
  • The time and date the ADA staked/rewarded changed.
    • Derived from the block date/time on Cardano, NOT the time it was detected.
    • IF the staked ADA changed MULTIPLE times in the same block:
      • this record contains the total at the END of all updates.
  • The block on the blockchain when this stake address total changed.
    • Allows the transaction/s to be verified against the blockchain.
  • The total staked ADA as at this Block.
  • The total unpaid-rewards ADA as at this Block.
    • Rewards are earned for stake addresses at the end of epochs.
    • They are specially accounted for and need to be withdrawn.
    • This total is the total of all ADA which has been awarded to the stake address but NOT yet withdrawn.

Note: ONLY stake addresses which CHANGE are recorded.

It’s possible (probable?) that ONLY one of the total staked ADA or total unpaid rewards ADA will update at a time in a single block. However, regardless of which total updates, the record must faithfully record the total current at that time for both.

For example:

  • Staked ADA starts at 1234 and Unpaid Rewards ADA starts at 95.
  • At 12:43 Staked ADA changes to 1200
    • A record is emitted for 12:43 where staked-ada = 1200 and unpaid-rewards = 95.
  • At 14:55 Unpaid Rewards changes to 143
    • A record is emitted for 14:55 where staked-ada = 1200 and unpaid-rewards = 143.

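The "emit both totals on every change" rule can be sketched as follows. This is an in-memory illustration of the rule, not the real database code; `StakeTracker`, `StakeRecord`, and the integer timestamps are assumptions made for the sketch.

```rust
// Sketch: whichever total changes, the emitted record faithfully carries
// the CURRENT value of BOTH totals at that time.
#[derive(Clone, Debug, PartialEq)]
struct StakeRecord {
    at: u64, // block date/time (simplified to an integer, e.g. 1243 ~ 12:43)
    staked: u64,
    unpaid_rewards: u64,
}

struct StakeTracker {
    staked: u64,
    unpaid_rewards: u64,
    history: Vec<StakeRecord>,
}

impl StakeTracker {
    fn new(staked: u64, unpaid_rewards: u64) -> Self {
        Self { staked, unpaid_rewards, history: Vec::new() }
    }

    fn set_staked(&mut self, at: u64, staked: u64) {
        self.staked = staked;
        self.emit(at);
    }

    fn set_unpaid_rewards(&mut self, at: u64, rewards: u64) {
        self.unpaid_rewards = rewards;
        self.emit(at);
    }

    // Every record snapshots both totals, not just the one that changed.
    fn emit(&mut self, at: u64) {
        self.history.push(StakeRecord {
            at,
            staked: self.staked,
            unpaid_rewards: self.unpaid_rewards,
        });
    }
}

fn main() {
    // Mirrors the worked example: start at 1234 staked / 95 unpaid rewards.
    let mut t = StakeTracker::new(1234, 95);
    t.set_staked(1243, 1200);        // 12:43 -> staked = 1200, rewards = 95
    t.set_unpaid_rewards(1455, 143); // 14:55 -> staked = 1200, rewards = 143
    assert_eq!((t.history[0].staked, t.history[0].unpaid_rewards), (1200, 95));
    assert_eq!((t.history[1].staked, t.history[1].unpaid_rewards), (1200, 143));
}
```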
Queries

Examples of Common Queries:

Current Staked ADA

Given:

  • AsAt - The time the balance must be valid by.
  • Window - Optional, the maximum age of the record before AsAt (Defaults to forever).
  • Stake Address - the Stake Address.

Return the MOST current Total ADA and Unpaid Rewards ADA for that address.

All Current Staked ADA

Given:

  • AsAt - The time the balance must be valid by.
  • Window - Optional, the maximum age of the record before AsAt (Defaults to forever).

For each unique stake address: Return the MOST current Total ADA and Unpaid Rewards ADA for that address.

Staked Balances for a period

Given:

  • AsAt - The time the balance must be valid by.
  • Age - The oldest record to return.

For the period requested return a list of all staked balances where each record in the list is:

  • date-time - The time this balance applies to.
  • slot - The slot on the Cardano blockchain the balance changed in.
  • staked - The total Staked at this time.
  • rewarded - The total Unclaimed rewards at this time.
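The period query above amounts to a range filter over the time-series, returned in chronological order. A minimal in-memory sketch, with illustrative names (`BalanceRecord`, `balances_for_period`) rather than the real schema:

```rust
// Sketch of "Staked Balances for a period": every balance record between
// the oldest bound (`age`) and `as_at`, inclusive, oldest first.
#[derive(Clone, Debug, PartialEq)]
struct BalanceRecord {
    at: u64,       // date-time this balance applies to
    slot: u64,     // Cardano slot the balance changed in
    staked: u64,   // total staked at this time
    rewarded: u64, // total unclaimed rewards at this time
}

fn balances_for_period(history: &[BalanceRecord], as_at: u64, age: u64) -> Vec<BalanceRecord> {
    let mut out: Vec<BalanceRecord> = history
        .iter()
        .filter(|r| r.at >= age && r.at <= as_at)
        .cloned()
        .collect();
    out.sort_by_key(|r| r.at); // chronological order
    out
}

fn main() {
    let history = vec![
        BalanceRecord { at: 10, slot: 100, staked: 1234, rewarded: 95 },
        BalanceRecord { at: 20, slot: 200, staked: 1200, rewarded: 95 },
        BalanceRecord { at: 30, slot: 300, staked: 1200, rewarded: 143 },
    ];
    // Records older than `age` fall outside the requested period.
    let period = balances_for_period(&history, 30, 15);
    assert_eq!(period.len(), 2);
    assert_eq!(period[0].at, 20);
}
```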

Transaction State Database

This is NOT a time-series database; it tracks:

  • the current state of the sync from the blockchain
  • all current UTXOs
  • any other data it needs to calculate staked-ADA changes as blocks arrive.

It is updated to track current state and is not historical.

This state is updated atomically, along with:

  • The staked ADA database
  • The registration database

This ensures that the DB is always in sync with discrete minted blocks on the blockchain. The DB NEVER stores a partial block update.
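The per-block atomicity requirement can be sketched like this. A copy-and-swap stands in for a real SQL transaction, and `Db`, `apply_block`, and the string stand-ins for rows are all illustrative assumptions, not the actual implementation:

```rust
// Sketch: all three logical databases advance together, one minted block
// at a time; a failed apply leaves nothing half-written.
#[derive(Clone, Default)]
struct Db {
    last_synced_block: u64,     // the one critical piece of sync state
    registrations: Vec<String>, // stand-in for registration rows
    stake_updates: Vec<String>, // stand-in for staked-ADA rows
}

fn apply_block(
    db: &mut Db,
    block: u64,
    regs: Vec<String>,
    stakes: Vec<String>,
) -> Result<(), String> {
    // Work on a copy and commit by swapping, so a failure part-way through
    // leaves `db` untouched (a real implementation would use a DB transaction).
    let mut tx = db.clone();
    if block != tx.last_synced_block + 1 {
        return Err(format!("expected block {}", tx.last_synced_block + 1));
    }
    tx.registrations.extend(regs);
    tx.stake_updates.extend(stakes);
    tx.last_synced_block = block;
    *db = tx; // all three updates become visible together
    Ok(())
}

fn main() {
    let mut db = Db::default();
    apply_block(&mut db, 1, vec!["reg-a".into()], vec!["stake-a".into()]).unwrap();
    // A gap in the block sequence is rejected and changes nothing.
    assert!(apply_block(&mut db, 3, vec!["reg-b".into()], vec![]).is_err());
    assert_eq!(db.last_synced_block, 1);
    assert_eq!(db.registrations.len(), 1);
}
```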

Data

There is no firm specification of the data that needs to be stored. It should be adapted to efficiently and quickly allow for the functions of the process to execute.

The ONLY critical information that it contains is the last block synced.

All other information and the structure of it will need to be decided during implementation.

Rust API

rustdoc API documentation


Workspace Dependency Graph






(Rendered Graphviz diagram: each node is a workspace crate — tests, catalyst-toolbox, chain-addr, chain-core, chain-crypto, chain-impl-mockchain, jormungandr, jormungandr-lib, jcli, wallet, wallet-core, vit-servicing-station-lib, vitup, snapshot-lib, event-db, cat-data-service, and the rest of the workspace — and each edge points from a crate to a workspace crate it depends on. Best viewed as the rendered diagram.)
External Dependencies Graph






(Rendered Graphviz diagram: the workspace crates above plus their direct external dependencies — serde, tokio, thiserror, tracing, warp, reqwest, diesel, clap, rand, futures, and many others — with edges from each crate to the crates it depends on. Best viewed as the rendered diagram.)





533->505





534->524





535->380





587

universal-hash 0.5.1



535->587





536->97





536->502





538->343





588

siphasher 0.3.11



539->588





540->321





541->499





541->500





589

finl_unicode



541->589





592

toml_edit



544->592





545->228





593

zerocopy



545->593





547->81





594

match_cfg



548->594





549->499





549->500





595

matches



549->595





550->85





550->283





596

ucd-trie



550->596





552->228





597

dirs-sys-next



552->597





554->180





555

untrusted 0.9.0



554->555





556->283





598

minimal-lexical



556->598





599

abnf-core



557->599





558->127





600

arrayvec 0.5.2



558->600





601

typed-arena



558->601





602

unicode-segmentation



558->602





603

os_str_bytes



559->603





562

lexical-parse-integer



561->562





563

lexical-util



562->563





563->345





565

lexical-write-integer



564->565





565->563





567->470





572

utf8parse



569->572





576

tap



577->576





581->554





583->97





586->228





586->380





604

universal-hash 0.4.1



586->604





587->502





587->505





590

serde_spanned



590->81





591

toml_datetime



591->81





592->291





592->590





592->591





605

winnow



592->605





599->556





604->97





604->502





605->283





Dependencies Graph incl. Build and Development Dependencies






*(Rendered graph data omitted.)* The graph, generated from the Cargo workspace, links every workspace crate — among them `jormungandr`, `jcli`, `catalyst-toolbox`, the `chain-*` libraries, the `vit-servicing-station-*` crates, `wallet`, `hersir`, `thor` and `vitup` — to its direct build and development dependencies.





703

windows_x86_64_gnu 0.48.5



476->703





704

windows_x86_64_gnullvm 0.48.5



476->704





705

windows_x86_64_msvc 0.48.5



476->705





477

fallible-iterator



706

phf_shared



478->706





479

postgres-protocol



479->78





479->105





479->113





479->123





479->298





479->323





479->444





479->477





707

md-5



479->707





708

stringprep



479->708





480->86





480->178





480->479





481->199





483->332





486->207





486->411





709

try-lock



487->709





488->411





489->352





491->411





492

refinery-core



492->81





492->91





492->92





492->96





492->107





492->185





492->236





710

siphasher 1.0.0



492->710





711

toml 0.7.8



492->711





493->492





712

ahash 0.8.6



494->712





713

allocator-api2



494->713





496

proc-macro-hack



495->496





497->412





714

jobserver



498->714





715

wasm-bindgen-macro-support



500->715





501->346





502->85





503->86





503->145





503->502





716

cargo-platform



503->716





505->516





506->85





507->85





717

siphasher 0.3.11



507->717





718

uniffi_checksum_derive



507->718





508

beef



508->85





509->86





509->91





509->94





509->220





509->508





511->81





511->140





511->266





511->414





512->382





513->207





513->298





719

windows_aarch64_gnullvm 0.42.2



514->719





720

windows_aarch64_msvc 0.42.2



514->720





514->720





721

windows_i686_gnu 0.42.2



514->721





514->721





722

windows_i686_msvc 0.42.2



514->722





514->722





723

windows_x86_64_gnu 0.42.2



514->723





514->723





724

windows_x86_64_gnullvm 0.42.2



514->724





725

windows_x86_64_msvc 0.42.2



514->725





514->725





515->584





517->516





519

opentelemetry_api



519->91





519->224





519->413





519->414





520->78





520->148





520->166





520->245





520->286





520->332





520->456





520->519





521->66





521->212





523

tonic 0.8.3



521->523





726

tonic-build 0.8.4



521->726





522

prost 0.11.9



522->123





727

prost-derive 0.11.9



522->727





523->121





523->148





523->175





523->253





523->432





523->433





523->522





525->516





526->140





526->366





527->342





527->362





728

widestring



527->728





528->208





529->610





729

hostname



529->729





530->78





530->91





530->93





530->94





530->96





530->107





530->166





530->204





530->224





530->329





530->417





730

idna 0.2.3



530->730





532

async-graphql-parser



531->532





731

Inflector



531->731





732

darling 0.14.4



531->732





733

proc-macro-crate 1.3.1



531->733





533

async-graphql-value



532->533





734

pest



532->734





533->86





533->123





533->413





735

ascii_utils



534->735





535->412





537->389





537->635





538->386





736

dirs-next



538->736





539->411





540->199





540->411





540->498





737

spin 0.5.2



540->737





738

untrusted 0.7.1



540->738





618

ring 0.17.5



541->618





542->618





543->207





543->411





740

fuchsia-cprng



543->740





742

rdrand



543->742





544->411





545->546





546->411





548->85





548->549





549->240





550->197





550->207





550->411





550->740





550->742





743

cloudabi



550->743





551->86





551->182





745

schemars_derive



551->745





552->196





553->178





746

nom



553->746





555->72





555->413





747

abnf



555->747





748

pretty



555->748





557->113





558->391





558->400





558->413





558->425





558->599





749

clap_derive 3.2.25



558->749





750

clap_lex 0.2.4



558->750





751

textwrap 0.16.0



558->751





559->400





559->449





560->294





560->313





752

crossterm_winapi



560->752





754

signal-hook-mio



560->754





561->412





563->307





755

lexical-parse-float



564->755





758

lexical-write-float



564->758





565

pest_meta



565->105





565->734





566->565





568->196





569->92





569->140





569->400





570->107





570->414





571

ciborium-io



572->571





572->654





573->516





573->656





574->114





574->207





574->295





574->388





574->476





576->352





577->85





577->165





578->107





578->352





579->543





580->166





580->191





580->224





580->330





580->339





580->386





580->434





582->85





582->282





584->195





584->382





585->352





586->254





586->360





586->403





586->407





587

quanta



586->587





760

atomic-shim



586->760





761

hashbrown 0.11.2



586->761





762

sketches-ddsketch



586->762





587->199





587->404





587->411





763

mach



587->763





587->763





764

raw-cpuid



587->764





587->764





765

wasi 0.10.2+wasi-snapshot-preview1



587->765





589->124





590->124





591->594





766

proc-macro-error



591->766





767

unicode-segmentation



594->767





596->107





596->236





596->370





596->511





768

anstyle-parse



597->768





769

anstyle-query



597->769





770

anstyle-wincon



597->770





771

colorchoice



597->771





773

gimli



600->773





601->323





603->85





603->240





604->548





605->86





605->107





605->352





605->385





774

graphql-introspection-query



605->774





775

graphql-parser



605->775





607->298





776

adler32



607->776





608->776





609->516





609->666





611->403





612->73





612->114





612->313





614

openssl-sys



612->614





777

foreign-types



612->777





778

openssl-macros



612->778





614->498





614->592





614->593





615->364





616->425





617

security-framework-sys



616->617





616->620





617->207





617->427





618->195





618->364





618->498





618->675





739

untrusted 0.9.0



618->739





619->618





620->207





620->427





621->207





621->427





779

proc-macro-crate 2.0.0



622->779





780

syn_derive



622->780





781

funty



624->781





782

radium



624->782





784

wyz



624->784





625

bytecheck



626

ptr_meta



625->626





785

bytecheck_derive



625->785





786

simdutf8



625->786





787

ptr_meta_derive



626->787





627->625





628->352





788

tinyvec_macros



630->788





634->352





637->632





640->630





643->103





644->103





644->104





645->207





645->207





645->207





645->364





648->207





650->449





652

plotters-backend



653->652





655->516





657

instant



657->114





658->207





658->295





658->411





658->657





789

redox_syscall 0.2.16



658->789





662->665





662->733





662->766





791

darling_macro 0.20.3



664->791





665->352





665->647





667->425





668->412





669->93





671->347





792

fixedbitset



671->792





672->144





673->73





673->307





673->389





793

home



673->793





674->130





676->78





676->91





676->96





676->130





676->298





676->417





676->446





676->484





794

utf-8



676->794





677->165





677->644





795

inout



677->795





678->498





678->592





679->104





679->641





796

base64ct



679->796





797

zstd-sys



680->797





681->719





681->720





681->720





681->721





681->721





681->722





681->722





681->723





681->723





681->724





681->725





681->725





682->103





683->114





683->383





683->454





684

cipher 0.3.0



683->684





684->103





685->684





798

polyval



686->798





687->644





688->383





688->677





689->383





689->454





799

universal-hash 0.5.1



689->799





690->103





690->641





691->91





691->195





800

libredox



691->800





693->417





694->207





695->207





695->425





696->207





697->498





698->476





706->717





707->384





708->639





708->640





801

finl_unicode



708->801





804

toml_edit 0.19.15



711->804





712->73





712->114





712->382





805

zerocopy



712->805





714->207





806

wasm-bindgen-backend



715->806





716->85





718->352





809

prost-build 0.11.9



726->809





727->72





727->220





727->352





729->207





729->411





810

match_cfg



729->810





730->639





730->640





811

matches



730->811





731->81





731->107





813

darling_macro 0.14.4



732->813





733->804





734->91





734->323





814

ucd-trie



734->814





815

dirs-sys-next



736->815





741

rand_core 0.3.1



744

rand_core 0.4.2



741->744





742->741





743->425





816

serde_derive_internals



745->816





746->323





817

minimal-lexical



746->817





818

abnf-core



747->818





748->140





748->767





819

arrayvec 0.5.2



748->819





820

typed-arena



748->820





749->385





749->766





821

os_str_bytes



750->821





752->411





753

signal-hook



753->361





754->359





754->753





756

lexical-parse-integer



755->756





757

lexical-util



756->757





757->419





759

lexical-write-integer



758->759





759->757





760->404





760->404





761->584





763->207





764->425





766->352





822

proc-macro-error-attr



766->822





772

utf8parse



768->772





769->364





770->265





770->364





774->85





775->91





823

combine



775->823





824

foreign-types-shared



777->824





778->412





825

toml_edit 0.20.7



779->825





780->412





780->766





783

tap



784->783





785->352





787->352





789->425





790

darling_core 0.20.3



790->412





790->414





790->599





826

ident_case



790->826





791->790





793->364





795->103





797->498





797->592





798->114





798->383





798->454





827

universal-hash 0.4.1



798->827





799->641





799->644





800->207





800->313





800->388





802

serde_spanned



802->85





803

toml_datetime



803->85





804->347





804->802





804->803





828

winnow



804->828





829

zerocopy-derive



805->829





806->73





806->140





807

wasm-bindgen-shared



806->807





830

bumpalo



806->830





808

prettyplease



808->352





809->81





809->107





809->111





809->140





809->385





809->670





809->671





809->673





809->808





831

prost-types 0.11.9



809->831





812

darling_core 0.14.4



812->352





812->414





812->599





812->826





813->812





815->411





815->691





816->352





818->746





822->346





822->382





823->298





823->307





823->323





832

ascii



823->832





833

unreachable



823->833





825->347





825->803





825->828





827->103





827->641





828->323





829->412





831->522





834

void



833->834





🦀 Rust Style Guide

This guide is intended to be a set of guidelines, not hard rules. These represent the defaults for Rust code. Exceptions can (and sometimes should) be made.

Toolchain

We use the latest stable version of Rust. You can get an up-to-date toolchain by running nix develop. If you’re not a Nix user, make sure you have the correct versions.

Basic Rules

  • Formatting is “whatever rustfmt does”. In cases where rustfmt doesn’t yet work (e.g. macros, let-else), try to stay consistent with the rest of the codebase.
  • Clippy should be used whenever possible, with pedantic lints turned on. Some lints (particularly those from pedantic) are generally unhelpful, often due to high false-positive rates. There is a list of known exceptions that can be added to if you run into anything particularly bad.
  • Clippy is not enabled for older parts of the codebase. This is allowed for legacy code, but any new code should have clippy enabled. We’re actively working to get it enabled on everything.
  • Avoid raw identifiers. Instead, use abbreviations/misspellings (e.g. r#crate -> krate, r#type -> ty, etc.)

TLDR: run:

cargo fmt
cargo clippy
cargo clippy --all-features

before submitting a PR.

Creating a new crate

We add the following preamble to all crates’ lib.rs:

#![warn(clippy::pedantic)]
#![forbid(clippy::integer_arithmetic)]
#![forbid(missing_docs)]
#![forbid(unsafe_code)]
#![allow(/* known bad lints outlined below */)]

We enable #![forbid(missing_docs)] for a couple of reasons:

  • it forces developers to write doc comments for publicly exported items
  • it serves as a reminder that the item you’re working on is part of your public API

We enable #![forbid(unsafe_code)] to reinforce the fact that unsafe code should not be mixed in with the rest of our code. More details are below.

We enable #![forbid(clippy::integer_arithmetic)] to prevent you from writing code like:

let x = 1;
let y = 2;
let z = x + y;

Why is this bad?

Integer arithmetic may panic or behave unexpectedly depending on build settings. In debug mode, overflows cause a panic, but in release mode, they silently wrap. In both modes, division by 0 causes a panic.

By forbidding integer arithmetic, you have to choose a behaviour, by writing either:

  • a.checked_add(b) to return an Option that you can error-handle
  • a.saturating_add(b) to return a + b, or the max value if an overflow occurred
  • a.wrapping_add(b) to return a + b, wrapping around if an overflow occurred

By being explicit, we prevent the developer from “simply not considering” how their code behaves in the presence of overflows. In a ledger application, silently wrapping could be catastrophic, so we really want to be explicit about what behaviour we expect.
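The three explicit strategies above can be sketched as follows (`add_fee` is a hypothetical helper for illustration, not part of the codebase):

```rust
// Overflow is surfaced as a `None` instead of panicking or wrapping.
fn add_fee(balance: u64, fee: u64) -> Option<u64> {
    balance.checked_add(fee)
}

fn main() {
    assert_eq!(add_fee(100, 20), Some(120));
    assert_eq!(add_fee(u64::MAX, 1), None); // overflow is an error we can handle

    // Saturating: clamps to the maximum value instead of wrapping.
    assert_eq!(u64::MAX.saturating_add(1), u64::MAX);

    // Wrapping: explicitly opts in to modular arithmetic.
    assert_eq!(u64::MAX.wrapping_add(1), 0);
}
```

Each call site now states which overflow behaviour was chosen, so a reviewer never has to guess.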

Exceptions for clippy

These lints are disabled:

  • clippy::match_bool - a match statement with true => and false => arms is sometimes more concise and equally readable
  • clippy::module_name_repetition - warns when creating an item with a name that ends with the name of the module it’s in
  • clippy::derive_partial_eq_without_eq - warns when deriving PartialEq and not Eq. This is a semver hazard. Deriving Eq is a stronger semver guarantee than just PartialEq, and shouldn’t be the default.
  • clippy::missing_panics_doc - this lint warns when a function might panic, but the docs don’t have a panics section. This lint is buggy, and doesn’t correctly identify all panics. Code should be written to explicitly avoid intentional panics. You should still add panic docs if a function is intended to panic under some conditions. If a panic may occur, but you’d consider it a bug if it did, don’t document it. We disable this lint because it creates a false sense of security.

Guidelines

Prefer references over generics

It’s tempting to write a function like this:

fn use_str(s: impl AsRef<str>) {
  let s = s.as_ref();
  println!("{s}");
}

Unfortunately, this has a few downsides:

  • it increases compile times
  • if used in a trait, it makes that trait not object-safe
  • if the body of the function is large, it bloats binary size, which can hurt performance by increasing pressure on the instruction cache
  • it makes type inference harder

Now that’s not to say you should never use generics. Of course, there are plenty of good reasons to use generics. But if the only reason to make your function generic is “slightly easier to use at the call-site”, consider just using a plain reference/slice instead:

fn use_str(s: &str) {
  println!("{s}");
}

This does mean you may have to use .as_ref() at the call-site, but generally this is preferred compared to the downsides of using generics.

Similar logic applies to AsRef<Path>, Into<String>, and a few other common types. The general principle is that a little bit of extra text at the call-site is usually worth the benefits from not having generic functions.

Abbreviations and naming things

We should be careful with abbreviations. Similar to above, they do indeed shorten the code you write, but at the cost of some readability. It’s important to balance the readability cost against the benefits of shorter code.

Some guidelines for when to use abbreviations:

  • if it’s something you’re going to type a lot, an abbreviation is probably the right choice. (e.g. s is an OK name for a string in very string-heavy code)
  • if it’s a well-known abbreviation, it’s probably good (e.g. ctx for “context”, db for “database”)
  • if it’s ambiguous (i.e. it could be short for multiple things) either use the full word, or a longer abbreviation that isn’t ambiguous.
  • Remember that abbreviations are context-sensitive. (if I see db in a database library, it’s probably “database”. If I see it in an audio processing library it is probably “decibels”).

General advice around names

  • avoid foo.get_bar(), instead just call it foo.bar()
  • use into_foo() for conversions that consume the original data
  • use as_foo() for conversions that convert borrowed data to borrowed data
  • use to_foo() for conversions that are expensive
  • use into_inner() for extracting a wrapped value
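These conventions can be seen together on a small wrapper type (a minimal sketch; `Wrapper` is a hypothetical example, not a real type in the codebase):

```rust
struct Wrapper {
    value: String,
}

impl Wrapper {
    // Borrowed -> borrowed: a cheap view of the data, so `as_*`.
    fn as_str(&self) -> &str {
        &self.value
    }

    // Expensive conversion (allocates a new String), so `to_*`.
    fn to_uppercase(&self) -> String {
        self.value.to_uppercase()
    }

    // Extracts the wrapped value, consuming the wrapper.
    fn into_inner(self) -> String {
        self.value
    }
}

fn main() {
    let w = Wrapper { value: "hello".to_string() };
    assert_eq!(w.as_str(), "hello");
    assert_eq!(w.to_uppercase(), "HELLO");
    assert_eq!(w.into_inner(), "hello"); // consumes `w`
}
```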

Pay attention to the public API of your crate

Items (functions, modules, structs, etc) should be private by default. This is what Rust does anyways, but make sure you pay attention when marking something pub.

Try to keep the public API of your crate as small as possible. It should contain only the items needed to provide the functionality it’s responsible for.

A good “escape hatch” is to mark things as pub(crate). This makes the item pub but only within your crate. This can be handy for “helper functions” that you want to use everywhere within your crate, but don’t want to be available outside.
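As a sketch of the pattern (module and function names here are hypothetical):

```rust
mod storage {
    // Reachable anywhere inside this crate, but never exported to
    // downstream crates.
    pub(crate) fn normalize_key(key: &str) -> String {
        key.trim().to_lowercase()
    }
}

mod api {
    // Part of the crate's public API.
    pub fn lookup(key: &str) -> String {
        // The helper is usable here because we're in the same crate.
        crate::storage::normalize_key(key)
    }
}

fn main() {
    assert_eq!(api::lookup("  Foo "), "foo");
}
```

Only `api::lookup` would appear in the crate's documented surface; the helper stays an internal detail.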

Type safety

Rust has a powerful type system, so use it!

Where possible, encode important information in the type system. For example, using NonZeroU64 might make sense if it would be ridiculous for a number to be zero. Of course, you can go too far with this. Rust’s type system is Turing-complete, but we don’t want to write our whole program in the type system.
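For instance, a division helper can take NonZeroU64 so that “divide by zero” is ruled out at the type level (a minimal sketch; `per_item_cost` is a hypothetical function):

```rust
use std::num::NonZeroU64;

// The type guarantees the divisor is never zero, so no runtime
// check (and no panic path) is needed inside the function.
fn per_item_cost(total: u64, items: NonZeroU64) -> u64 {
    total / items.get()
}

fn main() {
    let items = NonZeroU64::new(4).expect("4 is non-zero");
    assert_eq!(per_item_cost(100, items), 25);

    // Zero is rejected at construction time, not deep inside the logic.
    assert!(NonZeroU64::new(0).is_none());
}
```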

Use newtypes (a.k.a. microtypes)

If you are handling email addresses, don’t use String. Instead, create a newtype wrapper:

struct Email(String);

This prevents you from using, say, a Password where you meant to use an Email, which catches more bugs at compile time.

Consider using the microtype library to generate boilerplate:

#[string]
String {
  Email,
  Username,
  Address,
  // etc...
}

This generates struct Email(String), struct Username(String), etc. for you. See the docs for more info.

If your type is responsible for handling secret data, mark it #[secret] to:

  • zeroize the memory on drop
  • redact the Debug impl
  • prevent serialization
  • prevent use without .expose_secret()

Don’t over-abstract

In general, prefer plain functions over a struct + trait implementation. For example, instead of this:

// BAD
trait GetBar {
  fn bar(&self) -> &Bar;
}

impl GetBar for Foo {
  fn bar(&self) -> &Bar {
    &self.bar
  }
}

write this:

// GOOD
impl Foo {
  fn bar(&self) -> &Bar {
    &self.bar
  }
}

I.e., don’t use a trait if you don’t need it.

A common reason why people do this is to mock out a particular function call for testing. This can be useful in a few select places, such as interacting with the real world (e.g. networking, clocks, randomness). However, it has some significant downsides:

  • it means you’re not actually testing this code. This might be fine for some types of code (e.g. database code). It might be unreasonable to rely on a database for unit tests. However, if your whole test suite is organized around this, your business logic won’t get tested.
  • it forces you to use a trait, which has restrictions that plain functions don’t have:
    • it forces you into either generics or dynamic dispatch (often with a heap allocation if you don’t want to play the lifetime game)
    • you may now have to think about object safety, which can be very tricky for some APIs
    • async functions are not properly supported
    • it’s not usable in a const context

Some alternative patterns are:

  • try to rewrite your test to avoid needing a mock
  • if you know all the variants at compile time, consider using an enum
  • swap out the implementation with conditional compilation
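The enum alternative can be sketched like this (a hypothetical `Clock`, assuming the set of implementations is known at compile time):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// All variants are known at compile time, so no trait, no generics,
// no dynamic dispatch -- and tests just pick the deterministic variant.
enum Clock {
    System,
    Fixed(u64),
}

impl Clock {
    fn now(&self) -> u64 {
        match self {
            Clock::System => SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .expect("system time is after the epoch")
                .as_secs(),
            Clock::Fixed(secs) => *secs,
        }
    }
}

fn main() {
    // In a test, use the fixed variant for deterministic behaviour.
    let test_clock = Clock::Fixed(1_000);
    assert_eq!(test_clock.now(), 1_000);
}
```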

Unsafe code

If you need unsafe code, put it in its own crate with a safe API. And really think hard about whether you need unsafe code. There are times when you absolutely do need it, but this project cares more about correctness than performance.

If you find yourself wanting to use unsafe, try the following:

  • if you want to create bindings to a C/C++ library:
    • First, see if there is a pure-Rust implementation.
    • Otherwise, search on crates.io for a -sys crate.
  • if you want to create a cool data structure that requires unsafe:
    • does it really need unsafe?
    • is it a doubly linked list? If so, have you got benchmarks that show that a VecDeque is insufficient? Something something cache-friendly…
    • is there a suitable implementation on crates.io?
    • is this data structure really noticeably better than what we have in std?
  • if you want to do a performance optimization (e.g. using unreachable_unchecked() to remove a bounds check):
    • Encode it in the type system, and put it in a separate crate with a safe API.
    • If you can’t do that, it’s probably an indication that the mental model is also too complicated for a human to keep track of.
  • if you want to write a test that makes sure the code does the right thing “even in the presence of UB”, just don’t

All unsafe code must be tested with Miri.

Docs

As mentioned above, we enable #![forbid(missing_docs)] on all new code. But that doesn’t mean we shouldn’t document private items. Ideally, we’d document as much as possible.

Of course, tiny helper functions, or functions whose behaviour is obvious at a glance, don’t need documentation. For example, this sort of comment doesn’t add much:

/// Sets self.bar to equal bar
fn set_bar(&mut self, bar: Bar) {
  self.bar = bar;
}

If this is a private member, don’t bother with this comment. If it’s public, something like this is fine just to get clippy to shut up. But if it’s at all unclear what’s going on, try to use a more descriptive comment.

If adding a dependency, add a comment explaining what the dependency does.

Doctests

Try to use doctests. Especially for less obvious code, a small example can be really helpful. Humans learn by copying examples, so providing some can drastically reduce the amount of time a new contributor needs to become productive.

If you need some setup for your tests that you don’t want to render in docs, prefix the line with #. When combined with the include macro, this can lead to pretty concise but also powerful test setup.
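For example, the hidden-setup convention looks like this (`mylib` and `parse_version` are hypothetical names used purely for illustration):

```rust
/// Parses a "major.minor" version string.
///
/// ```
/// # // Lines starting with `#` run in the doctest but are hidden
/// # // in the rendered documentation.
/// # use mylib::parse_version;
/// assert_eq!(parse_version("1.2"), Some((1, 2)));
/// ```
fn parse_version(s: &str) -> Option<(u32, u32)> {
    let (major, minor) = s.split_once('.')?;
    Some((major.parse().ok()?, minor.parse().ok()?))
}

fn main() {
    assert_eq!(parse_version("1.2"), Some((1, 2)));
    assert_eq!(parse_version("oops"), None);
}
```

Readers of the rendered docs see only the one-line assertion, while the doctest still compiles and runs the full setup.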

If you need some inspiration, check out the doctests for diesel.

Write code as if it’s going to be in a web server

Write code as if it’s going to end up being run in a web server. This means a few things:

  • all inputs are potentially malicious
  • code should be usable as a library without going through a text interface (i.e. your library should expose a Rust API)

Error handling

Error handling in Rust is complex, which represents the real-world complexity of error handling.

Broadly speaking, there are two types of error:

Expected errors are errors that are expected to occur during normal operation of the application. For example, even in bug-free code, network timeout errors are still expected, since networking is inherently fallible. The exact error-handling strategy may vary, but often involves returning a Result.

Unexpected errors are errors that are not expected to occur. If they do occur, it represents a bug. These errors are handled by panicking. As much as possible, we try to make these cases impossible by construction by using the correct types for data. For example, imagine you have a struct that represents “a list with at least one element”. You could write:

struct NonEmptyList<T> {
  inner: Vec<T>,
}

impl<T> NonEmptyList<T> {
  /// Doesn't need to return an Option<&T> because the list is guaranteed to have at least 1 element
  fn first(&self) -> &T {
    self.inner.get(0).expect("guaranteed to have at least 1 element")
  }
}

This would be fine, since it represents a bug if this panic is ever hit. But it would be better to write it like this:

struct NonEmptyList<T> {
  head: T,
  tail: Vec<T>,
}

impl<T> NonEmptyList<T> {
  fn first(&self) -> &T {
    &self.head
  }
}

This provides the compiler with more information about the invariants of our type. This allows us to eliminate the error at compile time.

Handling expected errors

Well-behaved code doesn’t panic. So if our response to encountering an expected error is to panic, our software is not well-behaved.

Instead, we should use Result<T, E> to represent data that might be an error. But how do we pick E?

There are two main choices for E:

Use thiserror for recoverable errors

In contexts where we may want to recover from errors, we should use a dedicated error type. We generate these with thiserror:

use thiserror::Error;

#[derive(Debug, Error)]
enum FooError {
  #[error("failed to bar")]
  Bar,

  #[error("failed to baz")]
  Baz,
}

This allows the user to write:

match try_foo() {
  Ok(foo) => println!("got a foo: {foo}"),
  Err(FooError::Bar) => eprintln!("failed to bar"),
  Err(FooError::Baz) => eprintln!("failed to baz"),
}

Use color_eyre for unrecoverable errors

In contexts where we don’t want to recover from errors, use Report from the color_eyre crate. This is a trait object based error type which allows you to “fire and forget” an error. While technically possible, it’s less ergonomic to recover from a Result<T, Report>. Therefore, only use this in contexts where the correct behaviour is “exit the program”. This is commonly the case in CLI apps.

However, even in CLI apps, it’s good practice to split the logic into a lib.rs file (or modules) and have a separate binary.

Web API

Catalyst Core API V1

Event DB crate

The event-db crate abstracts all database operations.

This will allow us to iterate on the backend data storage without requiring large changes to the consuming services.

Overview Diagram

Catalyst Event Database Overview (ER diagram). The diagram’s tables and foreign-key relationships are reconstructed below; a trailing + marks an auto-incrementing primary-key column, and FK → marks a foreign key.

  • event: row_id (integer+), name (text), description (text), registration_snapshot_time (timestamp), snapshot_start (timestamp), voting_power_threshold (bigint), max_voting_power_pct (numeric), review_rewards (bigint), start_time (timestamp), end_time (timestamp), insight_sharing_start (timestamp), proposal_submission_start (timestamp), refine_proposals_start (timestamp), finalize_proposals_start (timestamp), proposal_assessment_start (timestamp), assessment_qa_start (timestamp), voting_start (timestamp), voting_end (timestamp), tallying_end (timestamp), block0 (bytea), block0_hash (text), committee_size (integer), committee_threshold (integer), extra (jsonb), cast_to (jsonb)
  • objective: row_id (integer+), id (integer), event (integer, FK → event.row_id), category (text, FK → objective_category.name), title (text), description (text), deleted (boolean), rewards_currency (text, FK → currency.name), rewards_total (bigint), rewards_total_lovelace (bigint), proposers_rewards (bigint), vote_options (integer, FK → vote_options.id), extra (jsonb)
  • proposal: row_id (integer+), id (integer), objective (integer, FK → objective.row_id), title (text), summary (text), category (text), public_key (text), funds (bigint), url (text), files_url (text), impact_score (bigint), deleted (boolean), extra (jsonb), proposer_name (text), proposer_contact (text), proposer_url (text), proposer_relevant_experience (text), bb_proposal_id (bytea), bb_vote_options (ARRAY, FK → vote_options.objective)
  • ballot: row_id (bigint+), objective (integer, FK → objective.row_id), proposal (integer, FK → proposal.row_id), voter (integer, FK → voter.row_id), fragment_id (text), cast_at (timestamp), choice (smallint), raw_fragment (bytea)
  • voter: row_id (bigint+), voting_key (text), snapshot_id (integer), voting_group (text), voting_power (bigint)
  • contribution: row_id (bigint+), stake_public_key (text), snapshot_id (integer, FK → snapshot.row_id), voting_key (text), voting_weight (integer), voting_key_idx (integer), value (bigint), voting_group (text), reward_address (text)
  • snapshot: row_id (integer+), event (integer), as_at (timestamp), as_at_slotno (integer), last_updated (timestamp), last_updated_slotno (integer), final (boolean), dbsync_snapshot_cmd (text), dbsync_snapshot_params (jsonb), dbsync_snapshot_data (bytea), dbsync_snapshot_error (bytea), dbsync_snapshot_unregistered (bytea), drep_data (bytea), catalyst_snapshot_cmd (text), catalyst_snapshot_params (jsonb), catalyst_snapshot_data (bytea)
  • tally_committee: row_id (integer+), event (integer), committee_pk (text), committee_id (text), member_crs (text), election_key (text)
  • committee_member: row_id (integer+), committee (integer, FK → tally_committee.row_id), member_index (integer), threshold (integer), comm_pk (text), comm_sk (text), member_pk (text), member_sk (text)
  • config: row_id (integer+), id (varchar), id2 (varchar), id3 (varchar), value (jsonb)
  • currency: name (text), description (text)
  • objective_category: name (text), description (text)
  • vote_options: id (integer+), idea_scale (ARRAY), objective (ARRAY)
  • goal: id (integer+), event_id (integer, FK → event.row_id), idx (integer), name (varchar)
  • proposal_review: row_id (integer+), proposal_id (integer, FK → proposal.row_id), assessor (varchar), assessor_level (integer), reward_address (text), impact_alignment_rating_given (integer), impact_alignment_note (varchar), feasibility_rating_given (integer), feasibility_note (varchar), auditability_rating_given (integer), auditability_note (varchar), ranking (integer), flags (jsonb)
  • moderation: row_id (integer+), review_id (integer, FK → proposal_review.row_id), user_id (integer, FK → config.row_id), classification (integer), rationale (varchar)
  • moderation_allocation: row_id (integer+), review_id (integer, FK → proposal_review.row_id), user_id (integer, FK → config.row_id)
  • review_metric: row_id (integer+), name (varchar), description (varchar), min (integer), max (integer), map (ARRAY)
  • objective_review_metric: row_id (integer+), objective (integer, FK → objective.row_id), metric (integer, FK → review_metric.row_id), note (boolean), review_group (varchar)
  • reviewer_level: (column list truncated in the source)

Column

Type


row_id


integer+

name

text

total_reward_pct

numeric

event_id

integer



proposal_review:assessor_level_out->reviewer_level:row_id





proposal_voteplan


proposal_voteplan

Column

Type


row_id


integer+

proposal_id

integer

voteplan_id

integer

bb_proposal_index

bigint



proposal_voteplan:proposal_id_out->proposal:row_id





voteplan


voteplan

Column

Type


row_id


integer+

objective_id

integer

id

varchar

category

text

encryption_key

varchar

group_id

text

token_id

text



proposal_voteplan:voteplan_id_out->voteplan:row_id





review_rating


review_rating

Column

Type


row_id


integer+

review_id

integer

metric

integer

rating

integer

note

varchar



review_rating:review_id_out->proposal_review:row_id





review_rating:metric_out->review_metric:row_id





reviewer_level:event_id_out->event:row_id





snapshot:event_out->event:row_id





tally_committee:event_out->event:row_id





voteplan:objective_id_out->objective:row_id





voteplan_category


voteplan_category

Column

Type


name


text

public_key

boolean



voteplan:category_out->voteplan_category:name





voting_group


voting_group

Column

Type


name


text



voteplan:group_id_out->voting_group:name





voter:snapshot_id_out->snapshot:row_id





voting_node


voting_node

Column

Type


hostname


text


event


integer

pubkey

text

seckey

text

netkey

text



voting_node:event_out->event:row_id





LEGEND


LEGEND

Type

Example


Primary Key


integer+

Standard Field

bytea

Nullable Field

text

Sized Field

varchar(32)

Autoincrement Field

integer+



Event DB Local Development

The Event DB targets a PostgreSQL database; for local development, an instance must be running.

Installing PostgreSQL

Please see your operating system's specific guide for installing and configuring PostgreSQL.

Initialize the Database

After Postgres is installed, as the postgres user:

  1. Initialize the database (if it doesn't already exist). The recommended initialization command (on Linux or macOS) is:

    [postgres@host]$ initdb --locale=C.UTF-8 --encoding=UTF8 -D /var/lib/postgres/data --data-checksums
    
  2. Create a Development user

    [postgres@host]$ createuser -P catalyst-dev
    

    When prompted, enter a password, e.g. "CHANGE_ME".

  3. Create a Development Database:

    [postgres@host]$ createdb CatalystDev
    
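After these steps you can connect as the development user (for example `psql -U catalyst-dev CatalystDev`) and run a quick sanity check. The user and database names are the ones created above:

```sql
-- Confirm the server is reachable and you are connected to the right database.
SELECT version();
SELECT current_database(); -- expected: CatalystDev
```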

General Configuration

These tables are used for general configuration and database management.

VitSS Compatibility Diagram



[ERD diagram: Catalyst Event Database — Configuration. Tables: config and
refinery_schema_history. Column-level documentation is duplicated in the
Schema section below.]



Schema

-- Catalyst Event Database

-- Version of the schema.

CREATE TABLE IF NOT EXISTS refinery_schema_history
(
    version     INTEGER NOT NULL PRIMARY KEY,
    name        VARCHAR(255),
    applied_on  VARCHAR(255),
    checksum    VARCHAR(255)
);

COMMENT ON TABLE refinery_schema_history IS
'History of Schema Updates to the Database.
Managed by the `refinery` cli tool.
';

-- Config Table
-- This table is looked up with three keys, `id`, `id2` and `id3`

CREATE TABLE config
(
  row_id SERIAL PRIMARY KEY,
  id     VARCHAR NOT NULL,
  id2    VARCHAR NOT NULL,
  id3    VARCHAR NOT NULL,
  value  JSONB NULL
);

-- id+id2+id3 must be unique, they are a combined key.
CREATE UNIQUE INDEX config_idx ON config(id,id2,id3);

COMMENT ON TABLE config IS
'General JSON Configuration and Data Values.
Defined  Data Formats:
  API Tokens:
    `id` = "api_token"
    `id2` = <API Token, encrypted with a secret, as base-64 encoded string "">`
    `id3` = "" (Unused),
    `value`->"name" = "<Name of the token owner>",
    `value`->"created" = <Integer Unix Epoch when Token was created>,
    `value`->"expires" = <Integer Unix Epoch when Token will expire>,
    `value`->"perms" = {Permissions assigned to this api key}

  Community reviewers:
    `id` = `email`
    `id2` = `encrypted_password`
    `id3` = `salt`
    `value`->"role" = `role`
    `value`->"name" = `name`
    `value`->"anonymous_id" = `<anonymous_id of the PA>`
    `value`->"force_reset" = "<bool used to force reset of password>"
    `value`->"active" = "<bool used to activate account>"

  IdeaScale parameters:
    `id` = "ideascale"
    `id2` = "params"
    `id3` = <String identifying a fund, e.g. "F10">
    `value`->"campaign_group_id" = <IdeaScale campaign group id>
    `value`->"stage_ids" = <List of IdeaScale stage ids>

  Event IdeaScale parameters:
    `id` = "event"
    `id2` = "ideascale_params"
    `id3` = <Event row_id (as a string)>
    `value`->"params_id" = <String identifier of the Ideascale parameters in the config table>
';

COMMENT ON COLUMN config.row_id IS
'Synthetic unique key.
Always lookup using id.';
COMMENT ON COLUMN config.id IS  'The name/id of the general config value/variable';
COMMENT ON COLUMN config.id2 IS
'2nd ID of the general config value.
Must be defined, use "" if not required.';
COMMENT ON COLUMN config.id3 IS
'3rd ID of the general config value.
Must be defined, use "" if not required.';
COMMENT ON COLUMN config.value IS 'The JSON value of the system variable id.id2.id3';

COMMENT ON INDEX config_idx IS 'We use three keys combined uniquely rather than forcing string concatenation at the app level to allow for querying groups of data.';
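As an illustration of the three-part key lookup, here is a sketch of storing and retrieving a hypothetical IdeaScale parameter entry, following the data format documented in the table comment above (the specific values are placeholders):

```sql
-- Store IdeaScale parameters for a hypothetical fund "F10".
INSERT INTO config (id, id2, id3, value)
VALUES ('ideascale', 'params', 'F10',
        '{"campaign_group_id": 123, "stage_ids": [1, 2, 3]}');

-- Lookups always use the full three-part key.
SELECT value->'stage_ids'
FROM config
WHERE id = 'ideascale' AND id2 = 'params' AND id3 = 'F10';

-- The combined index also allows querying groups of related values.
SELECT id3, value
FROM config
WHERE id = 'ideascale' AND id2 = 'params';
```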

Event Definition Table

This table defines the root data and schedules for all Catalyst events.

Event Table Diagram



[ERD diagram: Catalyst Event Database — Event. Tables: event and goal.
Column-level documentation is duplicated in the Schema section below.]



Schema

-- Catalyst Event Database


-- Event Table - Defines each voting or decision event

CREATE TABLE event
(
    row_id SERIAL PRIMARY KEY,

    name TEXT NOT NULL,
    description TEXT NOT NULL,

    registration_snapshot_time TIMESTAMP,
    snapshot_start TIMESTAMP,
    voting_power_threshold BIGINT,
    max_voting_power_pct NUMERIC(6,3) CONSTRAINT percentage CHECK (max_voting_power_pct <= 100 AND max_voting_power_pct >= 0),

    review_rewards BIGINT,

    start_time TIMESTAMP,
    end_time TIMESTAMP,

    insight_sharing_start TIMESTAMP,
    proposal_submission_start TIMESTAMP,
    refine_proposals_start TIMESTAMP,
    finalize_proposals_start TIMESTAMP,
    proposal_assessment_start TIMESTAMP,
    assessment_qa_start TIMESTAMP,
    voting_start TIMESTAMP,
    voting_end TIMESTAMP,
    tallying_end TIMESTAMP,

    block0 BYTEA NULL,
    block0_hash TEXT NULL,

    committee_size INTEGER NOT NULL,
    committee_threshold INTEGER NOT NULL,

    extra JSONB,
    cast_to JSONB
);

CREATE UNIQUE INDEX event_name_idx ON event(name);

COMMENT ON TABLE event IS 'The basic parameters of each voting/decision event.';
COMMENT ON COLUMN event.row_id IS 'Synthetic Unique ID for each event.';
COMMENT ON COLUMN event.name IS
'The name of the event.
eg. "Fund9" or "SVE1"';
COMMENT ON COLUMN event.description IS
'A detailed description of the purpose of the event.
eg. the events "Goal".';
COMMENT ON COLUMN event.registration_snapshot_time IS
'The Time (UTC) Registrations are taken from Cardano main net.
Registrations after this date are not valid for voting on the event.
NULL = Not yet defined or Not Applicable.';
COMMENT ON COLUMN event.snapshot_start IS
'The Time (UTC) Registrations taken from Cardano main net are considered stable.
This is not the Time of the Registration Snapshot,
This is the time after which the registration snapshot will be stable.
NULL = Not yet defined or Not Applicable.';
COMMENT ON COLUMN event.voting_power_threshold IS
'The Minimum number of Lovelace staked at the time of snapshot, to be eligible to vote.
NULL = Not yet defined.';
COMMENT ON COLUMN event.review_rewards IS 'The total reward pool to pay for community reviewers for their valid reviews of the proposals assigned to this event.';
COMMENT ON COLUMN event.start_time IS
'The time (UTC) the event starts.
NULL = Not yet defined.';
COMMENT ON COLUMN event.end_time IS
'The time (UTC) the event ends.
NULL = Not yet defined.';
COMMENT ON COLUMN event.insight_sharing_start IS
'TODO.
NULL = Not yet defined.';
COMMENT ON COLUMN event.proposal_submission_start IS
'The Time (UTC) proposals can start to be submitted for the event.
NULL = Not yet defined, or Not applicable.';
COMMENT ON COLUMN event.refine_proposals_start IS
'TODO.
NULL = Not yet defined.';
COMMENT ON COLUMN event.finalize_proposals_start IS
'The Time (UTC) when all proposals must be finalized by.
NULL = Not yet defined, or Not applicable.';
COMMENT ON COLUMN event.proposal_assessment_start IS
'The Time (UTC) when PA Assessors can start assessing proposals.
NULL = Not yet defined, or Not applicable.';
COMMENT ON COLUMN event.assessment_qa_start IS
'The Time (UTC) when vPA Assessors can start assessing assessments.
NULL = Not yet defined, or Not applicable.';
COMMENT ON COLUMN event.voting_start IS
'The earliest time that registered wallets with sufficient voting power can place votes in the event.
NULL = Not yet defined.';
COMMENT ON COLUMN event.voting_end IS
'The latest time that registered wallets with sufficient voting power can place votes in the event.
NULL = Not yet defined.';
COMMENT ON COLUMN event.tallying_end IS
'The latest time that tallying the event can complete by.
NULL = Not yet defined.';

COMMENT ON COLUMN event.block0      IS
'The copy of Block 0 used to start the Blockchain.
NULL = Blockchain not started yet.';

COMMENT ON COLUMN event.block0_hash IS
'The hash of block 0.
NULL = Blockchain not started yet.';

COMMENT ON COLUMN event.committee_size  IS
'The size of the tally committee.
0 = No Committee, and all votes are therefore public.';

COMMENT ON COLUMN event.committee_threshold  IS
'The minimum size of the tally committee to perform the tally.
Must be <= `committee_size`';

COMMENT ON COLUMN event.extra IS
'Json Map defining event specific extra data.
NULL = Not yet defined.
"url"."results" = a results URL,
"url"."survey" = a survey URL,
others can be defined as required.';

COMMENT ON COLUMN event.cast_to IS
'Json Map defining parameters which control where the vote is to be cast.
Multiple destinations can be defined simultaneously.
In this case the vote gets cast to all defined destinations.
`NULL` = Default Jormungandr Blockchain.
```jsonc
"jorm" : { // Voting on Jormungandr Blockchain
    chain_id: <int>, // Jormungandr chain id. Defaults to 0.
    // Other parameters TBD.
},
"cardano" : { // Voting on Cardano Directly
    chain_id: <int>, // 0 = pre-prod, 1 = mainnet.
    // Other parameters TBD.
},
"postgres" : { // Store votes in Web 2 postgres backed DB only.
    url: "<postgres URL. Defaults to system default>"
    // Other parameters TBD.
    // Note: Votes that arrive in the Cat1 system are always stored in the DB.
    // This Option only allows the vote storage DB to be tuned.
},
"cat2" : { // Store votes to the Catalyst 2.0 P2P Network.
    gateway: "<URL of the gateway to use"
    // Other parameters TBD.
}

';
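To illustrate how the schedule and `cast_to` columns are typically queried, here is a sketch (column names from the schema above):

```sql
-- Events whose voting window is currently open.
SELECT row_id, name, voting_end - NOW() AS voting_time_remaining
FROM event
WHERE voting_start <= NOW()
  AND voting_end > NOW();

-- Events that cast votes somewhere other than the default Jormungandr chain.
SELECT name, cast_to
FROM event
WHERE cast_to IS NOT NULL;
```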



Objective and Proposal Tables

These tables define the data known about Challenges and Proposals.

Objective and Proposal Table Diagram



[ERD diagram: Catalyst Event Database — Objectives & Proposals. Tables:
currency, objective, objective_category, vote_options, objective_review_metric,
review_metric, proposal, proposal_review, reviewer_level and review_rating,
with foreign-key links back to event. Column-level documentation is duplicated
in the schema sections below.]



Objective Schema

-- Catalyst Event Database

-- objective types table - Defines all currently known objectives types.
CREATE TABLE objective_category
(
    name TEXT PRIMARY KEY,
    description TEXT
);

COMMENT ON TABLE objective_category IS 'Defines all known and valid objective categories.';
COMMENT ON COLUMN objective_category.name IS 'The name of this objective category.';
COMMENT ON COLUMN objective_category.description IS 'A Description of this kind of objective category.';

-- Define known objective categories
INSERT INTO objective_category (name,  description)
VALUES
    ('catalyst-simple','A Simple choice'),
    ('catalyst-native','??'),
    ('catalyst-community-choice','Community collective decision'),
    ('sve-decision','Special voting event decision');

-- known currencies - Defines all currently known currencies.
CREATE TABLE currency
(
    name TEXT PRIMARY KEY,
    description TEXT
);

COMMENT ON TABLE currency IS 'Defines all known and valid currencies.';
COMMENT ON COLUMN currency.name IS 'The name of this currency type.';
COMMENT ON COLUMN currency.description IS 'A Description of this kind of currency type.';


-- Define known currencies
INSERT INTO currency (name,  description)
VALUES
    ('USD_ADA','US Dollars, converted to Cardano ADA at time of reward calculation.'),
    ('ADA','Cardano ADA.'),
    ('CLAP', 'CLAP tokens.'),
    ('COTI', 'COTI tokens.');

-- known vote options - Defines all currently known vote options.
CREATE TABLE vote_options
(
    id SERIAL PRIMARY KEY,

    idea_scale TEXT ARRAY UNIQUE,
    objective TEXT ARRAY UNIQUE
);

COMMENT ON TABLE vote_options IS 'Defines all known vote plan option types.';
COMMENT ON COLUMN vote_options.id IS 'Unique ID for each possible option set.';
COMMENT ON COLUMN vote_options.idea_scale IS 'How this vote option is represented in idea scale.';
COMMENT ON COLUMN vote_options.objective IS 'How the vote options is represented in the objective.';

-- Define known vote_options
INSERT INTO vote_options (idea_scale,  objective)
VALUES
    ('{"blank", "yes", "no"}','{"yes", "no"}');



-- goals

CREATE TABLE goal
(
    id SERIAL PRIMARY KEY,
    event_id INTEGER NOT NULL,

    idx INTEGER NOT NULL,
    name VARCHAR NOT NULL,

    FOREIGN KEY(event_id) REFERENCES event(row_id) ON DELETE CASCADE
);

CREATE UNIQUE INDEX goal_index ON goal(event_id, idx);

COMMENT ON TABLE goal IS 'The list of campaign goals for this event.';
COMMENT ON COLUMN goal.id IS 'Synthetic Unique Key.';
COMMENT ON COLUMN goal.idx IS 'The index specifying the order/priority of the goals.';
COMMENT ON COLUMN goal.name IS 'The description of this event goal.';
COMMENT ON COLUMN goal.event_id IS 'The ID of the event this goal belongs to.';
COMMENT ON INDEX goal_index IS 'An index to enforce uniqueness of the relative `idx` field per event.';


-- objective table - Defines all objectives for all known funds.


CREATE TABLE objective
(
    row_id SERIAL PRIMARY KEY,

    id INTEGER NOT NULL,
    event INTEGER NOT NULL,

    category TEXT NOT NULL,
    title TEXT NOT NULL,
    description TEXT NOT NULL,

    deleted BOOLEAN NOT NULL DEFAULT FALSE,

    rewards_currency TEXT,
    rewards_total BIGINT,
    rewards_total_lovelace BIGINT,
    proposers_rewards BIGINT,
    vote_options INTEGER,

    extra JSONB,

    FOREIGN KEY(event) REFERENCES event(row_id) ON DELETE CASCADE,
    FOREIGN KEY(category) REFERENCES objective_category(name) ON DELETE CASCADE,
    FOREIGN KEY(rewards_currency) REFERENCES currency(name) ON DELETE CASCADE,
    FOREIGN KEY(vote_options) REFERENCES vote_options(id) ON DELETE CASCADE
);

CREATE UNIQUE INDEX objective_idx ON objective (id, event);

COMMENT ON TABLE objective IS
'All objectives for all events.
An objective is a group category for selection in an event.';
COMMENT ON COLUMN objective.row_id IS 'Synthetic Unique Key';
COMMENT ON COLUMN objective.id IS
'Event specific objective ID.
Can be non-unique between events (e.g., the Ideascale ID for the objective).';
COMMENT ON COLUMN objective.event IS 'The specific Event ID this objective is part of.';
COMMENT ON COLUMN objective.category IS
'What category of objective is this.
See the objective_category table for allowed values.';
COMMENT ON COLUMN objective.title IS 'The title of the objective.';
COMMENT ON COLUMN objective.description IS 'Long form description of the objective.';
COMMENT ON COLUMN objective.deleted IS 'Flag which defines was this objective (challenge) deleted from ideascale or not. DEPRECATED: only used for ideascale compatibility.';
COMMENT ON COLUMN objective.rewards_currency IS 'The currency rewards values are represented as.';
COMMENT ON COLUMN objective.rewards_total IS 'The total reward pool to pay on this objective to winning proposals. In the Objective Currency.';
COMMENT ON COLUMN objective.rewards_total_lovelace IS 'The total reward pool to pay on this objective to winning proposals. In Lovelace.';
COMMENT ON COLUMN objective.proposers_rewards IS 'Not sure how this is different from rewards_total???';
COMMENT ON COLUMN objective.vote_options IS 'The Vote Options applicable to all proposals in this objective.';
COMMENT ON COLUMN objective.extra IS
'Extra Data for this objective, represented as JSON.
"url"."objective" is a URL for more info about the objective.
"highlights" is ???
';
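As an illustration of how these tables relate, the query below (a sketch; the event ID `1` is hypothetical) lists the objectives for one event together with their category, reward pool, and the vote option set they use:

```sql
-- Hypothetical query: list objectives for event 1 with their option sets.
SELECT o.id,
       o.title,
       o.category,
       v.objective AS vote_options,  -- how the options appear in the objective
       o.rewards_total,
       o.rewards_currency
FROM objective o
LEFT JOIN vote_options v ON v.id = o.vote_options
WHERE o.event = 1
  AND NOT o.deleted
ORDER BY o.id;
```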

Proposal Schema

-- Catalyst Event Database

-- Proposals Table

CREATE TABLE proposal
(
    row_id SERIAL PRIMARY KEY,
    id INTEGER NOT NULL,
    objective INTEGER NOT NULL,
    title TEXT NOT NULL,
    summary TEXT NOT NULL,
    category TEXT NOT NULL,
    public_key TEXT NOT NULL,
    funds BIGINT NOT NULL,
    url TEXT NOT NULL,
    files_url TEXT NOT NULL,
    impact_score BIGINT NOT NULL,

    deleted BOOLEAN NOT NULL DEFAULT FALSE,

    extra JSONB,

    proposer_name TEXT NOT NULL,
    proposer_contact TEXT NOT NULL,
    proposer_url TEXT NOT NULL,
    proposer_relevant_experience TEXT NOT NULL,
    bb_proposal_id BYTEA,

    bb_vote_options TEXT[],

    FOREIGN KEY(objective) REFERENCES objective(row_id) ON DELETE CASCADE,
    FOREIGN KEY(bb_vote_options) REFERENCES vote_options(objective) ON DELETE CASCADE
);

CREATE UNIQUE INDEX proposal_index ON proposal(id, objective);


COMMENT ON TABLE proposal IS 'All Proposals for the current fund.';
COMMENT ON COLUMN proposal.row_id IS 'Synthetic Unique Key';
COMMENT ON COLUMN proposal.id IS 'Actual Proposal Unique ID';
COMMENT ON COLUMN proposal.objective IS 'The Objective this proposal falls under.';
COMMENT ON COLUMN proposal.title IS 'Brief title of the proposal.';
COMMENT ON COLUMN proposal.summary IS 'A Summary of the proposal to be implemented.';
COMMENT ON COLUMN proposal.category IS 'Objective Category Repeated. DEPRECATED: Only used for Vit-SS compatibility.';
COMMENT ON COLUMN proposal.public_key IS 'Proposals Reward Address (CIP-19 Payment Key)';
COMMENT ON COLUMN proposal.funds IS 'How much funds (in the currency of the fund)';
COMMENT ON COLUMN proposal.url IS 'A URL with supporting information for the proposal.';
COMMENT ON COLUMN proposal.files_url IS 'A URL link to relevant files supporting the proposal.';
COMMENT ON COLUMN proposal.impact_score IS 'The Impact score assigned to this proposal by the Assessors.';
COMMENT ON COLUMN proposal.deleted IS 'Flag which defines was this proposal deleted from ideascale or not. DEPRECATED: only used for ideascale compatibility.';
COMMENT ON COLUMN proposal.proposer_name IS 'The proposers name.';
COMMENT ON COLUMN proposal.proposer_contact IS 'Contact details for the proposer.';
COMMENT ON COLUMN proposal.proposer_url IS 'A URL with details of the proposer.';
COMMENT ON COLUMN proposal.proposer_relevant_experience IS 'A freeform string describing the experience of the proposer relating to their capability to implement the proposal.';
COMMENT ON COLUMN proposal.bb_proposal_id IS 'The ID used by the voting ledger (bulletin board) to refer to this proposal.';
COMMENT ON COLUMN proposal.bb_vote_options IS 'The selectable options by the voter. DEPRECATED: Only used for Vit-SS compatibility.';
COMMENT ON COLUMN proposal.extra IS
'Extra data about the proposal.
 The types of extra data are defined by the proposal type and are not enforced.
 Extra Fields for `native` challenges:
    NONE.

 Extra Fields for `simple` challenges:
    "problem"  : <text> - Statement of the problem the proposal tries to address.
    "solution" : <text> - The Solution to the challenge.

 Extra Fields for `community choice` challenge:
    "brief"      : <text> - Brief explanation of a proposal.
    "importance" : <text> - The importance of the proposal.
    "goal"       : <text> - The goal of the proposal is addressed to meet.
    "metrics"    : <text> - The metrics of the proposal or how success will be determined.';

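Because the `extra` fields are not enforced by the schema, they are read back with the JSONB operators. A sketch (the objective `row_id` of `42` is hypothetical) extracting the `simple` challenge fields described above:

```sql
-- Hypothetical query: pull the problem/solution statements for `simple` challenges.
SELECT p.id,
       p.title,
       p.extra->>'problem'  AS problem,
       p.extra->>'solution' AS solution
FROM proposal p
WHERE p.objective = 42        -- hypothetical objective row_id
  AND p.extra ? 'problem'     -- only rows that define the field
  AND NOT p.deleted;
```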
-- Reviewer's levels table

CREATE TABLE reviewer_level (
    row_id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    total_reward_pct NUMERIC(6,3) CONSTRAINT percentage CHECK (total_reward_pct <= 100 AND total_reward_pct >= 0),

    event_id INTEGER NOT NULL,

    FOREIGN KEY (event_id) REFERENCES event(row_id) ON DELETE CASCADE
);

COMMENT ON TABLE reviewer_level IS 
'All levels of reviewers.
This table represents all the different types of reviewer levels, which are taken into account during the rewarding process.';
COMMENT ON COLUMN reviewer_level.row_id IS 'Synthetic Unique Key';
COMMENT ON COLUMN reviewer_level.name IS 'Name of the reviewer level';
COMMENT ON COLUMN reviewer_level.total_reward_pct IS 
'The total reviewer reward assigned to the specific level, defined as a percentage of the total pot of Community Review rewards (See `event.review_rewards` column).';
COMMENT ON COLUMN reviewer_level.event_id IS 'The specific Event ID this review level is part of.';
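Since `total_reward_pct` is a percentage of the event's Community Review pot, the absolute per-level pot can be derived by joining back to `event`. A sketch, assuming `event.review_rewards` (referenced in the comment above) holds the total pot as a numeric amount:

```sql
-- Hypothetical query: absolute reward pot per reviewer level for event 1.
SELECT rl.name,
       rl.total_reward_pct,
       (e.review_rewards * rl.total_reward_pct / 100) AS level_reward
FROM reviewer_level rl
JOIN event e ON e.row_id = rl.event_id
WHERE rl.event_id = 1;
```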

-- community advisor reviews

-- I feel like these ratings and notes should be in a  general json field to
-- suit adaptability without needing schema changes.

CREATE TABLE proposal_review (
  row_id SERIAL PRIMARY KEY,
  proposal_id INTEGER NOT NULL,
  assessor VARCHAR NOT NULL,
  assessor_level INTEGER,
  reward_address TEXT,

  -- These fields are deprecated and WILL BE removed in a future migration.
  -- They MUST only be used for Vit-SS compatibility.
  impact_alignment_rating_given INTEGER,
  impact_alignment_note VARCHAR,
  feasibility_rating_given INTEGER,
  feasibility_note VARCHAR,
  auditability_rating_given INTEGER,
  auditability_note VARCHAR,
  ranking INTEGER,
  flags JSONB NULL,

  FOREIGN KEY (proposal_id) REFERENCES proposal(row_id) ON DELETE CASCADE,
  FOREIGN KEY (assessor_level) REFERENCES reviewer_level(row_id) ON DELETE CASCADE
);

COMMENT ON TABLE proposal_review IS 'All Reviews.';
COMMENT ON COLUMN proposal_review.row_id IS 'Synthetic Unique Key.';
COMMENT ON COLUMN proposal_review.proposal_id IS 'The Proposal this review is for.';
COMMENT ON COLUMN proposal_review.assessor IS 'Assessors Anonymized ID';
COMMENT ON COLUMN proposal_review.assessor_level IS 'Assessors level ID';
COMMENT ON COLUMN proposal_review.reward_address IS 'Assessors reward address';

COMMENT ON COLUMN proposal_review.impact_alignment_rating_given IS
'The numeric rating assigned to the proposal by the assessor.
DEPRECATED: Only used for Vit-SS compatibility.';
COMMENT ON COLUMN proposal_review.impact_alignment_note IS
'A note about why the impact rating was given.
DEPRECATED: Only used for Vit-SS compatibility.';

COMMENT ON COLUMN proposal_review.feasibility_rating_given IS
'The numeric feasibility rating given.
DEPRECATED: Only used for Vit-SS compatibility.';
COMMENT ON COLUMN proposal_review.feasibility_note IS
'A note about why the feasibility rating was given.
DEPRECATED: Only used for Vit-SS compatibility.';

COMMENT ON COLUMN proposal_review.auditability_rating_given IS
'The numeric auditability rating given.
DEPRECATED: Only used for Vit-SS compatibility.';
COMMENT ON COLUMN proposal_review.auditability_note IS
'A note about the auditability rating given.
DEPRECATED: Only used for Vit-SS compatibility.';

COMMENT ON COLUMN proposal_review.ranking IS
'Numeric measure of the quality of this review according to veteran community advisors.
DEPRECATED: Only used for Vit-SS compatibility.
';

COMMENT ON COLUMN proposal_review.flags IS
'OPTIONAL: JSON Array which defines the flags raised for this review.
Flags can be raised for different reasons and have different metadata.
Each entry =
```jsonc
{
   "flag_type": "<flag_type>", // Enum of the flag type (0: Profanity, 1: Similarity 2: AI generated).
   "score": <score>, // Profanity score, similarity score, or AI generated score. 0-1.
   "related_reviews": [<review_id>] // Array of review IDs that this flag is related to (valid for similarity).
}
```
';

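The `flags` array described above can be unpacked with `jsonb_array_elements`. A sketch that lists every similarity flag (flag type `1`) raised across reviews; whether `flag_type` is stored as a number or a string is an assumption here, so the comparison is done on its text form:

```sql
-- Hypothetical query: expand the flags array into one row per raised flag.
SELECT pr.row_id AS review_id,
       flag->>'flag_type'        AS flag_type,
       (flag->>'score')::NUMERIC AS score
FROM proposal_review pr
CROSS JOIN LATERAL jsonb_array_elements(pr.flags) AS flag
WHERE flag->>'flag_type' = '1';   -- 1 = Similarity
```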
CREATE TABLE review_metric (
    row_id SERIAL PRIMARY KEY,
    name VARCHAR NOT NULL,
    description VARCHAR NULL,
    min INTEGER NOT NULL,
    max INTEGER NOT NULL,
    map JSONB ARRAY NULL
);

COMMENT ON TABLE review_metric IS 'Definition of all possible review metrics.';
COMMENT ON COLUMN review_metric.row_id IS 'The synthetic ID of this metric.';
COMMENT ON COLUMN review_metric.name IS 'The short name for this review metric.';
COMMENT ON COLUMN review_metric.description IS 'Long form description of what the review metric means.';
COMMENT ON COLUMN review_metric.min IS 'The minimum value of the metric (inclusive).';
COMMENT ON COLUMN review_metric.max IS 'The maximum value of the metric (inclusive).';
COMMENT ON COLUMN review_metric.map IS
'OPTIONAL: JSON Array which defines extra details for each metric score.
There MUST be one entry per possible score in the range.
Entries are IN ORDER, from the lowest numeric score to the highest.
Each entry =
```jsonc
{
   "name" : "<name>", // Symbolic Name for the metric score.
   "description" : "<desc>", // Description of what the named metric score means.
}
```
';

-- Define known review metrics
INSERT INTO review_metric (name, description, min, max, map)
VALUES
    ('impact', 'Impact Rating', 0, 5, NULL),
    ('feasibility', 'Feasibility Rating', 0, 5, NULL),
    ('auditability', 'Auditability Rating', 0, 5, NULL),
    ('value', 'Value Proposition Rating', 0, 5, NULL),
    ('vpa_ranking', 'VPA Ranking of the review', 0, 3,
        ARRAY [
            '{"name":"Excellent","desc":"Excellent Review"}',
            '{"name":"Good","desc":"Could be improved."}',
            '{"name":"FilteredOut","desc":"Exclude this review"}',
            '{"name":"NA", "desc":"Not Applicable"}'
        ]::JSON[]);

CREATE TABLE objective_review_metric (
    row_id SERIAL PRIMARY KEY,
    objective INTEGER NOT NULL,
    metric INTEGER NOT NULL,
    note BOOLEAN,
    review_group VARCHAR,

    UNIQUE(objective, metric, review_group),

    FOREIGN KEY (objective) REFERENCES objective(row_id) ON DELETE CASCADE,
    FOREIGN KEY (metric) REFERENCES review_metric(row_id) ON DELETE CASCADE
);

COMMENT ON TABLE objective_review_metric IS 'All valid metrics for reviews on an objective.';
COMMENT ON COLUMN objective_review_metric.objective IS 'The objective that can use this review metric.';
COMMENT ON COLUMN objective_review_metric.metric IS 'The review metric that the objective can use.';
COMMENT ON COLUMN objective_review_metric.note IS
'Does the metric require a Note?
NULL = Optional.
True = MUST include Note.
False = MUST NOT include Note.';
COMMENT ON COLUMN objective_review_metric.review_group IS 'The review group that can give this metric. Details TBD.';

CREATE TABLE review_rating (
    row_id SERIAL PRIMARY KEY,
    review_id INTEGER NOT NULL,
    metric INTEGER NOT NULL,
    rating INTEGER NOT NULL,
    note VARCHAR,

    UNIQUE ( review_id, metric ),

    FOREIGN KEY (review_id) REFERENCES proposal_review(row_id) ON DELETE CASCADE,
    FOREIGN KEY (metric) REFERENCES review_metric(row_id) ON DELETE CASCADE
);

COMMENT ON TABLE review_rating IS 'An Individual rating for a metric given on a review.';
COMMENT ON COLUMN review_rating.row_id IS 'Synthetic ID of this individual rating.';
COMMENT ON COLUMN review_rating.review_id IS 'The review the metric is being given for.';
COMMENT ON COLUMN review_rating.metric IS 'Metric the rating is being given for.';
COMMENT ON COLUMN review_rating.rating IS 'The rating being given to the metric.';
COMMENT ON COLUMN review_rating.note IS 'OPTIONAL: Note about the rating given.';

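Once individual ratings are stored one row per metric, a proposal's average score per metric follows from a three-way join. A sketch:

```sql
-- Hypothetical query: average rating per metric for each proposal.
SELECT p.id   AS proposal_id,
       rm.name AS metric,
       AVG(rr.rating) AS avg_rating,
       COUNT(*)       AS n_ratings
FROM review_rating rr
JOIN proposal_review pr ON pr.row_id = rr.review_id
JOIN proposal p         ON p.row_id  = pr.proposal_id
JOIN review_metric rm   ON rm.row_id = rr.metric
GROUP BY p.id, rm.name
ORDER BY p.id, rm.name;
```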


Vote Plan Tables

These tables represent the on-chain vote plans for an event.

Vote Plan Table Diagram



(ERD diagram: Catalyst Event Database - Vote Plans. Shows the `proposal_voteplan`, `voteplan`, `voteplan_category`, `voting_group`, `proposal`, and `objective` tables and their foreign-key relationships; column details are given in the schema below.)



Schema

-- Catalyst Event Database

-- Vote Plan Categories

CREATE TABLE voteplan_category
(
    name TEXT PRIMARY KEY,
    public_key BOOL
);


INSERT INTO voteplan_category (name, public_key)
VALUES
    ('public', false), -- Fully public votes only
    ('private', true), -- Fully private votes only.
    ('cast-private', true); -- Private until tally, then decrypted.

COMMENT ON TABLE voteplan_category IS 'The category of vote plan currently supported.';
COMMENT ON COLUMN voteplan_category.name IS 'The UNIQUE name of this voteplan category.';
COMMENT ON COLUMN voteplan_category.public_key IS 'Does this vote plan category require a public key.';


-- groups

CREATE TABLE voting_group (
    name TEXT PRIMARY KEY
);

INSERT INTO voting_group (name)
VALUES
    ('direct'), -- Direct Voters
    ('rep'); -- Delegated Voter (Check what is the real name for this group we already use in snapshot)

COMMENT ON TABLE voting_group IS 'All Groups.';
COMMENT ON COLUMN voting_group.name IS 'The ID of this voting group.';

-- Vote Plans

CREATE TABLE voteplan
(
    row_id SERIAL PRIMARY KEY,
    objective_id INTEGER NOT NULL,

    id VARCHAR NOT NULL,
    category TEXT NOT NULL,
    encryption_key VARCHAR,
    group_id TEXT,
    token_id TEXT,

    FOREIGN KEY(objective_id) REFERENCES objective(row_id)  ON DELETE CASCADE,
    FOREIGN KEY(category) REFERENCES voteplan_category(name)  ON DELETE CASCADE,
    FOREIGN KEY(group_id) REFERENCES voting_group(name)  ON DELETE CASCADE
);

COMMENT ON TABLE voteplan IS 'All Vote plans.';

COMMENT ON COLUMN voteplan.row_id IS 'Synthetic Unique Key';
COMMENT ON COLUMN voteplan.id IS
'The ID of the Vote plan in the voting ledger/bulletin board.
A Binary value encoded as hex.';
COMMENT ON COLUMN voteplan.category IS 'The kind of vote which can be cast on this vote plan.';
COMMENT ON COLUMN voteplan.encryption_key IS
'The public encryption key used.
ONLY if required by the voteplan category.';
COMMENT ON COLUMN voteplan.group_id IS 'The identifier of voting power token used within this plan.';

-- Table to link Proposals to Vote plans in a many-many relationship.
-- This Many-Many relationship arises because:
--  in the vote ledger/bulletin board,
--      one proposal may be within multiple different vote plans,
--      and each voteplan can contain multiple proposals.
CREATE TABLE proposal_voteplan
(
    row_id SERIAL PRIMARY KEY,
    proposal_id INTEGER,
    voteplan_id INTEGER,
    bb_proposal_index BIGINT,

    FOREIGN KEY(proposal_id) REFERENCES proposal(row_id) ON DELETE CASCADE,
    FOREIGN KEY(voteplan_id) REFERENCES voteplan(row_id) ON DELETE CASCADE
);

CREATE UNIQUE INDEX proposal_voteplan_idx ON proposal_voteplan(proposal_id,voteplan_id,bb_proposal_index);

COMMENT ON TABLE proposal_voteplan IS 'Table to link Proposals to Vote plans in a Many to Many relationship.';
COMMENT ON COLUMN proposal_voteplan.row_id IS 'Synthetic ID of this Voteplan/Proposal M-M relationship.';
COMMENT ON COLUMN proposal_voteplan.proposal_id IS 'The link to the Proposal primary key that links to this voteplan.';
COMMENT ON COLUMN proposal_voteplan.voteplan_id IS 'The link to the Voteplan primary key that links to this proposal.';
COMMENT ON COLUMN proposal_voteplan.bb_proposal_index IS 'The Index with the voteplan used by the voting ledger/bulletin board that references this proposal.';
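The many-to-many link is traversed in the usual way. A sketch that lists every proposal within one vote plan, in on-chain order (the hex vote plan ID is hypothetical):

```sql
-- Hypothetical query: proposals within one voteplan, ordered by on-chain index.
SELECT vp.id AS voteplan_id,
       pv.bb_proposal_index,
       p.id  AS proposal_id,
       p.title
FROM proposal_voteplan pv
JOIN voteplan vp ON vp.row_id = pv.voteplan_id
JOIN proposal p  ON p.row_id  = pv.proposal_id
WHERE vp.id = '0123abcd'            -- hypothetical hex voteplan ID
ORDER BY pv.bb_proposal_index;
```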

Voter Voting Power Snapshot and Vote Storage Tables

These tables store:

  • The registration details and voting power of each voter.
  • The results of the latest snapshots for each event.
  • The record of all votes cast by voters.

Snapshot & Vote Table Diagram



(ERD diagram: Catalyst Event Database - Snapshot. Shows the `snapshot`, `voter`, `contribution`, `ballot`, `voteplan`, and related tables and their foreign-key relationships; column details are given in the schema below.)



Snapshot Schema

-- Catalyst Event Database

-- Voting Power Snapshot Table

CREATE TABLE snapshot (
    row_id SERIAL PRIMARY KEY,
    event INTEGER NOT NULL UNIQUE,
    as_at TIMESTAMP NOT NULL,
    as_at_slotno INTEGER NOT NULL,
    last_updated TIMESTAMP NOT NULL,
    last_updated_slotno INTEGER NOT NULL,

    final BOOLEAN NOT NULL,

    dbsync_snapshot_cmd          TEXT NULL,
    dbsync_snapshot_params       JSONB NULL,
    dbsync_snapshot_data         BYTEA NULL,
    dbsync_snapshot_error        BYTEA NULL,
    dbsync_snapshot_unregistered BYTEA NULL,

    drep_data                    BYTEA NULL,

    catalyst_snapshot_cmd        TEXT NULL,
    catalyst_snapshot_params     JSONB NULL,
    catalyst_snapshot_data       BYTEA NULL,

    FOREIGN KEY(event) REFERENCES event(row_id)  ON DELETE CASCADE
);

COMMENT ON TABLE snapshot IS
'Raw snapshot data for an event.
Only the latest snapshot per event is stored.';
COMMENT ON COLUMN snapshot.event is 'The event id this snapshot was for.';
COMMENT ON COLUMN snapshot.as_at is
'The time the snapshot was collected from dbsync.
This is the snapshot *DEADLINE*, i.e. the time when registrations are final.
(Should be the slot time the dbsync_snapshot_cmd was run against.)';
COMMENT ON COLUMN snapshot.last_updated is
'The last time the snapshot was run
(Should be the latest block time taken from dbsync just before the snapshot was run.)';
COMMENT ON COLUMN snapshot.final is
'Is the snapshot Final?
No more updates will occur to this record once set.';

COMMENT ON COLUMN snapshot.dbsync_snapshot_cmd is     'The name of the command run to collect the snapshot from dbsync.';
COMMENT ON COLUMN snapshot.dbsync_snapshot_params is  'The parameters passed to the command, each parameter is a key and its value is the value of the parameter.';
COMMENT ON COLUMN snapshot.dbsync_snapshot_data is
'The BROTLI COMPRESSED raw json result stored as BINARY from the dbsync snapshot.
(This is JSON data but we store as raw text to prevent any processing of it, and BROTLI compress to save space).';
COMMENT ON COLUMN snapshot.dbsync_snapshot_error is
'The BROTLI COMPRESSED raw json errors stored as BINARY from the dbsync snapshot.
(This is JSON data but we store as raw text to prevent any processing of it, and BROTLI compress to save space).';
COMMENT ON COLUMN snapshot.dbsync_snapshot_unregistered is
'The BROTLI COMPRESSED unregistered voting power stored as BINARY from the dbsync snapshot.
(This is JSON data but we store as raw text to prevent any processing of it, and BROTLI compress to save space).';

COMMENT ON COLUMN snapshot.drep_data is
'The latest drep data obtained from GVC, and used in this snapshot calculation.
Should be in a form directly usable by the `catalyst_snapshot_cmd`.
However, in order to save space this data is stored as BROTLI COMPRESSED BINARY.';

COMMENT ON COLUMN snapshot.catalyst_snapshot_cmd is  'The actual name of the command run to produce the catalyst voting power snapshot.';
COMMENT ON COLUMN snapshot.catalyst_snapshot_params is 'The parameters passed to the command, each parameter is a key and its value is the value of the parameter.';
COMMENT ON COLUMN snapshot.catalyst_snapshot_data is
'The BROTLI COMPRESSED raw yaml result stored as BINARY from the catalyst snapshot calculation.
(This is YAML data but we store as raw text to prevent any processing of it, and BROTLI compress to save space).';
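Because `event` is UNIQUE and only the latest snapshot per event is stored, writers would typically upsert the row rather than insert a new one. A sketch (columns abridged; the event ID and slot numbers are hypothetical):

```sql
-- Hypothetical upsert: replace the snapshot row for event 1 unless it is final.
INSERT INTO snapshot (event, as_at, as_at_slotno, last_updated, last_updated_slotno, final)
VALUES (1, NOW(), 10000000, NOW(), 10000100, FALSE)
ON CONFLICT (event) DO UPDATE
SET as_at               = EXCLUDED.as_at,
    as_at_slotno        = EXCLUDED.as_at_slotno,
    last_updated        = EXCLUDED.last_updated,
    last_updated_slotno = EXCLUDED.last_updated_slotno,
    final               = EXCLUDED.final
WHERE NOT snapshot.final;   -- a final snapshot is never updated
```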

-- voters

CREATE TABLE voter (
    row_id SERIAL8 PRIMARY KEY,

    voting_key TEXT NOT NULL,
    snapshot_id INTEGER NOT NULL,
    voting_group TEXT NOT NULL,

    voting_power BIGINT NOT NULL,

    FOREIGN KEY(snapshot_id) REFERENCES snapshot(row_id) ON DELETE CASCADE
);

CREATE UNIQUE INDEX unique_voter_id on voter (voting_key, voting_group, snapshot_id);

COMMENT ON TABLE voter IS 'Voting Power for every voting key.';
COMMENT ON COLUMN voter.voting_key is 'Either the voting key.';
COMMENT ON COLUMN voter.snapshot_id is 'The ID of the snapshot this record belongs to.';
COMMENT ON COLUMN voter.voting_group is 'The voter group the voter belongs to.';
COMMENT ON COLUMN voter.voting_power is 'Calculated Voting Power associated with this key.';
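Looking up a voter's power in the snapshot for an event uses the unique `(voting_key, voting_group, snapshot_id)` index. A sketch (the key value is hypothetical):

```sql
-- Hypothetical query: voting power of one key in the snapshot for event 1.
SELECT v.voting_key, v.voting_group, v.voting_power
FROM voter v
JOIN snapshot s ON s.row_id = v.snapshot_id
WHERE s.event = 1
  AND v.voting_key = '0xdeadbeef';   -- hypothetical voting key
```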

-- contributions

CREATE TABLE contribution (
    row_id SERIAL8 PRIMARY KEY,

    stake_public_key TEXT NOT NULL,
    snapshot_id INTEGER NOT NULL,

    voting_key TEXT NULL,
    voting_weight INTEGER NULL,
    voting_key_idx INTEGER NULL,
    value BIGINT NOT NULL,

    voting_group TEXT NOT NULL,

    -- each unique stake_public_key should have the same reward_address
    reward_address TEXT NULL,

    FOREIGN KEY(snapshot_id) REFERENCES snapshot(row_id) ON DELETE CASCADE
);

CREATE UNIQUE INDEX unique_contribution_id ON contribution (stake_public_key, voting_key, voting_group, snapshot_id);

COMMENT ON TABLE contribution IS 'Individual Contributions from stake public keys to voting keys.';
COMMENT ON COLUMN contribution.row_id is 'Synthetic Unique Row Key';
COMMENT ON COLUMN contribution.stake_public_key IS 'The voters Stake Public Key';
COMMENT ON COLUMN contribution.snapshot_id IS 'The snapshot this contribution was recorded from.';

COMMENT ON COLUMN contribution.voting_key IS 'The voting key.  If this is NULL it is the raw staked ADA.';
COMMENT ON COLUMN contribution.voting_weight IS 'The weight this voting key gets of the total.';
COMMENT ON COLUMN contribution.voting_key_idx IS 'The index from 0 of the keys in the delegation array.';
COMMENT ON COLUMN contribution.value IS 'The amount of ADA contributed to this voting key from the stake address';

COMMENT ON COLUMN contribution.voting_group IS 'The group this contribution goes to.';

COMMENT ON COLUMN contribution.reward_address IS 'Currently Unused.  Should be the Stake Rewards address of the voter (currently unknown.)';
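A voting key's power is the sum of the contributions delegated to it; the `voter` table stores the calculated result, but it can be cross-checked from `contribution`. A sketch (the snapshot ID is hypothetical):

```sql
-- Hypothetical query: total ADA delegated to each voting key in snapshot 1.
SELECT c.voting_key,
       c.voting_group,
       SUM(c.value) AS total_value
FROM contribution c
WHERE c.snapshot_id = 1
  AND c.voting_key IS NOT NULL       -- NULL voting_key = raw staked ADA
GROUP BY c.voting_key, c.voting_group
ORDER BY total_value DESC;
```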

Vote Schema

-- Catalyst Event Database - VIT-SS Compatibility

-- vote storage (replicates on-chain data for easy querying)

CREATE TABLE ballot (
    row_id SERIAL8 PRIMARY KEY,
    objective      INTEGER NOT NULL,
    proposal       INTEGER NULL,

    voter          INTEGER NOT NULL,
    fragment_id    TEXT NOT NULL,
    cast_at        TIMESTAMP NOT NULL,
    choice         SMALLINT NULL,
    raw_fragment   BYTEA    NOT NULL,

    FOREIGN KEY(voter)               REFERENCES voter(row_id)  ON DELETE CASCADE,
    FOREIGN KEY(objective)           REFERENCES objective(row_id)  ON DELETE CASCADE,
    FOREIGN KEY(proposal)            REFERENCES proposal(row_id)  ON DELETE CASCADE
);

CREATE UNIQUE INDEX ballot_proposal_idx  ON ballot(proposal,fragment_id);
CREATE UNIQUE INDEX ballot_objective_idx ON ballot(objective,fragment_id);

COMMENT ON TABLE ballot IS 'All Ballots cast on an event.';
COMMENT ON COLUMN ballot.fragment_id is 'Unique ID of this Ballot';
COMMENT ON COLUMN ballot.voter is 'Reference to the Voter who cast the ballot';
COMMENT ON COLUMN ballot.objective is 'Reference to the Objective the ballot was for.';
COMMENT ON COLUMN ballot.proposal is
'Reference to the Proposal the ballot was for.
May be NULL if this ballot covers ALL proposals in the challenge.';
COMMENT ON COLUMN ballot.cast_at is 'When this ballot was recorded as properly cast';
COMMENT ON COLUMN ballot.choice is 'If a public vote, the choice on the ballot, otherwise NULL.';
COMMENT ON COLUMN ballot.raw_fragment is 'The raw ballot record.';
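For public vote plans the `choice` column makes simple tallies possible directly in SQL. A sketch counting ballots per choice on one proposal (the proposal `row_id` is hypothetical):

```sql
-- Hypothetical query: public tally for proposal row 7.
SELECT b.choice, COUNT(*) AS ballots
FROM ballot b
WHERE b.proposal = 7
  AND b.choice IS NOT NULL   -- NULL choice = private ballot
GROUP BY b.choice
ORDER BY b.choice;
```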

Catalyst Automation Support Tables

These tables define the data necessary to support continuous automation of the Catalyst 1.0 Backend.

Event Table Diagram



(ERD diagram: Catalyst Event Database - Automation. Shows the `tally_committee` and `voting_node` tables and their foreign-key links to `event`; column details are given in the schema below.)



Schema

-- Catalyst Event Database

-- Voting Nodes Table - Defines nodes in the network
-- This table is looked up by hostname and event
CREATE TABLE voting_node (
    hostname TEXT NOT NULL,
    event INTEGER NOT NULL,

    pubkey TEXT NOT NULL,
    seckey TEXT NOT NULL,
    netkey TEXT NOT NULL,

    PRIMARY KEY (hostname, event),
    FOREIGN KEY(event) REFERENCES event(row_id)  ON DELETE CASCADE
);

COMMENT ON TABLE voting_node IS
'This table holds information for all nodes in the event.
It is used by nodes to self-bootstrap the blockchain.';
COMMENT ON COLUMN voting_node.hostname IS 'Unique hostname for the voting node.';
COMMENT ON COLUMN voting_node.event IS 'Unique event this node was configured for.';
COMMENT ON COLUMN voting_node.seckey IS 'Encrypted secret key from Ed25519 pair for the node. Used as the node secret.';
COMMENT ON COLUMN voting_node.pubkey IS 'Public key from Ed25519 pair for the node. Used as consensus_leader_id when the node is a leader.';
COMMENT ON COLUMN voting_node.netkey IS 'Encrypted Ed25519 secret key for the node. Used as the node p2p topology key.';
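Since voting_node is keyed and looked up by (hostname, event), a node can fetch its own key material when it self-bootstraps. A hedged SQLite sketch of that lookup (hostname and key values here are placeholders; the real database is PostgreSQL):

```python
import sqlite3

# Hypothetical sketch of a node looking itself up by (hostname, event),
# the composite primary key of voting_node. Values are placeholders.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE voting_node (
    hostname TEXT NOT NULL,
    event INTEGER NOT NULL,
    pubkey TEXT NOT NULL,
    seckey TEXT NOT NULL,
    netkey TEXT NOT NULL,
    PRIMARY KEY (hostname, event)
)
""")
conn.execute(
    "INSERT INTO voting_node VALUES (?, ?, ?, ?, ?)",
    ("node-0.example.com", 1, "pk-placeholder", "encrypted-seckey", "encrypted-netkey"),
)

# Self-bootstrap lookup: the node fetches its own keys for the event.
row = conn.execute(
    "SELECT pubkey, seckey, netkey FROM voting_node WHERE hostname = ? AND event = ?",
    ("node-0.example.com", 1),
).fetchone()
print(row[0])  # pk-placeholder
```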


-- Tally Committee Table - Stores data about the tally committee per voting event
-- This table is looked up by event
CREATE TABLE tally_committee (
    row_id SERIAL PRIMARY KEY,

    event INTEGER NOT NULL UNIQUE,

    committee_pk TEXT NOT NULL,
    committee_id TEXT NOT NULL,
    member_crs TEXT,
    election_key TEXT,

    FOREIGN KEY(event) REFERENCES event(row_id)  ON DELETE CASCADE
);

COMMENT ON TABLE tally_committee IS 'Table for storing data about the tally committee per voting event.';
COMMENT ON COLUMN tally_committee.row_id IS 'Unique ID for this committee member for this event.';
COMMENT ON COLUMN tally_committee.event  IS 'The event this committee member is for.';
COMMENT ON COLUMN tally_committee.committee_pk  IS 'Encrypted private key for the committee wallet. This key can be used to get the committee public address.';
COMMENT ON COLUMN tally_committee.committee_id  IS 'The hex-encoded public key for the committee wallet.';
COMMENT ON COLUMN tally_committee.member_crs  IS 'Encrypted Common Reference String shared in the creation of every set of committee member keys.';
COMMENT ON COLUMN tally_committee.election_key  IS 'Public key generated with all committee member public keys, and is used to encrypt votes. NULL if the event.committee_size is 0.';


-- Committee Member Table - Stores data about the tally committee members
-- This table is looked up by committee
CREATE TABLE committee_member (
    row_id SERIAL PRIMARY KEY,

    committee INTEGER NOT NULL,

    member_index INTEGER NOT NULL,
    threshold INTEGER NOT NULL,
    comm_pk TEXT NOT NULL,
    comm_sk TEXT NOT NULL,
    member_pk TEXT NOT NULL,
    member_sk TEXT NOT NULL,

    FOREIGN KEY(committee) REFERENCES tally_committee(row_id)
);

COMMENT ON TABLE committee_member IS 'Table for storing data about the tally committee members.';
COMMENT ON COLUMN committee_member.row_id IS 'Unique ID for this committee member for this event.';
COMMENT ON COLUMN committee_member.member_index IS 'The zero-based index of the member, ranging from 0 <= index < committee_size.';
COMMENT ON COLUMN committee_member.committee IS 'The committee this member belongs to.';
COMMENT ON COLUMN committee_member.comm_pk  IS 'Committee member communication public key.';
COMMENT ON COLUMN committee_member.comm_sk  IS 'Encrypted committee member communication secret key.';
COMMENT ON COLUMN committee_member.member_pk  IS 'Committee member public key';
COMMENT ON COLUMN committee_member.member_sk  IS 'Encrypted committee member secret key';

Moderation Stage

Tables for storing moderation stage data. WIP.

Moderation Stage Diagram

(ERD: Catalyst Event Database Moderation Stage. In both moderation and moderation_allocation, review_id references proposal_review.row_id and user_id references config.row_id.)

moderation

  row_id (integer+): Synthetic ID of this moderation.
  review_id (integer): The review the moderation is related to.
  user_id (integer): The user the moderation is submitted from.
  classification (integer): The value used to describe the moderation (e.g. 0: excluded, 1: included).
  rationale (varchar): The rationale for the given classification.

An individual moderation for a proposal review.

moderation_allocation

  row_id (integer+): Synthetic ID of this relationship.
  review_id (integer): The review the relationship is related to.
  user_id (integer): The user the relationship is related to.

The relationship between users and proposal_reviews.

Legend: integer+ marks an auto-incrementing primary key field; all other fields use their listed SQL type and may be nullable.



Schema

-- Catalyst Event Database

-- ModerationAllocation - Defines the relationship between users and proposals_reviews
-- to describe the allocation of moderations that needs to be done.

CREATE TABLE moderation_allocation (
  row_id SERIAL PRIMARY KEY,
  review_id INTEGER NOT NULL,
  user_id INTEGER NOT NULL,

  FOREIGN KEY (review_id) REFERENCES proposal_review(row_id) ON DELETE CASCADE,
  FOREIGN KEY (user_id) REFERENCES config(row_id) ON DELETE CASCADE
);


COMMENT ON TABLE moderation_allocation IS 'The relationship between users and proposals_reviews.';
COMMENT ON COLUMN moderation_allocation.row_id IS 'Synthetic ID of this relationship.';
COMMENT ON COLUMN moderation_allocation.review_id IS 'The review the relationship is related to.';
COMMENT ON COLUMN moderation_allocation.user_id IS 'The user the relationship is related to.';


-- Moderation - Defines the moderation submitted by users for each proposal_review.

CREATE TABLE moderation (
  row_id SERIAL PRIMARY KEY,
  review_id INTEGER NOT NULL,
  user_id INTEGER NOT NULL,
  classification INTEGER NOT NULL,
  rationale VARCHAR,
  UNIQUE (review_id, user_id),

  FOREIGN KEY (review_id) REFERENCES proposal_review(row_id) ON DELETE CASCADE,
  FOREIGN KEY (user_id) REFERENCES config(row_id) ON DELETE CASCADE
);


COMMENT ON TABLE moderation IS 'An individual moderation for a proposal review.';
COMMENT ON COLUMN moderation.row_id IS 'Synthetic ID of this moderation.';
COMMENT ON COLUMN moderation.review_id IS 'The review the moderation is related to.';
COMMENT ON COLUMN moderation.user_id IS 'The user the moderation is submitted from.';
COMMENT ON COLUMN moderation.classification IS 'The value used to describe the moderation (e.g. 0: excluded, 1: included).';
COMMENT ON COLUMN moderation.rationale IS 'The rationale for the given classification.';
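The UNIQUE (review_id, user_id) constraint on moderation means each user can submit at most one moderation per proposal review. A simplified SQLite sketch of that rule (the real schema is PostgreSQL and includes the foreign keys shown above):

```python
import sqlite3

# Hypothetical, simplified sketch of moderation's UNIQUE(review_id, user_id)
# rule: one moderation per (review, user) pair.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE moderation (
    row_id INTEGER PRIMARY KEY AUTOINCREMENT,
    review_id INTEGER NOT NULL,
    user_id INTEGER NOT NULL,
    classification INTEGER NOT NULL,
    rationale TEXT,
    UNIQUE (review_id, user_id)
)
""")
conn.execute(
    "INSERT INTO moderation(review_id, user_id, classification, rationale) "
    "VALUES (7, 42, 1, 'included: on topic')"
)

# The same user moderating the same review again violates the constraint.
try:
    conn.execute(
        "INSERT INTO moderation(review_id, user_id, classification) VALUES (7, 42, 0)"
    )
    second_allowed = True
except sqlite3.IntegrityError:
    second_allowed = False

print(second_allowed)  # False
```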

Python API

Voting Node API documentation


Prometheus Metrics

jormungandr Prometheus Metrics

jormungandr uses Prometheus metrics to gather information about the node at runtime.

Fragment Mempool Process

As the node receives fragments, they are inserted into the fragment mempool, and propagated into the peer network.

txRecvCnt

>> tx_recv_cnt: IntCounter,

Total number of tx inserted and propagated by the mempool at each loop in the process.

txRejectedCnt

>> tx_rejected_cnt: IntCounter,

Total number of tx rejected by the mempool at each loop in the process.

mempoolTxCount

>> mempool_tx_count: UIntGauge,

Total number of tx in the mempool for a given block

mempoolUsageRatio

>> mempool_usage_ratio: Gauge,

Mempool usage ratio for a given block
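The four mempool metrics above can be mirrored with the Python prometheus_client library. This is an illustrative sketch, not the node's Rust implementation, and the loop values are invented:

```python
from prometheus_client import CollectorRegistry, Counter, Gauge

# Sketch of the fragment mempool metrics using prometheus_client.
# Metric names mirror the Rust fields; the values below are made up.
registry = CollectorRegistry()
tx_recv_cnt = Counter(
    "tx_recv_cnt", "Total tx inserted and propagated by the mempool", registry=registry
)
tx_rejected_cnt = Counter(
    "tx_rejected_cnt", "Total tx rejected by the mempool", registry=registry
)
mempool_tx_count = Gauge(
    "mempool_tx_count", "Total tx in the mempool for a given block", registry=registry
)
mempool_usage_ratio = Gauge(
    "mempool_usage_ratio", "Mempool usage ratio for a given block", registry=registry
)

# One (hypothetical) loop of the fragment mempool process:
accepted, rejected, capacity = 8, 2, 16
tx_recv_cnt.inc(accepted)          # counters only ever go up
tx_rejected_cnt.inc(rejected)
mempool_tx_count.set(accepted)     # gauges are set to the current value
mempool_usage_ratio.set(accepted / capacity)

print(registry.get_sample_value("tx_recv_cnt_total"))  # 8.0
```

Note that Prometheus exposes a Counter named tx_recv_cnt under the sample name tx_recv_cnt_total.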

Topology Process

As the node connects to peers, the network topology allows for gossip and p2p communication. Nodes can join or leave the network.

peerConnectedCnt

>> peer_connected_cnt: UIntGauge,

The total number of connected peers.

peerQuarantinedCnt

>> peer_quarantined_cnt: UIntGauge,

The total number of quarantined peers.

peerAvailableCnt

>> peer_available_cnt: UIntGauge,

The total number of available peers.

peerTotalCnt

>> peer_total_cnt: UIntGauge,

The total number of peers.

Blockchain Process

Each node receives blocks streamed from the network which are processed in order to create a new block tip.

blockRecvCnt

>> block_recv_cnt: IntCounter,

This is the total number of blocks streamed from the network that will be processed at each loop in the process.

Blockchain Tip-Block Process

These metrics are updated when the node sets the tip block, which happens when the node is started and during the block minting process.

votesCasted

>> votes_casted_cnt: IntCounter,

The total number of accepted VoteCast fragments. The metric is incremented by the number of valid VoteCast fragments in the block tip.

lastBlockTx

>> // Total number of tx for a given block
>> block_tx_count: IntCounter,

The total number of valid transaction fragments in the block tip.

lastBlockInputTime (a misnomer: this is a value sum, not a time)

>> block_input_sum: UIntGauge,

The total sum of transaction input values in the block tip. The tx.total_input() is added for every fragment.

lastBlockSum

>> block_fee_sum: UIntGauge,

The total sum of transaction output values (fees) in the block tip. The tx.total_output() is added for every fragment.

lastBlockContentSize

>> block_content_size: UIntGauge,

The total size in bytes of the sum of the transaction content in the block tip.

lastBlockEpoch

>> block_epoch: UIntGauge,

The epoch of the block date defined in the block tip header.

lastBlockSlot

>> block_slot: UIntGauge,

The slot of the block date defined in the block tip header.

lastBlockHeight

>> block_chain_length: UIntGauge,

Length of the blockchain.

lastBlockDate

>> block_time: UIntGauge,

Timestamp in seconds of the block date.
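As a sketch of how these tip-block gauges behave, the following uses the Python prometheus_client library with invented values (the node itself updates the Rust equivalents when a new tip is set):

```python
from prometheus_client import CollectorRegistry, Gauge

# Hypothetical sketch: setting the tip-block gauges when a new tip arrives.
# Names mirror the Rust fields; all values here are made up.
registry = CollectorRegistry()
block_epoch = Gauge(
    "block_epoch", "Epoch of the block date in the tip header", registry=registry
)
block_slot = Gauge(
    "block_slot", "Slot of the block date in the tip header", registry=registry
)
block_chain_length = Gauge(
    "block_chain_length", "Length of the blockchain", registry=registry
)
block_time = Gauge(
    "block_time", "Timestamp in seconds of the block date", registry=registry
)

def on_new_tip(epoch, slot, chain_length, timestamp):
    """Update every tip-block gauge from the new tip header."""
    block_epoch.set(epoch)
    block_slot.set(slot)
    block_chain_length.set(chain_length)
    block_time.set(timestamp)

on_new_tip(epoch=12, slot=3600, chain_length=250000, timestamp=1700000000)
print(registry.get_sample_value("block_epoch"))  # 12.0
```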

Unused metrics

lastReceivedBlockTime

>> slot_start_time: UIntGauge,

This metric is never updated.

Unclear metrics

lastBlockHashPiece

>> block_hash: Vec<UIntGauge>,

A vector of gauges that holds pieces of the block hash; its exact purpose is unclear. The metric is updated when http_response is called.

Contributing to Catalyst Core

First off, thanks for taking the time to contribute! ❤️

All types of contributions are encouraged and valued. See the Table of Contents for different ways to help and details about how this project handles them. Please make sure to read the relevant section before making your contribution. It will make it a lot easier for us maintainers and smooth out the experience for all involved. The community looks forward to your contributions. 🎉

And if you like the project, but just don’t have time to contribute, that’s fine. There are other easy ways to support the project and show your appreciation, which we would also be very happy about:

  • Star the project
  • Tweet about it
  • Refer this project in your project’s readme
  • Mention the project at local meetups and tell your friends/colleagues

Table of Contents

Code of Conduct

This project and everyone participating in it is governed by the Catalyst Core Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to code-of-conduct@iohk.io.

I Have a Question

If you want to ask a question, we assume that you have read the available Documentation.

Before you ask a question, it is best to search for existing Issues that might help you. If you find a suitable issue and still need clarification, you can write your question in that issue. It is also advisable to search the internet for answers first.

If you then still feel the need to ask a question and need clarification, we recommend the following:

  • Open an Issue.
  • Provide as much context as you can about what you’re running into.
  • Provide project and platform versions (rustc --version --verbose, etc), depending on what seems relevant.

We will then take care of the issue as soon as possible.

I Want To Contribute

When contributing to this project, you must agree:

  • that you have authored 100% of the content
  • that you have the necessary rights to the content and
  • that the content you contribute may be provided under the project license.

Reporting Bugs

Before Submitting a Bug Report

A good bug report shouldn’t leave others needing to chase you up for more information. Therefore, we ask you to investigate carefully, collect information and describe the issue in detail in your report. Please complete the following steps in advance to help us fix any potential bug as fast as possible.

  • Make sure that you are using the latest version.
  • Determine if your bug is really a bug and not an error on your side, e.g. using incompatible environment components/versions. (Make sure that you have read the documentation. If you are looking for support, you might want to check this section.)
  • Check whether a bug report already exists for your bug or error in the bug tracker, to see if other users have experienced (and potentially already solved) the same issue you are having.
  • Also make sure to search the internet (including Stack Overflow) to see if users outside of the GitHub community have discussed the issue.
  • Collect information about the bug:
    • Stack trace (Traceback)
    • OS, Platform and Version (Windows, Linux, macOS, x86, ARM)
    • Version of the interpreter, compiler, SDK, runtime environment, package manager, depending on what seems relevant.
    • Possibly your input and the output
    • Can you reliably reproduce the issue? And can you also reproduce it with older versions?

How Do I Submit a Good Bug Report?

You must never report security-related issues, vulnerabilities, or bugs that include sensitive information to the issue tracker, or elsewhere in public. Instead, sensitive bugs must be sent by email to contact@iohk.io.

We use GitHub issues to track bugs and errors. If you run into an issue with the project:

  • Open an Issue. (Since we can’t be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and not to label the issue.)
  • Explain the behavior you would expect and the actual behavior.
  • Please provide as much context as possible. Describe the reproduction steps that someone else can follow to recreate the issue on their own. This usually includes your code. For good bug reports you should isolate the problem and create a reduced test case.
  • Provide the information you collected in the previous section.

Once it’s filed:

  • The project team will label the issue accordingly.
  • A team member will try to reproduce the issue with your provided steps. If there are no reproduction steps or no obvious way to reproduce the issue, the team will ask you for those steps. The issue would then be marked as needs-repro. Bugs with the needs-repro tag will not be addressed until they are reproduced.
  • If the team is able to reproduce the issue, it will be marked needs-fix. It may possibly be marked with other tags (such as critical). The issue will then be left to be implemented by someone.

Suggesting Enhancements

This section guides you through submitting an enhancement suggestion for Catalyst Core, including completely new features and minor improvements to existing functionality. Following these guidelines will help maintainers and the community to understand your suggestion and find related suggestions.

Before Submitting an Enhancement

  • Make sure that you are using the latest version.
  • Read the documentation carefully. Find out if the functionality is already covered, maybe by an individual configuration.
  • Perform a search to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one.
  • Find out whether your idea fits with the scope and aims of the project. It’s up to you to make a strong case to convince the project’s developers of the merits of this feature. Keep in mind that we want features that will be useful to the majority of our users and not just a small subset. If you’re just targeting a minority of users, consider writing an add-on/plugin library.

How Do I Submit a Good Enhancement Suggestion?

Enhancement suggestions are tracked as GitHub issues.

  • Use a clear and descriptive title for the issue to identify the suggestion.

  • Provide a step-by-step description of the suggested enhancement in as many details as possible.

  • Describe the current behavior and explain which behavior you expected to see instead and why. At this point you can also tell which alternatives do not work for you.

  • You may want to include screenshots and animated GIFs. This can help you demonstrate the steps or point out the part which the suggestion is related to. You can use this tool to record GIFs on macOS and Windows, and this tool or this tool on Linux.

  • Explain why this enhancement would be useful to most Catalyst Core users. You may also want to point out the other projects that solved it better and which could serve as inspiration.

Your First Code Contribution

Improving The Documentation

Styleguides

Commit Messages

Referencing and dereferencing

The opposite of referencing by using & is dereferencing, which is accomplished with the dereference operator, *.

Bug

This syntax won’t work in Python 3:

print "Hello, world!"
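In Python 3, print is a built-in function rather than a statement, so the fixed line is:

```python
# Python 3 replaces the print statement with a function call:
print("Hello, world!")
```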

Example Rendered Diagrams

(Two example diagrams render here: a minimal flowchart with nodes A and B labelled "Don't Click Me", and a "How to contribute?" mindmap showing that anyone can help by reporting bugs, sharing ideas, and advocating, starting from https://github.com/input-output-hk/catalyst-core.)

Suffix