network-mux: Multiplexing library
Safe Haskell: None




muxer ∷ (MonadAsync m, MonadFork m, MonadMask m, MonadThrow (STM m), MonadTimer m, MonadTime m) ⇒ EgressQueue m → MuxBearer m → m void Source #

Process the messages from the mini protocols. There is a single shared FIFO that contains the items of work; it is processed so that each active demand gets at most a maxSDU's worth of data processed each time it gets to the front of the queue.

Egress Path

┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ Every mode per miniprotocol
│ muxDuplex │ │ muxDuplex │ │ muxDuplex │ │ muxDuplex │ has a dedicated thread which
│ Initiator │ │ Responder │ │ Initiator │ │ Responder │ will send ByteStrings of CBOR
│ ChainSync │ │ ChainSync │ │ BlockFetch│ │ BlockFetch│ encoded data.
└─────┬─────┘ └─────┬─────┘ └─────┬─────┘ └─────┬─────┘
      │             │             │             │
      └─────────────┴──────┬──────┴─────────────┘
                           │ application data
                           ▼
                        ░│  │░ For a given Mux Bearer there is a single egress
                        ░│ci│░ queue shared among all miniprotocols. To ensure
                        ░│cr│░ fairness each miniprotocol can at most have one
                        ░└──┘░ message in the queue, see Desired Servicing
                        ░░░│░░ Semantics.
                      ░┌─────┐░ The egress queue is served by a dedicated thread
                      ░│ mux │░ which chops up the CBOR data into MuxSDUs with at
                      ░└─────┘░ most sduSize bytes of data in them.
                         ░│░ MuxSDUs
                 ░│ Bearer.write() │░ Mux Bearer implementation specific write
                          │ ByteStrings
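The chopping step performed by the mux thread can be sketched as a pure function. This is only an illustration of the segmentation idea: `segment` is a hypothetical name, not the library's API, and the real muxer additionally wraps each fragment in an SDU header carrying the mini-protocol number and direction.

```haskell
import qualified Data.ByteString.Lazy as BL
import Data.Int (Int64)

-- Split a CBOR-encoded payload into in-order fragments of at most
-- sduSize bytes each; concatenating the fragments recovers the payload.
segment :: Int64 -> BL.ByteString -> [BL.ByteString]
segment sduSize payload
  | BL.null payload = []
  | otherwise       =
      let (sdu, rest) = BL.splitAt sduSize payload
      in sdu : segment sduSize rest
```

Because each service turn emits at most one such fragment per mini protocol, a large message does not monopolise the bearer.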

Desired Servicing Semantics

Constructing Fairness

In this context we define fairness as:

- no starvation
- when presented with equal demand (from a selection of mini protocols), deliver "equal" service.

Equality here might be in terms of equal service rate of requests (or segmented requests) and/or in terms of effective (SDU) data rates.


1) It is assumed (for a given peer) that bulk delivery of blocks (i.e. in recovery mode) and normal, interactive operation (e.g. chain following) are mutually exclusive. As such there is no requirement to create a notion of prioritisation between such traffic.

2) We are assuming that the underlying TCP/IP bearer is managed so that individual Mux-layer PDUs are paced: a) this is necessary to mitigate head-of-line blocking effects (i.e. arbitrary amounts of data accruing in the O/S kernel); b) it ensures that any host egress data rate limits can be respected/enforced.

Current Caveats

1) We are not considering how mini-protocol associations are constructed (depending on the deployment model this might be resolved within the instantiation of the peer relationship).

2) We have not yet considered a notion of orderly termination - this is not likely to be used in an operational context, but may be needed for test harness use.

Principle of Operation

Egress direction (mini protocol instance to remote peer)

The request for service (the demand) from a mini protocol is encapsulated in a Wanton; such Wantons are placed in a (finite) queue (e.g. TBMQ) of TranslocationServiceRequests.

data TranslocationServiceRequest m Source #

A TranslocationServiceRequest is a demand for the translocation of a single mini-protocol message. This message can be of arbitrary (yet bounded) size. The multiplexing layer is responsible for the segmentation of the concrete representation into appropriate SDUs for onward transmission.
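The demand path can be sketched with simplified stand-ins for the library's types (the real EgressQueue, Wanton and request constructors carry additional fields such as the mini-protocol number and direction): the mini protocol publishes its bytes in a TVar and places a single demand in a bounded FIFO, which also provides back-pressure.

```haskell
import Control.Concurrent.STM
import qualified Data.ByteString.Lazy as BL

-- Illustrative stand-ins, not the library's actual definitions.
newtype Wanton = Wanton { want :: TVar BL.ByteString }

newtype TranslocationServiceRequest = TLSRDemand Wanton

type EgressQueue = TBQueue TranslocationServiceRequest

-- Enqueue one demand for the translocation of a single message.
-- The bounded queue blocks the mini protocol when the muxer is behind.
demand :: EgressQueue -> BL.ByteString -> IO Wanton
demand eq payload = atomically $ do
  w <- Wanton <$> newTVar payload
  writeTBQueue eq (TLSRDemand w)
  pure w
```

Limiting each mini protocol to one outstanding demand is what turns the shared FIFO into round-robin service over the active protocols.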

newtype Wanton m Source #

A Wanton represents the concrete data to be translocated. Note that the TVar becoming empty indicates that the last fragment of the data has been enqueued on the underlying bearer.
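Under that reading, the muxer's per-turn service step can be sketched as below. `drainSDU` is a hypothetical name, and the real implementation also attaches the SDU header and re-enqueues the demand while data remains; here "the TVar becoming empty" is modelled as the stored ByteString becoming empty.

```haskell
import Control.Concurrent.STM
import qualified Data.ByteString.Lazy as BL
import Data.Int (Int64)

-- Take at most sduSize bytes from a Wanton's TVar, leaving the
-- remainder behind.  The returned Bool reports whether this was the
-- final fragment, i.e. whether the TVar's contents are now empty.
drainSDU :: Int64 -> TVar BL.ByteString -> IO (BL.ByteString, Bool)
drainSDU sduSize tv = atomically $ do
  payload <- readTVar tv
  let (sdu, rest) = BL.splitAt sduSize payload
  writeTVar tv rest
  pure (sdu, BL.null rest)
```

Each call hands the bearer one SDU-sized fragment, so a message larger than sduSize is drained over several service turns rather than in one burst.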