The Centrifuge POD network is built to support a new generation of applications for the financial supply chain. Centrifuge provides users with the ability to remove intermediaries and create financial business documents as Non-Fungible Tokens (NFTs) that have long-term verifiability, are censorship resistant, and are stored and processed in a decentralized fashion.
The Centrifuge POD node provides a method to create, exchange, and use the data that exists in the financial supply chain. It creates transparent and shareable relationships between interacting companies. Data owners can selectively share the information with their business partners or other users of the network. Centrifuge provides a censorship resistant way to verify the authenticity of data that is transacted through and stored in it. This creates the foundation for data ownership, privacy, and transparency throughout the financial supply chain and also allows third parties to offer additional services, such as instant and decentralized financing of invoices and orders, trade credit insurance and financing supply chains multiple levels deep.
The underlying Centrifuge protocol takes a two-layered approach. It is built on Substrate, which allows businesses to transact freely on a single verifiable source of truth. The public blockchain is used for business identities, committing document status, and minting business NFTs. In addition, a peer-to-peer network enables the exchange of business documents in a private and verifiable way.
Substrate is a modular blockchain framework that enables creating custom blockchains. Centrifuge uses Substrate as the source of truth for document anchoring, which plays a central role in the peer-to-peer document consensus protocol.
For more information, see the Parity Substrate Project.
The components of the Centrifuge protocol are a collection of Substrate Pallets and a peer-to-peer (P2P) network implemented on libp2p. Substrate Pallets are used for maintaining identities, minting NFTs from off-chain Centrifuge documents, and anchoring state commitments.
The Centrifuge POD provides a simple API interface to interact with the P2P network and the Centrifuge Chain. The POD operates on a “service bus” principle where plugins and outside systems can subscribe to messages about specific objects (e.g., a procurement application can subscribe to changes of order objects). The POD abstracts the events that occur on the public blockchain and P2P layer and translates them into messages on this internal bus for other applications to consume. The POD also offers the connectivity to Centrifuge Chain for applications that build on top of the network.
A Centrifuge Identity (CentrifugeID) is a unique ID assigned to a participant in the Centrifuge network. It keeps track of the different cryptographic keys in use and enforces that this data can only be modified by the creator and/or a delegate chosen by the creator.
An identity has the following credentials:
Peer-to-Peer Messaging Encryption Keys: used for message encryption. These keys identify the nodes on the P2P network and establish an encrypted communication channel between peers.
Signing Keys: Documents in Centrifuge are signed with signing keys. These signatures are a part of the Merkle root that is anchored on the public chain and verifiable at a later time.
The unique identifier of a participant in the Centrifuge protocol is equivalent to the Centrifuge Chain account ID.
A document within the Centrifuge protocol is a structured set of fields with specific types. The protocol supports any document type as long as the format is agreed upon and shared between the participants. For example, a document can be an invoice or a purchase order with agreed-upon fields and line items. The structure of the document becomes important for reaching consensus by attaching signatures to the document state, as well as creating specific attestations about a document at a later point in time. Documents are exchanged encrypted, and are only accessible to the parties involved in this private data exchange. Collaborators can be added to and removed from a document. Different collaborators can update a document and publish new versions within the set of nodes with access.
In order to interact with Centrifuge Chain, you can either start your own node and sync with the network or use one of the public full nodes that Centrifuge provides:
wss://fullnode.centrifuge.io
wss://fullnode.catalyst.cntrfg.com
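If you want to check connectivity from code, a minimal sketch using the Go Substrate RPC Client could look like the following. The import path github.com/centrifuge/go-substrate-rpc-client/v4 and the chosen full node URL are assumptions to adapt to your setup.

```go
package main

import (
	"fmt"
	"log"

	gsrpc "github.com/centrifuge/go-substrate-rpc-client/v4"
)

func main() {
	// Connect to one of the public Centrifuge full nodes.
	api, err := gsrpc.NewSubstrateAPI("wss://fullnode.centrifuge.io")
	if err != nil {
		log.Fatalf("couldn't connect to the full node: %s", err)
	}

	// Fetch the chain name and runtime version as a quick sanity check.
	chain, err := api.RPC.System.Chain()
	if err != nil {
		log.Fatalf("couldn't fetch chain name: %s", err)
	}

	rv, err := api.RPC.State.GetRuntimeVersionLatest()
	if err != nil {
		log.Fatalf("couldn't fetch runtime version: %s", err)
	}

	fmt.Printf("connected to %s (spec version %d)\n", chain, rv.SpecVersion)
}
```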
Before you can create a new Centrifuge Chain account, you have to install the latest version of Parity Substrate Subkey.
To install, we recommend you follow the instructions found here. Alternatively, you can use the Docker image parity/subkey:latest.
Generate an sr25519 key pair with the Centrifuge network address format:

```bash
$ subkey --sr25519 --network centrifuge generate
```

or with the default Substrate address format:

```bash
$ subkey --sr25519 generate
```
You can now fund the newly generated Centrifuge Chain account with CFG by making a request in our Discord #dev channel.
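If you script key handling in Go, the secret generated by subkey can be turned into a keyring pair with the Go Substrate RPC Client. The snippet below is a sketch only: the seed value is a placeholder, and the SS58 prefix 36 for the Centrifuge network is an assumption you should verify against your subkey output.

```go
package main

import (
	"fmt"
	"log"

	"github.com/centrifuge/go-substrate-rpc-client/v4/signature"
)

func main() {
	// Secret seed (or mnemonic) printed by `subkey generate`.
	// NOTE: placeholder value for illustration only - never commit a real seed.
	secretSeed := "0x0000000000000000000000000000000000000000000000000000000000000000"

	// 36 is assumed to be the SS58 address prefix of the Centrifuge network -
	// double-check against the output of `subkey --network centrifuge`.
	krp, err := signature.KeyringPairFromSecret(secretSeed, 36)
	if err != nil {
		log.Fatalf("couldn't derive keyring pair: %s", err)
	}

	fmt.Printf("SS58 address: %s\n", krp.Address)
	fmt.Printf("public key:   0x%x\n", krp.PublicKey)
}
```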
Before being able to transfer and anchor financial documents and mint NFTs, you need to spin up a Centrifuge POD on your machine and create an account.
Follow these steps to install the Centrifuge POD:
If you want to build the node from source, follow the description in the source code.
Make sure the centrifuge binary is available in your $PATH, or modify the command invocation to point to the correct location. Run centrifuge createconfig as seen in the example below. This command automatically creates an identity and the required key pairs. It then generates the config.yaml file required to run the node.
```bash
$ centrifuge createconfig \
    -n mainnet \
    -t <DEFINE_CONFIG_DIR_NAME> \
    -a 8082 -p 38204 \
    --centchainurl <your centchain endpoint> \
    --ipfsPinningServiceName pinata \
    --ipfsPinningServiceURL <pinata endpoint> \
    --ipfsPinningServiceAuth <your pinata auth token> \
    --podOperatorSecretSeed <secret seed for POD operator> \
    --podAdminSecretSeed <secret seed for POD admin>
```
NOTE: The generated config.yaml includes sensitive information regarding the accounts used to authenticate and sign transactions. Make sure to store it in a secure environment.

- podOperatorSecretSeed - if this is omitted, a new secret seed will be generated by the node; please see POD operator for more information regarding this account.
- podAdminSecretSeed - if this is omitted, a new secret seed will be generated by the node; please see POD admin and token usage for more information regarding this account.
For more information regarding IPFS pinning, please see IPFS.
Besides mainnet, Centrifuge supports the catalyst test network. The network configuration for the different test networks is also part of the code base, so the client can run on top of them with minimal configuration. The most important information is summarized below:

- Catalyst: use the network flag -n catalyst. This is a test network running a version of the Centrifuge Chain modified for testing.
- Mainnet: use the network flag -n mainnet. This is the production network, the Centrifuge Chain.
The default configuration with all available options is accessible here. You may adjust certain configurations according to your requirements.
Configure node under NAT
If you want your node to be accessible outside your private network, you will need to manually specify the External IP of the node:
```yaml
p2p:
  externalIP: "100.111.112.113"
```
To accept incoming P2P connections, you will need to open two ports for incoming TCP connections:

- the p2p port in your config
- the node port in your config

You can run the Centrifuge POD using the config.yaml file you created:
```bash
$ centrifuge run -c /<PATH-TO-CONFIG-DIR>/config.yaml
```
Replace PATH-TO-CONFIG-DIR with the location of the config.yaml file.
To make sure that your Centrifuge POD setup was successful and is running properly, you can ping your node.
```bash
$ curl -X GET "http://localhost:8082/ping" -H "accept: application/json"
```
It will return (e.g. Catalyst):
{"version":"...","network":"catalyst"}
The Accounts section of our swagger API docs provides an overview of all the endpoints available for handling accounts.
An account is the POD representation of the user that is performing various operations. The identity of this account is used when storing documents and performing any action related to the document handling process, such as starting long-running tasks for committing or minting documents, or sending the document via the P2P layer.
The data stored for each account has the following JSON format:
1{2 "data": [3 {4 "identity": "string",5 "document_signing_public_key": [0],6 "p2p_public_signing_key": [0],7 "pod_operator_account_id": [0],8 "precommit_enabled": true,9 "webhook_url": "string"10 }11 ]12}
- identity - hex-encoded Centrifuge Chain account ID. This is the identity used for performing the operations described above.
- document_signing_public_key - read-only - public key that is used for signing documents; this is generated for each account that is created on the POD.
- p2p_public_signing_key - read-only - public key that is used for interactions on the P2P layer; this is generated during POD configuration.
- pod_operator_account_id - read-only - the POD operator account ID.
- precommit_enabled - flag that enables anchoring the document prior to requesting the signatures from all collaborators.
- webhook_url - URL of the webhook that is used for sending updates regarding documents or jobs.
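For Go clients it can be convenient to map the documented response onto plain structs. The sketch below is based only on the JSON shape above; the field types - in particular decoding the key fields as integer slices - are assumptions.

```go
// Account mirrors one entry of the "data" array returned by the accounts endpoints.
type Account struct {
	Identity                 string `json:"identity"`
	DocumentSigningPublicKey []int  `json:"document_signing_public_key"`
	P2PPublicSigningKey      []int  `json:"p2p_public_signing_key"`
	PodOperatorAccountID     []int  `json:"pod_operator_account_id"`
	PrecommitEnabled         bool   `json:"precommit_enabled"`
	WebhookURL               string `json:"webhook_url"`
}

// AccountList mirrors the top-level response object.
type AccountList struct {
	Data []Account `json:"data"`
}
```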
An account can be created by calling the account creation endpoint with a valid admin token (see token usage) and providing the required information: identity, precommit_enabled, webhook_url.
IMPORTANT - the identity must be a valid account on the Centrifuge Chain, meaning that it MUST hold funds or have some proxies (which is the case for a pure/anonymous proxy).
The successful response for the account creation operation will contain the fields mentioned above in account data.
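As an illustration, an account creation request could be issued from Go roughly as follows. This is a sketch: the endpoint path /v3/accounts/generate is an assumption - take the exact path from the Accounts section of the swagger API docs - and the identity, webhook URL, and admin token are placeholders.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Placeholder admin JW3T token - see the token usage section below.
	const adminToken = "<jw3t_admin_token>"

	// Request body with the fields described above (placeholder values).
	body, err := json.Marshal(map[string]any{
		"identity":          "0x...",
		"precommit_enabled": true,
		"webhook_url":       "https://example.com/webhook",
	})
	if err != nil {
		log.Fatalf("couldn't marshal request body: %s", err)
	}

	// NOTE: the endpoint path below is an assumption - check the swagger API docs.
	req, err := http.NewRequest(http.MethodPost, "http://localhost:8082/v3/accounts/generate", bytes.NewReader(body))
	if err != nil {
		log.Fatalf("couldn't create request: %s", err)
	}
	req.Header.Set("Authorization", "Bearer "+adminToken)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatalf("account creation request failed: %s", err)
	}
	defer resp.Body.Close()

	fmt.Println("status:", resp.Status)
}
```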
NOTE - The following steps are required to ensure that the POD can use a newly created account:

1. Store the document_signing_public_key and p2p_public_signing_key in the Keystore storage of Centrifuge Chain. This can be done by submitting the addKeys extrinsic of the Keystore pallet.
2. Add the POD operator account ID as a PodOperation proxy to the identity. This can be done by submitting the addProxy extrinsic of the Proxy pallet.
Example script using our Go Substrate RPC Client:
```go
func bootstrapAccount(
	api *gsrpc.SubstrateAPI,
	rv *types.RuntimeVersion,
	genesisHash types.Hash,
	meta *types.Metadata,
	accountInfo types.AccountInfo,
	krp signature.KeyringPair,
) error {
	addProxyCall, err := types.NewCall(
		meta,
		"Proxy.add_proxy",
		delegateAccountID,
		10,              // PodOperation
		types.NewU32(0), // Delay
	)

	if err != nil {
		return fmt.Errorf("couldn't create addProxy call: %w", err)
	}

	discoveryKeyHash := types.NewHash(discoveryKeyBytes)
	documentSigningKeyHash := types.NewHash(documentSigningKeyBytes)

	type AddKey struct {
		Key     types.Hash
		Purpose uint8
		KeyType uint8
	}

	addKeysCall, err := types.NewCall(
		meta,
		"Keystore.add_keys",
		[]*AddKey{
			{
				Key:     discoveryKeyHash,
				Purpose: 0, // P2P Discovery
				KeyType: 0, // ECDSA
			},
			{
				Key:     documentSigningKeyHash,
				Purpose: 1, // P2P Document Signing
				KeyType: 0, // ECDSA
			},
		},
	)

	if err != nil {
		return fmt.Errorf("couldn't create addKeys call: %w", err)
	}

	batchCall, err := types.NewCall(
		meta,
		"Utility.batch_all",
		addProxyCall,
		addKeysCall,
	)

	if err != nil {
		return fmt.Errorf("couldn't create batch call: %w", err)
	}

	ext := types.NewExtrinsic(batchCall)

	opts := types.SignatureOptions{
		BlockHash:          genesisHash, // using genesis since we're using immortal era
		Era:                types.ExtrinsicEra{IsMortalEra: false},
		GenesisHash:        genesisHash,
		Nonce:              types.NewUCompactFromUInt(uint64(accountInfo.Nonce)),
		SpecVersion:        rv.SpecVersion,
		Tip:                types.NewUCompactFromUInt(0),
		TransactionVersion: rv.TransactionVersion,
	}

	err = ext.Sign(krp, opts)

	if err != nil {
		return fmt.Errorf("couldn't sign extrinsic: %w", err)
	}

	sub, err := api.RPC.Author.SubmitAndWatchExtrinsic(ext)

	if err != nil {
		return fmt.Errorf("couldn't submit and watch extrinsic: %w", err)
	}

	defer sub.Unsubscribe()

	select {
	case st := <-sub.Chan():
		switch {
		case st.IsFinalized, st.IsInBlock, st.IsReady:
			return nil
		default:
			return fmt.Errorf("extrinsic not successful - %v", st)
		}
	case err := <-sub.Err():
		return fmt.Errorf("extrinsic error: %w", err)
	}
}
```
Most of the operations performed by the POD rely on the presence of the following proxies:
The POD admin is an account that is stored on the POD, and its sole purpose is to authorize access for some account related endpoints such as account generation, accounts listing, and account details retrieval. This is required since not every user should have the rights to perform the mentioned actions.
The POD operator is an account that is stored on the POD, and it is used for submitting extrinsics on behalf of the provided identity. This is required since an identity can be an anonymous proxy, which is unable to sign any extrinsics.
Given the purpose of this account, it is expected to be properly funded in order to cover the transaction fees.
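Funding the POD operator can be done with a plain balance transfer. Below is a sketch that only builds the call using the same Go Substrate RPC Client types as the bootstrapAccount example above; signing and submission follow the same pattern. The call name Balances.transfer_keep_alive is an assumption - depending on the runtime it may be Balances.transfer or Balances.transfer_allow_death.

```go
// newPodOperatorFundingCall builds a balance transfer that tops up the POD operator
// account so that it can pay transaction fees. podOperatorAccountID is the raw
// 32-byte account ID (as returned in the account data above). Sign and submit the
// returned call using the same pattern as the bootstrapAccount example above.
func newPodOperatorFundingCall(
	meta *types.Metadata,
	podOperatorAccountID []byte,
	amount types.UCompact,
) (types.Call, error) {
	dest, err := types.NewMultiAddressFromAccountID(podOperatorAccountID)
	if err != nil {
		return types.Call{}, fmt.Errorf("couldn't create destination address: %w", err)
	}

	// The call name is an assumption - verify it against the chain metadata
	// (it may be Balances.transfer or Balances.transfer_allow_death).
	return types.NewCall(meta, "Balances.transfer_keep_alive", dest, amount)
}
```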
Authentication is performed using the JSON Web3 Tokens described here.
The Centrifuge POD is capable of maintaining multiple accounts. Accounts are used to keep track of the different users that might be using a single instance of a Centrifuge POD. We use an HTTP header for specifying a JSON Web3 Token that holds information regarding the identity to be used and its delegate.
| Header | Value |
| --- | --- |
| authorization | Bearer <jw3t_token> |
The format of the JW3T token that we use is:
base_64_encoded_json_header.base_64_encoded_json_payload.base_64_encoded_signature
Where the un-encoded parts are as follows:
Header:
1{2 "algorithm": "sr25519",3 "token_type": "JW3T",4 "address_type": "ss58"5}
Payload:
1{2 "address": "delegate_address",3 "on_behalf_of": "delegator_address",4 "proxy_type": "proxy_type",5 "expires_at": "1663070957",6 "issued_at": "1662984557",7 "not_before": "1662984557"8}
- address - SS58 address of the proxy delegate (see usage for more info).
- on_behalf_of - SS58 address of the proxy delegator (see usage for more info).
- proxy_type - one of the allowed proxy types (see usage for more info):
  - PodAdmin - defined in the POD.
  - Any - defined in the Centrifuge Chain.
  - PodOperation - defined in the Centrifuge Chain.
  - PodAuth - defined in the Centrifuge Chain.
- expires_at - token expiration time.
- issued_at - token creation time.
- not_before - token activation time.

Signature - the Schnorrkel/Ristretto x25519 signature generated for json_header.json_payload.
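To illustrate the format, a JW3T token could be assembled and signed in Go roughly as follows. This is a sketch only: the //Alice secret URI and the addresses are placeholders, and the exact base64 variant and signing-input handling are assumptions to verify against the POD authentication documentation (note that the gsrpc signature helper hashes inputs longer than 256 bytes).

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"time"

	"github.com/centrifuge/go-substrate-rpc-client/v4/signature"
)

// jw3tHeader and jw3tPayload mirror the un-encoded header and payload shown above.
type jw3tHeader struct {
	Algorithm   string `json:"algorithm"`
	TokenType   string `json:"token_type"`
	AddressType string `json:"address_type"`
}

type jw3tPayload struct {
	Address    string `json:"address"`
	OnBehalfOf string `json:"on_behalf_of"`
	ProxyType  string `json:"proxy_type"`
	ExpiresAt  string `json:"expires_at"`
	IssuedAt   string `json:"issued_at"`
	NotBefore  string `json:"not_before"`
}

func main() {
	now := time.Now()

	header := jw3tHeader{Algorithm: "sr25519", TokenType: "JW3T", AddressType: "ss58"}
	payload := jw3tPayload{
		Address:    "ss58_address_of_delegate",  // placeholder
		OnBehalfOf: "ss58_address_of_delegator", // placeholder
		ProxyType:  "PodAuth",
		ExpiresAt:  fmt.Sprintf("%d", now.Add(24*time.Hour).Unix()),
		IssuedAt:   fmt.Sprintf("%d", now.Unix()),
		NotBefore:  fmt.Sprintf("%d", now.Unix()),
	}

	headerJSON, err := json.Marshal(header)
	if err != nil {
		log.Fatalf("couldn't marshal header: %s", err)
	}

	payloadJSON, err := json.Marshal(payload)
	if err != nil {
		log.Fatalf("couldn't marshal payload: %s", err)
	}

	// The signature is generated over the un-encoded json_header.json_payload.
	signingInput := []byte(string(headerJSON) + "." + string(payloadJSON))

	// Sign with the delegate's sr25519 key ("//Alice" is a placeholder secret URI).
	sig, err := signature.Sign(signingInput, "//Alice")
	if err != nil {
		log.Fatalf("couldn't sign token: %s", err)
	}

	// Assemble base64(header).base64(payload).base64(signature). The base64 variant
	// used here is an assumption.
	token := fmt.Sprintf("%s.%s.%s",
		base64.RawURLEncoding.EncodeToString(headerJSON),
		base64.RawURLEncoding.EncodeToString(payloadJSON),
		base64.RawURLEncoding.EncodeToString(sig),
	)

	fmt.Println("authorization: Bearer " + token)
}
```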
The POD has two types of authentication mechanisms:
On-chain proxies - this is the most commonly used mechanism, and it is used to authenticate any on-chain proxies of the identity.
In this case, the address, on_behalf_of, and proxy_type fields should contain the information as found on-chain.
Example:

- Alice - identity.
- Bob - proxy of Alice with type PodAuth.
Token payload:
1{2 "address": "ss58_address_of_bob",3 "on_behalf_of": "ss58_address_of_alice",4 "proxy_type": "PodAuth",5 "expires_at": "1663070957",6 "issued_at": "1662984557",7 "not_before": "1662984557"8}
POD admin - this is used when performing authentication for restricted endpoints.
In this case, the address and on_behalf_of fields should be equal and contain the SS58 address of the POD admin, and the proxy_type should be PodAdmin.
Example:
1{2 "address": "pod_admin_ss58_address",3 "on_behalf_of": "pod_admin_ss58_address",4 "proxy_type": "PodAdmin",5 "expires_at": "1663070957",6 "issued_at": "1662984557",7 "not_before": "1662984557"8}
Once the Centrifuge POD is up and running, you can start submitting documents and tokenizing them via the REST API. Please refer to the swagger API docs for a complete list of endpoints. A short summary can be found below:
The NFTs section of our swagger API docs provides an overview of all the endpoints available for handling document NFTs.
The NFT endpoint provides basic functionality for minting NFTs for a document and retrieving NFT specific information such as attributes, metadata, and owner.
When minting NFTs, additional information is stored on-chain and on IPFS, as follows:
- document fields that are specified in the minting request are saved on IPFS in the following format:
1{2 "name": "ipfs_name",3 "description": "ipfs_description",4 "image": "ipfs_image",5 "properties": {6 "AssetIdentifier": "0x25680a49ff1b6368f7e243130ff957f9523b917c8c83d79aab97c0ef99fd3b15",7 "AssetValue": "100",8 "MaturityDate": "2022-10-13T11:07:28.128752151Z",9 "Originator": "0xd43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d",10 "result": "0x0000000000000000000000000000000100000000000000000000000000000064"11 }12}
NOTE - at the moment, the only IPFS pinning service that is supported is pinata.
- the IPFS hash of the above-mentioned fields is set as metadata on the NFT on chain, in the following format - /ipfs/QmfN7u6hMRHxL83Jboa4bHgme4PJmcS4eQFnkrXye5ctAM
- the document ID and document version are set as attributes on the NFT on chain.
NOTE - All the above information can be found on chain by querying the related storages of the Uniques pallet.
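As a sketch of such a query using the Go Substrate RPC Client (the types and codec helper packages), the NFT metadata - the /ipfs/... path - can be read from the Uniques pallet storage. The storage name InstanceMetadataOf and the collection/item ID types are assumptions based on the FRAME uniques pallet; verify them against the chain metadata.

```go
// queryNFTMetadata reads the raw on-chain metadata of a minted document NFT from
// the Uniques pallet storage.
func queryNFTMetadata(
	api *gsrpc.SubstrateAPI,
	meta *types.Metadata,
	collectionID types.U64,
	itemID types.U128,
) error {
	collectionIDBytes, err := codec.Encode(collectionID)
	if err != nil {
		return fmt.Errorf("couldn't encode collection ID: %w", err)
	}

	itemIDBytes, err := codec.Encode(itemID)
	if err != nil {
		return fmt.Errorf("couldn't encode item ID: %w", err)
	}

	// Storage name is an assumption - it may differ between runtime versions
	// (e.g. ItemMetadataOf in newer FRAME releases).
	key, err := types.CreateStorageKey(meta, "Uniques", "InstanceMetadataOf", collectionIDBytes, itemIDBytes)
	if err != nil {
		return fmt.Errorf("couldn't create storage key: %w", err)
	}

	// The raw SCALE-encoded value contains the bounded metadata string, e.g. "/ipfs/Qm...".
	raw, err := api.RPC.State.GetStorageRawLatest(key)
	if err != nil {
		return fmt.Errorf("couldn't read storage: %w", err)
	}

	fmt.Printf("raw metadata storage: 0x%x\n", *raw)

	return nil
}
```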
The Documents section of our swagger API docs provides an overview of all the endpoints available for handling documents.
The main purpose of the POD is to serve as a handler for documents that contain private off-chain data, as described above.
The Jobs section of our swagger API docs provides an overview of all the endpoints available for retrieving job details.
The jobs endpoint returns detailed information for a job.
A job is a long-running operation that is triggered by the POD when performing actions related to documents and/or NFTs.
The Webhook section of our swagger API docs provides an overview of the notification message that is sent by the POD for document or job events.
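A minimal webhook receiver in Go could look like the following sketch. It assumes the POD delivers notifications as HTTP POSTs with a JSON body to the configured webhook_url and simply logs whatever it receives; see the swagger API docs for the exact message schema.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// Receive POD webhook notifications at /webhook and log the decoded payload.
	http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
		var msg map[string]any
		if err := json.NewDecoder(r.Body).Decode(&msg); err != nil {
			http.Error(w, "invalid payload", http.StatusBadRequest)
			return
		}
		log.Printf("received notification: %v", msg)
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```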
The "Software", which includes but is not limited to the source code of components of Centrifuge, related repositories, client implementations, user interfaces, compiled or deployed binaries and smart contracts all of its components, libraries, supporting services (including, but not limited to, build pipelines, tests, deployments, "boot nodes", code samples, integrations) is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement.
In no event shall the authors, maintainers, operators or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the Software or the use or other dealings in the Software.
Centrifuge and all its components are Beta Software, which might and will lead to substantial changes in the future, re-architecture, addition and removal of features, as well as unexpected behavior. Use at your own risk.
Centrifuge is in an early stage of its development. The protocol and its first client implementation have a limited feature set compared to the end-vision. Not all features are implemented yet, and tradeoffs between security, speed, end-user features, and protocol flexibility are made continuously.
The following is a list of important limitations and not-yet-implemented features of Centrifuge.
When two Centrifuge PODs exchange documents with each other, they automatically attach signatures to the transferred documents after validating the data payload and signatures/keys. A Centrifuge POD validates the structural integrity of a received document as well as the validity of previous signatures against the public keys of the corresponding Centrifuge ID of the counterparty. A Centrifuge POD itself does not validate whether the document data makes sense from a business point of view.
A Centrifuge POD is a technical client to Centrifuge. This client exchanges and signs data in well-known formats. It does not validate document data authenticity.
Data authenticity and correctness are always validated by the upstream system. E.g. the accounting system interacting with a Centrifuge POD.
A signature of a collaborator on a Centrifuge document signifies the technical receipt and validation of a message. It does not signify the agreement that a document itself is valid, e.g. if an invoice amount is matching the underlying purchase order.
It is possible to attach additional signatures to a document (e.g., with custom attributes) to indicate "business agreement" of a document. However, this is not part of the protocol specifications and is the responsibility of an upstream system.
Important: Nobody outside of a document's collaborators can view or deduce the parties who collaborate on that document.
However, the list of collaborators on any single document is visible to all of the document's collaborators. This is part of the implementation approach where signatures are gathered from all collaborators on a document when anchoring a new state. To do this, the list of collaborators has to be known when making an update.
For the initial implementation, we assume that businesses only add their already known and trusted business partners to a document as a collaborator rendering this limitation insignificant.
Centrifuge does not support forking or successive merging of document state. If a disagreement about document state exists between collaborators, it has to be resolved by the users by creating a new document.
Collaborators can withhold their signature on a given document update if they choose to do so. The mitigation to this behavior is to remove the withholding/offline collaborator from the document's collaborator list and re-issue the document update and/or create a new document based on the original document data with a new set of collaborators.
For the initial implementation, we assume that businesses only add their trusted business partners to a document as a collaborator. With that, the likelihood of disagreement on the protocol level is low.
It is possible for a malicious collaborator to publish a new document version that blocks other collaborators from updating the original document. This can be done by the malicious collaborator by removing all collaborators from the original document and then publishing a new version with the "next identifier," essentially preventing other collaborators from publishing a new version of the document with this identifier.
Mid-term this will be mitigated by supporting document forking. Short-term the mitigation is as described above: The users can create a new document with the last benign document data and do not add the malicious actor as a collaborator to the document. This will create a new chain of document updates that the malicious collaborator can neither access nor block.
For the initial implementation, we assume that businesses only add their trusted business partners to a document as a collaborator. With that, the likelihood of a malicious actor trying to block document updates is low.
Two or more collaborators could try to update a document at the same time. The "first" update that goes through (the first version being anchored) essentially blocks the others from updating the desired document version.
Mitigation is to always have "pre-commit" enabled. Mid-term, this could also be mitigated by supporting document forking/merging.
It is possible for any collaborator to anchor a new document version at any time. Previous collaborators' signatures are not required to anchor/publish a new document version. This is less of a limitation and more of a feature to prevent malicious collaborators from blocking documents by withholding signatures.
Mid-term, a feature could be added that requires an x of n signature scheme where a certain threshold of collaborator signatures is required to anchor a new state. For now, anybody can publish a new version of a document.