
Filecoin Docs

Assets

This section covers the assets you can find on the Filecoin network, along with how to securely manage and use them.


How retrieval works

This section covers the very basics of how retrieving data works on the Filecoin network.

Retrieval market

The retrieval market facilitates the negotiation of retrieval deals for serving stored data to clients in exchange for FIL.

Basic Retrieval from Filecoin

Currently, Filecoin nodes support direct retrieval from the storage miners who originally stored the data. Clients can send retrieval requests directly to a storage provider and pay a small amount of FIL to retrieve their data.

To request data retrieval, clients need to provide the following information to the storage provider:

  • Storage Provider ID: The ID of the storage provider where the data is stored.

  • Payload CID: Also known as Data CID.

  • Address: The address initially used to create the storage deal.

Hot Retrieval from IPFS

Since most Filecoin nodes are also IPFS nodes, standard practice has been for Filecoin storage providers to also make available a hot copy of any given stored file through IPFS. Since the algorithm that generates a content address (CID) is the same for both Filecoin and IPFS, the client can request the CID of a file they stored on Filecoin and retrieve it from IPFS, if there is an IPFS node that is able and willing to serve the file.

How storage works

This section covers the very basics of how storing data works on the Filecoin network.

This section is an introduction to two methods of performing storage deals: through the Filecoin Plus program or through various storage onramps. It also explains the features and advantages of using Filecoin and IPFS together.

Project and community

This section contains information about the Filecoin project as a whole, and how you can interact with the community.

Filecoin economics

This section discusses the economics of Filecoin in relation to storage providers.

Infrastructure

This section covers various infrastructure considerations that storage providers should be aware of.


Crypto-economics

Crypto-economics is the study of how cryptocurrency can incentivize usage of a blockchain network. This page covers how Filecoin manages incentivization within the network.

Native currency

Filecoin’s native currency, FIL, is a utility token that incentivizes persistent storage on the Filecoin network. Storage providers earn FIL by offering reliable storage services or committing storage capacity to the network. With a maximum circulating supply of 2 billion FIL, no more than 2 billion Filecoin will ever exist.

As a utility token aligned with the network’s long-term growth, Filecoin issuance depends on the network’s provable utility and growth. Most of the Filecoin supply is only minted as the network achieves specific growth and utility milestones.

Filecoin uses a dual minting model for block reward distribution:

Baseline minting

Up to 770 million FIL tokens are minted based on network performance. Full release of these tokens would only occur if the Filecoin network reaches a yottabyte of storage capacity within 20 years, approximately 1,000 times the capacity of today’s cloud storage.

Simple minting

An additional 330 million FIL tokens are released on a 6-year half-life schedule, with 97% of these tokens projected to be released over about 30 years.
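The half-life arithmetic above can be checked with a short sketch. This is illustrative only; the on-chain minting logic works in integer attoFIL, and the names here are not protocol APIs:

```python
# Fraction of simple-minting tokens released after t years, given
# exponential decay with a 6-year half-life. Illustrative sketch only.

SIMPLE_SUPPLY_FIL = 330_000_000  # simple minting allocation
HALF_LIFE_YEARS = 6

def simple_minted(t_years: float) -> float:
    """FIL released by simple minting after t_years years."""
    released_fraction = 1 - 2 ** (-t_years / HALF_LIFE_YEARS)
    return SIMPLE_SUPPLY_FIL * released_fraction
```

After 30 years (five half-lives), 1 - 2^-5 = 96.875% of the allocation is released, matching the "97% over about 30 years" figure above.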

Additionally, 300 million FIL tokens are held in a mining reserve to incentivize future mining models.

Vesting

Mining rewards are subject to a vesting schedule to support long-term network alignment. For instance, 75% of block rewards earned by miners vest linearly over 180 days, while 25% are immediately accessible, improving miner cash flow and profitability. Note that if the miner has incurred "fee debt," the immediately accessible block rewards will automatically go towards paying down those fees.
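The vesting split described above can be sketched as follows (function and constant names are illustrative, not protocol APIs; fee-debt deduction is omitted):

```python
# 25% of a block reward is immediately accessible; the remaining 75%
# vests linearly over 180 days. Illustrative sketch only.

VEST_DAYS = 180
IMMEDIATE_SHARE = 0.25

def accessible_reward(reward_fil: float, days_elapsed: int) -> float:
    """FIL accessible from one block reward after days_elapsed days."""
    days = min(max(days_elapsed, 0), VEST_DAYS)
    vested = reward_fil * (1 - IMMEDIATE_SHARE) * (days / VEST_DAYS)
    return reward_fil * IMMEDIATE_SHARE + vested
```

For example, of a 100 FIL reward, 25 FIL is accessible at once, 62.5 FIL after 90 days, and the full 100 FIL after 180 days.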

A portion of the initially minted FIL tokens vests to Protocol Labs teams and the Filecoin Foundation over six years, and to SAFT investors over three years, as outlined in the vesting schedule.

To learn more about Filecoin block rewards vesting, review FIP004: Liquidity Improvement for Storage Miners.

Collateral and slashing

To ensure network security and reliable storage, storage providers must lock FIL as pledge collateral during block reward mining. Pledge collateral is based on projected block rewards a miner could earn. Collateral and all earned rewards are subject to slashing if the storage fails to meet reliability standards throughout a sector’s lifecycle.

Total supply

FIL’s maximum circulating supply is capped at 2 billion FIL. However, this maximum will never be reached, as a portion of FIL is permanently removed from circulation through gas fees, penalties, and other mechanisms.


What is Filecoin

This section offers a detailed overview of Filecoin for developers, serving as a go-to reference for their needs.

Introduction to Filecoin

Filecoin is a peer-to-peer network that enables reliable, decentralized file storage through built-in economic incentives and cryptographic proofs. Clients (users) pay any number of storage providers (data centers) to store their data; storage providers then submit cryptographic proofs daily as evidence that the data is still held. Storage providers lock a certain amount of FIL as collateral; should they repeatedly fail to provide a proof, their collateral is burned, serving as a strong deterrent against losing the data.

Anyone can join Filecoin as a client looking to store their data, or as a storage provider offering storage services. Storage availability and pricing aren’t controlled by any single entity; instead, Filecoin fosters an open market for file storage and retrieval accessible to all. Clients can review the history of each storage provider, along with their credentials and compliance record, before choosing to store their data with them.

Note that most Filecoin nodes are also IPFS protocol nodes. IPFS is an open, serverless hypermedia protocol that uses content addressing to provide permanent data references without depending on specific devices or cloud providers. A client who knows the content address (CID) of their file can retrieve it from any IPFS node (or Filecoin storage provider) that currently has a copy and is able to serve it. Given a CID, the CID Contact network indexer will locate the relevant file and provide routing details for it.

Historically, IPFS node operators offered pinning services to the community out of interest and often for free, meaning there was no financial incentive for the IPFS node operators to stay online or keep a given file for a long period of time. Filecoin solves this issue by introducing an incentive layer (clients pay storage providers for long term data center use) to ensure more reliable long term cold storage. Since most Filecoin nodes are also IPFS nodes, they can pin a hot copy of the given file to the IPFS node to allow the client to easily retrieve the file later.

Filecoin is used as a storage solution for a range of products, from Web3-native NFT storage and incentivized permanent storage to the archiving of traditional Web2 datasets. For instance, NFT.Storage leverages Filecoin for NFT content and metadata storage. Organizations such as the Shoah Foundation and the Internet Archive use Filecoin for content preservation and backup.

Filecoin is compatible with various data types, including audio and video files. This versatility allows Web3 platforms like Audius and Huddle01 to use Filecoin as a decentralized storage backend for music streaming and video conferencing.


Storage model

A storage model defines how data is stored within a system. This page covers the basic aspects of Filecoin’s storage model.

The Filecoin storage model consists of three main components:

  • Providers

  • Deals

  • Sectors

Providers

Providers offer storage and retrieval services to network users. There are two types of providers:

  • Storage Providers

  • Retrieval Providers

Storage providers

Storage providers, often called SPs, are responsible for storing files and data for clients on the network. They also provide cryptographic proofs to verify that data is stored securely. The majority of providers on the Filecoin network are SPs.

Retrieval providers

Retrieval providers, or RPs, specialize in delivering quick access to data rather than long-term storage. While many storage providers also offer retrieval services, stand-alone RPs are increasingly joining the network to enhance data accessibility.

Deals

In the Filecoin network, SPs and RPs offer storage or retrieval services to clients through deals. These deals are negotiated between two parties and outline terms such as data size, price, duration, and collateral.

The deal-making process initially occurs off-chain. Once both parties agree to the terms, the deal is published on-chain for network-wide visibility and validation.

Sectors

Sectors are the fundamental units of provable storage where storage providers securely store client data and generate PoSt (Proof of Spacetime) for the Filecoin network. Sectors come in standard sizes, typically 32 GiB or 64 GiB, and have a set lifespan that providers can extend before it expires.


Storage market

The storage market is the entry point where storage providers and clients negotiate and publish storage deals on-chain.

Deal making

The lifecycle of a deal within the storage market includes four distinct phases:

  • Discovery: The client identifies potential storage providers (SPs) and requests their prices.

  • Negotiation: After selecting an SP, both parties agree to the terms of the deal.

  • Publishing: The deal is published on-chain.

  • Handoff: The deal is added to a sector, where the SP can provide cryptographic proofs of data storage.

Filecoin Plus

Filecoin Plus aims to maximize useful storage on the Filecoin network by incentivizing the storage of meaningful and valuable data. It offers verified clients low-cost or free storage through a system called datacap, a storage quota that boosts incentives for storage providers.

Verified clients use datacap allocated by community-selected allocators to store data on the network. In exchange for storing verified deals, storage providers receive a 10x boost in storage power, which increases their block rewards as an incentive.

  • Datacap: A token allocated to verified clients to spend on storage deals, offering a 10x quality multiplier for deals.

  • Allocators: Community-selected entities responsible for verifying storage clients and allocating datacap tokens.

  • Verified Clients: Active participants with datacap allocations for their data storage needs.
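The 10x multiplier above can be sketched as follows. This illustrates only the headline power arithmetic; the actual on-chain calculation also weights by deal size and duration within a sector:

```python
# Quality-adjusted power sketch: verified (Filecoin Plus) deals earn a
# 10x multiplier over raw sector size. Illustrative only; not the
# actual actor code.

VERIFIED_MULTIPLIER = 10
GiB = 1024 ** 3

def quality_adjusted_power(raw_bytes: int, verified: bool) -> int:
    return raw_bytes * (VERIFIED_MULTIPLIER if verified else 1)
```

A 32 GiB sector filled with verified deals would count as 320 GiB of storage power under this sketch.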

Storage on-ramps

To simplify data storage on the Filecoin network, several tools offer streamlined integration of Filecoin and IPFS storage for applications or smart contracts.

These storage helpers provide libraries that abstract the Filecoin deal-making process into simple API calls. They also store data on IPFS for efficient and fast content retrieval.

Available storage helpers include:

  • lighthouse.storage: An SDK for builders, providing tools for storing data from dApps.

  • web3.storage: A user-friendly client for accessing decentralized protocols like IPFS and UCAN.

  • Akave: A modular L2 solution for decentralized data management, combining Filecoin storage with encryption and easy-to-use interfaces.

  • Storacha: A decentralized hot storage network for scalable, user-owned data with decentralized permissions, leveraging Filecoin.

  • Curio: A next-gen platform within the Filecoin ecosystem, streamlining storage provider operations.

  • boost.filecoin.io: A tool for storage providers to manage data onboarding and retrieval on the Filecoin network.


Serving retrievals

In this article, we will discuss the functions of storage providers in the Filecoin network, the role of the indexer, and the retrieval process for publicly available data.

The indexer

When a storage deal is originally made, the client can opt to make the data publicly discoverable. If this is the case, the storage provider must publish an advertisement of the storage deal to the Interplanetary Network Indexer (IPNI). IPNI maps a CID to a storage provider (SP). This mapping allows clients to query the IPNI to discover where content is on Filecoin.

The IPNI also tracks which data transfer protocols can be used to retrieve specific CIDs. Currently, Filecoin SPs can serve retrievals over Graphsync, Bitswap, and HTTP, depending on the SP's setup.

Retrieval process

If a client wants to retrieve publicly available data from the Filecoin network, then they generally follow this process.

Query the IPNI

Before the client can submit a retrieval deal to a storage provider, they first need to find which providers hold the data. To do this, the client sends a query to the Interplanetary Network Indexer.

Select a provider

Assuming the IPNI returns more than one storage provider, the client can select which provider they’d like to deal with. Here, they will also get additional details (if needed) based on the retrieval protocol they want to retrieve the content over.

Initiate retrieval

The client then attempts to retrieve the data from the SP over Bitswap, Graphsync, or HTTP. Note that currently, clients can only get full-piece retrievals using HTTP.

When attempting this retrieval deal using Graphsync, payment channels are used to pay FIL to the storage provider. These payment channels watch the data flow and pay the storage provider after each chunk of data is retrieved successfully.

Finalize the retrieval

Once the client has received the last chunk of data, the connection is closed.


Related projects

Filecoin is a highly modular project that is itself made out of many different protocols and tools. Many of these exist as their own projects, supported by Protocol Labs. Learn more about them below.

Libp2p

A modular network stack, libp2p enables you to run your network applications free from runtime and address services, independently of their location. Learn more at libp2p.io/.

IPLD

IPLD is the data model of the content-addressable web. It allows us to treat all hash-linked data structures as subsets of a unified information space, unifying all data models that link data with hashes as instances of IPLD. Learn more at ipld.io/.

IPFS

IPFS is a distributed system for storing and accessing files, websites, applications, and data. However, it does not have support for incentivization or guarantees of this distributed storage; Filecoin provides the incentive layer. Learn more at ipfs.tech/.

Multiformats

The Multiformats Project is a collection of protocols which aim to future-proof systems through self-describing format values that allow for interoperability and protocol agility. Learn more at multiformats.io/.

ProtoSchool

Interactive tutorials on decentralized web protocols, designed to introduce you to decentralized web concepts, protocols, and tools. Complete code challenges right in your web browser and track your progress as you go. Explore ProtoSchool’s tutorials on Filecoin at proto.school/.


Auxiliary services

As a storage provider, you can set your business apart from the rest by offering additional services to your customers. Many new use cases for the Filecoin network are emerging as new technologies are developed.

Saturn

One of these additional services is participation in the Saturn retrieval market. Saturn is a Web3 CDN ("content delivery network") that will launch in stages in 2023, and it aims to become the biggest Web3 CDN, and the biggest CDN overall. With the introduction of Saturn, data stored on Filecoin is no longer limited to archive or cold storage; it can also be cached in a CDN layer for fast retrieval. Data that needs to be available quickly can be stored on Filecoin and retrieved through Saturn. Saturn comes with two layers of caching, L1 and L2. L1 nodes typically run in data centers and require high availability and 10 Gbps minimum connectivity; L1 Saturn providers earn FIL by caching and serving data to clients. L2 nodes can be run via an app on desktop hardware.

FVM

Other new opportunities are emerging since the launch of FVM (Filecoin Virtual Machine) in March 2023. The FVM allows smart contracts to be executed on the Filecoin blockchain. The FVM is Ethereum-compatible (also called the FEVM) and allows for entire new use cases to be developed in the Filecoin ecosystem. Think of on-chain FIL lending as an example, but the opportunities are countless.

Bacalhau

A next step after the introduction of the FVM is Bacalhau, which will offer Compute over Data (COD). After the introduction of a compute layer on Filecoin, Bacalhau’s COD promises to run compute jobs over the data where the data resides, at the storage provider. Today, data scientists have to transfer their datasets to compute farms for their AI, ML, or other data-processing activities to run. Bacalhau will allow them to run compute on the data where it is located, thereby removing the expensive requirement to move data around. Storage providers will be able to offer, and be rewarded for, compute power to data scientists and other parties who want to execute COD.

Storage tiering

Another potential service to offer is storage tiers with various performance profiles. For example, storage providers can offer hot/online storage by keeping an additional copy of the unsealed data available for immediate retrieval, alongside the sealed copy stored on the Filecoin network.


The blockchain

This section covers the basic concepts surrounding the Filecoin blockchain.


Storage proving

Storage proving, known as Proof-of-Spacetime (“PoSt”), is the mechanism that the Filecoin blockchain uses to validate that storage providers are continuously providing the storage they claim. Storage providers earn block rewards each time they successfully answer a PoSt challenge.

Proving deadlines

As a storage provider, you must preserve the data for the duration of the deal, an on-chain agreement between a client and a storage provider. As of March 2023, deals must have a minimum duration of 180 days and a maximum duration of 540 days; the latter value was chosen to balance long deal length with cryptographic security. Storage providers must be able to continuously prove the availability and integrity of the data they are storing. Every storage sector of 32 GiB or 64 GiB is verified once in each 24-hour period, called a proving period. Every 24-hour proving period is broken down into a series of 30-minute, non-overlapping deadlines, giving 48 deadlines per day. Storage sectors are grouped in a partition and assigned to a proving deadline; all storage sectors in a given partition are always verified during the same deadline.
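The proving-period arithmetic above can be sketched as (an illustrative helper, not Lotus code):

```python
# A 24-hour proving period is split into 48 non-overlapping
# 30-minute deadlines. Illustrative sketch only.

PROVING_PERIOD_MIN = 24 * 60
DEADLINE_MIN = 30
NUM_DEADLINES = PROVING_PERIOD_MIN // DEADLINE_MIN  # 48

def deadline_index(minutes_into_period: int) -> int:
    """Which deadline window a given minute of the proving period falls in."""
    return (minutes_into_period % PROVING_PERIOD_MIN) // DEADLINE_MIN
```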

WindowPoSt

The cryptographic challenge for storage proving is called Window Proof-of-Spacetime (WindowPoSt). Storage providers have a deadline of 30 minutes to respond to this WindowPoSt challenge via a message on the blockchain containing a zk-SNARK proof of the verified sector. Failure to submit this proof within the 30 minute deadline, or failure to submit it at all, results in slashing. Slashing means a portion of the collateral will be forfeited to the f099 burn address and the storage power of the storage provider gets reduced. Slashing is a way to penalize storage providers who fail to meet the agreed upon standards of storage.


Skills

This section covers the technical skills and knowledge required to become a storage provider.


Architecture

This section covers the architectural components and processes that storage providers should be aware of when creating their infrastructure.


Social media

Filecoin is everywhere on the internet — and that includes social media. Find your favorite flavor here.

YouTube

The Filecoin YouTube channel is home to a wealth of information about the Filecoin project — everything from developer demos to recordings of mining community calls — so you can explore playlists and subscribe to ones that interest and inform you.

Blog

Explore the latest news, events and other happenings on the official Filecoin Blog.

Newsletter

Subscribe to the Filecoin newsletter for official project updates sent straight to your inbox.

Twitter

Get your Filecoin news in tweet-sized bites. Follow these accounts for the latest:

  • @Filecoin for news and other updates from the Filecoin project

  • @ProtoSchool for updates on ProtoSchool workshops and tutorials

WeChat

Follow FilecoinOfficial on WeChat for project updates and announcements in Chinese.


The FIL token

FIL is the cryptocurrency that powers the Filecoin network. This page explains what FIL is, how it can be used, and its denominations.

Uses

FIL plays a vital role in incentivizing users to participate in the Filecoin network and ensuring its smooth operation. Here are some ways in which FIL is used on the Filecoin network:

Network payments

When a user wants to store data on the Filecoin network, they pay in FIL to the storage providers who offer their storage space. The payment is made in advance, for a certain amount of time that the data will be stored on the network.

In addition, storage providers choose their own terms and payment mechanisms for providing storage and retrieval services so other options (such as fiat payments) can be available.
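As a rough illustration of how such payments are commonly computed, assume a price quoted per GiB per epoch (a Filecoin epoch lasts 30 seconds). The function name and this simplified pricing model are assumptions for illustration, not a protocol API:

```python
# Back-of-the-envelope deal cost: price per GiB per epoch, multiplied
# by size and deal length. Illustrative sketch under the assumptions
# stated above.

EPOCH_SECONDS = 30
GiB = 1024 ** 3

def deal_cost_fil(price_per_gib_epoch: float, size_bytes: int,
                  duration_days: int) -> float:
    epochs = duration_days * 24 * 3600 // EPOCH_SECONDS  # 2880 per day
    return price_per_gib_epoch * (size_bytes / GiB) * epochs
```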

Blockchain rewards

Storage providers are also rewarded with FIL for providing their storage space and performing other useful tasks on the network. FIL is used to reward storage providers who validate and add new blocks to the Filecoin blockchain. Providers receive a block reward in FIL for each new block they add to the blockchain and also earn transaction fees in FIL for processing storage and retrieval transactions.

Governance

As members of the Filecoin community, FIL holders are encouraged to participate in the Filecoin governance process. They can do so by proposing, deliberating, designing, and/or contributing to consensus for network changes, alongside other stakeholders in the Filecoin community, including implementers, Core Devs, storage providers, and other ecosystem partners. Learn more about the Filecoin governance process.

Denominations

FIL, NanoFIL, and PicoFIL are all denominated in the same cryptocurrency unit, but they represent different levels of precision and granularity. For most users, FIL is the main unit of measurement and is used for most transactions and payments on the Filecoin network.

Much like how a US penny represents a fraction of a US dollar, there are many ways to represent value using Filecoin. This is because some actions on the Filecoin network require substantially less value than one whole FIL. The different denominations of FIL you may see referenced across the ecosystem are:

  • FIL: the base unit

  • milliFIL: 10^-3 FIL

  • microFIL: 10^-6 FIL

  • nanoFIL: 10^-9 FIL

  • picoFIL: 10^-12 FIL

  • femtoFIL: 10^-15 FIL

  • attoFIL: 10^-18 FIL
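Because attoFIL (10^-18 FIL) is the smallest denomination, on-chain token amounts are tracked as integer attoFIL, which avoids floating-point rounding. A sketch of a safe conversion (the helper name is illustrative):

```python
from decimal import Decimal

# Convert human-readable FIL amounts to integer attoFIL.
# attoFIL = 10^-18 FIL; nanoFIL = 10^-9 FIL; picoFIL = 10^-12 FIL.

ATTO_PER_FIL = 10 ** 18

def fil_to_atto(fil: str) -> int:
    """Convert a decimal FIL string to integer attoFIL, exactly."""
    return int(Decimal(fil) * ATTO_PER_FIL)
```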

Drand

Drand, pronounced dee-rand, is a distributed randomness beacon daemon written in Golang.

This page covers how Drand is used within the Filecoin network. For more information on Drand generally, see the Drand website.

Randomness outputs

By polling the appropriate endpoint, a Filecoin node will get back a Drand value formatted as follows:

  • signature: the threshold BLS signature on the previous signature value and the current round number.

  • previous_signature: the threshold BLS signature from the previous Drand round.

  • round: the index of randomness in the sequence of all random values produced by this Drand network.

The message signed is the concatenation of the round number treated as a uint64 and the previous signature. At the moment, Drand uses BLS signatures on the BLS12-381 curve with the latest v7 RFC of hash-to-curve, and the signature is made over G1.
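A sketch of constructing that signed message follows. The field order and encoding shown (previous signature first, then the round as an 8-byte big-endian uint64, hashed with SHA-256) are assumptions for illustration; consult the Drand specification for the authoritative layout:

```python
import hashlib

def drand_message(round_number: int, previous_signature: bytes) -> bytes:
    # Assumed layout: previous signature || round as big-endian uint64,
    # hashed with SHA-256 before BLS signing. Verify against the Drand
    # spec before relying on this.
    payload = previous_signature + round_number.to_bytes(8, "big")
    return hashlib.sha256(payload).digest()
```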

Polling the network

Filecoin nodes fetch the Drand entry from the distribution network of the selected Drand network.

Drand distributes randomness using multiple distribution channels such as HTTP servers, S3 buckets, gossiping, etc. Simply put, the Drand nodes themselves will not be directly accessible by consumers; rather, highly-available relays will be set up to serve Drand values over these distribution channels.

On initialization, Filecoin initializes a Drand client with chain info that contains the following information:

  • Period: the period of time between each Drand randomness generation.

  • GenesisTime: the time at which the first round in the Drand randomness chain is created.

  • PublicKey: the public key to verify randomness.

  • GenesisSeed: the seed that has been used for creating the first randomness.

It is possible to simply store the hash of this chain info and retrieve the contents from the Drand distribution network via the /info endpoint.
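Assuming rounds are numbered from 1 and produced every Period seconds starting at GenesisTime, the chain info above is enough to map a wall-clock time to a Drand round. An illustrative sketch:

```python
# Map a Unix timestamp to the Drand round that should be current at
# that time, given the chain's genesis time and period (both seconds).
# Round numbering from 1 is an assumption of this sketch.

def round_at(now: int, genesis_time: int, period: int) -> int:
    if now < genesis_time:
        return 0  # before the chain started
    return (now - genesis_time) // period + 1
```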

Thereafter, the Filecoin client can call Drand’s endpoints:

  • /public/latest to get the latest randomness value produced by the beacon.

  • /public/<round> to get the randomness value produced by the beacon at a given round.
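A hypothetical helper that builds these endpoint paths against a relay (the base URL shown is used only as an example; which relay to use is deployment-specific):

```python
# Build Drand HTTP API paths as described above. The relay base URL
# is an example assumption, not a required endpoint.

BASE = "https://api.drand.sh"

def latest_url() -> str:
    return f"{BASE}/public/latest"

def round_url(round_number: int) -> str:
    return f"{BASE}/public/{round_number}"
```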

Using Drand

Drand is used as a randomness beacon for leader election in Filecoin. While Drand returns multiple values with every call to the beacon (see above), Filecoin blocks need only store a subset of these in order to track a full Drand chain. This information can then be mixed with on-chain data for use in Filecoin.

Edge cases and outages

Any Drand beacon outage will effectively halt Filecoin block production. Given that new randomness is not produced, Filecoin miners cannot generate new blocks. Specifically, any call to the Drand network for a new randomness entry during an outage should be blocked in Filecoin.

After a beacon downtime, Drand nodes will work to quickly catch up to the current round. In this way, the above time-to-round mapping in Drand used by Filecoin remains invariant after this catch-up following downtime.

While Filecoin miners were not able to mine during the Drand outage, they will quickly be able to run leader election thereafter, given a rapid production of Drand values. We call this a catch-up period.

During the catch-up period, Filecoin nodes will backdate their blocks in order to continue using the same time-to-round mapping to determine which Drand round should be integrated according to the time. Miners can then choose to publish their null blocks for the outage period, including the appropriate Drand entries throughout the blocks, per the time-to-round mapping. Or, more likely, they will try to craft valid blocks that might have been created during the outage.

Based on the level of decentralization of the Filecoin network, we expect to see varying levels of miner collaboration during this period. This is because there are two incentives at play: trying to mine valid blocks during the outage to collect block rewards and not falling behind a heavier chain being mined by a majority of miners who may or may not have ignored a portion of these blocks.

In any event, a heavier chain will emerge after the catch-up period and mining can resume as normal.

Networks

The Filecoin network has several networks for testing, staging, and production purposes. This page provides information on available networks.

Mainnet

Mainnet is the live production network that connects all nodes on the Filecoin network. It operates continuously without resets.

Testnets

Test networks, or testnets, are versions of the Filecoin network that simulate various aspects of the mainnet. They are intended for testing and should not be used for production applications or services.

Calibration

The Calibration testnet offers the closest simulation of the mainnet. It provides realistic sealing performance and hardware requirements due to the use of finalized proofs and parameters, allowing prospective storage providers to test their setups. Storage clients can also store and retrieve real data on this network, participating in deal-making workflows and testing storage/retrieval functionalities. The Calibration testnet uses the same sector size as the mainnet.

Consensus

In the Filecoin blockchain, network consensus is achieved using the Expected Consensus (EC) algorithm, a secret, fair, and verifiable consensus protocol used by the network to agree on the chain state.

Overview

In the Filecoin blockchain, network consensus is achieved using the Expected Consensus (EC) algorithm, a probabilistic, Byzantine fault-tolerant consensus protocol. At a high level, EC achieves consensus by running a secret, fair, and verifiable leader election at every epoch, where a set number of participants may become eligible to submit a block to the chain based on fair and verifiable criteria.

Properties

Expected Consensus (EC) has the following properties:

  • Each epoch has potentially multiple elected leaders who may propose a block.

  • A winner is selected randomly from a set of network participants weighted according to the respective storage power they contribute to the Filecoin network.

  • All blocks proposed are grouped together in a tipset, from which the final chain is selected.

  • A block producer can be verified by any participant in the network.

  • The identity of a block producer is anonymous until they release their block to the network.

Steps

In summary, EC involves the following steps at each epoch:

  1. A storage provider checks to see if they are elected to propose a block by generating an election proof.

  2. Zero, one, or multiple storage providers may be elected to propose a block. This does not mean that an elected participant is guaranteed to be able to submit a block. In the case where:

    • No storage providers are elected to propose a block in a given epoch; a new election is run in the next epoch to ensure that the network remains live.

    • One or more storage providers are elected to propose a block in a given epoch; each must generate a WinningPoSt proof-of-storage to be eligible to actually submit a block.

  3. Each elected potential block producer generates a storage proof (WinningPoSt) for a randomly selected sector within a short window of time. Potential block producers that fail this step are not eligible to produce a block. In this step, the following could occur:

    • All potential block producers fail WinningPoSt, in which case EC returns to step 1 (described above).

    • One or more potential block producers pass WinningPoSt, which means they are eligible to submit their block to the epoch's tipset.

  4. Blocks generated by block producers are grouped into a tipset.

  5. The tipset that reflects the biggest amount of committed storage on the network is selected.

  6. Using the selected tipset, the chain state is propagated.

  7. EC returns to step 1 in the next epoch.
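The election loop above can be sketched as a toy simulation. This is illustrative Python only: the names (`run_epoch`, `winning_post_ok`) are hypothetical, and the coin-flip election stands in for the real VRF-based election proof and drand randomness.

```python
import random

def run_epoch(providers, total_power):
    """One simplified Expected Consensus epoch (toy model)."""
    tipset = []
    for p in providers:
        # Steps 1-2: election chance is proportional to the provider's
        # share of total storage power (simplified to a coin flip here).
        if random.random() >= p["power"] / total_power:
            continue
        # Step 3: an elected provider must produce a WinningPoSt proof
        # for a randomly chosen sector; failing it forfeits the win.
        if not p["winning_post_ok"]:
            continue
        tipset.append({"producer": p["id"]})
    # Steps 4-6: all valid blocks are grouped into the epoch's tipset;
    # the heaviest chain is then selected and the state propagated.
    return tipset
```

In this toy model, a provider holding all of the network power and passing WinningPoSt is elected every epoch, while one that fails WinningPoSt never produces a block.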

Forums and FIPs

Connect with the Filecoin community in discussion forums or on IRC. The Filecoin community is active and here to answer your questions in your channel of choice.

Discussion Forums

For shorter-lived discussions, our community chat is open to all on both Slack and Discord:

For long-lived discussions and for support, please use the discussion tab on GitHub instead of Slack. It’s easy for complex discussions to get lost in a sea of new messages on those chat platforms, and posting longer discussions and support requests on the forums helps future visitors, too.

Filecoin improvement proposals

Filecoin improvement proposals (FIPs) are design documents that propose changes and improvements to the Filecoin network, giving detailed specifications and their rationale, and allowing the community to document their consensus or dissent. All technical FIPs that are accepted are later reflected in the Filecoin Spec.

There are three types of FIPs:

  • Technical FIPs (FTP): protocol changes, standards, API changes. They can be core (consensus-related changes), networking (network protocol improvements), interface (API/RPC or language-level updates), or informational (updates to general guidelines or documentation).

  • Organizational FIPs (FOP): changes to processes, tools, or governance.

  • Recovery FIPs (FRP): emergency fixes requiring state changes (e.g., major bugs).

Typically, the FIP lifecycle looks something like this:

[ WIP ] -> [ DRAFT ] -> [ LAST CALL ] -> [ ACCEPTED ] -> [ FINAL ]

  1. WIP: A community member has an idea for a FIP, and begins discussing the idea publicly on the Filecoin Discord, in the Filecoin Slack channel for discussing FIPs, or in GitHub issues for the relevant repo.

  2. DRAFT: If there is a chance the FIP could be adopted, the author submits a draft for the FIP as a pull request in the FIPs repo.

  3. LAST CALL: This status allows the community to submit final changes to the draft.

  4. ACCEPTED: Once the FIP is voted on and accepted, the core engineers will work to implement it.

  5. FINAL: This status represents the current state-of-the-art, and it should only be updated to correct errors.

It is the authors' responsibility to request status updates for the FIP. A more robust explainer of the FIP process can be found in FIP001.

Quickstart guide

This page is a quick start guide for storage providers in the Filecoin ecosystem.

Explore the storage provider documentation

Get ready to dive into the valuable resources of the storage provider documentation. This comprehensive guide offers a wealth of information about the role of storage providers in the Filecoin ecosystem, including insights into the economic aspects. You’ll also gain knowledge about the software architecture, hardware infrastructure, and the necessary skills for success.

Gain insights into ROI and collateral’s role

To run a successful storage provider business, it’s crucial to understand the concept of Return on Investment (ROI) and the significance of collateral. By planning ahead and considering various factors, such as CAPEX, OPEX, network variables, and collateral requirements, you can make informed decisions that impact your business’s profitability and desired capacity.

Get to know the ecosystem

One of the truly enriching elements of the Filecoin ecosystem lies in its vibrant community. Meet the community on the Filecoin Slack. Within this dynamic network, you’ll find a treasure trove of individuals who are eager to share their experiences and offer invaluable solutions to the challenges they’ve encountered along the way. Whether it’s navigating the intricacies of storage provider operations or overcoming hurdles on the blockchain, this supportive community stands ready to lend a helping hand. Embrace the spirit of collaboration and tap into this remarkable network.

Unleash the Power of Filecoin’s Reference Implementation

Get ready to dive into the heart of the Filecoin network with Lotus, the leading reference implementation. As the most widely used software stack for interacting with the blockchain and operating a storage provider setup, Lotus holds the key to unlocking a world of possibilities. Seamlessly navigate the intricacies of this powerful tool and leverage its capabilities to propel your journey forward.

Hands-on learning and exploration

It’s time to roll up your sleeves and embark on a hands-on adventure. With a multitude of options at your disposal, setting up a local devnet environment is the easiest and most exciting way to kickstart your Filecoin journey. Immerse yourself in the captivating world of sealing sectors and witness firsthand how this critical process works. Feel the thrill of experimentation as you delve deeper into the inner workings of this remarkable technology.

Transforming into a storage provider

Congratulations on taking the next leap in becoming a full-fledged storage provider! Now is the time to determine your starting capacity and architect a tailored solution to accommodate it. Equip yourself with the necessary hardware to kickstart your journey on the mainnet. Test your setup on the calibration testnet to fine-tune your skills and ensure seamless operations. Once you’re ready, brace yourself for the excitement of joining the mainnet.

Supercharge your mainnet experience

As you step into the vibrant realm of the mainnet, it’s time to supercharge your storage provider capabilities with Boost. Discover the immense potential of this powerful software designed to help you secure storage deals and offer efficient data retrieval services to data owners. Unleash the full force of Boost and witness the transformative impact it has on your Filecoin journey.

Discover the world of verified deals and tools

Within the Filecoin network there are many programs and tools designed to enhance your storage provider setup. Uncover the power of these tools as you dive into the documentation, gaining valuable insights and expanding your knowledge. Make the best use of data programs on your path to success.

Saturn

Filecoin Saturn is an open-source, community-run Content Delivery Network (CDN) built on Filecoin.

Saturn is a Web3 CDN in Filecoin’s retrieval market. On one side of the network, websites buy fast, low-cost content delivery. On the other side, Saturn node operators earn Filecoin by fulfilling requests.

Saturn is trustless, permissionless, and inclusive. Anyone can run Saturn software, contribute to the network, and earn Filecoin.

Content on Saturn is IPFS content-addressed. Every piece of content is immutable, and every response is verifiable.

Incentives unite, align, and grow the network. Node operators earn Filecoin for accelerating web content, and websites get faster content delivery for less.

Find out more over at saturn.tech.

The Filecoin project

Curious about how it all got started, or where we’re headed? Learn about the history, current state, and future trajectory of the Filecoin project here.

Roadmap

The Filecoin Community Roadmap is updated quarterly. It provides insight into the strategic development of the network and offers pathways for community members to learn more about ongoing work and connect directly with project teams.

Research

Learn about the ongoing cryptography research and design efforts that underpin the Filecoin protocol on the Filecoin Research website. The CryptoLab at Protocol Labs also actively researches improvements.

Code of conduct

The Filecoin community believes that our mission is best served in an environment that is friendly, safe, and accepting, and free from intimidation or harassment. To that end, we ask that everyone involved in Filecoin read and respect our code of conduct.

Committed capacity

This page discusses participating in the network by providing Committed Capacity (CC) sectors: storage sectors filled with random data instead of customer data.

One way of participating in the Filecoin network is by providing Committed Capacity (CC) sectors to the network. CC sectors do not contain customer data but are filled with random data when they are created. The goal for the Filecoin network is to have a distributed network of verifiers and contributors in order to run and maintain a healthy blockchain. Any public blockchain network requires enough participants in its consensus mechanism to guarantee that transactions being logged onto the blockchain are legitimate. Because Filecoin’s consensus mechanism is based on Proof-of-Storage, the network needs sufficient storage providers that pledge capacity, and thus take part in the consensus process. This is done via Committed Capacity sectors of 32 GiB or 64 GiB. For more detail, see the architectural overview.

Availability requirements

Because the Filecoin network needs consistency, meaning all data stored remains available and unaltered, a storage provider is required to keep their capacity online and be able to demonstrate to the network that the capacity is online. WindowPoSt verification is the process that checks that the provided capacity remains online. If it does not, the storage provider is penalized (or slashed) against the collateral FIL they provided for that capacity, and their storage power is reduced. This means an immediate reduction in capital (lost FIL), but also a reduction in future earnings, because block rewards are correlated to storage power. See Slashing, Storage Proving, and FIL Collateral for more information.

What’s next?

Providing committed capacity is the easiest way to get started as a storage provider, but the economics are very dependent on the price of FIL. If the price of FIL is low, it can be unprofitable to provide only committed capacity. The optimal FIL-price your business needs to be profitable will depend on your setup. Profitability can be increased by utilizing Filecoin Plus, along with extra services you can charge for.

Note that as of FIP008 (Add miner batched sector pre-commit method), storage providers can now batch pre-commit up to 256 sectors at once. This change reduces gas costs, requires fewer reads/writes to the blockchain, and lowers transaction congestion. Note that if anything in the batch is invalid, nothing in the batch is pre-committed.

Slashing

Slashing penalizes storage providers that either fail to provide reliable uptime or act maliciously against the network. This page discusses what slashing means to storage providers.

Storage fault slashing

This term encompasses a broad set of penalties which are to be paid by storage providers if they fail to provide sector reliability or decide to voluntarily exit the network. These include:

  • Fault fees are incurred for each day a storage provider’s sector is offline (fails to submit Proofs-of-Spacetime to the chain). Fault fees continue until the associated wallet is empty and the storage provider is removed from the network. In the case of a faulted sector, there will be an additional sector penalty added immediately following the fault fee. Sector fault fees are equal to 3.51 days of expected block rewards.

  • Sector penalties are incurred for a faulted sector that was not declared faulted before a WindowPoSt check occurs. Once the fault is detected, the sector incurs a sector penalty and then continues to pay the fault fee.

  • Termination fees are incurred when a sector is voluntarily or involuntarily terminated and is removed from the network.

  • Consensus fault slashing is a penalty incurred when committing consensus faults. This penalty is applied to storage providers that have acted maliciously against the network’s consensus functionality.
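As a back-of-the-envelope sketch of the fault fee rule above (the 0.02 FIL/day figure is a made-up example, and `sector_fault_fee` is a hypothetical helper; actual expected rewards vary with network conditions):

```python
# Hypothetical helper: a sector fault fee equals 3.51 days of that
# sector's expected block rewards.
def sector_fault_fee(expected_daily_reward_fil: float) -> float:
    return 3.51 * expected_daily_reward_fil

# A sector expected to earn 0.02 FIL/day would owe roughly 0.07 FIL.
fee = sector_fault_fee(0.02)
```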

Honest Storage Providers

Note that occasionally, storage providers may experience operational issues, such as downtime or bugs, that cause them to miss their delivery of a WindowPoSt. To ensure reliability and to encourage smaller miners to join the network, there are built-in exceptions to the fault fees:

  • If the Storage Provider has a history of acting honestly, there is no penalty in the current proving period for a faulted sector in the case of a missed WindowPoSt.

  • There are no fees if the sector is successfully recovered in a later proving period.

  • The fault fee applies only to sectors that were already faulty, meaning they faulted in a previous proving period or are marked for recovery. Penalties are never applied to sectors that become faulty in the current proving period.

To learn more about fault fee exceptions, review FIP002: Free Faults on Newly Faulted Sectors of a Missed WindowPoSt.

Storage onramps

Storage on-ramps and helpers are APIs and services that abstract Filecoin dealmaking into simple, streamlined API calls.

Developers use web UIs, APIs, or libraries to send data to storage onramps. Behind the scenes, storage onramps receive the data and handle the underlying processes to store it in a reliable way, making deals with Filecoin storage providers.

The available storage onramps are:

  • Lighthouse “offers permanent, decentralized storage powered by Filecoin. Secure, scalable, and ideal for individuals, developers, and enterprises.”

  • Akave is “revolutionizing data management with a decentralized, modular solution that combines the robust storage of Filecoin with cutting-edge encryption and easy-to-use interfaces.”

  • Storacha is an open hot storage network that scales IPFS and Filecoin. Upload any data and Storacha will ensure it ends up on a decentralized set of IPFS and Filecoin storage providers. The Storacha docs detail the JavaScript and Go API libraries, and there is a no-code web uploader available as well.

  • Singularity “facilitates onboarding of large quantities of data (PB-scale) to the Filecoin network in an efficient, secure, and flexible way.”

  • CID Gravity is a “seamless gateway to the decentralized web”, allowing you to drag and drop files through an easy-to-use UI that uploads files to Filecoin and IPFS.

  • Ramo is “a network coordinating people, hardware and capital to build a more open and resilient internet infrastructure for everyone.”

Sealing-as-a-service

This page describes how sealing-as-a-service works, and the benefits to storage providers.

Storage providers with hardware cost or availability constraints can use sealing-as-a-service, in which another provider performs sector sealing on the storage provider’s behalf.

Overview

In a traditional setup, a storage provider needs high-end hardware to build out a sealing pipeline. Storage providers with hardware cost or availability constraints can use Sealing-as-a-Service providers, where another provider performs sector sealing on the storage provider’s behalf. In this model, the following occurs:

  1. The storage provider provides the data to the sealer.

  2. The sealer seals the data into sectors.

  3. The sealer returns the sealed sectors in exchange for a service cost.

Benefits

Sealing-as-a-service provides multiple benefits for storage providers:

  • Available storage can be filled faster, thereby maximizing block rewards, without investing in a complex, expensive sealing pipeline.

  • Bigger deals can be onboarded, as Sealing-as-a-Service essentially offers a burst capability in your sealing capacity. Thus, storage providers can take on larger deals without worrying about sealing time and not meeting client expectations.

  • Storage capacity on the Filecoin network can be expanded without investing in a larger sealing pipeline.

Other solutions are possible where the sealing partner seals committed capacity (CC) sectors for you, which you in turn upgrade to data sectors via Snap Deals.

See the following video from a Sealing-as-a-Service provider about their offering:

Network

This page covers topics related to internet bandwidth requirements, LAN bandwidth considerations, the use of VLANs for network traffic separation, network redundancy measures, and common topologies.

Internet bandwidth

The amount of internet bandwidth required for a network largely depends on the size of the organization and customer expectations. A bandwidth between 1 Gbps and 10 Gbps is generally sufficient for most organizations, but the specific requirements should be determined based on the expected traffic. A minimum bandwidth of 10 Gbps is recommended for setups that include a Saturn node. Saturn requires a high-speed connection to handle large amounts of data.

LAN bandwidth

The bandwidth between different components of a network is also important, especially when transferring data between servers. The internal connectivity between servers should be at least 10 Gbps to ensure that planned sealing capacity is not limited by network performance. It is important to ensure that the servers and switches are capable of delivering the required throughput, and that firewalls are not the bottleneck for this throughput.

VLANs

Virtual Local Area Networks (VLANs) are commonly used to separate network traffic and enhance security. However, if firewall rules are implemented between VLANs, the firewall can become the bottleneck. To prevent this, it is recommended to keep all sealing workers, Lotus miners, and storage systems in the same VLAN. This allows for data access and transfer without involving routing and firewalls, thus improving network performance.

Redundancy

Network redundancy is crucial to prevent downtime and ensure uninterrupted operations. By implementing redundancy, individual networking components can fail without disrupting the entire network. Common industry standards for network redundancy include NIC (network interface card) bonding, LACP (Link Aggregation Control Protocol), or MCLAG (Multi-Chassis Link Aggregation Group).

Common topologies

Depending on the size of the network, different network topologies may be used to optimize performance and scalability. For larger networks, a spine-leaf architecture may be used, while smaller networks may use a simple two-tier architecture.

Spine-leaf architectures provide predictable latency and linear scalability by having multiple L2 leaf switches that connect to the spine switches. On the other hand, smaller networks can be set up with redundant L3 switches or a collapsed spine/leaf design that connect to redundant routers/firewalls.

It is important to determine the appropriate topology based on the specific needs of the organization.

Charging for data

This page covers how storage providers can charge for data on the Filecoin network.

Charging for data stored on your storage provider network is an essential aspect of running a sustainable business. While block rewards from the network can provide a source of income, they are highly dependent on the volatility of the price of FIL, and cannot be relied on as the sole revenue stream.

To build a successful business, it is crucial to develop a pricing strategy that is competitive, yet profitable. This will help you attract and retain customers, as well as ensure that your business succeeds in the long term. While some programs may require storage providers to accept deals for free, or bid in auctions to get a deal, it is generally advisable to charge customers for most client deals.

When developing your pricing strategy, it is important to consider the cost of sales associated with acquiring new customers. This cost consideration should include expenses related to business development, marketing, and sales, which you should incorporate into your business’ return-on-investment (ROI) calculation.

In addition to sales costs, other factors contribute to your business’ total cost of ownership. These include expenses related to backups of your setup and data, providing an access layer to ingest data and for retrievals, preparing the data when necessary, and more. Investigating these costs is essential to ensure your pricing is competitive, yet profitable.

By charging for data stored on your network, you can create a sustainable business model that allows you to invest in hardware and FIL as collateral, as well as grow your business over time. This requires skilled people capable of running a business at scale and interacting with investors, venture capitalists, and banks to secure the necessary funding for growth.

Snap deals

Snap Deals are a way to convert Committed Capacity sectors (that store no real data) into data sectors to be used for storing actual data and potentially Filecoin Plus data.

Instead of destroying a previously sealed sector and recreating a new sector that needs to be sealed, Snap Deals allow data to be ingested into CC-sectors without the requirement of re-sealing the sector.

Why would you do snap deals?

There are two main reasons why a storage provider could be doing Snap Deals, also known as “snapping up their sectors” in the Filecoin community:

  • The first reason is that the 10x storage power on the same volume of data stored is a strong incentive to upgrade to verified deals for those storage providers who started out on CC-sectors and wish to upgrade to verified deals with Filecoin Plus.

  • The second reason applies to storage providers who decide to start sealing CC-sectors and only later fill them with verified deals. When you start as a storage provider or when you expand your storage capacity, it might be a good idea to fill your capacity with CC-sectors in the absence of verified deals. Not only do you start earning block rewards over that capacity, but more importantly, you can plan the sealing throughput and balance your load over the available hardware. If your sealing rate is 3 TiB/day, it makes no sense to feed 5 TiB/day into the pipeline. This creates congestion and possibly negative performance. If you are sealing 3 TiB/day for 33 days in a row, you end up with 99 TiB of sealed sectors that were sealed evenly and consistently. If you then take on a 99 TiB verified deal (accounting for 1 PiB QAP), the only thing required is to snap up the sectors.

Snapping up sectors with snap deals puts a lot less stress on the storage provider’s infrastructure. The only task that is executed from the sealing pipeline is the replica-update and prove-replica-update phase, which is similar to the PC2 process. The CPU-intensive PreCommit 1 phase is not required in this process.

Do not forget to provide the collateral funds when snapping up a verified deal. The same volume requires more collateral when it counts as Filecoin Plus data, namely 10x the collateral compared to raw storage power.
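The 10x quality-adjusted power (QAP) multiplier for verified deals mentioned above can be sketched as follows (illustrative Python; `quality_adjusted_power` is a hypothetical helper, not a network API):

```python
VERIFIED_DEAL_MULTIPLIER = 10  # Filecoin Plus verified deals count 10x

def quality_adjusted_power(raw_tib: float, verified: bool) -> float:
    """Quality-adjusted power for a volume of raw storage, in TiB."""
    return raw_tib * (VERIFIED_DEAL_MULTIPLIER if verified else 1)

# Snapping 99 TiB of CC sectors up to verified deals turns 99 TiB of
# raw power into 990 TiB (~1 PiB) of QAP; the required collateral
# scales up with it.
```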

PDP

DEPRECATED DEVELOPER TOOL This documentation refers to the legacy pdptool, which is intended only for low-level developer testing. It is not the recommended method for onboarding or interacting with PDP Storage Providers.

For current usage, including working with live PDP SPs and submitting real deals, please use the Synapse SDK and Synapse dApp Tutorial.

Lite-nodes

This section covers what lite-nodes are, and how developers can use them to interact with the Filecoin network.

Full-nodes

This section contains information on how to spin up a full Filecoin node using Lotus, and options for using remote nodes.

Denomination    Units per FIL

FIL             1
milliFIL        1,000
microFIL        1,000,000
nanoFIL         1,000,000,000
picoFIL         1,000,000,000,000
femtoFIL        1,000,000,000,000,000
attoFIL         1,000,000,000,000,000,000
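The table above maps straight onto integer arithmetic, since on-chain balances are denominated in attoFIL. A minimal conversion sketch (the helper name is my own):

```python
from decimal import Decimal

# Units of each denomination per 1 FIL, from the table above.
UNITS_PER_FIL = {
    "FIL": 1,
    "milliFIL": 10**3,
    "microFIL": 10**6,
    "nanoFIL": 10**9,
    "picoFIL": 10**12,
    "femtoFIL": 10**15,
    "attoFIL": 10**18,
}

def to_attofil(amount_fil: str) -> int:
    """Convert a FIL amount (as a string, to avoid float error) to attoFIL."""
    return int(Decimal(amount_fil) * UNITS_PER_FIL["attoFIL"])
```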


Blockchain

A blockchain is a distributed database shared among nodes in a computer network. This page covers the design and functions of the Filecoin blockchain.

Blockchain

Tipsets

A tipset is a set of blocks with the same height, allowing multiple storage providers to produce blocks in each epoch, increasing network throughput. The Filecoin blockchain consists of a chain of tipsets rather than individual blocks. Each tipset is assigned a weight, enabling the consensus protocol to guide nodes to build on the heaviest chain and preventing interference from nodes attempting to produce invalid blocks.

Actors

Actors are ‘objects’ within the Filecoin network, each with a state and a set of methods for interaction; they pass messages to each other and ensure the system operates appropriately.

Built-in actors

Several built-in system actors power the Filecoin network as a decentralized storage network:

  • Init actor: Initializes new actors and records the network name.

  • Cron actor: Scheduler that runs critical functions at every epoch.

  • Account actor: Manages user accounts (non-singleton).

  • Reward actor: Manages block rewards and token vesting (singleton).

  • Storage miner actor: Manages storage mining operations and validates storage proofs.

  • Storage power actor: Tracks storage power allocation for each provider.

  • Storage market actor: Manages storage deals.

  • Multisig actor: Handles Filecoin multi-signature wallet operations.

  • Payment channel actor: Sets up and settles payment channel funds.

  • Datacap actor: Manages datacap tokens.

  • Verified registry actor: Manages verified clients.

  • Ethereum Address Manager (EAM) actor: Assigns Ethereum-compatible addresses on Filecoin, including EVM smart contract addresses.

  • Ethereum Virtual Machine (EVM) account actor: Represents an external Ethereum identity backed by a secp256k1 key.

  • System actor: General system actor.

Nodes

Filecoin nodes are categorized by the services they provide to the storage network, including chain verifier nodes, client nodes, storage provider nodes, and retrieval provider nodes. All participating nodes must provide chain verification services.

Filecoin supports multiple protocol implementations to enhance security and resilience. Active implementations include:

  • Lotus

  • Venus

  • Forest

Addresses

In the Filecoin network, addresses identify actors in the Filecoin state. Each address encodes information about the corresponding actor, making it easy to use and resistant to errors. Filecoin has five address classes (f0 to f4; f410 addresses are managed within the f4 space). Mainnet addresses start with f, and testnet addresses start with t.

  • f0/t0: ID address for an actor in a human-readable format, such as f0123261 for a storage provider.

  • f1/t1: secp256k1 wallet address, generated from a secp256k1 public key.

  • f2/t2: Address assigned to an actor in a way that ensures stability across network forks.

  • f3/t3: BLS wallet address, generated from a BLS public key.

  • f4/t4: Address created and assigned to user-defined actors by customizable "address management" actors. This address can receive funds before an actor is deployed.

  • f410/t410: Address space managed by the Ethereum Address Manager (EAM) actor, allowing Ethereum-compatible addresses to interact seamlessly with the Filecoin network. Ethereum addresses can be cast as f410/t410 addresses and vice versa, enabling compatibility with existing Ethereum tools.

Consensus

Expected consensus

Expected Consensus (EC) is the probabilistic, Byzantine fault-tolerant consensus algorithm underlying Filecoin. EC conducts a leader election among storage providers each epoch to determine which providers submit a block. Similar to proof-of-stake, Filecoin’s leader election relies on proof-of-storage, meaning the probability of being elected depends on how much provable storage a miner contributes to the network, measured in something called "storage power".

The consensus process uses Drand as a randomness beacon for leader election, ensuring the leader election is secret, fair, and verifiable. Election participants and their storage power are drawn from a data structure called the "Power Table", which is continuously calculated and maintained by the storage power actor.

Ultimately, the EC process ends by gathering all valid blocks produced in an epoch into a tipset, applying a weighting function to select the heaviest chain, and adding the tipset to that chain.
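As a rough illustration of power-weighted election odds (a sketch under the assumptions of Filecoin's 30-second epochs and EC's target of about five leaders per epoch; the helper name is hypothetical):

```python
EXPECTED_LEADERS_PER_EPOCH = 5   # EC elects ~5 leaders per epoch on average
EPOCHS_PER_DAY = 2880            # 24 h / 30 s per epoch

def expected_wins_per_day(power_share: float) -> float:
    """Expected daily block wins for a given share of total storage power."""
    return power_share * EXPECTED_LEADERS_PER_EPOCH * EPOCHS_PER_DAY

# A provider holding 0.1% of network power expects about 14.4 wins/day.
daily_wins = expected_wins_per_day(0.001)
```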

Block production process

The block production process for each epoch is as follows:

  • Elect leaders from eligible miners.

  • Miners check if they are elected.

  • Elected miners generate WinningPoSt using randomness.

  • Miners build and propagate a block.

  • Verify the winning miner and election.

  • Select the heaviest chain and add the tipset to it.

Finality

EC enforces soft finality, where miners at round N reject blocks forking off before round N - F (where F is set to 900). This ensures finality without compromising chain availability.
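With F = 900 and Filecoin's 30-second epoch duration (the epoch length is a network parameter, not stated above), the soft-finality window works out to 7.5 hours:

```python
F = 900             # finality lookback, in epochs
EPOCH_SECONDS = 30  # Filecoin epoch duration

# 900 epochs * 30 s = 27,000 s = 7.5 hours
finality_hours = F * EPOCH_SECONDS / 3600
```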

Proofs

Filecoin operates on proof-of-storage, where miners offer storage space and provide proofs to verify data storage.

Proof of replication

With proof-of-replication (PoRep), storage providers prove they have created a unique copy of the client’s data for the network.

Proof of spacetime

Storage providers must continuously prove that they are storing clients' data throughout the entire duration of the storage deal. The proof-of-spacetime (PoSt) process includes two types of challenges:

  • Winning PoSt: Verifies that a storage provider holds a copy of the data at a specific point in time.

  • Window PoSt: Confirms that the data has been consistently stored over a defined period.

Slashing

If storage providers fail to maintain reliable uptime or act maliciously, they face penalties through a process called slashing. Filecoin enforces two types of slashing:

  • Storage Fault Slashing: Penalizes providers who fail to maintain healthy and reliable storage sectors.

  • Consensus Fault Slashing: Penalizes providers attempting to disrupt the security or availability of the consensus process.


FIL collateral

This page discusses the concept of collateral in Filecoin for storage providers.

As a storage provider on the network, you will have to create FIL wallets and add FIL to them. This FIL is used to send messages to the blockchain, but also as collateral. Providing storage capacity to the network requires you to provide FIL as collateral, which goes into a locked wallet on your Lotus instance. The Lotus documentation details the process of setting up and funding wallets for the initial setup. Filecoin uses upfront token collateral, as in proof-of-stake protocols, proportional to the storage hardware committed. This gets the best of both worlds to protect the network: attacking the network requires both acquiring and running the hardware, but it also requires acquiring large quantities of the token.

Types of collateral

To satisfy the varied collateral needs of storage providers in a minimally burdensome way, Filecoin includes three different collateral mechanisms:

  • Initial pledge collateral, an initial commitment of FIL that a miner must provide with each sector.

  • Block rewards as collateral, a mechanism to reduce the initial token commitment by vesting block rewards over time.

  • Storage deal provider collateral, which aligns incentives between storage provider and client and can allow storage providers to differentiate themselves in the market.

For more detailed information about how collateral requirements are calculated, see the miner collateral section in the Filecoin spec.

When a storage provider fails to answer the WindowPoSt challenges within the 30-minute deadline (see Storage Proving), takes storage offline, or breaks any storage deal rules, the provider is penalized against the provided collateral. This penalty is called slashing: a portion of the pledged collateral is forfeited from your locked or available rewards to the f099 address, and your storage power is reduced. The f099 address is the address where all burned FIL goes.

Commit Pledge

The amount of required collateral depends on the amount of storage pledged to the Filecoin network. The bigger volume you store, the more collateral is required. Additionally, Filecoin Plus uses a QAP multiplier to increase the collateral requirement. See Verified Deals with Filecoin Plus for more information.

The formula for the required collateral is as follows:

Collateral needed for X TiB = (Current Sector Initial Pledge) x (32) x (X TiB)

For instance, for 100 TiB at 0.20 FIL / 32 GiB sector, this means:

0.20 FIL x 32 x 100 = 640 FIL

The “Current Sector Initial Pledge" can be found on blockchain explorers like Filfox and on the Starboard dashboards.
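As a sketch, the worked example above can be reproduced with a small helper. The 0.20 FIL pledge value is illustrative only; look up the live "Current Sector Initial Pledge" on an explorer such as Filfox:

```python
def collateral_needed(pledge_per_sector_fil: float, capacity_tib: float) -> float:
    """Initial pledge collateral for `capacity_tib` TiB of 32 GiB sectors.

    One TiB holds 32 sectors of 32 GiB each, hence the factor of 32.
    `pledge_per_sector_fil` is the Current Sector Initial Pledge,
    which fluctuates and must be read from a chain explorer.
    """
    sectors_per_tib = 32
    return pledge_per_sector_fil * sectors_per_tib * capacity_tib

print(collateral_needed(0.20, 100))  # 640.0 FIL for 100 TiB at 0.20 FIL/sector
```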

Gas fees

Another cost factor in the network is gas. Storage providers not only pledge collateral for the capacity they announce on-chain; the network also burns FIL in the form of gas fees. Most on-chain activity involves some amount of gas. For storage providers, this is notably the case when committing sectors.

The gas fees fluctuate over time and can be followed on various websites like Filfox - Gas Statistics and Beryx - Gas Estimator.

FIL lending programs

The ecosystem does have FIL Lenders who can provide you FIL (with interest) to get you started, which you can pay back over time and with the help of earned block rewards. Every lender, though, will still require you to supply up to 20% of the required collateral. The Filecoin Virtual Machine, introduced in March 2023, enables the creation of new lending mechanisms via smart contracts.


Verified deals

This page discusses what verified deals are, and how they can impact storage providers.

Filecoin aims to be a decentralized storage network for humanity’s essential information. To achieve this, it’s crucial to add valuable data to the network. Filecoin Plus is a social trust program encouraging storage providers to store data in verified deals. A deal becomes verified after the data owner (client) completes a verification process, where community allocators assess the client’s use of Filecoin to determine its relevance and value to the Filecoin mission: storing and preserving humanity’s vital data. Allocators conduct due diligence by questioning clients and building reasonable confidence in their trustworthiness and use case.

DataCap

Allocators are responsible for allocating a resource called DataCap to clients with valuable storage use cases. DataCap is a non-exchangeable asset that allocators grant to data clients. DataCap is assigned to a wallet but cannot be sold or exchanged; the client can only spend it as part of making a verified deal with a storage provider. DataCap is a single-use credit, and a client’s DataCap balance is deducted based on the size of the data stored in verified deals.

Quality Adjusted Power (QAP)

Storage providers are incentivized by the Filecoin network to store verified deals. A 10x quality adjustment multiplier is set at the protocol level for storage offered for verified deals. A 100 TiB dataset will account for 1 PiB of Quality-Adjusted-Power (QAP). This means the storage provider has a larger share of storage power on the Filecoin network and will be more likely to get elected for WinningPoSt (see Storage proving). The storage provider will earn 10x more block rewards for the same capacity made available to the network, if that capacity is storing verified deals.
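The effect of the multiplier can be sketched as follows (the 10x factor is set at the protocol level; the helper function name is illustrative):

```python
# Quality-Adjusted Power (QAP) for verified (Filecoin Plus) deals.
VERIFIED_MULTIPLIER = 10  # set at the protocol level

def qap_tib(raw_tib, verified):
    """Quality-adjusted power in TiB for a given raw capacity."""
    return raw_tib * (VERIFIED_MULTIPLIER if verified else 1)

print(qap_tib(100, verified=True))   # 1000 TiB of QAP (~1 PiB)
print(qap_tib(100, verified=False))  # 100 TiB for regular deals or CC sectors
```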

When storing real customer data and not simply CC sectors, a whole new set of responsibilities arises. A storage provider must have the capacity to make deals, to be able to obtain a copy of the data, to prepare the data for the network, prove the data on-chain via sealing, and last but not least, have a means to offer retrieval of the data to the client when requested.

Responsibilities

As a storage provider, you play a crucial role in the ecosystem. Unlike miners in other blockchains, storage providers must do more than offer disk space to the network. Whether onboarding new customers to the network or storing copies of data from other storage providers for clients seeking redundancy, providing storage can involve:

  • Business development.

  • Sales and marketing efforts.

  • Hiring additional personnel.

  • Networking.

  • Relationship building.

Acquiring data copies requires systems and infrastructure capable of ingesting large volumes of data, sometimes up to a PiB. This necessitates significant internet bandwidth, with 10 Gbps as a practical minimum. For instance, transferring 1 PiB of data takes roughly 250 hours on a 10 Gbps connection, before protocol overhead. Many large storage providers therefore use up to 100 Gbps internet connections.
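A back-of-the-envelope check of the transfer-time estimate (raw bits on the wire, ignoring protocol overhead):

```python
# Time to ingest 1 PiB over a 10 Gbps link, ignoring protocol overhead.
PIB_BITS = 2 ** 50 * 8     # 1 PiB expressed in bits
LINK_BPS = 10 * 10 ** 9    # 10 Gbps in bits per second

hours = PIB_BITS / LINK_BPS / 3600
print(round(hours))  # 250 -- roughly ten and a half days of continuous transfer
```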

Data preparation, which involves packaging files and folders into CAR files, is time-consuming and requires expertise. You can delegate this task to a Data Preparer for a fee or assume the role yourself. Tools like Singularity simplify this process.

Once the data is sealed and you are proving your copies on-chain (i.e. on the blockchain), you will need to offer retrievals to your customer as well. This obviously requires network bandwidth once more, so you may need to charge for retrievals accordingly.

Tools

Tools and programs exist to support Filecoin Plus, but storage providers need to know how to operate this entire workflow. See Filecoin Plus Programs for more information on available programs. See Architecture for more information on the tooling and software components.

Rewards & penalties

With great power comes great responsibility, and that applies to storage power too: rewards on Fil+ deals are 10x, but so are the penalties. Because a sector of 32 GiB counts for 320 GiB of storage power (10x), both rewards and penalties are calculated on the QAP of 320 GiB. Filecoin Plus allows a storage provider to earn more block rewards on a verified deal compared to a regular data deal. The 10x multiplier on storage power that comes with a verified deal, however, also requires 10x collateral from the storage provider.

If the storage provider is then not capable of keeping the data and systems online and fails to submit the daily required proofs (WindowPoSt) for that data, the penalties (slashing) are also 10x higher than over regular data deals or CC sectors. Larger storage power means larger block rewards, larger collateral and larger slashing. The stakes are high - after all, we’re storing humanity’s most important information with Filecoin.


Sealing rate

The rate at which storage providers complete the sealing pipeline process is called the sealing rate or sealing capacity. This page describes considerations and advice regarding sealing rate.

Cost

When setting up their business, storage providers must determine how fast they should seal and, thus, how much sealing hardware they should buy. In other words, the cost is an important factor in determining a storage provider’s sealing rate. For example, suppose you have an initial storage capacity of 100 TiB, which would account for 1 PiB QAP if all the sectors contain Filecoin Plus verified deals. If your sealing capacity is 2.5 TiB per day, you will seal your full 100 TiB in 40 days. Is it worth investing in double the sealing capacity to fill your storage in just 20 days? It might be if you are planning to grow way beyond 100 TiB. This is an example of the sort of cost considerations storage providers must factor in when tuning the sealing rate.

Customer expectations

A common reason that a storage provider may want or need a faster sealing rate is customer expectations. When you take on a customer deal, there are often requirements to seal a dataset of a certain size within a certain time window. If you are a new storage provider with 2.5 TiB per day in sealing capacity, you cannot take on a deal of 2 PiB that needs to be on-chain in 1 month; at the very least, you could not take the deal using your own sealing infrastructure. Instead, you can use a Sealing-as-a-service provider, which can help you scale your sealing capabilities.

Balancing the sealing pipeline

When designing their sealing pipeline, storage providers should consider bottlenecks, the grouping of similar tasks, and scaling out.

Bottlenecks

The art of building a well-balanced sealing pipeline means having the bottlenecks where you expect them to be; any non-trivial piece of infrastructure always contains some kind of bottleneck. Ideally, you should design your systems so that the PC1 process is the bottleneck. By doing this, all other components are matched to the capacity required to perform PC1. With PC1 being the most resource-intensive task in the pipeline, it makes the most sense to architect a solution around this bottleneck. Knowing exactly how much sealing capacity you can get from your PC1 servers is vital so you can match the rest of your infrastructure to this throughput.

Assuming you obtain maximum hardware utilization from your PC1 server to seal 15 sectors in parallel, and PC1 takes 3 hours on your infrastructure, that would mean a sealing rate of 3.75 TiB per day. The calculation is described below:

15 sectors x 32 GiB / 3 hours PC1 runtime x 24 hours / 1024 = 3.75 TiB/day
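The same calculation as a sketch:

```python
# Daily sealing rate from parallel PC1 throughput.
sectors_in_parallel = 15   # sectors sealed concurrently on the PC1 server
sector_gib = 32            # sector size in GiB
pc1_hours = 3              # PC1 runtime per batch on this hardware

tib_per_day = sectors_in_parallel * sector_gib / pc1_hours * 24 / 1024
print(tib_per_day)  # 3.75 TiB per day
```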

Grouping similar tasks

While a Lotus worker can run all of the various tasks in the sealing pipeline, different storage provider configurations may split tasks between workers. Because some tasks are similar in behavior and others are insignificant in terms of resource consumption, it makes sense to group like-tasks together on the same worker.

A common grouping is AddPiece (AP) and PreCommit1 (PC1), because AP essentially prepares the data for the PC1 task. If you have dedicated hardware for PreCommit2 (PC2), your scratch content will move to that other server. If you group PC1 and PC2 on the same server, the sealing scratch is not copied over, but you will need a larger NVMe volume; eventually, you may run out of sealing scratch space and be unable to start sealing additional sectors. Consider very high bandwidth (40 Gbps or even 100 Gbps) between the servers that copy the sealing scratch between them.

As PC1 is CPU-bound and PC2 is GPU-bound, this is another good reason to separate those tasks into dedicated hardware, especially if you are planning to scale. Because PC2 is GPU-bound, it makes sense to have PC2, C1, and C2 collocated on the same worker.

Another rule of thumb is to have two PC2 workers for every PC1 worker in your setup. The WaitSeed phase occurs after PC2 and locks the scratch space for a sector until C1 and C2 complete. To keep sealing sectors in PC1, PC2 must have sufficient capacity. You can easily host multiple PC2 workers on a single server, though, ideally with separate GPUs.

You can run multiple lotus-workers on the same GPU by splitting out their tmp folders. Give the environment variable TMPDIR=<folder> to each lotus-worker.

Scaling out

A storage provider’s sealing capacity scales linearly with the hardware you add to it. For example, if your current setup allows for a sealing rate of 3 TiB per day, doubling the number of workers could bring you to 6 TiB per day. This requires that all components of your infrastructure are able to handle this additional throughput. Using Sealing-as-a-Service providers allows you to scale your sealing capacity without adding more hardware.


Sales

This content covers the business and commercial aspects of running a storage provider business.

Running a storage provider business is not just about having technical expertise and providing storage services. It is also about building and maintaining relationships with clients, negotiating contracts, and managing finances effectively. A storage provider must be able to communicate the value of their services to potential clients, as well as ensure that current clients are satisfied and receive the support they need.

Sales skills are important for storage providers to differentiate themselves from the competition, market their services effectively, and attract new customers. This requires an understanding of the market, the needs of potential clients, and how to tailor their services to meet those needs. Storage providers should also be able to identify opportunities for growth and expansion, and have a strategy in place for pursuing those opportunities.

In addition to sales skills, financial management skills are also crucial for running a successful storage provider business. This includes budgeting, forecasting, and managing cash flow effectively. It is important for storage providers to understand the costs associated with providing their services, and to price their services appropriately in order to generate revenue and cover their expenses.

Overall, sales skills are essential for storage providers to succeed in a competitive market. By combining technical expertise with strong business and commercial skills, storage providers can build a successful and sustainable business.

Business aspects

Running a storage provider business involves several business aspects that require careful attention to ensure long-term success. The first and most obvious aspect is investment in hardware and FIL as collateral. Hardware is the backbone of any storage provider’s business, and ensuring that you have the right equipment to provide reliable and high-performance storage is critical. Additionally, FIL is the primary currency used within the Filecoin network, and as a storage provider, you need to ensure that you have a sufficient amount of FIL as collateral to cover your storage deals.

As your business grows, the amount of hardware and FIL needed will increase, and it is important to have a clear plan for scaling your business. This involves not only investing in additional hardware and FIL but also managing operational costs such as electricity, cooling, and maintenance. Having a skilled business team that can manage and plan for these costs is essential.

Another important aspect of running a storage provider business is managing your relationships with investors, venture capitalists, and banks. These organizations can provide much-needed funding to help grow your business, but they will only invest if they are confident in your ability to manage your business effectively. This means having a strong business plan, a skilled team, and a clear strategy for growth.

In summary, the business aspects of running a storage provider business are critical to its success. This involves managing investments in hardware and FIL, planning for scalability and managing operational costs, and building strong relationships with investors, venture capitalists, and banks.

Commercial aspects

A storage provider needs to win storage deals to grow its network power and earn money. There are at least two ways to get storage deals, each requiring specific sales skills.

  • Obtaining data replicas from other storage providers and programs:

    Certain Filecoin data programs will specify the minimum amount of replicas needed to perform a deal. This means deals need to be stored across multiple storage providers in the ecosystem, so you can work with peers in the network to share clients’ data replicas.

    Working in the ecosystem and building connections with other storage providers takes time and effort, and is essentially a sales activity.

  • Onboarding your own customers:

    Acquiring your own customers, and bringing their data onto the Filecoin network, requires business development skills and people on your team who actively work with data owners (customers) to educate them about the advantages of decentralized storage.

    It takes additional effort to work with customers and their data, but it has the added advantage of letting you charge your customer for the data being stored. This means an additional revenue stream compared to only storing copies of deals and earning block rewards.


Security

This page covers the importance of security for Filecoin storage providers, including the need to mitigate potential security threats and implement appropriate security controls.

Being a Filecoin storage provider involves more than just storing customer data. You are also responsible for managing Filecoin wallets and running systems that require 24/7 uptime to avoid losing collateral. This means that if your network or systems are compromised due to a security intrusion, you risk experiencing downtime or even losing access to your systems and storage. Therefore, maintaining proper security is of utmost importance.

As a storage provider, you must have the necessary skills and expertise to identify and mitigate potential security threats. This includes understanding common attack vectors such as phishing, malware, and social engineering. On top of that, you must be proficient at implementing appropriate security controls such as firewalls, intrusion detection and prevention systems, and access controls.

Additionally, you must also be able to keep up with the latest security trends and technologies to ensure that your systems remain secure over time. This can involve ongoing training and education, as well as staying informed about new threats and vulnerabilities.

In summary, as a Filecoin storage provider, you have a responsibility to ensure the security of your customer’s data, your own systems, and the Filecoin network as a whole. This requires a thorough understanding of security best practices, ongoing training and education, and a commitment to staying informed about the latest security trends and technologies.

Network security

When it comes to network security, it is important to have a solid first line of defense in place. One effective strategy is to implement a redundant firewall setup that can filter incoming traffic as well as traffic between your VLANs.

A next-generation firewall (NGFW) can provide even more robust security by incorporating an intrusion prevention system (IPS) at the network perimeter. This can help to detect and prevent potential threats before they can do any harm.

However, it is important to note that implementing an NGFW with IPS enabled can also have an impact on your internet bandwidth. This is because the IPS inspects all incoming and outgoing traffic, which can slow down your network performance. As such, it is important to carefully consider your bandwidth requirements and plan accordingly.

System security

A second layer of defense is system security. There are multiple concepts that contribute to good system security:

  • Host-based firewall (UFW)

    Implement a host-based firewall on your systems, such as UFW on Ubuntu, which is iptables-based.

  • SELinux

    Linux comes with an additional security implementation called SELinux (Security-Enhanced Linux). Most system administrators do not implement it by default because it takes additional consideration and administration. Once activated, though, it offers the highest grade of process and user isolation available on Linux and contributes greatly to better security.

  • Not running as root

    It is a common mistake to run processes or containers as root. This is a serious security risk because any attacker who compromises a service running as root automatically obtains root privileges on that system.

    Lotus software does not require root privileges and therefore should run under a normal account (such as a service account, for instance called lotus) on the system.

  • Privilege escalation

    Since it is not required that Lotus runs as root, it is also not required for the service account to have privilege escalation. This means you should not allow the lotus account to use sudo.


Pre-requisites

This page provides details on Lotus installation prerequisites and supported platforms.

Before installing Lotus on your computer, you will need to meet the following prerequisites:

  • Operating system: Lotus is compatible with Windows, macOS, and various Linux distributions. Ensure that your operating system is compatible with the version of Lotus you intend to install.

  • CPU architecture: Lotus is compatible with 64-bit CPU architectures. Ensure that your computer has a 64-bit CPU.

  • Memory: Lotus requires at least 8GB of RAM to run efficiently.

  • Storage: Lotus requires several GB of free disk space for the blockchain data, as well as additional space for the Lotus binaries and other files.

  • Internet connection: Lotus requires a stable and high-speed internet connection to synchronize with the Filecoin network and communicate with other nodes.

  • Firewall and port forwarding: Ensure that your firewall settings and port forwarding rules allow incoming and outgoing traffic on the ports used by Lotus.

  • Command-line interface: Lotus is primarily operated through the command line interface. Ensure that you have a basic understanding of command-line usage and are comfortable working in a terminal environment.

Lotus documentation

To get more information, check out the official Lotus documentation.


Lotus

Lotus is a full-featured implementation of the Filecoin network, including the storage, retrieval, and mining functionalities. It is the reference implementation of the Filecoin protocol.

Interact with Lotus

There are many ways to interact with a Lotus node, depending on your specific needs and interests. By leveraging the powerful tools and APIs provided by Lotus, you can build custom applications, extend the functionality of the network, and contribute to the ongoing development of the Filecoin ecosystem.

Lotus API

Lotus provides a comprehensive API that allows developers to interact with the Filecoin network programmatically. The API includes methods for performing various operations such as storing and retrieving data, mining blocks, and transferring FIL tokens. You can use the API to build custom applications or integrate Filecoin functionality into your existing applications.

Lotus CLI

Lotus includes a powerful command-line interface that allows developers to interact with the Filecoin network from the terminal. You can use the CLI to perform various operations such as creating wallets, sending FIL transactions, and querying the network. The CLI is a quick and easy way to interact with the network and is particularly useful for testing and development purposes.

Custom plugin

Lotus is designed to be modular and extensible, allowing developers to create custom plugins that add new functionality to the network. You can develop plugins that provide custom storage or retrieval mechanisms, implement new consensus algorithms, or add support for new network protocols.

Source contributions

If you are interested in contributing to the development of Lotus itself, you can do so by contributing to the open-source codebase on GitHub. You can submit bug reports, suggest new features, or submit code changes to improve the functionality, security, or performance of the network.

Hosted nodes

Many hosting services provide access to Lotus nodes on the Filecoin network. Check out the RPC section for more information.

More information

For more information about Lotus, including advanced configuration, check out the Lotus documentation site lotus.filecoin.io.


Node providers

A node provider, sometimes specifically called a remote node provider, is a service that offers access to remote nodes on the Filecoin network.

Nodes are essential components of the Filecoin network. They maintain copies of the blockchain’s entire transaction history and verify the validity of new transactions and blocks. Running a node requires significant computational resources and storage capacity, which can be demanding for individual developers or teams.

Benefits

Remote node providers address this challenge by hosting and maintaining Filecoin nodes on behalf of their clients. By utilizing a remote node provider, developers can access blockchain data, submit transactions, and query the network without the need to synchronize the entire blockchain or manage the infrastructure themselves. This offers convenience and scalability, particularly for applications or services that require frequent and real-time access to blockchain data.

Remote node providers typically offer APIs or other communication protocols to facilitate seamless integration with their hosted nodes. These APIs allow developers to interact with the Filecoin network, retrieve data, and execute transactions programmatically.

Potential drawbacks

It’s important to note that when using a remote node provider, you are relying on the provider’s infrastructure and trustworthiness. Carefully choose a reliable and secure provider to ensure the integrity and privacy of your interactions with the blockchain network.

Node providers often limit the specifications of the nodes that they offer. Some developers may need particularly speedy nodes or nodes that contain the entire history of the blockchain (which can be incredibly expensive to store).

Node providers

There are multiple node providers for the Filecoin mainnet and each of the testnets. Check out the Networks section for details.


Venus

Venus is an open-source implementation of the Filecoin network, developed by the blockchain company IPFSForce. Venus is built in Go and is designed to be fast, efficient, and scalable.

Venus is a full-featured implementation of the Filecoin protocol, providing storage, retrieval, and mining functionalities. It is compatible with the Lotus implementation and can interoperate with other Filecoin nodes on the network.

One of the key features of Venus is its support for the Chinese language and market. Venus provides a Chinese language user interface and documentation, making it easier for Chinese users to participate in the Filecoin network.

Venus also includes several advanced features, such as automatic fault tolerance, intelligent storage allocation, and decentralized data distribution. These features are designed to improve the reliability and efficiency of the storage and retrieval processes on the Filecoin network.

Interact with Venus

Here are some of the most common ways to interact with Venus:

Venus API

Venus provides a comprehensive API that allows developers to interact with the Filecoin network programmatically. The API includes methods for performing various operations such as storing and retrieving data, mining blocks, and transferring FIL tokens. You can use the API to build custom applications or integrate Filecoin functionality into your existing applications.

Command-line interface

Venus includes a powerful command-line interface that allows developers to interact with the Filecoin network from the terminal. You can use the CLI to perform various operations such as creating wallets, sending FIL transactions, and querying the network. The CLI is a quick and easy way to interact with the network and is particularly useful for testing and development purposes.

Contribute to source

If you are interested in contributing to the development of Venus itself, you can do so by contributing to the open-source codebase on GitHub. You can submit bug reports, suggest new features, or submit code changes to improve the functionality, security, or performance of the network.

More information

For more information about Venus, including advanced configuration, see the Venus documentation site.


Industry

This content covers the importance of understanding and meeting specific requirements, certifications, and compliance standards when working with customers in certain industries.

When working with customers from certain industries, it is important to understand that specific requirements may apply. This can include certifications and compliance standards that are necessary to meet regulatory and legal obligations. Some examples of such standards include:

  • HIPAA: This standard applies to the handling of medical data and is essential for healthcare providers and organizations.

  • SOC2: This standard applies to service providers and is used to ensure that they have adequate controls in place to protect sensitive data.

  • PCI-DSS: This standard applies to businesses that handle payments and ensures that they have adequate security measures in place to protect payment card data.

  • SOX: This standard applies to businesses operating in the financial sector and is used to ensure that they have adequate controls in place to protect against fraud and financial misconduct.

  • GDPR: This standard applies to businesses that store personally identifiable information (PII) for European customers and is used to ensure that customer data is protected in accordance with European data privacy regulations.

  • Local regulations: These regulations can vary per country and are especially important to consider when doing business with government agencies.

  • ISO 27001: This is a security standard that provides a framework for establishing, implementing, maintaining, and continually improving an information security management system.

Having one or more of these certifications can demonstrate to customers that you have the necessary skills and expertise to handle their data and meet their regulatory requirements. This can be a valuable asset for businesses looking to work with customers in specific industries, as it can provide a competitive edge and help attract new customers. Therefore, it is important for storage providers to stay informed about industry-specific requirements and obtain relevant certifications as necessary.


Storage provider automation

1-click deployment automation for the storage provider stack allows new storage providers to quickly learn and deploy Lotus and Boost.

Find the automation code here!

Why this automation?

It can be rather overwhelming for new storage providers to learn everything about Filecoin and the various software components. In order to help with the learning process, we provide a fully automated installation of the Lotus and Boost stack. This automation should allow anyone to go on mainnet or the Calibration testnet in no time.

What are we automating?

This automation is still evolving and will receive more features and capabilities over time. In its current state, it lets you:

  • Install and configure Lotus Daemon to interact with the blockchain.

  • Initialize and configure Lotus Miner to join the network as a storage provider.

  • Install and configure Boost to accept storage deals from clients.

  • Install and configure Booster-HTTP to provide HTTP-based retrievals to clients.

Sealing configuration

The initial use case of this automation is to use sealing-as-a-service instead of doing your own sealing. As such, there is no Lotus Worker configured for the setup. It is possible to extend the setup with a remote worker. However, this Lotus Worker will require dedicated and custom hardware.

Composable deployment

One of the next features coming to this automation is a composable deployment method. Today Lotus Daemon, Lotus Miner, and Boost are all installed on a single machine. Many production setups, however, will split out those services into their own dedicated hardware. A composable deployment will allow you to deploy singular components on separate servers.

Prerequisites

Read the README carefully on the GitHub repo to make sure you have all the required prerequisites in place.


Basic setup

This page gives a very basic overview of how to install Lotus on your computer.

To install Lotus on your computer, follow these steps:

  1. First, you need to download the appropriate binary file for your operating system. Go to the official Lotus GitHub repository and select the latest release that is compatible with your system. You can choose from Windows, macOS, and Linux distributions.

  2. Once you have downloaded the binary file, extract the contents to a directory of your choice. For example, if you are using Linux, you can extract the contents to the /usr/local/bin directory by running the command:

sudo tar -C /usr/local/bin -xzf lotus-1.33.0-linux-amd64.tar.gz
  3. After extracting the contents, navigate to the lotus directory in your terminal. For example, if you extracted the contents to /usr/local/bin, you can navigate to the lotus directory by running the command:

cd /usr/local/bin/lotus-1.33.0
  4. Run the lotus binary file to start the Lotus daemon. You can do this by running the command:

./lotus daemon
  5. This will start the Lotus daemon, which will connect to the Filecoin network and start synchronizing with other nodes on the network.

  6. Optionally, you can also run the lotus-miner binary file if you want to participate in the Filecoin mining process. You can do this by running the command:

./lotus-miner run
  7. This will start the Lotus miner, which will use your computer’s computing power to mine new blocks on the Filecoin network.


Storage deals

This page discusses what storage deals are, and how storage providers can prepare for them.

The real purpose of Filecoin is to store humanity’s most important information. As a storage provider, that means accepting storage deals and storing deal sectors with real data in them. As before, those sectors are either 32 GiB or 64 GiB in size and require that the data be prepared as a content archive; that is, as a CAR file.

Data preparation

Data preparation, which includes packaging files into size-appropriate CAR files, is either done by a separate Data Preparer actor, or by storage providers acting as Data Preparers. The latter option is common for new storage providers, as they normally only have a few files that need preparation.

Data preparation can be done in various ways, depending on your use-case. Here are some valuable sources of information:

  • The data-prep-tools repo has a set of CLI tools for more specific use-cases.

  • Singularity is a command-line tool to put data into CAR files, create CIDs, and even initiate deals with storage providers.

See the following video for a demonstration of Singularity:

Deal Market

In order for storage providers to accept deals and set their deal terms, they need to install some market software, such as Boost. This component interacts with data owners, accepts deals if they meet the configured requirements, gets a copy of the prepared data (CAR files), and puts it through the sealing pipeline, after which it is in the state required to be proven to the network.

The storage provider can (and should) keep unsealed data copies available for retrieval requests from the client. It is the same software component, Boost, that is responsible for HTTP retrievals from the client and for setting the price for retrievals.

Many tools and platforms act as a deal making engine in front of Boost. This is the case for Spade for instance.


Implementations

Nodes are participants that contribute to the network’s operation and maintain its integrity. There are two major node implementations running on the Filecoin network today, with more in the works.

Lotus


Lotus is the reference implementation of the Filecoin protocol, developed by Protocol Labs, the organization behind Filecoin. Lotus is a full-featured implementation of the Filecoin network, including the storage, retrieval, and mining functionalities. It is written in Go and is designed to be modular, extensible, and highly scalable.

Learn more about Lotus

Venus


Venus is an open-source implementation of the Filecoin network, developed by IPFSForce. The project is built in Go and is designed to be fast, efficient, and scalable.

Venus is a full-featured implementation of the Filecoin protocol, providing storage, retrieval, and mining functionalities. It is compatible with the Lotus implementation and can interoperate with other Filecoin nodes on the network.

One of the key features of Venus is its support for the Chinese language and market. Venus provides a Chinese language user interface and documentation, making it easier for Chinese users to participate in the Filecoin network.

Learn more about Venus

Implementation differences

While Lotus and Venus share many similarities, they differ in their development, feature sets, focus, and community support. Depending on your needs and interests, you may prefer one implementation over the other:

Compatibility

Both Lotus and Venus are fully compatible with the Filecoin network and can interoperate with other Filecoin nodes on the network.

Features

While both implementations provide storage, retrieval, and mining functionalities, they differ in their feature sets. Lotus includes features such as a decentralized storage market, a retrieval market, and a built-in consensus mechanism, while Venus includes features such as automatic fault tolerance, intelligent storage allocation, and decentralized data distribution.

Focus

Lotus has a more global focus, while Venus has a stronger focus on the Chinese market. Venus provides a Chinese language user interface and documentation, making it easier for Chinese users to participate in the Filecoin network.

Other implementations

Forest


Forest is the Rust implementation of the Filecoin protocol with low hardware requirements (16 GiB, 4 cores), developed by ChainSafe. Forest is focused on blockchain analytics, and does not support storage, retrieval or mining.

Forest is currently used for generating up-to-date snapshots and managing archival copies of the Filecoin blockchain. Currently, the Forest team is hosting the entire Filecoin archival data for the community to use. This can be downloaded for free here.

You can learn more about Forest at the codebase on GitHub and documentation site.


Block rewards

This page describes block rewards in Filecoin, where storage providers are elected to produce new blocks and earn FIL as rewards.

What are block rewards?

WinningPoSt (short for Winning Proof of SpaceTime) is the cryptographic challenge through which storage providers are rewarded for their contributions to the network. At the beginning of each epoch (1 epoch = 30 seconds), a small number of storage providers are elected by the network to mine new blocks. Each elected storage provider who successfully creates a block is granted Filecoin tokens by means of a block reward. The amount of FIL per block reward varies over time and is listed on various blockchain explorers like Filfox.

The election mechanism of the Filecoin network is based on the “storage power” of the storage providers. A minimum of 10 TiB in storage power is required to be eligible for WinningPoSt, and hence to earn block rewards. The more storage power a storage provider has, the more likely they will be elected to mine a block. This concept becomes incredibly advantageous in the context of Filecoin Plus verified deals.
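As a back-of-the-envelope illustration of this election mechanism, the sketch below models expected block wins as proportional to a provider's share of network power. It assumes an average of five winners elected per 30-second epoch; the function names and the simple linear model are our own, not protocol code:

```python
# Illustrative model of WinningPoSt election odds -- not protocol code.
# Assumption: on average 5 winners are elected per 30-second epoch, and a
# provider's chance of election is proportional to its power share.

EXPECTED_WINNERS_PER_EPOCH = 5
EPOCHS_PER_DAY = 24 * 60 * 60 // 30  # 2880 epochs of 30 seconds each

def expected_daily_wins(provider_power_tib: float, network_power_tib: float) -> float:
    """Average number of block rewards a provider can expect per day."""
    share = provider_power_tib / network_power_tib
    return EXPECTED_WINNERS_PER_EPOCH * EPOCHS_PER_DAY * share

# A provider with 1 PiB (1024 TiB) on a 10 EiB network:
wins = expected_daily_wins(1024, 10 * 1024 * 1024)  # about 1.4 wins per day
```

Doubling a provider's power doubles its expected wins, which is why storage power, not luck alone, drives block rewards over time.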

Note that the deadline cron, a built-in actor that processes all miner actors every 60 epochs (every 30 minutes), is responsible for updating the rewards vesting table. A miner operator wishing to process vesting manually, ahead of the per-deadline cron call, could do so by calling WithdrawFunds with an amount of zero. Such a call would require use of the miner's Owner address. More details can be found in FIP005: Remove ineffective reward vesting.

Filecoin’s storage capacity

The Filecoin network is composed of storage providers who offer storage capacity to the network. This capacity is used to secure the network, as it takes a significant amount of storage to take part in the consensus mechanism. This large capacity makes it impractical for a single party to reach 51% of the network power, since an attacker would need 10 EiB in storage to control the network. Therefore, it is important that the raw capacity, also referred to as raw byte power, remains high. The Filecoin spec also defines a baseline power above which the network yields maximum returns for the storage providers.

The graph below shows the evolution of network capacity on the Filecoin network. As can be seen, the baseline power goes up over time (and becomes exponential). This means that from May 2021 to February 2023, the network yielded maximum returns for storage providers. However, in recent history, Quality Adjusted Power (QAP) has taken over as a leading indicator of relevance for the Filecoin network. QAP is the result of the multiplier applied when storing verified deals:

Check out the Starboard dashboard for the most up-to-date Network Storage Capacity.
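As a sketch of how QAP relates to raw byte power, the snippet below applies the 10x quality multiplier that Filecoin Plus grants verified deals. The function name is illustrative, and the real protocol calculation also weighs deal duration and sector space-time:

```python
# Sketch of Quality-Adjusted Power (QAP). Assumption: verified (Filecoin
# Plus) deals carry a 10x quality multiplier; regular data counts 1x.
# The actual protocol calculation is more involved -- this only
# illustrates the multiplier.

VERIFIED_DEAL_MULTIPLIER = 10

def quality_adjusted_power(raw_bytes: int, verified: bool) -> int:
    return raw_bytes * (VERIFIED_DEAL_MULTIPLIER if verified else 1)

sector = 32 * 1024**3  # one 32 GiB sector
qap = quality_adjusted_power(sector, verified=True)  # 10x the raw bytes
```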

Impact of storage capacity on block rewards

As mentioned before, when the Raw Byte Power is above the Baseline Power, storage providers yield maximum returns. When building a business plan as a storage provider, it is important not to rely solely on block rewards. Block rewards are an incentive mechanism for storage providers. However, they are volatile and depend on the state of the network, which is largely beyond the control of storage providers.

The amount of FIL that is flowing to the storage provider per earned block reward is based on a combination of simple minting and baseline minting. Simple minting is the minimum amount of FIL any block will always have, which is 5.5. Baseline minting is the extra FIL on top of the 5.5 that comes from how close the Raw Byte Power is to the Baseline Power.
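The split described above can be sketched as a toy model. The linear ramp and parameter names here are illustrative assumptions only; the actual minting formula in the Filecoin spec is different (and exponential in nature):

```python
# Toy model of the per-block reward split described above -- not the
# actual minting formula. Simple minting guarantees a 5.5 FIL floor;
# baseline minting adds more as Raw Byte Power approaches Baseline Power.

SIMPLE_MINT_FLOOR_FIL = 5.5

def block_reward_fil(raw_byte_power: float, baseline_power: float,
                     max_baseline_bonus_fil: float) -> float:
    # Hypothetical linear ramp for illustration only.
    ratio = min(raw_byte_power / baseline_power, 1.0)
    return SIMPLE_MINT_FLOOR_FIL + max_baseline_bonus_fil * ratio

# A network at half the baseline power earns half the baseline bonus:
reward = block_reward_fil(5.0, 10.0, 4.5)  # 5.5 + 2.25 = 7.75 FIL
```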

The below graph shows the evolution of FIL per block reward over time:

There is a positive side to releasing less FIL per block reward too. As Filecoin has a capped maximum token supply of 2 billion FIL, the slower minting rate allows for minting over a longer period. A lower circulating supply also has a positive effect on the price of FIL.

See the Crypto Economics page of this documentation and the Filecoin spec for more information.


Install & Run Lotus

Lotus is your gateway to the Filecoin network. It syncs the chain, manages wallets, and is required for Curio to interact with your node.

Build Lotus Daemon

Clone and check out Lotus:

git clone https://github.com/filecoin-project/lotus.git
cd lotus
git checkout $(curl -s https://api.github.com/repos/filecoin-project/lotus/releases/latest | jq -r .tag_name)

Build and Install for Mainnet

make clean lotus
sudo make install-daemon
lotus --version

Build and Install for Calibration

make clean && make GOFLAGS="-tags=calibnet" lotus
sudo make install-daemon
lotus --version

You should see something like: lotus version 1.33.0+mainnet+git.ff88d8269


Import a Snapshot and Start the Daemon

Download the Snapshot

Mainnet:

aria2c -x5 -o snapshot.car.zst https://forest-archive.chainsafe.dev/latest/mainnet/

Calibration:

aria2c -x5 -o snapshot.car.zst https://forest-archive.chainsafe.dev/latest/calibnet/

Import and Start the Daemon

lotus daemon --import-snapshot snapshot.car.zst --remove-existing-chain --halt-after-import
nohup lotus daemon > ~/lotus.log 2>&1 &

If you encounter errors related to EnableEthRPC or EnableIndexer, run the command below and restart Lotus:

sed -i 's/EnableEthRPC = .*/EnableEthRPC = true/; s/EnableIndexer = .*/EnableIndexer = true/' ~/.lotus/config.toml

Monitor Sync Progress

lotus sync wait

To monitor continuously:

lotus sync wait --watch

Monitor Logs

tail -f ~/lotus.log

Create Wallets

You’ll need to create two BLS wallets:

  • One for owner: used to fund sector pledges and submit proofs

  • One for worker: used to publish and manage storage deals

lotus wallet new bls  # Create owner wallet
lotus wallet new bls  # Create worker wallet
lotus wallet list     # List all created wallets

Make sure to send a small amount of FIL (Mainnet) or tFIL (Calibration) to each wallet - we recommend 1 FIL/tFIL per wallet to ensure the creation of your Storage Provider in Curio. Calibration test FIL faucet information.
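When scripting transfers like the funding step above, keep in mind that on chain, FIL amounts are denominated in attoFIL (1 FIL = 10^18 attoFIL). A minimal conversion helper (the function name is our own):

```python
# 1 FIL equals 10**18 attoFIL, the base unit used on chain.
ATTO_PER_FIL = 10**18

def fil_to_atto(fil: int) -> int:
    """Convert a whole-FIL amount to attoFIL."""
    return fil * ATTO_PER_FIL

funding = fil_to_atto(1)  # the recommended 1 FIL per wallet, in attoFIL
```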

Both wallets will be used during Curio initialization.

Back up your wallet keys securely before continuing. Losing them will result in permanent loss of access to funds.

Filecoin EVM runtime

This page details what exactly EVM compatibility means for the FVM, and any other information that Ethereum developers may need to build applications on Filecoin.

The Ethereum Virtual Machine is an execution environment initially designed, built for, and run on the Ethereum blockchain. The EVM was revolutionary because, for the first time, any arbitrary code could be deployed to and run on a blockchain. This code inherited all the decentralized properties of the Ethereum blockchain. Before the EVM, a new blockchain had to be created with custom logic and then bootstrapped with validators every time a new type of decentralized application needed to be built.

Code deployed to EVM is typically written in the high-level language Solidity, although other languages, such as Vyper, exist. The high-level Solidity code is compiled to EVM bytecode which is what is actually deployed to and run on the EVM. Due to it being the first virtual machine to run on top of a blockchain, the EVM has developed one of the strongest developer ecosystems in Web3 to date. Today, many different blockchains run their own instance of the EVM to allow developers to easily port their existing applications into the new blockchain’s ecosystem.

Ethereum Virtual Machine

The Filecoin EVM, often just referred to as FEVM, is the Ethereum virtual machine virtualized as a runtime on top of the Filecoin virtual machine. It allows developers to port any existing EVM-based smart contracts straight onto the FVM. The Filecoin EVM runtime is completely compatible with any EVM development tools, such as Hardhat, Brownie, and MetaMask, making deploying and interacting with EVM-based actors easy! This is because Filecoin nodes offer the Ethereum JSON-RPC API.
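Because the node speaks the Ethereum JSON-RPC protocol, standard requests work unchanged. The sketch below only builds a request body without sending it; the endpoint path in the comment is the Lotus default and may differ on your setup:

```python
# Build a standard Ethereum JSON-RPC request body, e.g. eth_chainId.
# Filecoin nodes expose this API, so unmodified EVM tooling can use it.
import json

def eth_rpc_payload(method: str, params: list, request_id: int = 1) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

payload = eth_rpc_payload("eth_chainId", [])
# POST this body to your node's RPC endpoint,
# e.g. http://127.0.0.1:1234/rpc/v1 on a default Lotus install.
```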

Deep dive

For a deeper dive into the concepts discussed on this page, see this presentation on the Ethereum compatibility of the FVM:


Welcome to Filecoin Docs

Filecoin is a decentralized, peer-to-peer network enabling anyone to store and retrieve data over the internet. Economic incentives are built in, ensuring files are stored and accessible reliably over time.

Choose your own path to start exploring Filecoin:


Get FIL

The most common way to get FIL is to use an exchange. You should be aware of some specific steps when trying to transfer FIL from an exchange to your wallet.

Exchanges

A cryptocurrency exchange is a digital platform where users can buy, sell, and trade cryptocurrencies for other cryptocurrencies or traditional fiat currencies like USD, EUR, or JPY.

Cryptocurrency exchanges provide a marketplace for users to trade their digital assets and are typically run by private companies that facilitate these transactions. These exchanges can differ in terms of fees, security protocols, and the variety of cryptocurrencies they support.

Users can typically sign up for an account with a cryptocurrency exchange, deposit funds into their account, and then use those funds to purchase or sell cryptocurrencies at the current market price. Some exchanges offer advanced trading features like margin trading, stop-loss orders, and trading bots.

It's important to note that while cryptocurrency exchanges can offer convenience and liquidity for traders, they also come with risks like hacking and regulatory uncertainty. Therefore, users should take precautions to protect their funds and do their own research before using any particular exchange.

Supported exchanges

There are many exchanges that allow users to buy, sell, and trade FIL. Websites like and keep track of which exchanges support which cryptocurrencies. You can use these lists to help you decide which exchange to use.

Once you have found an exchange you want to use, you will have to create an account with that exchange. Many exchanges have strict verification and Know-Your-Customer (KYC) processes in place, so it may take a few days to create your account. However, most large exchanges can verify your information in a few minutes.

Purchasing cryptocurrency varies from exchange to exchange, but the process is usually something like this:

  1. Add funds to your exchange account in your local currency (USD, EUR, YEN, etc.).

  2. Exchange your local currency for FIL at a set price.

Address compatibility

Some exchanges allow users to fund and withdraw FIL using any of the . However, some exchanges only support one or a handful of the available address types. Most exchanges do not currently support .

If your exchange does not yet support Filecoin Eth-style 0x addresses, you must create a wallet to relay the funds through. Take a look at the for details on how to transfer your funds safely.

Fiat on-ramps

A fiat on-ramp is a service or platform that allows individuals to convert traditional fiat currencies such as the US dollar, Euro, or any other government-issued currency into cryptocurrencies. These on-ramps serve as entry points for people who want to start participating in the cryptocurrency ecosystem by purchasing digital currencies with their money but don't want to sign up with a cryptocurrency exchange.

FIL is supported by a number of fiat on-ramps, such as:

  • .

If you know of any other services that can be added to this list, .

Users are cautioned to do their own due diligence with respect to choosing a fiat on-ramp provider.

Crypto ATMs

Crypto ATMs, also known as Bitcoin ATMs, are kiosks that allow individuals to buy and/or sell cryptocurrencies in exchange for fiat currency like the US dollar. They function similarly to traditional ATMs but are not connected to a bank account. Instead, they connect the user directly to a cryptocurrency exchange.

Using a Bitcoin ATM often comes with higher fees than online exchanges. Fees can vary, but they can range anywhere from 5% to 15% or even more per transaction.

Test FIL

If you’re looking to get FIL to test your applications on a testnet like , then check how to get test tokens! Test FIL is often referred to as tFIL.

Filecoin and IPFS

This page gives an overview of the features offered by Filecoin that make it a compelling system for storing files.

Verifiable storage

Filecoin has built-in processes to check the history of files and verify that they have been stored correctly over time. Every storage provider proves that they are maintaining their files in every 24-hour window. Clients can efficiently scan this history to confirm that their files have been stored correctly, even if the client was offline at the time. Any observer can check any storage provider’s track record and will notice if the provider has been faulty or offline in the past.

Open market

In Filecoin, file storage and retrieval deals are negotiated in open markets. Anybody can join the Filecoin network without needing permission. By lowering the barriers to entry, Filecoin enables a thriving ecosystem of many independent storage providers.

Competitive prices

Prices for storage and retrieval are determined by supply and demand, not corporate pricing departments. Filecoin makes reliable storage available at hyper-competitive prices. Miners compete based on their storage, reliability, and speed rather than through marketing or locking users in.

Reliable storage

Because storage is paid for, Filecoin provides a viable economic reason for files to stay available over time. Files are stored on computers that are reliable and well-connected to the internet.

Reputation, not marketing

In Filecoin, storage providers prove their reliability through their track record published on the blockchain, not through marketing claims published by the providers themselves. Users don’t need to rely on status pages or self-reported statistics from storage providers.

Choice of tradeoffs

Users get to choose their own tradeoffs between cost, redundancy, and speed. Users are not limited to a set group of data centers offered by their provider but can choose to store their files on any storage provider participating in Filecoin.

Censorship resistance

Filecoin resists censorship because no central provider can be coerced into deleting files or withholding service. The network is made up of many different computers run by many different people and organizations. Faulty or malicious actors are noticed by the network and removed automatically.

Useful blockchain

In Filecoin, storage providers are rewarded for providing storage, not for performing wasteful computations. Filecoin secures its blockchain using proof of file replication and proof of storage over time. It doesn’t rely on energy-intensive proof-of-work schemes like other blockchains. Miners are incentivized to amass hard drives and put them to use by storing files. Filecoin doesn’t incentivize the hoarding of graphics cards or application-specific integrated circuits for the sole purpose of mining.

Provides storage to other blockchains

Filecoin’s blockchain is designed to store large files, whereas other blockchains can typically only store tiny amounts of data, very expensively. Filecoin can provide storage to other blockchains, allowing them to store large files. In the future, mechanisms will be added to Filecoin, enabling Filecoin’s blockchain to interoperate with transactions on other blockchains.

Content addressing

Files are referred to by the data they contain, not by fragile identifiers such as URLs. Files remain available no matter where they are hosted or who they are hosted by. When a file becomes popular, it can be quickly distributed by swarms of computers instead of relying on a central computer, which can become overloaded by network traffic.

When multiple users store the same file (and choose to make the file public by not encrypting it), everyone who wants to download the file benefits from Filecoin, keeping it available. No matter where a file is downloaded from, users can verify that they have received the correct file and that it is intact.

Content distribution network

Retrieval providers are computers that have good network connections to lots of users who want to download files. By prefetching popular files and distributing them to nearby users, retrieval providers are rewarded for making network traffic flow smoothly and files download quickly.

Single protocol

Applications implementing Filecoin can store their data on any storage provider using the same protocol. There isn’t a different API to implement for each provider. Applications wishing to support several different providers aren’t limited to the lowest-common-denominator set of features supported by all their providers.

No lock-in

Migrating to a different storage provider is made easier because they all offer the same services and APIs. Users aren’t locked into providers because they rely on a particular feature of the provider. Also, files are content-addressed, enabling them to be transferred directly between providers without the user having to download and re-upload the files.

Traditional cloud storage providers lock users by making it cheap to store files but expensive to retrieve them again. Filecoin avoids this by facilitating a retrieval market where providers compete to give users their files back as fast as possible, at the lowest possible price.

Open source code

The code that runs both clients and storage providers is open-source. Storage providers don’t have to develop their own software for managing their infrastructure. Everyone benefits from improvements made to Filecoin’s code.

Active community

Filecoin has an active community of contributors to answer questions and help newcomers get started. There is an open dialog between users, developers, and storage providers. If you need help, you can reach the person who designed or built the system in question. Reach out on .

Blocks and tipsets

Like many other blockchains, blocks are a fundamental concept in Filecoin. Unlike other blockchains, Filecoin is a chain of groups of blocks called tipsets rather than a chain of individual blocks.

Blocks

In Filecoin, a block consists of:

  • A block header

  • A list of messages contained in the block

  • A signed copy of each message listed

Every block refers to at least one parent block; that is, a block produced in a prior epoch.

A message represents communication between two actors and thus changes in network state. The messages are listed in their order of appearance, deduplicated, and returned in canonical order of execution. So, in other words, a block describes all changes to the network state in a given epoch.
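The listing rule above (order of appearance, with duplicates removed) can be sketched as follows; real messages are identified by CID rather than by strings, so this is illustrative only:

```python
# Sketch of the message-listing rule: keep messages in order of first
# appearance and drop duplicates. Real messages are CIDs, not strings.

def deduplicate_messages(blocks: list[list[str]]) -> list[str]:
    seen: set[str] = set()
    ordered: list[str] = []
    for block in blocks:
        for msg in block:
            if msg not in seen:
                seen.add(msg)
                ordered.append(msg)
    return ordered

# "m2" appears in both blocks but is listed only once:
messages = deduplicate_messages([["m1", "m2"], ["m2", "m3"]])
```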

Blocktime

Blocktime is a concept that represents the average time it takes to mine or produce a new block on a blockchain. In Ethereum, for example, the blocktime is approximately 15 seconds on average, meaning that a new block is added to the Ethereum blockchain roughly every 15 seconds.

In the Filecoin network, storage providers compete to produce blocks by providing storage capacity and participating in the consensus protocol. The block time determines how frequently new blocks are added to the blockchain, which impacts the overall speed and responsiveness of the network.

Filecoin has a block time of 30 seconds, and this duration was chosen for two main reasons:

  • Hardware requirements: If the block time were faster while maintaining the same gas limit or the number of messages per block, it would lead to increased hardware requirements. This includes the need for more storage space to accommodate the larger chain data resulting from more frequent block production.

  • Storage provider operations: The block time also takes into account the various operations that occur during that duration on the storage provider (SP) side. As SPs generate new blocks, the 30-second block time allows for the necessary processes and computations to be carried out effectively. If the blocktime were shorter, SPs would encounter significantly more blocktime failures.

By considering these factors, the Filecoin network has established a block time of 30 seconds, balancing the need for efficient operations and hardware requirements.
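The 30-second block time makes epoch arithmetic straightforward, for example:

```python
# Epoch <-> wall-clock conversions implied by the 30-second block time.
BLOCK_TIME_SECONDS = 30

def epochs_to_seconds(epochs: int) -> int:
    return epochs * BLOCK_TIME_SECONDS

def epochs_per_day() -> int:
    return 24 * 60 * 60 // BLOCK_TIME_SECONDS

half_hour = epochs_to_seconds(60)  # 60 epochs = 1800 s = 30 minutes
daily = epochs_per_day()           # 2880 epochs per day
```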

Tipsets

As described in , multiple potential block producers may be elected via Expected Consensus (EC) to create a block in each epoch, which means that more than one valid block may be produced in a given epoch. All valid blocks with the same height and same parent block are assembled into a group called a tipset.

Benefits of tipsets

In other blockchains, blocks are used as the fundamental representation of network state, that is, the overall status of each participant in the network at a given time. However, this structure has the following disadvantages:

  • Potential block producers may be hobbled by network latency.

  • Not all valid work is rewarded.

  • Decentralization and collaboration in block production are not incentivized.

Because Filecoin is a chain of tipsets rather than individual blocks, the network enjoys the following benefits:

  • All valid blocks generated in a given round are used to determine network state, increasing network efficiency and throughput.

  • All valid work is rewarded (that is, all validated block producers in an epoch receive a block reward).

  • All potential block producers are incentivized to produce blocks, disincentivizing centralization and promoting collaboration.

  • Because all blocks in a tipset have the same height and parent, Filecoin is able to achieve rapid convergence in the case of forks.

In summary, blocks, which contain actor messages, are grouped into tipsets in each epoch, which can be thought of as the overall description of the network state for a given epoch.

Tipsets in the Ethereum JSON-RPC

Wherever you see the term block in the Ethereum JSON-RPC, you should mentally read tipset. Before the inclusion of the Filecoin EVM runtime, there was no single hash referring to a tipset. A tipset ID was the concatenation of block CIDs, which led to a variable-length ID and poor user experience.

With the Ethereum JSON-RPC, we introduced the concept of the tipset CID for the first time. It is calculated by hashing the former tipset key using a Blake-256 hash. Therefore, when you see the term:

  • block hash, think tipset hash.

  • block height, think tipset epoch.

  • block messages, think messages in all blocks in a tipset, in their order of appearance, deduplicated and returned in canonical order of execution.
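The idea behind the tipset CID can be sketched as hashing the concatenated block CIDs down to one fixed-length digest. This is simplified: the real tipset key is a serialized structure and the protocol uses a specific 256-bit Blake variant, with blake2b standing in here:

```python
# Sketch: derive a single fixed-length tipset ID from the variable-length
# tipset key (the block CIDs). Simplified -- the real encoding of the
# tipset key and the exact Blake variant differ from this illustration.
import hashlib

def tipset_hash(block_cids: list[bytes]) -> bytes:
    key = b"".join(block_cids)
    return hashlib.blake2b(key, digest_size=32).digest()

digest = tipset_hash([b"cid-of-block-1", b"cid-of-block-2"])
# digest is always 32 bytes, unlike the variable-length concatenated key
```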

The Filecoin Virtual Machine

The Filecoin Virtual Machine (FVM) is a runtime environment enabling users to deploy their own smart contracts on the Filecoin blockchain. This page covers the basics of the FVM.

NOTE: As of January 2025, for developer support, please visit the website. For Filecoin product updates, please visit the website or see the Lotus .

Introduction

Filecoin’s storage and retrieval capabilities can be thought of as the base layer of the Filecoin blockchain, while the FVM can be thought of as a layer on top of Filecoin that unlocks programmability on the network (e.g. programmable storage primitives).

Whereas other blockchains do have smart contract capabilities, FVM’s smart contracts can use Filecoin storage and retrieval primitives with computational logic conditions. FVM will also enable Layer 2 capabilities, such as “compute over data” and .

Some additional notes about FVM’s technical specifications:

  • WASM-based: The FVM is a WASM-based polyglot execution environment for IPLD data, meaning that FVM gives developers access to IPFS / IPLD data primitives and can accommodate smart contracts (actors) written in any programming language that compiles to WASM.

  • FEVM Compatibility: Are you an Ethereum / Solidity developer? You can build the next killer app on FVM and make use of the . Learn more about how the FVM is compatible with the Ethereum runtime and Solidity in the next section.

  • VM Agnostic: The FVM is built to be VM-agnostic, meaning support for other foreign VMs can be added in the near future. Future versions of FVM can serve as a useful hypervisor enabling cross run-time invocations.

FVM brings user programmability to Filecoin, unleashing the enormous potential of an open data economy through various applications.

Use Cases

FVM Actors enable a huge range of use cases to be built on Filecoin. Here are just a few potential examples:

  • Data Access Control: FVM Actors can enable a client to grant retrieval permission for certain files to a limited set of third-party Filecoin wallet addresses.

  • DataDAO: FVM Actors can enable the creation of decentralized autonomous organizations where members govern and manage the storage, accessibility, and monetization of certain data sets and pool returns into a shared treasury.

  • Perpetual Storage: All Filecoin storage deals are time-limited, so when a client stores a data set with a storage provider, the client eventually has to decide whether to renew the deal for another period with the same provider or seek out a cheaper alternative. FVM enables a client to automatically renew deals, or switch to a cheaper storage provider, when a deal reaches maturity. This automated renewal can persist, even in perpetuity, for as many cycles as an associated endowment of FIL can finance. FVM Actors enable the creation and management of this endowment.

  • Replication: In addition to allowing a client to store one data set with one storage provider in perpetuity, FVM Actors enable data resiliency by allowing a client to store one data set once manually and then have the Actor replicate that data with multiple other storage providers automatically. Additional conditions that can be set in an automated replication Actor include choices about the geographic region of the storage providers, latency, and deal price limits.

  • Leasing: FVM Actors enable a FIL token holder to provide collateral to clients looking to do a storage deal, and be repaid the principal and interest over time. FVM Actors can also trace the borrowing and repayment history of a given client, generating a community-developed reputation score.

Additional use cases enabled by FVM include, but are not limited to, tokenized data sets, trustless reputation systems, NFTs, storage bounties and auctions, Layer 2 bridges, futures and derivatives, or conditional leasing.

Start building on the FVM

If you’re ready to start building on the FVM, here are some resources you should explore:

  • FVM Reference Implementation: The GitHub repo containing the reference implementation of the FVM.

  • FVM Quickstart Guide: The Quickstart guide walks you through setting up your developer environment and deploying your first ERC-20 contract on FVM.

  • Developing Contracts: If you are ready to build your dApp on FVM, you can skip ahead and review our section for developing contracts. Here, you can find a guide for the Filecoin solidity libraries, details on tools such as Foundry, Remix, and Hardhat, and tutorials for calling built-in actors and building client contracts.

The next page will walk you through the process of deciding whether you need to use FVM’s programmatic storage when building a dApp with storage on Filecoin.

Install & Run YugabyteDB

Set ulimit configuration

Before starting Yugabyte, you must increase the default ulimit values to ensure system limits do not interfere with the database.

To do this:

Persist new limits across reboots

Add these lines to /etc/security/limits.conf:
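A sketch of the entries, using `youruser` as a placeholder for the account that runs Yugabyte:

```
youruser soft nofile 1048576
youruser hard nofile 1048576
```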

This ensures the increased limits are automatically applied to future sessions.

Apply limit immediately (for current shell only)

This should output 1048576.

Install Yugabyte

Start the DB

If you encounter locale-related errors when starting Yugabyte for the first time, run:

Visit 127.0.0.1:15433 to confirm successful installation. This is the YugabyteDB web UI — it should display the dashboard if the service is running correctly and all nodes are healthy.

You can also check your Yugabyte cluster details directly in the CLI with:

💡 Learn the basics

New to Filecoin and looking for foundational concepts? Start with the Basics section to understand the essentials and kick off your journey!

🔧 Build with Filecoin

Ready to develop on the Filecoin network? Head to the Developers section for guides and examples to help bring your project to life.

🏗️ Become a Storage Provider

Thinking about running a provider node on Filecoin? Visit the Provider section for comprehensive guidance on getting started.

📊 Store data

Looking to store large volumes of data? Explore the Store section to review the various storage options Filecoin offers.


Storage

This content covers various aspects related to storage in the context of being a Filecoin storage provider.

Storage is a critical component of running a successful storage provider business in the Filecoin network. While it may seem obvious that strong storage skills are important, Filecoin also requires a unique end-to-end skill set to run a 24/7 application.

Storage proving requires atypical read-behavior from a storage system. This means that the storage administrator must be able to design for this behavior and analyze the storage system accordingly.

In addition, it is important for storage providers to understand the importance of reliable and efficient storage. Filecoin is designed to incentivize storage providers to keep data safe and secure, and as such, the storage system must be able to maintain high levels of reliability and availability.

Storage providers need to be able to implement and maintain storage infrastructure that meets the needs of clients who require large amounts of storage space. This requires knowledge of various storage technologies, as well as the ability to troubleshoot issues that may arise.

Overall, storage is a critical aspect of the Filecoin network and storage providers must have the necessary skills and knowledge to provide high-quality storage services to clients.

ZFS

Zettabyte File System (ZFS) is a combined file system and logical volume manager that provides advanced features such as pooled storage, data integrity verification and automatic repair, and data compression. It is a popular choice among storage providers due to its reliability, scalability, and performance.

Configuring ZFS requires knowledge and skills beyond the basics of traditional file systems. As a storage provider, you need to understand how ZFS manages data, including how it distributes data across disks and how it handles redundancy and data protection. You must also know how to configure ZFS for optimal performance and how to troubleshoot issues that may arise.

In addition to configuring ZFS, storage providers must also be able to manage the disks and other hardware used for storage. This includes selecting and purchasing appropriate hardware, installing and configuring disks and disk controllers, and monitoring disk health and performance.

Having the knowledge and skills to configure ZFS is crucial as a storage provider, as it enables you to provide reliable and high-performance storage services to your clients. Without this expertise, you may struggle to deliver the level of service your clients expect, which could lead to decreased customer satisfaction and loss of business.

RAIDZ2

ZFS is a combined file system and volume manager, designed to work efficiently on large-scale storage systems. One of the unique features of ZFS is its built-in support for various types of RAID configurations, which makes it an ideal choice for data storage in a Filecoin network.

As a storage provider, it is crucial to have knowledge and skills in configuring ZFS. This includes understanding how to create virtual devices (VDEVs), which are the building blocks of ZFS storage pools. A VDEV can be thought of as a group of physical devices, such as hard disks, solid-state drives, or even virtual disks, that are used to store data.

In addition, storage providers must also understand how wide VDEVs should ideally be, and how to create storage pools with a specific RAID protection level. RAID is a method of protecting data by distributing it across multiple disks in a way that allows for redundancy and fault tolerance. ZFS has its own types of RAID, known as RAID-Z, which come in different levels of protection.

For example, RAIDZ2 is a configuration that provides double parity, meaning that two disks can fail simultaneously without data loss. As a storage provider, it is important to understand how to create storage pools with the appropriate level of RAID protection to ensure data durability.

Finally, creating datasets is another important aspect of ZFS configuration. Datasets are logical partitions within a ZFS storage pool that can have their own settings and attributes, such as compression, encryption, and quota. As a storage provider, it is necessary to understand how to create datasets to effectively manage storage and optimize performance.
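As an illustrative sketch, creating a RAIDZ2 pool and a dataset could look like the following. The pool name `tank`, disk names, and dataset properties are assumptions, not recommendations, and the commands are printed rather than executed, since real usage requires root and physical disks:

```shell
# Illustrative only: pool name, disk list, and properties are placeholders.
DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg"
# RAIDZ2: double parity, so any two disks in the vdev may fail without data loss.
CREATE_POOL="zpool create tank raidz2 $DISKS"
# Sealed sectors are already high-entropy data, so compression is assumed off here.
CREATE_DATASET="zfs create -o recordsize=1M -o compression=off tank/sealed"
echo "$CREATE_POOL"
echo "$CREATE_DATASET"
```

On a machine with ZFS installed, you would run the printed commands as root instead of echoing them.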

Snapshots and replication

ZFS provides built-in protection for data in the form of snapshots. Snapshots are read-only copies of a ZFS file system at a particular point in time. By taking regular snapshots, you can protect your data against accidental deletions, file corruption, or other disasters.

To ensure that your data is fully protected, it is important to configure a snapshot rotation schema. This means defining a schedule for taking snapshots and retaining them for a specified period of time. For example, you might take hourly snapshots and retain them for 24 hours, and then take daily snapshots and retain them for a week.

In addition to snapshots, ZFS also allows you to replicate them to another system running ZFS. This can be useful for creating backups or for replicating data to a remote site for disaster recovery purposes. ZFS replication works by sending incremental changes to the destination system, which ensures that only the changes are sent over the network, rather than the entire dataset. This can significantly reduce the amount of data that needs to be transferred and can help minimize network bandwidth usage.
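The hourly rotation and incremental replication described above can be sketched as follows. Pool and host names are placeholders, and the `zfs`/`ssh` commands are constructed as strings rather than executed, since they need root and a real pool:

```shell
# Placeholder names; commands are printed, not run.
HOUR=$(date +%H)                 # 24 rotating hourly slots (00..23)
SNAP="tank/data@hourly-$HOUR"
TAKE_SNAPSHOT="zfs snapshot $SNAP"
# Incremental send: transfer only the changes since the previous hourly snapshot.
H=${HOUR#0}                      # strip a leading zero for shell arithmetic
PREV=$(printf 'tank/data@hourly-%02d' $(( (H + 23) % 24 )))
REPLICATE="zfs send -i $PREV $SNAP | ssh backup-host zfs receive tank/data"
echo "$TAKE_SNAPSHOT"
echo "$REPLICATE"
```

A cron job running this hourly, plus a daily variant retained for a week, would implement the example rotation schema above.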

Performance analysis

As a storage provider, it is crucial to be able to troubleshoot and resolve any performance issues that may arise. This requires a deep understanding of the underlying storage system and the ability to use Linux performance analytic tools such as iostat. These tools can help identify potential bottlenecks in the storage system, such as high disk utilization or slow response times.

In addition to troubleshooting, you must also be able to optimize the performance of your storage system. One way to improve performance is by implementing an NVMe write-cache. NVMe is a protocol designed specifically for solid-state drives, which can greatly improve the speed of write operations. By adding an NVMe write-cache to the storage system, you can reduce the latency of write operations and improve overall system performance.

A read cache, on the other hand, is typically not useful in the context of Filecoin. Sealed sectors are read very randomly, and unsealed sectors will typically not be read twice, so storing data in a read cache would be redundant and add unnecessary overhead to the system.
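For example, spotting a saturated disk and attaching an NVMe write log might look like the following. Device names are placeholders, and the commands are printed rather than executed, since an actual run needs the sysstat package, ZFS, and root access:

```shell
# Placeholders throughout; commands are printed, not run.
WATCH_DISKS="iostat -x 5"                   # high %util / await values reveal bottlenecks
ADD_SLOG="zpool add tank log /dev/nvme0n1"  # NVMe as dedicated intent log to absorb sync writes
echo "$WATCH_DISKS"
echo "$ADD_SLOG"
```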


# Commands for the “Install & Run YugabyteDB” steps above.

# Persist new limits across reboots:
echo "$(whoami) soft nofile 1048576" | sudo tee -a /etc/security/limits.conf
echo "$(whoami) hard nofile 1048576" | sudo tee -a /etc/security/limits.conf

# Apply the limit immediately (current shell only):
ulimit -n 1048576
# Verify limit change:
ulimit -n

# Install Yugabyte:
wget https://software.yugabyte.com/releases/2.25.1.0/yugabyte-2.25.1.0-b381-linux-x86_64.tar.gz
tar xvfz yugabyte-2.25.1.0-b381-linux-x86_64.tar.gz
cd yugabyte-2.25.1.0
./bin/post_install.sh

# Start the DB:
./bin/yugabyted start \
  --advertise_address 127.0.0.1 \
  --master_flags rpc_bind_addresses=127.0.0.1 \
  --tserver_flags rpc_bind_addresses=127.0.0.1

# Fix locale-related errors, if any:
sudo locale-gen en_US.UTF-8

# Check cluster status:
./bin/yugabyted status

Addresses

A Filecoin address is an identifier that refers to an actor in the Filecoin state. All actors (miner actors, the storage market actor, account actors) have an address.

All Filecoin addresses begin with an f to indicate the network (Filecoin), followed by any of the address prefix numbers (0, 1, 2, 3, 4) to indicate the address type. There are five address types:

Address prefix
Description

0

An ID address.

1

A public key address.

2

An actor address.

3

A public key address.

4

Extensible, user-defined actor addresses. f410 addresses refer to the Ethereum-compatible address space; each f410 address is equivalent to a 0x address.

Each of the address types is described below.

Actor IDs

All actors have a short integer assigned to them by the InitActor, a unique actor that can create new actors. This assigned integer is the ID of that actor. An ID address is an actor’s ID prefixed with the network identifier and the address type.

Actor ID addresses are not robust: they depend on chain state and are defined on-chain by the InitActor. For a brief time after creation, an actor ID can change if the same ID is assigned to different actors on different forks. Actor ID addresses are similar to monotonically increasing numeric primary keys in a relational database: when a chain reorganization occurs (similar to a rollback in a SQL database), the same ID can refer to a different actor. The expected consensus algorithm will resolve the conflict. Once the state that defines a new ID reaches finality, no changes can occur, and the ID is bound to that actor forever.

For example, the mainnet burn account ID address, f099, is structured as follows:

  Address type
  |
f 0 9 9
|    |
|    Actor ID
|
Network identifier

ID addresses are often referred to by their shorthand f0.

Public keys

Actors managed directly by users, like accounts, are derived from a public-private key pair. If you have access to a private key, you can sign messages sent from that actor. The public key is used to derive an address for the actor. Public key addresses are referred to as robust addresses as they do not depend on the Filecoin chain state.

Public key addresses allow devices, like hardware wallets, to derive a valid Filecoin address for your account using just the public key. The device doesn’t need to ask a remote node what your ID address is. Public key addresses provide a concise, safe, human-readable way to reference actors before the chain state is final. ID addresses are used as a space-efficient way to identify actors in the Filecoin chain state, where every byte matters.

Filecoin supports two types of public key addresses:

  • secp256k1 addresses that begin with the prefix f1.

  • BLS addresses that begin with the prefix f3.

For BLS addresses, Filecoin uses curve bls12-381 for BLS signatures, which is a pair of two related curves, G1 and G2.

Filecoin uses G1 for public keys, as G1 allows for a smaller representation of public keys and G2 for signatures. This implements the same design as ETH2 but contrasts with Zcash, which has signatures on G1 and public keys on G2. However, unlike ETH2, which stores private keys in big-endian order, Filecoin stores and interprets private keys in little-endian order.

Public key addresses are often referred to by their shorthand, f1 or f3.

Actors

Actor addresses provide a way to create robust addresses for actors not associated with a public key. They are generated by taking a sha256 hash of the output of the account creation. The ZH storage provider has the actor address f2plku564ddywnmb5b2ky7dhk4mb6uacsxuuev3pi and the ID address f01248.

Actor addresses are often referred to by their shorthand, f2.

Extensible user-defined actors

Filecoin supports extensible, user-defined actor addresses through the f4 address class, introduced in Filecoin Improvement Proposal (FIP) 0048. The f4 address class provides the following benefits to the network:

  • A predictable addressing scheme to support interactions with addresses that do not yet exist on-chain.

  • User-defined, custom addressing systems without extensive changes and network upgrades.

  • Support for native addressing schemes from foreign runtimes such as the EVM.

An f4 address is structured as f4<address-manager-actor-id>f<new-actor-id>, where <address-manager-actor-id> is the actor ID of the address manager, and <new-actor-id> is the arbitrary actor ID chosen by that actor. An address manager is an actor that can create new actors and assign an f4 address to the new actor.

Currently, per FIP 0048, f4 addresses may only be assigned by and in association with specific, built-in actors called address managers. Once users are able to deploy custom WebAssembly actors, this restriction will likely be relaxed in a future FIP.

As an example, suppose an address manager has an actor ID (an f0 address) 123, and that address manager creates a new actor. Then, the f4 address of the actor created by the address manager is f4123fa3491xyz, where f4 is the address class, 123 is the actor ID of the address manager, f is a separator, and a3491xyz is the arbitrary <new-actor-id> chosen by that actor.
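The textual form above can be composed mechanically. A minimal sketch, illustrative only: real f4 addresses are derived from binary payloads with a checksum, not by plain string concatenation:

```shell
# Compose the textual f4 form from the example above (illustration only).
manager_id="123"         # f0 actor ID of the address manager
new_actor_id="a3491xyz"  # arbitrary ID chosen by the address manager
f4_address="f4${manager_id}f${new_actor_id}"
echo "$f4_address"
```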


Wallets

Wallets provide a way to securely store Filecoin, along with other digital assets. These wallets consist of a public and private key, which work similarly to a bank account number and password.

When someone sends cryptocurrency to your wallet address, the transaction is recorded on the blockchain network, and the funds are added to your wallet balance. Similarly, when you send cryptocurrency from your wallet to someone else’s wallet, the transaction is recorded on the blockchain network, and the funds are deducted from your wallet balance.

There are various types of cryptocurrency wallets, including desktop, mobile, hardware, and web-based wallets, each with its own unique features and levels of security. It’s important to choose a reputable and secure wallet to ensure the safety of your digital assets.

Compatible wallets

We do not provide technical support for any of these wallets. Please use caution when researching and using the wallets listed below. Wallets that have conducted third-party audits of their open-source code by a reputable security auditor are marked recommended below.

If you are already running your own lotus node, you can also manage FIL wallets from the command line.

Name
Description
Audited

A multi-currency hardware wallet. Recommended.

Yes

Supports sending & receiving FIL. Can be integrated with a Ledger hardware device. Recommended.

Yes

MetaMask has extensions called installable from the right menu in MetaMask.

Yes

A multi-currency mobile wallet by .

Yes

A multi-currency software wallet built-in to the Brave browser.

Yes

A multi-currency wallet, the official wallet of Binance.

Unknown

A multi-currency wallet.

Unknown

A multi-currency wallet.

Unknown

A multi-currency mobile wallet by .

Yes

MetaMask has an extension system called .

Yes

A hardware and mobile wallet supporting Filecoin mainnet transactions, with f1 and f4 address support.

Yes

Hot versus cold

A hot wallet refers to any wallet that is permanently connected to the internet. They can be mobile, desktop, or browser-based. Hot wallets make it faster and easier to access digital assets but could be vulnerable to online attacks. Therefore, it is recommended to keep large balances in cold wallets and only use hot wallets to hold funds that need to be accessed frequently.

Cold wallets most commonly refer to hardware wallet devices shaped like a USB stick. They are typically offline and only connected to the internet for transactions. Accessing a cold wallet requires physical possession of the device plus knowledge of the private key, which makes them more resistant to theft. Cold wallets can be less convenient and are most useful for storing larger balances securely.

Security

Wallets that have gone through an audit have had their codebase checked by a recognized security firm for security vulnerabilities and potential leaks. However, just because a wallet has had an audit does not mean that it’s 100% bug-proof. Be incredibly cautious when using unaudited wallets.

Never share your seed phrase, password, or private keys. Bad actors will often use social engineering tactics such as phishing emails or posing as customer service or tech support to lure users into handing over their private key or seed phrase.

Add a wallet to our list

If you know of a wallet that supports Filecoin, you can submit a pull request to this page and add it!

  • Create an issue in filecoin-project/filecoin-docs with the name of the wallet and its features.

  • If the wallet is a mobile wallet, it must be available on both Android and iOS.

  • The wallet must have been audited. The results of this audit must be public.


Network indexer

InterPlanetary Network Indexer (IPNI) enables users to search for content-addressable data available from storage providers. This page discusses the implications of IPNI for storage providers.

A network indexer, also referred to as an indexer node or indexer, is a node that maps content identifiers (CIDs) to records of who has the data and how to retrieve that data. These records are called provider data records. Indexers are built to scale in environments with massive amounts of data, like the Filecoin network, and are also used by the IPFS network to locate data. Because the Filecoin network stores so much data, clients can’t perform efficient retrieval without proper indexing. Indexer nodes work like a specialized key-value store for efficient retrieval of content-addressed data.

There are two groups of users within the network indexer process:

  • Storage providers advertise their available content by storing data in the indexer. This process is handled by the indexer’s ingest logic.

  • Retrieval clients query the indexer to determine which storage providers have the content and what protocol to use, such as Graphsync, Bitswap, etc. This process is handled by the indexer’s find logic.

How the indexer works

This diagram summarizes the different actors in the indexer ecosystem and how they interact with each other. In this context, these actors are not the same as smart-contract actors.

For more information on how the indexer works, read the IPNI documentation.

IPNI and storage providers

Storage providers publish data to indexers so that clients can find that data using the CID or multihash of the content. When a client queries the indexer using a CID or multihash, the indexer then responds to the client with the provider data record, which tells the client where and how the content can be retrieved.

As a storage provider, you will need to run an indexer in your setup so that your clients know where and how to retrieve data. For more information on how to create an index provider, see the IPNI documentation.


Programming on Filecoin

Once data is stored, computations can be performed directly on it without needing retrieval. This page covers the basics of programming on Filecoin.

Compute-over-data

Beyond storage and retrieval, data often needs transformation. Compute-over-data protocols enable computation over IPLD, the data layer used by content-addressed systems like Filecoin. Working groups are developing compute solutions for Filecoin data, including large-scale parallel compute and cryptographically verifiable compute.

For example, Bacalhau provides a platform for public, transparent, and verifiable distributed computation, allowing users to run Docker containers and WebAssembly (Wasm) images as tasks on data stored in InterPlanetary File System (IPFS).

Filecoin is uniquely positioned to support large-scale off-chain computation because storage providers have compute resources, such as GPUs and CPUs, colocated with their data. This setup enables a new paradigm where computations occur directly on the data where it resides, reducing the need to move data to external compute nodes.

Filecoin Virtual Machine

The Filecoin Virtual Machine (FVM) is a runtime environment for executing smart contracts on the Filecoin network. These smart contracts allow users to run bounded computations and establish rules for storing and accessing data. The FVM ensures that these contracts are executed securely and reliably.

The FVM is designed to support both native Filecoin actors written in languages that compile to Wasm and smart contracts from other runtimes, such as Solidity for the Ethereum Virtual Machine (EVM), Secure EcmaScript (SES), and eBPF. The FVM reference implementation and SDK are written in Rust, ensuring high performance and security.

Initially, the FVM supports smart contracts written in Solidity, with plans to expand to other languages that compile to Wasm, as outlined in the FVM roadmap.

By enabling compute-over-states on the Filecoin network, the FVM unlocks a wide range of potential use cases. Examples include:

Data Organizations

FVM enables a new kind of organization centered around data.

Data DAOs and tokenized datasets

The FVM makes it possible to create and manage decentralized and autonomous organizations (Data DAOs) focused on data curation and preservation. Data DAOs allow groups of individuals or organizations to govern and monetize data access, pooling returns into a shared treasury to fund preservation and growth. These data tokens can also be exchanged among peers or used to request computation services, such as validation, analysis, feature detection, and machine learning.

Perpetual storage

The FVM allows users to store data once and use repair and replication bots to manage ongoing storage deals, ensuring perpetual data storage. Through smart contracts, users can fund a wallet with FIL, allowing storage providers to maintain data storage indefinitely. Repair bots monitor these storage deals and replicate data across providers as needed, offering long-term data permanence.

Financial services for miners

The FVM can facilitate unique financial services tailored for storage providers (SPs) in the Filecoin ecosystem.

Lending and staking protocols

Users can lend Filecoin to storage providers to be used as storage collateral, earning interest in return. Loans may be undercollateralized based on SP performance history, with reputation scores generated from on-chain data. Loans can also be automatically repaid to investors using a multisig wallet, which includes lenders and a third-party arbitrator. New FVM-enabled smart contracts create yield opportunities for FIL holders while supporting the growth of storage services on the network.

Insurance

SPs may require financial products to protect against risks in providing storage solutions. Attributes such as payment history, operational length, and availability can be used to underwrite insurance policies, shielding SPs from financial impacts due to storage faults or token price fluctuations.

Core chain infrastructure

The FVM is expected to achieve feature parity with other persistent EVM chains, supporting critical infrastructure for decentralized exchanges and token bridges.

Decentralized exchanges

To facilitate on-chain token exchange, the FVM may support decentralized exchanges like Uniswap or Sushi, or implement decentralized order books similar to Serum on Solana.

Token bridges

Although not an immediate focus, token bridges will eventually connect Filecoin to EVM, Move, and Cosmos chains, enabling cross-chain wrapped tokens. While Filecoin currently offers unique value without needing to bootstrap liquidity from other chains, long-term integration with other blockchains is anticipated.

In addition to these, the FVM could support various other use cases, such as data access control, trustless reputation systems, replication workers, storage bounties, and L2 networks. For more details on potential use cases, see our post.

If you are interested in building these use cases, the following solution blueprints may be helpful:

Filecoin EVM

The Filecoin EVM (FEVM) is an Ethereum Virtual Machine (EVM) runtime built on top of the FVM. It allows developers to port existing EVM-based smart contracts directly onto Filecoin. The FEVM emulates EVM bytecode at a low level, supporting contracts written in Solidity, Vyper, and Yul. The EVM runtime is based on open-source libraries, including Revm.

Since Filecoin nodes support the Ethereum JSON-RPC API, FEVM is compatible with existing EVM development tools, such as Hardhat, Brownie, and MetaMask. Most smart contracts deployed to Filecoin require minimal adjustments, if any. For example, new ERC-20 tokens can be launched on Filecoin or bridged to other chains.

Developers can choose between deploying actors on the FEVM or native FVM: for optimal performance, actors should be written in languages that compile to Wasm and deployed to the native FVM. For familiarity with Solidity and EVM tools, the FEVM is a convenient alternative.

In summary, the FEVM provides a straightforward path for Web3 developers to begin building on Filecoin using familiar tools and languages, while gaining native access to Filecoin storage deals.

The primary difference between FEVM and EVM contracts is that FEVM contracts can interact directly with Filecoin-specific actors, such as miner actors, which are inaccessible to Ethereum contracts. To enable seamless integration, a Filecoin-Solidity API library has been developed to facilitate interactions with Filecoin-specific actors and syscalls.

For example FEVM contracts, see the available .

Basics

This page will help you understand how to plan a profitable business, design a suitable storage provider architecture, and make the right hardware investments.

The Filecoin network provides decentralized data storage and makes sure data is verified, always available, and immutable. Storage providers in the Filecoin network are in charge of storing data, serving content, and issuing new blocks.

To become a storage provider in the Filecoin network, you need a range of technical, financial, and business skills. We will explain the key concepts you need to understand in order to design a suitable architecture, make the right hardware investments, and run a profitable storage provider business.

Follow these steps to begin your storage provider journey:

  1. Understand Filecoin economics

  2. Plan your business

  3. Make sure you have the right skills

  4. Build the right infrastructure

Understand Filecoin economics

To understand how you can run a profitable business as a Filecoin storage provider, it is important to make sure you understand the economics of Filecoin. Once you understand all core concepts, you can build out a strategy for your desired ROI.

Storage providers can also add additional value to clients when they offer certain certifications. These can enable a storage provider to charge customers additional fees for storing data in compliance with those standards, for example, HIPAA, SOC2, PCI, GDPR and others.

Plan your business

The hardware and other requirements for running a Filecoin storage provider business are significantly higher than those of typical blockchain mining operations. This is by design: in contrast to some other blockchains, where you simply configure one or more nodes to “mine” tokens, the Filecoin network’s primary goal is to provide decentralized storage for humanity’s most valuable data.

You need to understand the various earning mechanisms in the Filecoin network.

Daily fees and startup readiness (FIP-0100)

With the activation of FIP-0100 in network version 25, all new sectors — and any sectors that are extended or updated — incur a daily fee.

This fee replaces the previous batch fee model and introduces a predictable cost structure tied to each sector’s quality-adjusted power and the network’s circulating supply.

The fee begins accruing the day after a sector is committed or extended. It is deducted automatically at the end of each proving deadline.

The network first draws from vesting block rewards. If those are insufficient, it draws from the miner’s available balance. If both are empty, the unpaid amount becomes fee debt.

Fee debt does not directly cause faults. However, it can impact operations:

  • A miner with fee debt may be blocked from submitting certain messages (e.g., pre-commits or recoveries).

  • If the balance is too low to pay for WindowPoSt messages, sectors may fault.

  • Critically, a miner with outstanding fee debt cannot win block rewards until the debt is repaid.

To avoid this, storage providers should:

  • Keep a FIL buffer in the miner actor’s balance.

  • Avoid fully withdrawing unlocked funds unless upcoming rewards will cover future fees.

Startup considerations

Miners become eligible to win block rewards once they reach 10 TiB of raw byte power (RBP).

However, rewards are not guaranteed as soon as that threshold is met. Block production is probabilistic, and smaller miners may wait longer to win a block — especially when competing against larger ones.

This creates a funding gap during the startup phase.

New storage providers must plan for this by funding their miner actor with enough FIL to:

  • Cover daily fees during onboarding,

  • Support message submission (like WindowPoSt),

  • And continue sealing until rewards start arriving.

While the amount of FIL required is relatively small compared to overall infrastructure costs, it is operationally critical. Without it, the miner may become stuck — unable to seal new sectors, submit required messages, or produce blocks and win block rewards due to fee debt or insufficient balance.

To estimate how much FIL may be needed, review the FIP-0100 discussion thread or use the real-time fee calculator to model your expected onboarding rate.

Make sure you have the right skills

As will become clear, running a storage operation is a serious business, with client data and pledged funds at stake. You will be required to run a highly-available service, and there are automatic financial penalties if you cannot demonstrate data availability to the network. There are many things that can go wrong in a data center, on your network, on your OS, or at an application level.

You will need skilled people to operate your storage provider business. Depending on the size and complexity of your setup this can be 1 person with skills across many different domains, or multiple dedicated people or teams.

Build the right infrastructure

At the lowest level, you will need datacenter infrastructure. You need people capable of architecting, racking, wiring and operating infrastructure components. Alternatively, you can have your equipment colocated, or even consume infrastructure entirely as a service from a datacenter provider.

Take availability and suitable redundancy into consideration when choosing your datacenter or colocation provider. Any unavailability of your servers, network or storage can result in automatic financial penalties on the Filecoin network.

Interplanetary consensus

InterPlanetary Consensus (IPC) powers planetary-scale decentralized applications (dApps) through horizontal scalability of Filecoin, Ethereum and more.

What is IPC?

Interplanetary Consensus (IPC) is a framework that enables on-demand horizontal scalability of networks by deploying "subnets" running different consensus algorithms depending on the application's requirements.

What is horizontal scalability and why is it important for dApps?

Horizontal scalability generally refers to the addition of nodes to a system to increase its performance. For example, adding more nodes to a compute network helps distribute the effort needed to run a single compute task. This reduces cost per task and decreases latency, while improving overall throughput.

In web3, horizontal scalability refers to scaling blockchains for desired performance. More specifically, it means scaling the ability of a blockchain to process transactions and achieve consensus, across an increasing number of users, at desired latencies and throughput. IPC is one such scaling solution, alongside other popular layer 2 solutions like sidechains and rollups.

For decentralized applications (dApps), there are several key motivations to adopt scaling: performance, decentralization, and security. The challenge is that these factors are known to be conflicting goals.

How does IPC achieve horizontal scalability?

IPC is a scaling solution intentionally designed to achieve considerable performance, decentralization and security for dApps.

It achieves scaling through the permissionless spawning of new blockchain sub-systems, which are composed of subnets.

Subnets are organized in a hierarchy, with one parent subnet being able to spawn infinite child subnets. Within a hierarchical subsystem, subnets can seamlessly communicate with each other, reducing the need for cross-chain bridges.

Subnets also have their own specific consensus algorithms, whilst leveraging security features from parent subnets. This allows dApps to use subnets for hosting sets of applications, or to shard a single application, according to its various cost or performance needs.

How is IPC unique as a scaling solution?

Earlier, we talked about the challenge of scaling solutions to balance performance, security and decentralization. IPC is a standout framework that strikes a considerable balance between these factors, to achieve breakthroughs in scaling.

  • Highly customizable without compromising security. Most L2 scaling solutions today either inherit the L1's security features but don't have their own consensus algorithms (e.g. rollups), or do the reverse (e.g. sidechains). They are also deployed in isolation and require custom bridges or protocols to transfer assets and state between L2s that share a common L1, which are vulnerable to attacks. In contrast, IPC subnets have their own consensus algorithms, inherit security features from the parent subnet and have native cross-net communication, eliminating the need for bridges.

  • Multi-chain interoperability. IPC uses the Filecoin Virtual Machine (FVM) as its transaction execution layer. The FVM is a WASM-based polyglot execution environment for IPLD data and is designed to support smart contracts written in any programming language, compiled to WASM. Today, IPC is fully compatible with Filecoin and Ethereum and can use either as a rootnet. IPC will eventually allow any chain to be taken as rootnet.

  • Tight storage integration with Filecoin. IPC was designed from the data-centric L1, Filecoin, which is the largest decentralized storage network. IPC can leverage its storage primitives, like IPLD data integration, to deliver enhanced solutions for data availability and more.

Applications of IPC

Here are some practical examples of how IPC improves the performance of dApps:

  • Distributed Computation: Spawn ephemeral subnets to run distributed computation jobs.

  • Coordination: Assemble into smaller subnets for decentralized orchestration with high throughput and low fees.

  • Localization: Leverage proximity to improve performance and operate with very low latency in geographically constrained settings.

  • Partition tolerance: Deploy blockchain substrates in mobile settings or other environments with limited connectivity.

With better performance, lower fees and faster transactions, IPC can rapidly improve horizontal and vertical markets with decentralized technology:

  • Artificial Intelligence: IPC is fully compatible with Filecoin, the world’s largest decentralized data storage network. Leveraging Filecoin, IPC can enable distributed computation to power hundreds of innovative AI models.

  • Decentralized Finance (DeFi): Enabling truly high-frequency trading and traditional backends with verifiability and privacy.

  • Big Data and Data Science: Multiple teams are creating global-scale distributed compute networks to enable Data Science analysis on Exabytes of decentralized stored data.

  • Metaverse/Gaming: Enabling real-time tracking of player interactions in virtual worlds.

  • DAOs: Assemble into smaller subnets for decentralized orchestration with high throughput and low fees.

Get involved

  • Visit the IPC website

  • Read the docs

  • Check out the repository

  • Connect with the community on Discord

Backup and disaster recovery

This page covers the basics of backups and disaster recovery for storage providers. A backup strategy is only as good as the last successful restore.

It is crucial to have a backup of any production system. It is even more crucial to be able to restore from that backup. These concepts are vital to a Filecoin storage provider because not only are you storing customer data for which you have (on-chain) contracts, you have also pledged a large amount of collateral for that data.

If you are unable to restore your Lotus miner and start proving your storage on-chain, you risk losing a lot of money. If you are unable to come back online within 6 weeks, you lose all of your collateral, which will most likely lead to bankruptcy.

As such, it matters less what kind of backup you have, as long as you are able to restore from it quickly.

High availability (HA) versus Disaster recovery (DR)

It is a common misconception to assume you are covered against any type of failure by implementing a highly available (HA) setup. HA will protect against unplanned unavailability in many cases, such as a system failure. It will not protect you against data corruption, data loss, ransomware, or a complete disaster at the datacenter level.

Backups and (tested) restores are the basis for a DR (disaster recovery) plan and should be a major point of attention for any Filecoin storage provider, regardless of your size of operation.

Recovery Time Objective (RTO) and Recovery Point Objective (RPO)

When planning for backup and recovery, the terms RPO and RTO are important concepts to know about.

  • Recovery Time Objective (RTO) is the time taken to recover a certain application or dataset in the event of a failure. Fast recovery means a shorter RTO (typically measured in hours/minutes/seconds). Enterprises plan for very short RTOs when downtime is not acceptable to their business. Application and file system snapshots typically provide the lowest possible RTO.

  • Recovery Point Objective (RPO) is the last known working backup from which you can recover. A shorter RPO means the time between the last backup and the failure is short. Enterprises plan for very short RPOs for systems and data that changes very often (like databases). Synchronous replication of systems and data typically provides the lowest possible RPO.

RPO/RTO for storage providers

Although ‘RPO zero’ and ‘RTO zero’ are the ideal, in practice they are rarely economical. DR planning requires compromises, and as a storage provider you need to weigh cost against RPO.

RTO is typically less concerning for storage providers. The most critical parts to recover are your sealed storage and your wallets. Wallet addresses typically do not change, so the only thing to worry about is your sealed storage. With storage level snapshots (such as ZFS snapshots), you can reduce your RTO to almost zero.

For RPO, although synchronous replication, together with snapshots, can reduce RPO to nearly zero, that is not a cost-efficient solution. Asynchronous replication of sealed storage is the most viable option if you are running at small-to-medium scale. Once you grow beyond 10PB of storage, even replicating the data will become an expensive solution.

In such cases you might want to look into storage cluster solutions with built-in redundancy. Very large storage providers will operate Ceph clusters or other solutions with built-in erasure coding. Although this becomes more like an HA setup than a DR setup, at scale it is the only economically viable option.

Running a storage cluster comes with its own operational challenges though, which does not make this a good fit for small-to-medium setups.

RPO/RTO for customers

Both storage providers and data owners (customers) should look at RPO and RTO options. As a customer, you can achieve HA/DR by having multiple copies of your data stored (and proven) across multiple storage providers. In the event of data loss at one provider, other providers will hold a copy of your data from which you can retrieve. As a customer, you choose how much redundancy you need, by doing storage deals with more providers.

RTO for data owners is a matter of how fast the storage provider(s) can provide you the data.

  • Do your storage providers offer “fast retrieval” of the data through unsealed copies? If not, the unsealing process (typically multiple hours) must be calculated into the RTO.

  • Do your storage providers offer retrieval through Saturn (the Web3 CDN) for ultra-fast retrieval?

  • Do your storage providers pin your data on IPFS, in addition to storing it on Filecoin?

RPO for data owners is less of a concern, especially once the data is sealed. The Filecoin blockchain will enforce availability and durability of the data being stored, once it is sealed. It is therefore important, as a data owner, to know how fast your storage provider can prove the data on-chain.

Backup techniques

  • A first level of protection comes from ZFS (if you are using ZFS as the file system for your storage). Having ZFS snapshots available protects you against data loss caused by human error or tech failure, and potentially even against ransomware. Other file systems typically also have a way to make snapshots, albeit not as efficiently as ZFS.

  • A second level of defense comes from a dedicated backup system. Not only should you have backup storage (on a different storage array than the original data), you also need to have a backup server that can at a minimum run the Lotus daemon, Lotus miner and 1 WindowPoSt worker (note: this requires a GPU). With that you can sync the chain, offer retrievals and prove your storage on-chain, from your backup system, whilst you bring your primary back online.

  • An alternative technique to having a dedicated backup system and copy is to have a storage cluster. This still requires a backup system to run the Lotus daemon, Lotus miner and PoST worker on. Implementing a storage cluster is usually only done for large-scale deployments as it comes with additional operational tasks.
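The snapshot-based first line of defense described above can be sketched with standard ZFS commands. The pool and dataset names (`tank/sealed`) and the snapshot naming scheme are hypothetical placeholders:

```shell
# Take a point-in-time snapshot of the sealed-sector dataset
zfs snapshot tank/sealed@$(date +%Y%m%d)

# List existing snapshots to verify they are being created
zfs list -t snapshot

# Roll back to a known-good snapshot after corruption or operator error
# (this discards all changes made since that snapshot)
zfs rollback tank/sealed@20240101
```

Because ZFS snapshots are copy-on-write, taking them is near-instant and they only consume space as data changes, which is why they pair well with the mostly cold sealed-sector data.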

For maximum resilience, you could host your backup system (server + storage) in a different datacenter than your primary system.

DR failover techniques

One way to prepare for an easy failover of the software components in the event of a failure is to configure floating IP addresses. Instead of pinning lotus daemon and lotus-miner to the host IP address of the server they are running on, you can configure a secondary IP address and pin the daemon to its own IP, and lotus-miner to yet another IP.

This drastically reduces the number of manual failover tasks. If the recovered daemon or miner instance changes IP address, quite a lot of reconfiguration is required in various places.

Having the services on a floating IP allows you to assign that IP to another machine and start the service on it.
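As a sketch with the standard iproute2 and iputils tools (the address 10.0.0.50 and interface eth0 are hypothetical), moving a floating IP to a standby host looks like this:

```shell
# On the failed primary (if still reachable): release the floating IP
sudo ip addr del 10.0.0.50/24 dev eth0

# On the standby host: claim the floating IP
sudo ip addr add 10.0.0.50/24 dev eth0

# Send gratuitous ARP so switches and peers update their ARP caches
sudo arping -c 3 -U -I eth0 10.0.0.50

# Start the service (e.g. lotus-miner) bound to the floating IP
lotus-miner run
```

With the daemon and miner each configured to listen on their own floating IP rather than the host IP, no Lotus reconfiguration is needed after the move.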

No Penalty for Recovered Faults

Note that as of FIP006 (no repay debt requirement for DeclareFaultsRecovered), storage providers are no longer required to repay fee debt before declaring a storage fault recovered. This enables a storage provider that has accrued fee debt to recover faults without being further penalized with additional fees.

Enable PDP

This section explains how to enable Proof of Data Possession (PDP) on a Storage Provider node using Curio. These steps guide you through running a standalone PDP service using Curio and pdptool.

DEPRECATED DEVELOPER TOOL This documentation refers to the legacy pdptool, which is intended only for low-level developer testing. It is not the recommended method for onboarding or interacting with PDP Storage Providers.

For current usage, including working with live PDP SPs and submitting real deals, please use the and .

Attach Storage Locations

With Curio running with the GUI layer:

Run the following commands in your Curio CLI to attach storage paths:
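A sketch of these commands, assuming Curio follows the lotus-miner-style storage flags; the paths are placeholders, and you should verify the exact flags with `curio cli storage attach --help` on your version:

```shell
# Attach a high-performance path for sealing scratch space (hypothetical path)
curio cli storage attach --init --seal /fast-storage-path

# Attach a high-capacity path for long-term sector storage (hypothetical path)
curio cli storage attach --init --store /long-term-storage-path
```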

Your fast-storage path should point to high-performance storage media such as NVMe or SSD.


Add a PDP Configuration Layer

Browse to the Configurations page of the Curio GUI.

Create a new layer named pdp. Under Subsystems, enable the following by setting them to true:

You may find it helpful to search for the setting names in your browser.

  • ✅ EnableParkPiece

  • ✅ EnablePDP

  • ✅ EnableCommP

  • ✅ EnableMoveStorage

In the HTTP section:

  • ✅ Enable: true

  • 🌐 DomainName: your domain (e.g., pdp.mydomain.com)

  • 📡 ListenAddress: 0.0.0.0:443

You must point your domain’s A record to your server’s public IP address for Let’s Encrypt to issue a certificate.


Set Up PDP Service Keys

Build the pdptool:

Generate a service secret:
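The stripped command blocks above can be sketched as follows. Both the `make pdptool` target and the `create-service-secret` subcommand are assumptions based on the Curio repository and may differ in your version:

```shell
# Build pdptool from the Curio source tree (assumed make target)
cd curio
make pdptool

# Generate a service secret; this prints the public key you will
# register in the Curio GUI (assumed subcommand name)
./pdptool create-service-secret
```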

Browse to the PDP page of the Curio GUI and in the Services section:

  • Select Add PDP Service

  • Input a Service Name of your choice (e.g. pdp-service)

  • Copy the previously generated public key into the Public Key field.

  • Select Add Service


Import your Filecoin Wallet Private Key:

There are several ways to obtain private keys for Ethereum addresses. For this guide, we will use a new delegated Filecoin wallet address.

Create a new delegated wallet:

You can display your Lotus wallets at any time by running:

Export & convert your new delegated wallet address private key:
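The wallet steps above can be sketched as follows. The `f410...` address is a placeholder for your own delegated address, and the conversion pipeline assumes the standard Lotus hex-encoded JSON key export format:

```shell
# Create a new delegated (Ethereum-style) wallet
lotus wallet new delegated

# List all wallets known to the Lotus node
lotus wallet list

# Export the delegated wallet and convert the key info to a raw hex private key:
# hex -> JSON key info -> base64 PrivateKey -> raw 32-byte hex
lotus wallet export f410... | xxd -r -p | jq -r '.PrivateKey' | base64 -d | xxd -p -c 32
```

The final hex string is what gets pasted into the Private Key (Hex) field in the Curio GUI.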

Browse to the PDP page of the Curio GUI and in the Owner Address section:

  • Select Import Key

  • Copy the previously generated private wallet key into the Private Key (Hex) field.

  • Select Import Key

Your 0x wallet address (the Ethereum-style address derived from your delegated Filecoin wallet’s private key) will be added to the Owner Address section of the Curio PDP page.

Make sure to send a small amount of FIL to your 0x wallet - we recommend 5 FIL to ensure uninterrupted PDP operation during initial setup and testing.

Important: Secure your private key material. Don’t expose or store it in plain text without protection.


Restart and Verify

Restart Curio with both layers:
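Assuming your existing GUI configuration layer is named gui, the restart looks like:

```shell
# Run Curio with both the GUI and PDP configuration layers applied
curio run --layers=gui,pdp
```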

If you encounter errors binding to port 443 when starting Curio with the pdp configuration layer, run:
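A common fix is to grant the binary the Linux capability to bind privileged ports; this sketch assumes curio is on your PATH:

```shell
# Allow the curio binary to bind ports below 1024 (e.g. 443) without running as root
sudo setcap 'cap_net_bind_service=+ep' $(which curio)
```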

Test the PDP service:

Use the service name specified in the Service Name field when you added your public PDP Service key - e.g. pdp-service
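As a sketch, using the placeholder domain from the HTTP configuration above; the flag names are assumptions to verify against `./pdptool --help`:

```shell
# Ping the PDP service over HTTPS using the registered service name
./pdptool ping --service-url https://pdp.mydomain.com --service-name pdp-service
```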

Expected output:

Note: The first ping often fails. Try again after a short delay.


🎉 You’re Ready!

You’ve successfully launched a PDP-enabled Filecoin Storage Provider stack. Your system is now:

  • ✅ Syncing with the Filecoin network via Lotus

  • ✅ Recording deal and sector metadata in YugabyteDB

  • ✅ Operating Curio to manage sealing and coordination

  • ✅ Submitting Proof of Data Possession to verify storage integrity


🔜 Next Steps

  • 🚙 Take PDP for a test drive with the guide

  • 🧭 Monitor logs and metrics

  • 💬 Join the community - Filecoin Slack -

Fundamentals

Learn about the various tools and options for adding Filecoin storage to software applications, smart contracts, and workflows.

Develop on Filecoin

Filecoin combines the benefits of content-addressed data leveraged by IPFS with blockchain-powered storage guarantees. The network offers robust and resilient distributed storage at massively lower cost compared to current centralized alternatives.

Developers choose Filecoin because it:

  • is the world’s largest distributed storage network, without centralized servers or authority

  • offers on-chain proofs to verify and authenticate data

  • is highly compatible with IPFS and content addressing

  • is the only decentralized storage network with petabyte-scale capacity

  • stores data at extremely low cost (and keeps it that way for the long term)

Filecoin and IPFS

How do Filecoin and IPFS work together? They are complementary protocols for storing and sharing data in the distributed web. Both systems are open-source and share many building blocks, including content addressing (CIDs) and network protocols (libp2p).

IPFS does not include built-in mechanisms to incentivize the storage of data for other people. To persist IPFS data, you must either run your own IPFS node or pay a provider.

This is where Filecoin comes in. Filecoin adds an incentive layer to content-addressed data. Storage deals are recorded on-chain, and providers must submit proofs of storage to the network over time. Payments, penalties, and block rewards are all enforced by the decentralized protocol.

Filecoin and IPFS are designed as separate layers to give developers more choice and modularity, but many tools are available for combining their benefits. This diagram illustrates how these tools (often called storage helpers) provide developer-friendly APIs for storing on IPFS, Filecoin, or both.

Filecoin and smart contracts

You can improve speed and reduce gas fees by storing smart contract data on Filecoin. With Filecoin, the data itself is stored off-chain, but it is used to generate verifiable CIDs and storage proofs that are recorded on the Filecoin chain and can be included in your smart contracts. This design pairs well with multiple smart contract networks such as Ethereum, Polygon, Avalanche, Solana, and more. Your smart contract only needs to include the compact content identifiers (CIDs).

Get started

Let’s get building. Choose one of the following APIs. These are all storage helpers, or tools and services that abstract Filecoin’s robust deal making processes into simple, streamlined API calls.

  • Chainsafe Storage API - for projects needing S3 compatibility

  • NFT.storage - for NFT data

  • Web3.storage - for general application data

Examples:

  • Polygon tutorial

  • Flow tutorial

  • Avalanche tutorial

  • Using IPFS & Filecoin on Harmony

Additional resources

  • Filecoin integrations for Web3 infrastructure (video)

  • What is an IPFS Pinning Service? (Pinata explainer)

  • IPFS documentation: Persistence, permanence and pinning

  • Developing on Filecoin (video)

  • Textile tools: video and documentation

  • Building decentralized apps using Fleek’s Space daemon (video)

Linux

This page covers the importance of understanding the Linux operating system, including installation, configuration, environment variables, performance optimization, and performance analysis.

Becoming a storage provider requires a team with a variety of skills. Of all the technical skills needed to run a storage provider business, storage knowledge is important, but arguably it is even more important to have a deep understanding of the Linux operating system.

Where most enterprise storage systems (NAS, SAN and other types) do not require the administrator to have hands-on Linux experience, Filecoin does require a lot more knowledge about Linux. For starters, this is because Filecoin is not just a storage system. It is a blockchain platform that offers decentralized storage. As a storage provider, you must ensure that your production system is always available, not just providing the storage.

Ubuntu Server LTS

Although Lotus also runs on Mac, production systems generally all run on Linux. More specifically, most storage providers run on Ubuntu. Any Linux distribution should be possible but running Ubuntu makes it easier to find support in the community. Every distribution is a bit different and knowing that all components have been built and tested on Ubuntu, and knowing you have the same OS variables in your environment as someone else, lowers the barrier to starting as a storage provider significantly. Go for Ubuntu Server and choose the latest LTS version.

Install Ubuntu LTS as a headless server. This means there is no desktop environment or GUI installed. It requires you to do everything on the command line. Not having a desktop environment on your server(s) has multiple advantages:

  • It reduces the attack surface of your systems. Fewer packages installed means fewer patches and updates, but more importantly, fewer potential vulnerabilities.

  • As you will be running several tasks on GPU, it’s best to avoid running a desktop environment, which might compete for resources on the GPU.

Pin the nvidia-drivers and cuda packages so they are excluded from automatic updates (e.g., with apt-mark hold on Ubuntu). Once you have a working setup for your specific GPU, you will want to test these packages before you risk breaking them. Many storage providers may need to since some operating systems do not include this package by default.

Command-line and environment variables

All installation tasks and operational activities happen from the CLI. When installing and upgrading Lotus, it is recommended to build the binaries from source code. Upgrades to Lotus happen every two months or so. If you are unable to perform a mandatory Lotus upgrade, you may become disconnected from the Filecoin network, which means you could be penalized and lose money, so it’s vital to keep Lotus up-to-date.

Configuration parameters for the Lotus client are stored in 2 places:

  • into config.toml files in ~/.lotus, ~/.lotusminer and ~/.lotusworker

  • into environment variables in ~/.bashrc if you are using Bash as your shell

Configuration parameters, and most environment variables, are covered in the Lotus documentation. More specific environment variables around performance tuning can be found on the repository on GitHub.

Linux performance optimization

Scheduler

Some storage providers fine-tune their setups by pinning certain tasks (especially PC1) to specific CPU cores; as a starting storage provider, that level of tuning is not necessary. It is essential, however, to have some level of understanding of the Linux scheduler, so you know how to prioritize and deprioritize other tasks in the OS. In the case of Lotus workers, you certainly want to prioritize the lotus-worker process(es).
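For example, the standard nice/renice tools can be used for this; the process name lotus-worker and the priority values below are illustrative:

```shell
# Raise the scheduling priority of all running lotus-worker processes
sudo renice -n -5 -p $(pgrep -d ' ' lotus-worker)

# Start a low-priority housekeeping task so it does not compete with sealing
nice -n 19 tar czf /backup/logs.tar.gz /var/log/lotus
```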

Configuring open file limits

Lotus needs to open a lot of files simultaneously, and it is necessary to reconfigure the OS to support this.

This is one of the examples where not every Linux distribution is the same. On Ubuntu, run the following commands:
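A commonly used sketch is shown below; the value 1000000 is an assumption, not an official requirement, and the exact numbers vary between guides:

```shell
# Check the current per-process open-file limit
ulimit -n

# Raise the system-wide maximum number of open file handles
sudo sysctl -w fs.file-max=1000000

# Persist per-user limits across reboots
echo '* soft nofile 1000000' | sudo tee -a /etc/security/limits.conf
echo '* hard nofile 1000000' | sudo tee -a /etc/security/limits.conf
```

Log out and back in (or reboot) for the limits.conf changes to take effect on new sessions.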

Performance analysis

Diagnosing performance bottlenecks on a system is vital to keeping a well-balanced setup.

There are many good resources to check out when it comes to Linux performance troubleshooting. Brendan Gregg’s work is an excellent introduction. Each one of these commands deserves a chapter on its own but can be further researched in their man pages.

The commands used are:

Storage

This page covers RAID configurations, performance implications and availability, I/O behavior for sealed and unsealed sectors, and read/write performance considerations.

RAID configurations

Storage systems use RAID for protection against data corruption and data loss. Since cost is an important aspect for storage providers, and you are dealing mostly with cold storage, you will be opting for SATA disks in RAID configurations that favor capacity (and read performance). This leads to RAID5, RAID6, RAIDZ and RAIDZ2. Double parity configurations like RAID6 and RAIDZ2 are preferred.

The width of a volume is defined by how many spindles (SATA disks) there are in a RAID group. Typical configurations range between 10+2 and 13+2 disks in a group (in a VDEV in the case of ZFS).
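For example, a 10+2 RAIDZ2 VDEV (12 disks, 2 of which hold parity) built from 20 TB spindles yields:

```shell
# Usable capacity of a RAIDZ2 VDEV: (disks - parity) * disk size
awk 'BEGIN { disks=12; parity=2; size=20;
  printf "usable: %d TB (%.0f%% of raw)\n", (disks-parity)*size, 100*(disks-parity)/disks }'
# prints: usable: 200 TB (83% of raw)
```

Wider VDEVs improve this efficiency ratio, which is the main reason configurations up to 13+2 are popular despite the rebuild-time trade-offs discussed below.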

RAID implications

Although RAIDZ2 provides high fault tolerance, configuring wide VDEVs also has an impact on performance and availability. ZFS performs an automatic healing task called scrubbing which performs a checksum validation over the data and recovers from data corruption. This task is I/O intensive and might get in the way of other tasks that should get priority, like storage proving of sealed sectors.

Another implication of large RAID sets that gets aggravated with very large capacity per disk is the time it takes to rebuild. Rebuilding is the I/O intensive process that takes place when a disk in a RAID group is replaced (typically after a disk failed). If you choose to configure very wide VDEVs while using very large spindles (20TB+) you might experience very long rebuild times which again get in the way of high priority tasks like storage proving.

It is possible, though, to configure wider VDEVs (RAID groups) for the unsealed sectors. Physically separating sealed and unsealed copies has other advantages, which are explained in Custom Storage Layout.

I/O Behavior

Storage providers keep copies of sealed sectors and unsealed sectors (for fast retrieval) on their storage systems. However, the I/O behavior on sealed sectors is very different from the I/O behavior on unsealed sectors. When storage proving happens, only a very small portion of the data is read by WindowPoSt. A large storage provider will have many sectors in multiple partitions for which WindowPoSt requires fast access to the disks. This is unusual I/O behavior for any storage system.

The unsealed copies are used for fast retrieval of the data towards the customer. Large datasets in chunks of 32 GiB (or 64 GiB depending on the configured sector size) are read.

In order to avoid different tasks competing for read I/O on disk it is recommended to create separate disk pools with their own VDEVs (when using ZFS) for sealed and unsealed copies.

Write performance

Write access towards the storage also requires your attention. Depending on how your storage array is connected (SAS or Ethernet) you will have different transfer speeds towards the sealed storage path. At a sealing capacity of 6 TiB/day you will effectively be writing 12 TiB/day towards the storage (6 TiB sealed, 6 TiB unsealed copies). Both your storage layout and your network need to be able to handle this.

If this 12 TiB were equally spread across the 24 hrs of a day, this would already require 1.14 Gbps.

12 TiB × 1024 GiB/TiB × 8 Gib/GiB / (24 hr × 3600 s/hr) ≈ 1.14 Gbps
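The same arithmetic as a quick check on the command line:

```shell
# 12 TiB/day written, expressed as an average rate in gigabits per second
awk 'BEGIN { printf "%.2f Gbps\n", 12 * 1024 * 8 / (24 * 3600) }'
# prints: 1.14 Gbps
```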

The sealing pipeline produces 32 GiB sectors (64 GiB depending on your configured sector size) which are written to the storage. If you configured batching of the commit messages (to reduce total gas fees) then you will write multiple sectors towards disk at once.

A minimum network bandwidth of 10 Gbps is recommended and write cache at the storage layer will be beneficial too.

Read performance

Read performance is optimal when choosing RAIDZ2 VDEVs of 10 to 15 disks. RAID sets using parity, like RAIDZ and RAIDZ2, employ all spindles for read operations. This means read throughput is a lot better compared to reading from a single or a few spindles.

There are 2 types of read operations that are important in the context of Filecoin:

  • random read I/O:

    When storage proving happens, a small portion of a sector is read for proving.

  • sequential read I/O:

    When a retrieval happens, entire sectors are read from disk and streamed to the customer via Boost.

Support

If you need assistance while exploring the Filecoin virtual machine, you can reach out to the team and community using the links on this page.

Slack

Like many other distributed teams, the Filecoin developer relations team, led by FIL Builders, works mostly on Slack and Discord. You can join the Filecoin Project Slack for free at filecoin.io/slack and the Discord at https://discord.com/invite/filecoin.

The following Slack channels are most relevant for Filecoin builders:

  • #fil-builders for building solutions on FVM and Filecoin

  • #fil-fvm-dev for development of the FVM

  • #fvm-docs for FVM documentation

Forum

If you just need a general pointer or are looking for technical FAQs, you can head over to the FVM GitHub Discussion tab.

Developer grants

The Filecoin Grant Platform connects grant makers with builders and researchers in the Filecoin community. Whether you represent a foundation that wants to move the space forward, a company looking to accelerate development on the features your application needs, or a developer team itching to hack on the FVM, take a look at the supported grant types and available opportunities.

Filecoin deals

This section covers the different types of deals in the Filecoin network, and how they relate to storage providers.


Proofs

In Filecoin, cryptographic proving systems, often simply referred to as proofs, are used to validate that a storage provider (SP) is properly storing data.

Different blockchains use different cryptographic proving systems (proofs) based on the network’s specific purpose, goals, and functionality. Regardless of which method is used, proofs have the following in common:

  • All blockchain networks seek to achieve consensus and rely on proofs as part of this process.

  • Proofs incentivize network participants to behave in certain ways and allow the network to penalize participants who do not abide by network standards.

  • Proofs allow decentralized systems to agree on a network state without a central authority.

Proof-of-Work and Proof-of-Stake are both fairly common proof methods:

  • Proof-of-Work: nodes in the network solve complex mathematical problems to validate transactions and create new blocks.

  • Proof-of-Stake: nodes in the network are chosen to validate transactions and create new blocks based on the amount of cryptocurrency they hold and “stake” in the network.

The Filecoin network aims to provide useful, reliable storage to its participants. With a traditional centralized entity like a cloud storage provider, explicit trust is placed in the entity itself that the data will be stored in a way that meets some minimum set of standards such as security, scalability, retrievability, or replication. Because the Filecoin network is a decentralized network of storage providers (SPs) distributed across the globe, network participants need an automated, trustless, and decentralized way to validate that an SP is doing a good job of handling the data.

In particular, the Filecoin proof process must verify the data was properly stored at the time of the initial request and is continuing to be stored based on the terms of the agreement between the client and the SP. In order for the proof processes to be robust, the process must:

  • Target a random part of the data.

  • Occur at a time interval such that it is not possible, profitable, or rational for an SP to discard and re-fetch the copy of data.

In Filecoin, this process is known as Proof-of-Storage, and consists of two distinct types of proofs:

  • Proof of Replication (PoRep): a procedure used at the time of initial data storage to validate that an SP has created and stored a unique copy of some piece of data.

  • Proof of Spacetime (PoST): a procedure to validate that an SP is continuing to store a unique copy of some piece of data.

Proof-of-Replication (PoRep)

In the Filecoin storage lifecycle process, Proof-of-Replication (PoRep) is used when an SP agrees to store data on behalf of a client and receives a piece of client data. In this process:

  1. The data is placed into a sector.

  2. The sector is sealed by the SP.

  3. A unique encoding, which serves as proof that the SP has replicated a copy of the data they agreed to store, is generated (described in Sealing as proof).

  4. The proof is compressed.

  5. The result of the compression is submitted to the network as certification of storage.

Sealing as proof

The unique encoding created during the sealing process is generated using the following pieces of information:

  • The data that is sealed.

  • The storage provider who seals the data.

  • The time at which the data was sealed.

Because of the principles of cryptographic hashing, a new encoding will be generated if the data changes, the storage provider sealing the data changes, or the time of sealing changes. This encoding is unique and can be used to verify that a specific storage provider did, in fact, store a particular piece of client data at a specific time.
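As a loose analogy (not the actual PoRep construction), hashing the three inputs together shows how changing any one of them produces a completely different encoding; the provider IDs and epoch numbers below are made up:

```shell
# Simplified analogy only: hash (data, provider, seal time) together.
# Any change to any single input yields a completely different digest.
printf 'deal-data|f01234|epoch-305000' | sha256sum
printf 'deal-data|f09999|epoch-305000' | sha256sum   # different provider
printf 'deal-data|f01234|epoch-305001' | sha256sum   # different seal time
```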

Proof-of-Spacetime (PoSt)

After a storage provider has proved that they have replicated a copy of the data that they agreed to store, the SP must continue to prove to the network that:

  • They are still storing the requested data.

  • The data is available.

  • The data is still sealed.

Because this method is concerned with proving that data is being stored in a particular space for a particular period or at a particular time, it is called Proof-of-Spacetime (PoSt). In Filecoin, the PoSt process is handled using two different sub-methods, each of which serves a different purpose:

  • WinningPoSt is used to prove that an SP selected using an election process has a replica of the data at the specific time that they were asked and is used in the block consensus process.

  • WindowPoSt is used to prove that, for any and all SPs in the network, a copy of the data that was agreed to be stored is being continuously maintained over time and is used to audit SPs continuously.

WinningPoSt

WinningPoSt is used to prove that an SP selected via election has a replica of the data at the specific time that they were asked and is specifically used in Filecoin to determine which SPs may add blocks to the Filecoin blockchain.

At the beginning of each epoch, a small number of SPs are elected to mine new blocks using the Expected Consensus algorithm, which guarantees that validators will be chosen based on a probability proportional to their power. Each of the SPs selected must submit a WinningPoSt, proof that they have a sealed copy of the data that they have included in their proposed block. The deadline to submit this proof is the end of the current epoch and was intentionally designed to be short, making it impossible for the SP to fabricate the proof. Successful submission grants the SP:

  • The block reward.

  • The opportunity to charge other nodes fees in order to include their messages in the block.

If an SP misses the submission deadline, no penalty is incurred, but the SP misses the opportunity to mine a block and receive the block reward.

WindowPoSt

WindowPoSt is used to prove that, for any and all SPs in the network, a copy of the data that was agreed to be stored is being continuously maintained over time and is used to audit SPs continuously. In WindowPoSt, all SPs must demonstrate the availability of all claimed sectors every proving period. Sector availability is not proved individually; rather, SPs must prove a whole partition at once, and each partition must be proved by its assigned deadline (a 30-minute interval within the proving period).

The more sectors an SP has pledged to store, the more partitions of sectors the SP will need to prove per deadline. As this requires the SP to have access to sealed copies of each of the requested sectors, it makes it irrational for the SP to re-seal data every time they need to provide a WindowPoSt proof, thus ensuring that SPs on the network continuously maintain the data they agreed to store. Additionally, failure to submit a WindowPoSt for a sector results in the SP’s pledge collateral being forfeited and their storage power being reduced.
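The deadline arithmetic above can be sanity-checked: a 24-hour proving period split into 30-minute deadline windows yields 48 deadlines (figures from the text; how many partitions fall in each deadline depends on how many sectors the SP has pledged):

```shell
# 24-hour proving period / 30-minute deadline windows
echo $(( 24 * 60 / 30 ))
```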

Was this page helpful?

Roadmap

The FVM project has come a long way in an incredibly short amount of time. This is the roadmap for FVM features for the Filecoin network.

Goal

The goal of the FVM project is to add general programmability to the Filecoin blockchain. Doing so will give developers all kinds of creative options, including:

  • Orchestrating storage.

  • Creating L2 networks on top of the Filecoin blockchain.

  • Providing new incentive structures for providers and users.

  • Frequently verifying that providers are storing data correctly.

  • Automatically finding which storage providers are storing what data.

  • Many more data-based applications.

Filecoin was the first network to deploy programmability post-genesis, ensuring that layer 0 of the Filecoin blockchain was stable and fully functional first. Due to the large amount of capital already secured within the Filecoin network, development of the FVM needs to be careful and gradual.

Roadmap

The FVM roadmap is split into three initiatives:

  • Milestone 1: Initialize the project and allow built-in actors to run on the FVM.

  • Milestone 2: Enable the deployment of Ethereum virtual machine (EVM) compatible smart contracts onto the FVM. Also, allow developers to create and deploy their own native actors to the FVM.

  • Milestone 3: Continue to enhance programmability on FVM.

✅ Milestone 0

✅ Lotus mainnet canaries with FVM support

Completed in February 2022

The reference FVM implementation has been integrated into a fork of Lotus (the Filecoin reference client). A fleet of canary nodes have been launched on mainnet, running WASM-compiled built-in actors on the FVM. The canaries are monitored for consensus faults and to gather telemetry. This milestone is a testing milestone that’s critical to collect raw execution data to feed into the overhaul of the gas model, in preparation for user-programmability. It implies no network upgrade.

✅ Milestone 0.5

✅ Ability to run FVM node and sync mainnet

Completed in March 2022

Any node operator can sync the Filecoin Mainnet using the FVM and Rust built-in actors, integrated in Lotus, Venus, Forest, and Fuhon implementations. It implies no network upgrade.

✅ Milestone 1

✅ Introduction of non-programmable WASM-based FVM

Completed in May 2022

Mainnet will atomically switch from the current legacy virtual machines to the WASM-based reference FVM. A new gas model will be activated that accounts for actual WASM execution costs. Only Rust built-in actors will be supported at this time. This milestone requires a network upgrade.

✅ Network Version 17 (nv17): Initial protocol refactors for programmability

Completed in November 2022

An initial set of protocol refactors targeting built-in actors, including the ability to introduce new storage markets via user-defined smart contracts.

✅ Milestone 2.1

✅ Ability to deploy EVM contracts to mainnet (FEVM)

Completed in March 2023

The Filecoin network will become user-programmable for the first time. Developers will be able to deploy smart contracts written in Solidity or Yul, and compiled to EVM. Smart contracts will be able to access Filecoin functionality by invoking built-in actors. Existing Ethereum tooling will be compatible with Filecoin. This milestone requires a network upgrade.

✅ Hyperspace testnet goes live

Completed on January 16th 2023

A new stable developer testnet called Hyperspace will be launched as the pre-production testnet. The community is invited to participate in heavy functional, technical, and security testing. Incentives and bounties will be available for developers and security researchers.

✅ FEVM goes live on mainnet

Completed on March 14th 2023

The Filecoin EVM runtime is deployed on Filecoin mainnet via the Filecoin nv18 Hygge upgrade.

🔄 Milestone 2.2

🔄 Ability to deploy Wasm actors to mainnet

To complete midway through 2023

Developers will be able to deploy custom smart contracts written in Rust, AssemblyScript, or Go, and compiled to WASM bytecode. SDKs, tutorials, and other developer materials will be generally available. This milestone requires a network upgrade.

🔮 Milestone 3+

🔮 Further incremental protocol refactors to enhance programmability

To complete in 2023

A series of additional incremental protocol upgrades (besides nv17) to move system functionality from privileged space to user space. The result will be a lighter and less opinionated base Filecoin protocol, where storage markets, deal-making, incentives, etc. are extensible, modular, and highly customizable through user-deployed actors. Enhanced programming features such as user-provided cron, asynchronous call patterns, and more will start to be developed at this stage.

Was this page helpful?

Prerequisites

This guide walks you through setting up a PDP-enabled Filecoin Storage Provider using Lotus, YugabyteDB, and Curio.

This guide is written specifically for Ubuntu 22.04. If you are using a different Linux distribution, refer to the relevant documentation for package installation and compatibility.

Before starting, make sure you have a user with sudo privileges. This section prepares your system for the PDP stack.

System Package Installation

sudo apt update && sudo apt upgrade -y && sudo apt install -y \
mesa-opencl-icd ocl-icd-opencl-dev gcc git jq pkg-config curl clang \
build-essential hwloc libhwloc-dev libarchive-dev wget ntp python-is-python3 aria2

Install Go

sudo rm -rf /usr/local/go
wget https://go.dev/dl/go1.23.7.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.23.7.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
source ~/.bashrc
go version

You should see something like: go version go1.23.7 linux/amd64


Install Rust

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

When prompted, choose the option 1) Proceed with standard installation (default — just press Enter).

source $HOME/.cargo/env
rustc --version

You should see something like: rustc 1.86.0 (05f9846f8 2025-03-31)


Add Go and Rust to Secure Sudo Path

sudo tee /etc/sudoers.d/dev-paths <<EOF
Defaults secure_path="/usr/local/go/bin:$HOME/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
EOF
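If the secure_path entry is correct, sudo should now resolve the Go binary (a quick sanity check; the output should match the go version printed earlier):

```shell
sudo go version
```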
curio run --layers=gui
curio cli storage attach --init --seal /fast-storage/path
curio cli storage attach --init --store /long-term-storage/path
cd curio/cmd/pdptool
go build .
./pdptool create-service-secret
# Example output:

-----BEGIN PUBLIC KEY-----
LxP9MzVmHdC7KwYBvNAo1jXuIRfGXqQyo2JzE4Uctn0a5eFZbs6Wlvq3dKYgphTD
XAqRsm38LPt2iVcGb9MruZJxEkBhO71wDdNyaFMoXpCJnUqRAezvKlfbIg==
-----END PUBLIC KEY-----
lotus wallet new delegated
# Example output:

t410fuo4dghaeiqzokiqnxruzdr6e3cjktnxprrc56bi
lotus wallet list
lotus wallet export <your-delegated-wallet-address> | xxd -r -p | jq -r '.PrivateKey' | base64 -d | xxd -p -c 32
# Example output:

d4c2e3f9a716bb0e47fa91b2cf4a29870be3c5982fd6eafed71e8ac3f9c0b12
curio run --layers=gui,pdp
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/curio
./pdptool ping --service-url https://your-domain.com --service-name <ServiceName>
Ping successful: Service is reachable and JWT token is valid.

Reference architectures

This page contains some reference architectures that storage providers can use to build out their infrastructure.

1 PiB raw architecture

1 PiB raw reference architecture.

The following reference architecture is designed for 1 PiB of raw sectors or raw data to be stored. Let’s discuss the various design choices of this architecture.

Virtual machines

  • 32 CPU Cores

  • 512 GB RAM

  • 8x 2 TB SSD storage

  • 2x 10 GbE ethernet NICs

Lotus daemon and Boost run as Virtual Machines in this architecture. The advantages of virtualization are well-known, including easy reconfiguration of parameters (CPU, memory, disk) and portability. The daemon is not a very intensive process by itself, but must be available at all times. We recommend having a second daemon running as another VM or on backup infrastructure to which you can fail over.

Boost is a resource-intensive process, especially when deals are being ingested over the internet. It also feeds the data payload of deals into the Lotus miner.

We recommend 12-16 cores per VM and 128 GiB of memory. Lotus daemon and Boost need to run on fast storage (SSD or faster). The capacity requirements of Boost depend on the size of deals you are accepting as a storage provider. Its capacity must be sufficient to be landing space for deals until the data can be processed by your sealing cluster in the backend.

Both Lotus daemon and Boost require public internet connectivity. In the case of Boost you also need to consider bandwidth. Depending on the deal size you are accepting, you might require 1 Gbps or 10 Gbps internet bandwidth.

Lotus miner

  • 16 CPU Cores

  • 256 GB RAM

  • 2x 1TB SSD storage

  • 2x 10 GbE ethernet NICs

Lotus miner becomes a less intensive process with dedicated PoST workers separated from it (as in this design). If you use a dedicated storage server or NAS system as the storage target for your sealed and unsealed sectors, Lotus miner could eventually also become a VM, although this requires additional CPU and memory on the hypervisor host.

We opted for a standalone Lotus miner in this design and gave it 256 GiB of memory. This is because we operate ZFS at the storage layer, which requires a lot of memory for caching. 128 GiB of memory is sufficient for Lotus miner if you opt for a dedicated storage server or NAS system for your storage.

SATA Storage

In this architecture we have attached storage shelves to the Lotus miner with 2.4 PiB of usable capacity. This is the capacity after the creation of a RAIDZ2 file system (double parity). We recommend vdevs of 12 disks wide. In RAIDZ2 this results in 10 data disks and 2 parity disks. Storage systems also don’t behave well at 100% used capacity, so we designed for 20% extra capacity.
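A back-of-the-envelope check of the usable capacity (the 3.6 PiB raw figure is a hypothetical input chosen to illustrate the 10/12 data ratio and the 20% extra capacity from the text):

```shell
# 12-wide RAIDZ2: 10 of 12 disks carry data; keep ~20% free capacity.
awk 'BEGIN { raw = 3.6; printf "%.1f PiB usable\n", raw * 10 / 12 * 0.8 }'
```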

PoST workers

  • 16 CPU Cores

  • 128 GB RAM

  • 2x 1TB SSD storage

  • 1x GPU 10+ GB memory, 3500+ CUDA cores

  • 2x 10 GbE ethernet NICs

We have split off the Winning PoST and Window PoST tasks from the Lotus miner. Using dedicated systems for those processes increases the likelihood of winning block rewards and reduces the likelihood of missing a proving deadline. For redundancy, you can run a standby WindowPoSt worker on the WinningPoSt server and vice versa.

PoST workers require a minimum of 128 GiB of memory and a capable GPU with 10 GB of memory and 3,500 or more CUDA cores.

Sealing workers

The sealing workers require the most attention during the design of a solution. Their performance will define the sealing rate of your setup, and hence, how fast you can onboard client deals.

Keep in mind that using Sealing-as-a-Service reduces the need for a fast-performing sealing setup. In this design, however, we plan for an on-premise sealing setup with a maximum throughput of 7 TiB/day. This theoretical sealing capacity assumes the entire sealing setup runs at full speed 24 hours a day.

AP / PC1 worker

  • 32 CPU Cores with SHA-extensions

  • 1 TB RAM

  • 2x 1TB SSD OS storage

  • 15+ TB U.3 NVMe sealing / scratch storage

  • 2x 10 GbE (or faster) ethernet NICs

We put the AddPiece and PreCommit1 tasks together on a first worker. This makes sense because AddPiece prepares the scratch space that the PC1 tasks use thereafter. The first critical hardware component for PC1 is the CPU, which must support SHA-256 extensions. Most storage providers opt for AMD Epyc (Rome, Milan, or Genoa) processors, although Ice Lake and newer Intel Xeon processors also support these extensions.

To verify if your CPU has the necessary extensions, run:

cat /proc/cpuinfo | grep --color sha_ni

PC1 is a single-threaded process, so we require enough CPU cores to run multiple PC1 tasks in parallel. This reference architecture has 32 cores in the PC1 worker, which allows for roughly 30 parallel PC1 processes.

For this, we also need 1TB of memory in the PC1 server.

Every PC1 process requires approximately 450 GiB of sealing scratch space. This scratch space is vital to the performance of the entire sealing setup and requires U.2 or U.3 NVMe media. For 30 parallel PC1 processes, we need roughly 15 TiB of scratch space. RAID protection on this volume is not mandatory; however, losing 30 in-flight sectors during sealing and having to start over does impact your sealing rate.
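The scratch sizing above is simple multiplication (figures from the text; provisioning ~15 TiB leaves some headroom):

```shell
# 30 parallel PC1 tasks x ~450 GiB scratch each, converted to TiB
awk 'BEGIN { printf "%.1f TiB\n", 30 * 450 / 1024 }'
```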

PC2 / C1 / C2 workers

  • 32 CPU Cores

  • 512 GB RAM

  • 2x 1TB SSD

  • 1x GPU 10+ GB memory, 3500+ CUDA cores

  • 2x 10 GbE (or faster)

The next step in the sealing pipeline is PreCommit2 (PC2). You could decide to keep it together with PC1, but given the size of our setup (1 PiB) and the likely requirement to scale beyond that later, we split off PC2 in this architecture.

We plan for twice the number of PC2 workers compared to PC1 workers, as explained under sealing rate. Apart from the memory requirements, this process specifically requires a capable GPU, preferably with 24 GB of memory and 6,000 or more CUDA cores.

The scratch space contents from PC1 are copied over to the PC2 worker. The PC2 worker also requires fast NVMe scratch space. Since we plan for two PC2 workers per PC1 worker, the scratch capacity per PC2 worker is half the total scratch capacity of the PC1 worker, 8 TiB in our case.

C1 doesn’t require much attention in our architecture. C2, however, again requires a capable GPU.

Solo storage providing

Please take a look at the presentation Benjamin Hoejsbo from PIKNIK gave, in which solo storage provider setups are examined. The presentation is from 2022, but the content is still relevant as of March 2023.

We are working to improve this section. If you would like to share your mining setup, please create an issue in the Filecoin documentation GitHub repository!

Was this page helpful?

Sealing pipeline

The process of sealing sectors is called the sealing pipeline. It is important for storage providers to understand the steps of the process.

Each step in the sealing process has different performance considerations, and fine-tuning is required to align the different steps optimally. For example, storage providers that don’t understand the expected throughput of each step may overload the sealing pipeline by trying to seal too many sectors at once, or by taking on a dataset that is too large for the available infrastructure. This can lead to a slower sealing rate, which is discussed in greater detail in Sealing Rate.

Overview

The sealing pipeline can be broken into the following steps:

AddPiece

The sealing pipeline begins with AddPiece (AP), where the pipeline takes a Piece and prepares it into the sealing scratch space for the PreCommit 1 task (PC1) to take over. In Filecoin, a Piece is data in CAR-file format produced by an IPLD DAG with a corresponding PayloadCID and PieceCID. The maximum Piece size is equal to the sector size, which is either 32 GiB or 64 GiB. If the content is larger than the sector size, it must be split into more than one PieceCID during data preparation.

The AddPiece process only uses some CPU cores; it doesn’t require a GPU. It does, however, write a lot of data to the sealing volume, so it is recommended to limit concurrent AP processes to 1 or 2 via the environment variable AP_32G_MAX_CONCURRENT=1.

It is typically co-located on a server with other worker processes from the sealing pipeline. As PC1 is the next process in the sealing pipeline, running AddPiece on the same server as the PC1 process is a logical architecture configuration.

Consider limiting the AP process to a few cores by using the taskset command, where <xx-xx> is the range on which cores the process needs to run on:

taskset -c <xx-xx> lotus-worker run ...

PreCommit 1

PreCommit 1 (PC1) is the most CPU intensive process of the entire sealing pipeline. PC1 is the step in which a sector, regardless of whether it contains data or not, is cryptographically secured. The worker process loads cryptographic parameters from a cache location, which should be stored on enterprise NVMe for latency reduction. These parameters are then used to run Proof-of-Replication (PoRep) SDR encoding against the sector that was put into the sealing scratch space. This task is single-threaded and very CPU intensive, so it requires a CPU with SHA256 extensions. Typical CPUs that meet this requirement include the AMD Epyc Milan/Rome or an Intel Xeon Ice Lake with 32 cores or more.

Using the scratch space, the PC1 task will create 11 layers of the sector. Storage providers must host scratch space for this on enterprise NVMe. This means that:

  • Every sector consumes space on the scratch volume equal to 1+11 times its size.

  • For a 32 GiB sector, PC1 requires 384 GiB on the scratch volume.

  • For a 64 GiB sector, PC1 requires 768 GiB.

To seal at a decent rate and make use of all the sealing capacity in a PC1 server, you will want to maximize the number of concurrent PC1 jobs on the system. Set the PC1_32G_MAX_CONCURRENT= environment variable for the PC1 worker; you can learn more about this in the chapter on Sealing Rate. Sealing several sectors in parallel multiplies the requirements on CPU cores, RAM, and scratch space by the number of sectors being sealed.
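For instance, on a dedicated PC1 worker the cap might be set like this (the value 15 is a hypothetical example; size it to your CPU cores, RAM, and scratch capacity):

```shell
export PC1_32G_MAX_CONCURRENT=15
lotus-worker run ...
```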

The process of sealing a single 32 GiB sector takes roughly three hours, though that time depends largely on your hardware and whatever other jobs are running on it.

PreCommit 2

When PC1 has completed on a given sector, the entire scratch space for that sector is moved over to the PreCommit 2 (PC2) task. This task is typically executed on a different server than the PC1 server because it behaves differently. In short, PC2 validates PC1 using the Poseidon hashing algorithm over the Merkle Tree DAG that was created in PC1. As mentioned in the previous section, the entire scratch space is either 384 GiB or 768 GiB, depending on the sector size.

Where PC1 is CPU-intensive, PC2 is executed on GPU. This task is also notably shorter in duration than PC1, typically 10 to 20 minutes on a capable GPU. This requires a GPU with at least 10 GiB of memory and 3500+ CUDA cores or shading units, in the case of Nvidia. Storage providers can use slower GPUs, but this may create a bottleneck in the sealing pipeline.

For best performance, compile Lotus with CUDA support instead of OpenCL. For further information, see the Lotus CUDA Setup.

In the case of a Snap Deal, an existing committed capacity sector is filled with data. When this happens, the entire PC1 task does not run again; however, the snapping process employs PC1’s replica-update and prove-replica-update to add the data to the sector. This can run on the PC2 worker or on a separate worker depending on your sealing pipeline capacity.

When PC2 has completed for a sector, a precommit message is posted on-chain. If batching is configured, Lotus will batch these messages to avoid sending messages to the chain for every single sector. In addition, there is a configurable timeout interval, after which the message will be sent on-chain. This timeout is set to 24 hours by default. These configuration parameters are found in the .lotusminer/config.toml file.

If you want to force the pre-commit message on-chain for testing purposes, run:

lotus-miner sectors batching precommit --publish-now

The sealed sector and its 11 layers are kept on the scratch volume until Commit 2 (C2) is complete.

WaitSeed

WaitSeed is not an actual task that is executed; rather, it is a step in the pipeline in which the blockchain forces the pipeline to wait for 150 epochs as a built-in security mechanism. With Filecoin’s 30-second epochs, this means 75 minutes must elapse between PC2 and the next task, Commit 1 (C1).
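The 75-minute figure follows directly from the epoch length (values from the text):

```shell
# 150 epochs x 30 seconds per epoch, expressed in minutes
echo $(( 150 * 30 / 60 ))   # prints 75
```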

Commit 1

The Commit 1 (C1) phase is an intermediate phase that performs the preparation necessary to generate a proof. It is CPU-bound and typically completes in seconds. It is recommended that storage providers run this process on the server where C2 is running.

Commit 2

The final step in the sealing pipeline is Commit 2 (C2). This step involves the creation of a zk-SNARK proof. Like PC2, this task is GPU-bound and is therefore best co-located with the PC2 task.

Finally, the proof is committed on-chain in a message. As with the pre-commit messages, the commit messages are batched and held for 24 hours by default before committing on-chain to avoid sending messages for each and every sector. You can again avoid batching by running:

lotus-miner sectors batching commit --publish-now

Finally, the sealed sector is stored in the miner’s long-term storage space, along with the unsealed sectors, which are required for retrievals (if the miner is configured to keep them).

Was this page helpful?

Basic retrieval

There are multiple ways to fetch data from a storage provider. This page covers some of the most popular methods.

Lassie

Lassie is a simple retrieval client for IPFS and Filecoin. It finds and fetches your data over the best retrieval protocols available. Lassie makes Filecoin retrieval easy. While Lassie is powerful, the core functionality is expressed in a single CLI command:
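In its simplest form, that command is just a fetch by CID (the CID is a placeholder here):

```shell
lassie fetch <CID>
```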

Lassie also provides an HTTP interface for retrieving IPLD data from IPFS and Filecoin peers. Developers can use this interface directly in their applications to retrieve the data.

Lassie fetches content in content-addressed archive (CAR) form, so in most cases, you will need additional tooling to deal with CAR files. Lassie can also be used as a library to fetch data from Filecoin from within your application. Due to the diversity of data transport protocols in the IPFS ecosystem, Lassie is able to use the Graphsync or Bitswap protocols, depending on how the requested data is available to be fetched. One prominent use case of Lassie as a library is the Saturn Network. Saturn nodes fetch content from Filecoin and IPFS through Lassie in order to serve retrievals.

Retrieve using Lassie

Make sure that you have installed Go and that your GOPATH is set up. By default, your GOPATH will be set to ~/go.

Install Lassie

  1. Download the Lassie binary based on your system architecture.

    Or download and install Lassie using the Go package manager.

  2. Download the go-car binary based on your system architecture, or install it using the Go package manager. The go-car package makes it easier to work with content-addressed archive (CAR) files.
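Assuming a working Go toolchain, both tools can also be installed via go install; the module paths below are taken from the public filecoin-project/lassie and ipld/go-car repositories:

```shell
go install github.com/filecoin-project/lassie/cmd/lassie@latest
go install github.com/ipld/go-car/cmd/car@latest
```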

You now have everything you need to retrieve a file with Lassie and extract the contents with go-car.

Retrieve

To retrieve data from Filecoin using Lassie, all you need is the CID of the content you want to download.

The video below demonstrates how Lassie can be used to render content directly from Filecoin and IPFS.

Lassie and go-car can work together to retrieve and extract data from Filecoin. All you need is the CID of the content to download.

This command uses a | to chain two commands together. This will work on Linux or macOS. Windows users may need to use PowerShell to use this form. Alternatively, you can use the commands separately, as explained later on this page.

An example of fetching and extracting a single file, identified by its CID:
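A sketch of such a fetch-and-extract one-liner, with <CID> as a placeholder; the output name matches the lidar-data.tar example discussed on this page:

```shell
lassie fetch -o - <CID> | car extract - > lidar-data.tar
```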

Basic progress information, similar to the output shown below, is displayed:

The resulting file is a tar archive:

Lassie CLI usage

Lassie's usage for retrieving data is as follows:

  • -p is an optional flag that tells Lassie that you would like to see detailed progress information as it fetches your data.

    For example:

  • -o is an optional flag that tells Lassie where to write the output to. If you don’t specify a file, it will append .car to your CID and use that as the output file name.

If you specify -o -, the output will be written to stdout so it can be piped to another command, such as go-car, or redirected to a file.

  • <CID>/path/to/content is the CID of the content you want to retrieve and an optional path to a specific file within that content. Example:

A CID is always necessary, and if you don’t specify a path, Lassie will attempt to download the entire content. If you specify a path, Lassie will only download that specific file or, if it is a directory, the entire directory and its contents.

go-car CLI usage

The car extract command can be used to extract files and directories from a CAR:

  • -f is an optional flag that tells go-car where to read the input from. If omitted, it will read from stdin, as in our example above where we piped lassie fetch -o - output to car extract.

  • /path/to/file/or/directory is an optional path to a specific file or directory within the CAR. If omitted, it will attempt to extract the entire CAR.

  • <OUTPUT_DIR> is an optional argument that tells go-car where to write the output to. If omitted, it will be written to the current directory.

If you supply - in place of the output directory, as in the example above, go-car will attempt to extract the content directly to stdout. This only works when extracting a single file.

In the example above, where we fetched a file named lidar-data.tar, the > operator was used to redirect the output of car extract to a named file. This is because the content we fetched was raw file data that did not have a name encoded. In this case, if we didn’t use - and > filename, go-car would write to a file named unknown. In this instance, go-car was used to reconstitute the file from the raw blocks contained within Lassie’s CAR output.

go-car has other useful commands. The first is car ls, which lists the contents of a CAR. The second is car inspect, which inspects the contents of a CAR and can optionally verify its integrity.
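As a quick sketch, the two commands look like this. The filename lidar-data.car is a stand-in (any CAR file works), and the guard simply skips the commands on systems where go-car or the file is absent:

```shell
# Sketch: listing and inspecting a CAR file with go-car.
# "lidar-data.car" is an example filename, not a file this page provides.
CAR_FILE="lidar-data.car"

if command -v car >/dev/null 2>&1 && [ -f "$CAR_FILE" ]; then
  car ls "$CAR_FILE"        # list the files and directories in the CAR
  car inspect "$CAR_FILE"   # print summary details about the CAR
else
  echo "go-car or $CAR_FILE not available; skipping"
fi
```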

And there we have it! Downloading and managing data from Filecoin is simple when you use Lassie and go-car.

Lassie HTTP daemon

The Lassie HTTP daemon is an HTTP interface for retrieving IPLD data from IPFS and Filecoin peers. It fetches content from peers known to have it and provides the resulting data in CAR format.

A GET query against a Lassie HTTP daemon allows retrieval from peers that have the content identified by the given root CID, streaming the DAG in the response in CAR (v1) format. You can read more about the HTTP request and response to the daemon in Lassie’s HTTP spec. Lassie’s HTTP interface can be a very powerful tool for web applications that require fetching data from Filecoin and IPFS.
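For illustration, a request to a locally running daemon might look like the sketch below. The daemon address and the CID are assumptions (point them at your own daemon and content), and the Accept header reflects the CAR response format described above; consult Lassie’s HTTP spec for the authoritative list of parameters:

```shell
# Sketch: fetch a CID from a local Lassie HTTP daemon as a CAR file.
# DAEMON is an example address; replace it with your daemon's address.
DAEMON="http://127.0.0.1:8080"
CID="bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"  # example CID

REQUEST_URL="$DAEMON/ipfs/$CID"
echo "GET $REQUEST_URL"

if command -v curl >/dev/null 2>&1; then
  # -f makes curl fail quietly if no daemon is listening at DAEMON
  curl -sf -H "Accept: application/vnd.ipld.car" \
    -o "$CID.car" "$REQUEST_URL" || echo "no daemon reachable; skipping"
fi
```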

Lassie’s CAR format

Lassie only returns data in CAR format, specifically CARv1. Lassie’s car spec describes the nature of the CAR data returned by Lassie and the various options available to the client for manipulating the output.

Transfer FIL

Because the Filecoin network supports both Filecoin-native and Ethereum-style address types, the process for transferring FIL between addresses can be a bit nuanced.

After the FVM launched, a new Ethereum-compatible address type (the f410 address) was introduced to the Filecoin network. An f410 address can be converted into an Ethereum-style address starting with 0x, so it can be used with any Ethereum-compatible tooling or dApp. In this tutorial, we refer to Filecoin-native addresses, which start with f, as f addresses, and to Ethereum-style addresses, which start with 0x, as 0x addresses.

There are four paths for transferring FIL tokens across the Filecoin network, depending on which address type you are transferring from and to.

ASSETS ON THE FILECOIN NETWORK ARE NOT AVAILABLE ON ANY OTHER NETWORK. Remember that Filecoin is fully compatible with Ethereum tools, like wallets, but that doesn’t mean you’re using the Ethereum network. These instructions transfer assets only within the Filecoin network. Learn how to configure your Ethereum wallet on the Filecoin network.

0x => 0x address

If you want to transfer FIL tokens from one f4 address to another f4 address using their corresponding 0x addresses, you need to understand how to convert between f4 and 0x addresses.

  • If you have an f4 address, you can convert it to a 0x address using the Beryx address converter.

  • If you have a 0x address, you can search for it directly on the Filfox Explorer, which will show the 0x address and its corresponding f4 address.

Apart from that, you just need to follow the standard process using your preferred Ethereum-compatible wallet, like MetaMask, MathWallet, etc. For instance, MetaMask has a simple guide for how to send Ethereum from one account to another.

0x => f address

If you want to transfer FIL tokens from an Ethereum-style 0x address to another Filecoin address type, like an f1 or f3 address, follow the steps in the FilForwarder tutorial.

f => 0x address

Most wallets and exchanges currently support Filecoin f1 or f3 addresses, and many of them already fully support f4 and 0x addresses, including OKX, Kraken, Btcturk, and others. However, some exchanges are still implementing support for f4 addresses. If your preferred wallet or exchange doesn’t let you directly transfer FIL to an f4 or Ethereum-style 0x address, we recommend filing a support issue with the exchange to help accelerate support for f4 addresses.

The process for sending FIL from a Filecoin f address to an Ethereum-style 0x address depends on the wallet or exchange you use.

Ledger device

Ledger Live supports sending to a Filecoin f4 address, which has an automatic 0x equivalent that you can look up on any block explorer. This allows you to directly transfer your FIL to an Ethereum-style 0x address using its f4 equivalent.

Sending directly to a 0x address does not work in Ledger Live. You must use the f4 equivalent.

Hot wallet

A hot wallet is a cryptocurrency wallet that is always connected to the internet. They allow you to store, send, and receive tokens. Because hot wallets are always connected to the internet, they tend to be somewhat more vulnerable to hacks and theft than cold storage methods. However, they are generally easier to use than cold wallets and do not require any specific hardware like a Ledger device.

If you want to transfer your FIL tokens from an f1 or f3 address to a 0x address, but the wallet or exchange you are using does not support f4 and 0x style addresses, you can create a burner wallet using Glif, transfer FIL to the burner wallet, and then transfer FIL from the burner wallet to the 0x address in MetaMask.

  1. Navigate to glif.io and create a burner wallet.

  2. Click Create Seed Phrase. Write down your seed phrase somewhere safe. You can also copy or download the seed phrase. You will need it later.

  3. Click I’ve recorded my seed phrase. Using your seed phrase, enter the missing words in the blank text fields.

  4. Click Next, and then Connect. The burner wallet is created.

  5. In the upper left corner of your wallet dashboard, click the double-squares icon next to your address to copy it. Record this address. You will need it later.

  6. From your main wallet account or exchange, transfer your FIL tokens to this address.

  7. Connect to MetaMask and copy your 0x address.

  8. Once the funds appear in the burner wallet, click Send FIL.

  9. Enter the necessary information into the text fields:

  • In the Recipient field, enter your 0x style address. GLIF automatically converts it to an f4 address.

  • In the Amount field, enter the amount of FIL to send. Make sure you have enough FIL to cover the gas cost.

  10. Click Send. The FIL will arrive in your MetaMask wallet shortly.

Exchange

If you are transferring FIL from any exchange to your 0x address on MetaMask, make sure the exchange supports withdrawing FIL to a 0x or f410 address. If not, you will need extra steps to withdraw FIL to your 0x address. Taking Coinbase as an example, you can follow this guide: How to transfer FIL from Coinbase to a MetaMask Wallet (0x).

f => f address

There are no special steps or requirements for sending FIL from one Filecoin-style address to another on the Filecoin network.

Filecoin plus

What is Filecoin Plus?

The goal of the Filecoin Plus program is to increase the amount of useful data stored with storage providers by clients on the Filecoin network.

In short, this is achieved by appointing allocators responsible for assigning DataCap tokens to clients that the allocator has vetted as trusted parties storing useful data. Clients then pay DataCap to storage providers as part of a storage deal, which increases a storage provider’s probability of earning block rewards. This mechanism is described in full below.

Filecoin Plus creates demand on the Filecoin network, ensuring the datasets stored on the network are legitimate and useful to either the clients or a third party.

Storage Providers & DataCap

Filecoin Plus introduces two concepts important to interactions on the Filecoin network – DataCap and Quality Adjusted Power (QAP).

DataCap

DataCap is a token paid to storage providers as part of a deal in which the client and the data they are storing are verified by a Filecoin Plus allocator. Batches of DataCap are granted to allocators by root-key holders, allocators give DataCap to verified clients, and clients pay DataCap to storage providers as part of a deal. The more DataCap a storage provider ends up with, the higher their probability of earning block rewards. The role of each of these participants, and how DataCap is used in a Filecoin Plus deal, is described below in the "Filecoin Plus Processes & Participants" section.

Quality Adjusted Power

Quality Adjusted Power is a rating assigned to a given sector, the basic unit of storage on the Filecoin network. It is a function of several features of the sector, including, but not limited to, the sector’s size, its promised duration, and whether the sector includes a Filecoin+ deal. A sector visibly includes a Filecoin Plus deal if a deal in that sector involves DataCap paid to the storage provider. The more Filecoin Plus verified data a storage provider has in a sector, the higher the provider’s Quality Adjusted Power. This linearly increases the number of votes the provider has in the Secret Leader Election, which determines which storage provider gets to serve as the verifier for the next block in the blockchain, and thus increases the probability the storage provider is afforded the opportunity to earn block rewards. For more details on Quality Adjusted Power, see the Filecoin specification.

Important

There is a common misconception that a Filecoin Plus deal increases the block reward paid to a Filecoin storage provider by a factor of ten. This is not true: Filecoin+ does not increase the amount of block rewards available to storage providers. Including Filecoin Plus deals in a sector increases the Quality Adjusted Power of a storage provider, which increases the probability that the storage provider is selected as the block verifier for the next block on the Filecoin blockchain, and thus increases the probability they earn block rewards.

Consider first a network with ten storage providers. Initially, each storage provider has an equal 10% probability of winning available block rewards in a given period:

In the above visualization, "VD" means "verified deals", that is, deals that have been reviewed by allocators and have associated spending of DataCap.

If two of these storage providers begin filling their sectors with verified deals, their chances of winning a block reward increase by a factor of ten relative to their peers. Each of the storage providers with verified deals in their sectors has a 36% chance of winning the block reward, while storage providers with only regular deals in their sectors have a 4% probability of winning the block rewards.
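The arithmetic behind those percentages can be sketched as a toy calculation (not part of any Filecoin tooling; the provider counts and the 10x multiplier come from the example above):

```shell
# Toy calculation: win probabilities with a 10x multiplier for verified
# deals. 2 providers hold verified deals, 8 hold only regular deals.
verified=2
regular=8
multiplier=10

awk -v v="$verified" -v r="$regular" -v m="$multiplier" 'BEGIN {
  total = v * m + r              # total quality-adjusted power: 28 units
  printf "verified provider: %.1f%%\n", 100 * m / total
  printf "regular provider:  %.1f%%\n", 100 * 1 / total
}'
# verified provider: 35.7%
# regular provider:  3.6%
```

Rounded to whole numbers, these are the 36% and 4% figures quoted above.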

The incentive for storage providers to accept verified deals is strongest initially. As more and more storage providers include verified deals in their sectors, the probability that any one of them earns the block rewards returns to an equal chance.

As seen in the diagrams above, Filecoin Plus increases the collateral requirements for a storage provider. As a higher percentage of storage providers include verified deals in their sectors, the collateral needed by each storage provider will increase. To learn more about storage provider collateral, see this link.

Filecoin+ Processes & Participants

The participants in the Filecoin+ program, along with how they interact with each other, are detailed here:

  • Decisions as to who the root-key holders should be, how they should grant and remove batches of DataCap to/from allocators, and other important decisions about the Filecoin+ program are determined through Filecoin Improvement Proposals (FIPs), the community governance process. Learn more about Filecoin+ governance. To see a list of FIPs, see this link.

  • Root-key holders execute the governance process for Filecoin+ as determined through community-executed Filecoin Improvement Proposals; their role is to grant and remove batches of DataCap to/from allocators. Root-key holders are signers of an on-chain multisig wallet, and a majority of signers is needed for an allocator to be granted or removed.

  • Allocators perform due diligence on clients and the data they are storing, allocate DataCap to trusted clients, and facilitate predetermined dispute resolution processes. To learn more about how allocators are chosen and evaluated, see this blog.

  • Clients are participants in the Filecoin network who store data with a storage provider. A trusted client, as determined by an allocator who performs due diligence on the client and the data they are looking to store, will be given DataCap by the allocator. Clients offer to give this DataCap to a storage provider as part of a deal, which increases the “deal quality multiplier” of the deal, and in turn the likelihood a storage provider will accept the deal.

  • Storage providers who receive DataCap as part of a deal can use it to increase their “quality adjusted power” on the network by a factor of ten. As described above, this increases their probability of being selected as the verifier for a block, affording them the opportunity to earn block rewards.

How Filecoin Plus Works

A visualization of the interactions between the parties involved in a Filecoin+ deal, as described above, is shown in Figure 1 below.

Acquiring DataCap for Clients & Builders

Clients can secure DataCap by making a request to an allocator. Each allocator maintains its own application process for requesting DataCap.

One such allocator is Filecoin Incentive Design Labs (FIDL). They maintain a Github repository that includes an application where clients can make a request to FIDL for DataCap. Clients and builders looking to acquire DataCap may consider applying directly with FIDL, noting that all DataCap applications are transparent and open for public review on the issues page.

Steps to Acquire Mainnet DataCap as a Client

The steps a client should follow to acquire DataCap are as follows:

  1. Create a Filecoin wallet.

  2. Choose an allocator from the full list of active allocators, or from the active list of allocators who have verified public datasets.

  3. Check that you satisfy the requirements of the allocator. In the case of uploading open source datasets with FIDL as the allocator, the client will need to demonstrate to FIDL that they can (1) satisfy a third-party Know Your Customer (KYC) identity check, (2) provide the details of the storage provider (entity, storage location) where the data is intended to be stored, and (3) demonstrate proof that the dataset can be actively retrieved. You can learn more about FIDL’s requirements and application process.

  4. Submit an application for DataCap from an allocator. You can submit a request to FIDL via their Github application form or Google Form.

  5. Use the DataCap in a storage deal.

Steps to Acquire Testnet DataCap as a Builder

For builders on the Calibration testnet who need testnet DataCap to test their applications, a faucet is available. The steps a builder should follow to acquire testnet DataCap are as follows:

  1. Create a wallet on the Filecoin Calibration testnet. For more information, see the Calibration docs or Github.

  2. Grant the wallet address DataCap by using this faucet.

DataCap for Smart contracts

Smart contracts can acquire and use DataCap just like any regular client. To do so, simply enter the f410 address of the smart contract as the client address when making a request for DataCap.

Important

It’s important to note that DataCap allocations are a one-time credit for a Filecoin address and cannot be transferred between smart contracts. If you need to redeploy the smart contract, you must request additional DataCap.

How to Use DataCap

Once you have an address with DataCap, you can make deals using DataCap as a part of the payment. Because storage providers receive a deal quality multiplier for taking Filecoin+ deals, many storage providers offer special pricing and services to attract clients who use DataCap to make deals.

By default, when you make a deal with an address with DataCap allocated, you will spend that DataCap when making the deal.

Visualizing Blockchain Data for Filecoin+

There are three resources you can use to check the current status of the Filecoin+ deals and participants:

  • The Filecoin Pulse dashboard includes visualizations and tables of data about Filecoin+ deals on the Filecoin blockchain, organized by allocators, clients, and storage providers.

  • The Datacap Stats dashboard shows DataCap allocations, including the number of allocators, clients, and storage providers. You can also see the number and size of deals.

  • The Starboard Dashboard includes network health data related to Filecoin+ verified deals.

To learn more about Filecoin Plus, review FIP003: Filecoin Plus Principles.

Use PDP

This guide walks you through using the PDP client tool (pdptool) to interact with a Filecoin Storage Provider running the Proof of Data Possession (PDP) service.

DEPRECATED DEVELOPER TOOL

This documentation refers to the legacy pdptool, which is intended only for low-level developer testing.

It is not the recommended method for onboarding or interacting with PDP Storage Providers.

For current usage, including working with live PDP SPs and submitting real deals, please use the Synapse SDK and the Synapse dApp Tutorial.

PDP ensures that your data is verifiably stored by a Filecoin Storage Provider using cryptographic proofs without needing to retrieve the file itself.

Prerequisites

Before beginning, ensure:

  • You have access to a terminal with internet connectivity

  • Your system has pdptool installed (bundled with Curio)

If pdptool is not installed:

  • Option 1: Clone Curio and build pdptool:

  • Option 2: Install the Docker version of pdptool, provided by our friends at ChainSafe.


Authenticate Your Client (JWT Token)

You first need to authenticate your pdptool with a PDP-enabled Storage Provider.

Generate a service secret:

Reach out in the #fil-pdp channel in Filecoin Slack to register your public key with a PDP-enabled Storage Provider.


Connect to a PDP Service

Start by pinging the PDP service to confirm availability:

You should see something like:


Create a Proof Set

Start by creating an empty proof set. This step must happen before uploading files:

Use the 0x transaction hash from the previous output to monitor proof set creation status:

You should see something like:

The proof set creation process can take a few seconds to complete


Upload Files to the Storage Provider

Once your proof set is ready, you can begin uploading files:

Example output:


🌳 Add File Roots to Proof Set

After uploading each file, extract its CID and add it to your proof set:

Example using the information returned in the previous steps:

In the above example, --proof-set-id came from the Create Proof Set step, and --root from the Upload Files to the Storage Provider step.

Example output:


View a Piece or Proof Set

You can retrieve a proof set or inspect a file root directly:

Example output:


Retrieve From a Proof Set

Download a file using an ordered chunks list:

💡 In the above example, the --chunk-file and --output-file flags were defined in the Upload Files to the Storage Provider step.


You’re Done!

You’ve now:

✅ Connected to a PDP-enabled storage provider ✅ Created a proof set ✅ Uploaded files and added file roots ✅ Verified availability and proof status

🧭 Next: Track your proof sets in the Calibration PDP Explorer or the Mainnet PDP Explorer.

💬 Questions? Join the conversation on Filecoin Slack: #fil-pdp

git clone https://github.com/filecoin-project/curio.git
cd curio/cmd/pdptool
go build .
./pdptool create-service-secret
# Example output:

-----BEGIN PUBLIC KEY-----
LxP9MzVmHdC7KwYBvNAo1jXuIRfGXqQyo2JzE4Uctn0a5eFZbs6Wlvq3dKYgphTD
XAqRsm38LPt2iVcGb9MruZJxEkBhO71wDdNyaFMoXpCJnUqRAezvKlfbIg==
-----END PUBLIC KEY-----
./pdptool ping --service-url https://yablu.net --service-name pdp-service
Ping successful: Service is reachable and JWT token is valid.
./pdptool create-proof-set \
  --service-url https://yablu.net \
  --service-name pdp-service \
  --recordkeeper 0x6170dE2b09b404776197485F3dc6c968Ef948505
# Example output:

Proof set creation initiated successfully.
Location: /pdp/proof-sets/created/0xf91617ef532748efb5a51e64391112e5328fbd9a5b9ac20e5127981cea0012a5
Response: 
./pdptool get-proof-set-create-status \
  --service-url https://yablu.net \
  --service-name pdp-service \
  --tx-hash 0xf91617ef532748efb5a51e64391112e5328fbd9a5b9ac20e5127981cea0012a5
Proof Set Creation Status:
Transaction Hash: 0xf91617ef532748efb5a51e64391112e5328fbd9a5b9ac20e5127981cea0012a5
Transaction Status: confirmed
Transaction Successful: true
Proofset Created: true
ProofSet ID: 43
./pdptool upload-file --service-url https://yablu.net --service-name pdp-service /path/to/file.ext
0: pieceSize: 65536
baga6ea4seaqhsevhssmv3j7jjavm4gzdckpjrvbwhhvn73sgibob5bdvtzoqkli:baga6ea4seaqhsevhssmv3j7jjavm4gzdckpjrvbwhhvn73sgibob5bdvtzoqkli
./pdptool add-roots \
  --service-url https://yablu.net \
  --service-name pdp-service \
  --proof-set-id <PROOF-SET-ID> \
  --root <CID1>+<CID2>+<CID3>...
./pdptool add-roots \
  --service-url https://yablu.net \
  --service-name pdp-service \
  --proof-set-id 43 \
  --root baga6ea4seaqhsevhssmv3j7jjavm4gzdckpjrvbwhhvn73sgibob5bdvtzoqkli:baga6ea4seaqhsevhssmv3j7jjavm4gzdckpjrvbwhhvn73sgibob5bdvtzoqkli
Roots added to proof set ID 43 successfully.
Response: 
./pdptool get-proof-set \
  --service-url https://yablu.net \
  --service-name pdp-service 43
Proof Set ID: 43
Next Challenge Epoch: 2577608
Roots:
  - Root ID: 0
    Root CID: baga6ea4seaqhsevhssmv3j7jjavm4gzdckpjrvbwhhvn73sgibob5bdvtzoqkli
    Subroot CID: baga6ea4seaqhsevhssmv3j7jjavm4gzdckpjrvbwhhvn73sgibob5bdvtzoqkli
    Subroot Offset: 0
./pdptool download-file \
  --service-url https://yablu.net \
  --chunk-file chunks.list \
  --output-file file.ext
lassie fetch <CID>
go install github.com/filecoin-project/lassie/cmd/lassie@latest
go install github.com/ipld/go-car/cmd/car@latest
lassie fetch -o - <CID> | car extract
lassie fetch -o - bafykbzaceatihez66rzmzuvfx5nqqik73hlphem3dvagmixmay3arvqd66ng6 | car extract - > lidar-data.tar
Fetching bafykbzaceatihez66rzmzuvfx5nqqik73hlphem3dvagmixmay3arvqd66ng6................................................................................................................................................
Fetched [bafykbzaceatihez66rzmzuvfx5nqqik73hlphem3dvagmixmay3arvqd66ng6] from [12D3KooWPNbkEgjdBNeaCGpsgCrPRETe4uBZf1ShFXStobdN18ys]:
        Duration: 42.259908785s
          Blocks: 144
           Bytes: 143 MiB
extracted 1 file(s)
ls -l
# total 143M
# -rw-rw-r-- 1 user user 143M Feb 16 11:21 lidar-data.tar
lassie fetch -p -o <OUTFILE_FILE_NAME> <CID>/path/to/content
Fetching bafykbzaceatihez66rzmzuvfx5nqqik73hlphem3dvagmixmay3arvqd66ng6
Querying indexer for bafykbzaceatihez66rzmzuvfx5nqqik73hlphem3dvagmixmay3arvqd66ng6...
Found 4 storage providers candidates from the indexer, querying all of them:
        12D3KooWPNbkEgjdBNeaCGpsgCrPRETe4uBZf1ShFXStobdN18ys
        12D3KooWNHwmwNRkMEP6VqDCpjSZkqripoJgN7eWruvXXqC2kG9f
        12D3KooWKGCcFVSAUXxe7YP62wiwsBvpCmMomnNauJCA67XbmHYj
        12D3KooWLDf6KCzeMv16qPRaJsTLKJ5fR523h65iaYSRNfrQy7eU
Querying [12D3KooWLDf6KCzeMv16qPRaJsTLKJ5fR523h65iaYSRNfrQy7eU] (started)...
Querying [12D3KooWKGCcFVSAUXxe7YP62wiwsBvpCmMomnNauJCA67XbmHYj] (started)...

...
lassie fetch -o - bafybeiaysi4s6lnjev27ln5icwm6tueaw2vdykrtjkwiphwekaywqhcjze/wiki/Cryptographic_hash_function | car extract - | less
car extract -f <INPUT_FILE>[/path/to/file/or/directory] [<OUTPUT_DIR>]
GET /ipfs/{cid}[/path][?params]


Network

This page covers the importance of network skills for a storage provider setup, including network architecture, monitoring, security, infrastructure components, and performance optimizations.

Network skills are crucial for building and maintaining a well-functioning storage provider setup. The network architecture plays a vital role in the overall performance of the storage system. Without a proper network architecture, the system can easily become bogged down and suffer from poor performance.

To ensure optimal performance, it is essential to understand where the bottlenecks in the network setup are. This requires a good understanding of network topology, protocols, and hardware. It is also important to be familiar with network monitoring tools that can help identify performance issues and optimize network traffic.

In addition, knowledge of security protocols and best practices is essential for protecting the storage provider setup from unauthorized access, data breaches, and other security threats. Understanding network security principles can help ensure the integrity and confidentiality of data stored on the network.


For example, a storage provider setup may have multiple servers that are connected to a network. If the network architecture is not designed properly, data transfer between the servers can become slow and cause delays. This can lead to poor performance and frustrated users. By understanding network architecture and designing the network properly, such bottlenecks can be avoided.

Monitoring the network is also crucial in identifying potential performance issues. Network monitoring tools can provide insights into network traffic patterns, bandwidth usage, and other metrics that can be used to optimize performance. Monitoring the network can help identify bottlenecks and areas where improvements can be made.

Network security is another important consideration for storage provider setups. A network that is not properly secured can be vulnerable to unauthorized access, data breaches, and other security threats. Network security principles such as firewalls, encryption, and access control can be used to protect the storage provider setup from these threats.

In summary, network skills are essential for building and maintaining a high-performing storage provider setup. A solid understanding of network architecture, topology, protocols, and security principles can help optimize performance, prevent bottlenecks, and protect against security threats. Monitoring the network is also crucial in identifying potential issues and ensuring smooth data flow.

Network infrastructure

Network infrastructure, including switches, routers, and firewalls, plays a crucial role in the performance, reliability, and security of any network. Having the right infrastructure in place is essential to ensuring smooth and seamless network connectivity.

Switches are essential for connecting multiple devices within a network. They direct data traffic between devices on the same network, allowing for efficient communication and data transfer. Switches come in a variety of sizes and configurations, from small desktop switches for home networks to large modular switches for enterprise networks. Choosing the right switch for your network can help ensure optimal performance and reliability.

Routers, on the other hand, are responsible for connecting different networks together. They enable communication between devices on different networks, such as connecting a home network to the internet or connecting multiple offices in a business network. Routers also provide advanced features such as firewall protection and traffic management to help ensure network security and optimize network performance.

Firewalls act as a first line of defense against external threats. They filter traffic coming into and out of a network, blocking malicious traffic and allowing legitimate traffic to pass through. Firewalls come in various forms, from hardware firewalls to software firewalls, and can be configured to block specific types of traffic or restrict access to certain parts of the network.

When it comes to network infrastructure, it’s important to choose switches, routers, and firewalls that are reliable, efficient, and secure. This means taking into account factors such as network size, bandwidth requirements, and security needs when selecting infrastructure components.

In addition to choosing the right components, it’s also important to properly configure and maintain them. This includes tasks such as setting up VLANs, implementing security features such as access control lists (ACLs), and regularly updating firmware and software to ensure optimal performance and security.

In summary, network infrastructure, including switches, routers, and firewalls, is essential for building a reliable and secure network. Whether you are building a small home network or a large-scale enterprise network, investing in the right infrastructure components and properly configuring and maintaining them can help ensure optimal network performance, reliability, and security.

Performance

Performance is a critical aspect of a storage provider setup, particularly when dealing with high network throughput requirements between multiple systems. To ensure optimal performance, it is important to use network benchmarking tools such as iperf and iperf3. These tools make it easy to test network throughput and identify bottlenecks in the network setup.

By using iperf or iperf3, you can determine the maximum network throughput between two systems. This can help you identify potential performance issues, such as network congestion or insufficient bandwidth. By running network benchmarks, you can also determine the impact of changes to the network setup, such as adding or removing hardware components.

Since a sealing cluster moves large volumes of data between multiple systems (to and from Boost, between the PC1 and PC2 workers, and from PC2 to lotus-miner), it is worth learning to work with iperf and iperf3, which allow for easy network benchmarking.
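A minimal benchmark between two hosts can be sketched as follows. The hostnames and address are examples only, and the flags shown (-s for server mode, -c for client mode, -P for parallel streams, -t for duration) are standard iperf3 options:

```shell
# Sketch of an iperf3 throughput test between two sealing-cluster hosts.
# 10.0.0.10 stands in for the receiving host's address; adjust as needed.

SERVER_CMD="iperf3 -s"                       # run on the receiving host
CLIENT_CMD="iperf3 -c 10.0.0.10 -P 4 -t 30"  # 4 parallel streams for 30s

echo "on the receiver (e.g. the lotus-miner host): $SERVER_CMD"
echo "on the sender (e.g. a PC2 worker): $CLIENT_CMD"
```

Running several parallel streams (-P) tends to give a more realistic picture of achievable aggregate throughput than a single TCP stream.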

As a storage provider, you also need to make trade-offs between performance and cost. Higher bandwidth networks typically offer better performance but come with a higher cost. Therefore, you need to perform calculations to determine whether investing in a higher bandwidth network is worth the cost.

For example, if your storage provider setup requires high network throughput, but your budget is limited, you may need to prioritize certain network components, such as switches and network cards, over others. By analyzing the performance impact of each component and comparing it to the cost, you can make informed decisions about which components to invest in.

In summary, performance is a critical aspect of a storage provider setup, particularly when dealing with high network throughput requirements. Network benchmarking tools such as iperf and iperf3 can help identify potential performance issues and optimize the network setup. To make informed decisions about the network setup, you also need to make trade-offs between performance and cost by analyzing the impact of each component and comparing it to the cost.

Figure 1 | Diagram showing participant interactions in a Filecoin+ deal.

Filecoin compared to

While Filecoin shares some similarities to other file storage solutions, the protocol has significant differences that one should consider.

Filecoin combines many elements of other file storage and distribution systems. What makes Filecoin unique is that it runs on an open, peer-to-peer network while still providing economic incentives and proofs to ensure files are being stored correctly. This page compares Filecoin against other technologies that share some of the same properties.

Filecoin vs. Amazon S3, Google Cloud Storage


Filecoin tokens (FIL) vs. Bitcoin tokens (BTC)


Return-on-investment

This page covers the potential return-on-investment (ROI) for storage providers (SPs) and how each SP can calculate their ROI.

Calculating the Return-on-Investment (ROI) of your storage provider business is essential to determine the profitability and sustainability of your operations. The ROI indicates the return or profit on your investment relative to the cost of that investment. There are several factors to consider when calculating the ROI of a storage provider business.

First, the cost of the initial hardware investment and the collateral in FIL required to participate in the network must be considered. These costs are significant and will likely require financing from investors, venture capitalists, or banks.

Second, the income generated from the block rewards must be factored into the ROI calculation. However, this income is subject to the volatility of the FIL token price, which can be highly unpredictable.

Third, it is important to consider the cost of sales when calculating the ROI. Sales costs include the cost of acquiring new customers, marketing, and any fees associated with payment processing. These costs can vary depending on the sales strategy and the size of the business.

Fourth, the total cost of ownership must be considered. This includes the cost of backups, providing access to ingest and retrieve data, preparing the data, and any other costs associated with operating a storage provider business.

Finally, the forecasted growth of the network and the demand for storage will also impact the ROI calculation. If the network and demand for storage grow rapidly, the ROI may increase. However, if the growth is slower than anticipated, the ROI may decrease.

Overall, calculating the ROI of a storage provider business is complex and requires a thorough understanding of the costs and income streams involved. The storage provider Forecast Calculator can assist in determining the ROI by accounting for various factors such as hardware costs, token price, and expected growth of the network.

Calculating the ROI of your storage provider business is important. Check out the Storage Provider Forecast Calculator for more details.

For more information and context see the following video:

ROI depends on more variables than simply cost versus income. In summary, the factors that influence your ROI are:

  • Verified Deals:

    How much of your total sealed capacity will be done with Verified Deals (Filecoin Plus)? Those deals give a far higher return because of the 10x multiplier that is added to your storage power and block rewards.

  • Committed Capacity:

    How much of your total sealed capacity will be just committed capacity (CC) sectors (sometimes also called pledged capacity)? These deals give a lower return compared to verified deals but are an easy way to get started in the network. Relying solely on this to generate income is challenging though, especially when the price of FIL is low.

  • Sealing Capacity:

How fast can you seal sectors? Faster sealing means you can start earning block rewards earlier and add more data faster. The downside is that it requires a lot of hardware.

  • Deal Duration:

How long do you plan to run your storage provider? Are you taking short-term deals only, or are you in it for the long run? Taking long-term deals comes with an associated risk: if you can’t keep your storage provider online for the duration of the deals, you will get penalized. Short-term deals that require extension have the downside of higher operational costs to extend (which requires re-sealing the data).

  • FIL Collateral pledged:

    A substantial amount of FIL is needed to start accepting deals in the Filecoin network. Verified deals require more pledged collateral than CC-deals. Although the collateral is not lost if you run your storage provider business well, it does mean an upfront investment (or lending).

  • Hardware Investment:

Sealing, storing, and proving the data does require a significant hardware investment as a storage provider. Although relying on services like sealing-as-a-service can lower these requirements for you, it is still an investment in high-end hardware. Take the time to understand your requirements and your future plans so that you can invest in hardware that will support your business.

  • Operational Costs:

    Last but not least there’s the ongoing monthly cost of operating the storage provider business. Both the costs for technical operations as well as business operations need to be taken into consideration.
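As a back-of-the-envelope illustration of how these factors combine, the sketch below nets block-reward income against costs for a hypothetical setup. Every figure (hardware cost, FIL price, reward rate, and so on) is an invented placeholder; the Storage Provider Forecast Calculator accounts for far more detail:

```python
# Toy ROI sketch; every number below is a hypothetical placeholder.
def simple_roi(hardware_cost, collateral_fil, fil_price,
               monthly_reward_fil, monthly_opex, months):
    """Return (profit, ROI fraction), ignoring FIL price volatility,
    deal income, vesting schedules, and hardware resale value."""
    invested = hardware_cost + collateral_fil * fil_price   # upfront outlay
    income = monthly_reward_fil * fil_price * months        # block rewards only
    costs = monthly_opex * months                           # ongoing operations
    profit = income - costs - hardware_cost                 # collateral is returned
    return profit, profit / invested

profit, roi = simple_roi(hardware_cost=50_000, collateral_fil=10_000,
                         fil_price=5.0, monthly_reward_fil=1_000,
                         monthly_opex=1_500, months=24)
print(round(profit), round(roi, 2))  # 34000 0.34
```

Note how sensitive the outcome is to the FIL price: it scales both the collateral you must lock up and the value of every block reward, which is exactly the volatility risk described above.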

| | Filecoin | Amazon S3, Google Cloud Storage |
| --- | --- | --- |
| Main use case | Storing files at hypercompetitive prices | Storing files using a familiar, widely-supported service |
| Pricing | Determined by a hypercompetitive open market | Set by corporate pricing departments |
| Centralization | Many small, independent storage providers | A handful of large companies |
| Reliability stats | Independently checked by the network and publicly verifiable | Companies self-report their own stats |
| API | Applications can access all storage providers using the Filecoin protocol | Applications must implement a different API for each storage provider |
| Retrieval | Competitive market for retrieving files | Typically more expensive than storing files to lock users in |
| Fault handling | If a file is lost, the user is refunded automatically by the network | Companies can offer users credit if files are lost or unavailable |
| Support | If something goes wrong, the Filecoin protocol determines what happens without human intervention | If something goes wrong, users contact the support help desk to seek resolution |
| Physical location | Miners located anywhere in the world | Limited to where provider’s data centres are located |
| Becoming a storage provider | Low barrier to entry for storage providers (computer, hard drive, internet connection) | High barrier to entry for storage providers (legal agreements, marketing, support staff) |

| | FIL | BTC |
| --- | --- | --- |
| Main use case | File storage | Payment network |
| Data storage | Good at storing large amounts of data inexpensively | Small amounts of data can be stored on blockchain at significant cost |
| Proof | Blockchain secured using proof of replication and proof of spacetime | Blockchain secured using proof of work |
| Consensus power | Miners with the most storage have the most power | Miners with the most computational speed have the most power |
| Mining hardware | Hard drives, GPUs, and CPUs | ASICs |
| Mining usefulness | Mining results in people’s files being stored | Mining results in heat |
| Types of provider | Storage provider, retrieval provider, repair provider | All providers perform proof of work |
| Uptime requirements | Storage providers rewarded for uptime, penalized for downtime | Miners can go offline without being penalized |
| Network status | Mainnet running since 2020 | Mainnet running since 2009 |


Filecoin FAQs

Answers to your frequently asked questions on everything from Filecoin’s crypto-economics and storage expenses to hardware and networking.

What are some of the primary use cases for Filecoin?

Filecoin is a protocol that provides core primitives, enabling a truly trustless decentralized storage network. These primitives and features include publicly verifiable cryptographic storage proofs, cryptoeconomic mechanisms, and a public blockchain. Filecoin provides these primitives to solve the really hard problem of creating a trustless decentralized storage network.

On top of the core Filecoin protocol, there are a number of layer 2 solutions that enable a broad array of use cases and applications, many of which also use IPFS, such as Lighthouse or Tableland. Using these solutions, any use case that can be built on top of IPFS can also be built on Filecoin!

Some of the primary areas for development on Filecoin are:

  • Additional developer tools and layer-2 solutions and libraries that strengthen Filecoin as a developer platform and ecosystem.

  • IPFS apps that rely on decentralized storage solutions and want a decentralized data persistence solution as well.

  • Financial tools and services on Filecoin, like wallets, signing libraries, and more.

  • Applications that use Filecoin’s publicly verifiable cryptographic proofs in order to provide trustless and timestamped guarantees of storage to their users.

How can a website or app be free if it costs to retrieve data from the Filecoin network?

Most websites and apps make money by displaying ads. This type of income model could be replaced with a Filecoin incentivized retrieval setup, where users pay small amounts of FIL for the files they want to download. Several large datasets are hosted in Amazon’s pay-per-download S3 buckets, which Filecoin retrieval could also augment or replace.

How will Filecoin attract developers to use Filecoin for storage?

It’s going to require a major shift in how we think about the internet. At the same time, it is a very exciting shift, and things are slowly heading that way. Browser vendors like Brave, Opera, and Firefox are investing in decentralized infrastructure.

We think that the internet must return to its decentralized roots to be resilient, robust, and efficient enough for the challenges of the next several decades. Early developers in the Filecoin ecosystem are those who believe in that same vision and potential for the internet, and we’re excited to work with them to build this space.

What are the detailed parameters of Filecoin’s cryptoeconomics?

We are still finalizing our cryptoeconomic parameters, and they will continue to evolve.

Here is a blog about Filecoin economics from December 2020: Filecoin network economics.

How expensive will Filecoin storage be at launch?

As Filecoin is a free market, the price will be determined by a number of variables related to the supply and demand for storage. It’s difficult to predict before launch. However, a few design elements of the network help support inexpensive storage.

Along with revenue from active storage deals, Storage Miners receive block rewards, where the expected value of winning a given block reward is proportional to the amount of storage they have on the network. These block rewards are weighted heavily towards the early days of the network (with the frequency of block rewards exponentially decaying over time). As a result, Storage Miners are relatively incentivized to charge less for storage to win more deals, which would increase their expected block reward.
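The effect of this front-loading can be illustrated with a simple exponential-decay schedule. The 6-year half-life below is the figure used by Filecoin’s simple-minting component, but treat the sketch as illustrative rather than a statement of the full emission model (which also includes baseline minting):

```python
def remaining_reward_fraction(years: float, half_life_years: float = 6.0) -> float:
    """Fraction of an exponentially-decaying reward pool still unminted
    after `years` (illustrative; 6-year half-life assumed)."""
    return 0.5 ** (years / half_life_years)

# Under this schedule, half of the pool is emitted in the first 6 years,
# and three quarters within 12 years:
print(round(1 - remaining_reward_fraction(6), 2))   # 0.5
print(round(1 - remaining_reward_fraction(12), 2))  # 0.75
```

This is why adding storage early, and pricing storage competitively to win deals early, carries outsized weight in a miner's expected rewards.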

Further, Filecoin introduces a concept called Verified Clients, where clients can be verified to actually be storing useful data. Storage Miners who store data from Verified Clients also increase their expected block reward. Anyone running a Filecoin-backed IPFS pinning service should qualify as a Verified Client. We do not have the process of verification finalized, but we expect it to be similar to submitting a GitHub profile.

Will it be cheaper to store data on Filecoin than other centralized cloud services?

Filecoin creates a hyper-competitive market for data storage. There will be many storage providers offering many prices, rather than one fixed price on the network. We expect Filecoin’s permissionless model and low barriers to entry to result in some very efficient operations and low-priced storage, but it’s impossible to say what exact prices will be until the network is live.

What happens to the existing content on IPFS once Filecoin launches? What if nodes continue to host content for free and undermine the Filecoin incentive layer?

IPFS will continue to exist as it is, enhanced with Filecoin nodes. There are many use cases that require no financial incentive. Think of it like IPFS is HTTP, and Filecoin is a storage cloud-like S3 – only a fraction of IPFS content will be there.

People with unused storage who want to earn monetary rewards should pledge that storage to Filecoin, and clients who want guaranteed storage should store that data with Filecoin storage providers.

Lotus or Venus, which is better for storage providers?

Lotus is the primary reference implementation for the Filecoin protocol. At this stage, we would recommend most storage providers use Lotus to participate in the Filecoin network.

What is your recommendation on the right hardware to use?

While the Filecoin team does not recommend a specific hardware configuration, we document various setups here. Additionally, this guide to storage mining details hardware considerations and setups for storage providers. However, it is likely that there are more efficient setups, and we strongly encourage storage providers to test and experiment to find the best combinations.

We are worried about the ability of our network to handle the additional overhead of running a Filecoin node and still provide fast services for our customers. What are the computational demands of a Lotus node? Are there any metrics for node performance given various requirements?

For information on Lotus requirements, see Prerequisites > Minimal requirements.

For information on Lotus full nodes and lite nodes, see Types of nodes.

We bought a lot of hard drives of data through the Discover project. When will they be shipped to China?

There are a number of details that are still being finalized between the verified deals construction and the associated cryptoeconomic parameters.

Our aim is to allow these details to finalize before shipping, but given timelines, we’re considering enabling teams to take receipt of these drives before the parameters are set. We will publish updates on the status of the Discover project on the Filecoin blog.

Do Filecoin storage providers need a fixed IP?

For mainnet, you will need a public IP address, but it doesn’t need to be fixed (just accessible).

What if we lost a sector accidentally, is there any way to fix that?

If you lost the data itself, then no, there’s no way to recover that, and you will be slashed for it. If the data itself is recoverable, though (say you just missed a WindowPoSt), then the Recovery process will let you regain the sector.

Has Filecoin confirmed the use of the SDR algorithm? Is there any evidence of malicious construction?

SDR (Stacked DRG PoRep) is confirmed and used, and we have no evidence of malicious construction. The algorithm is also going through both internal and external security audits.

If you have any information about any potential security problem or malicious construction, reach out to our team at [email protected].

How likely is it that the Filecoin protocol will switch to the NSE Proof-of-Replication construction later?

Native storage extension (NSE) is one of the best candidates for a proof upgrade, and teams are working on implementation. But there are other candidates too, which are promising as well. It may be that another algorithm ends up better than NSE – we don’t know yet. Proof upgrades will arrive after the mainnet launch and will coexist.

AMD may be optimal hardware for SDR. You can see this description for more information on why.

How are you working on bootstrapping the demand side of the marketplace? The Discover program is nice, but who is the target market for users, and how do you get them?

In addition to Filecoin Discover, a number of groups are actively building tools and services to support the adoption of the Filecoin network with developers and clients. For example, check out the recordings from our Virtual Community Meetup to see updates about Textile and Starling Storage. You can also read more about some of the teams building on Filecoin through HackFS in our HackFS Week 1 Recap.

Does Filecoin have an implementation of client and storage provider order matching through order books?

There will be off-chain order books and storage provider marketplaces – some are in development now from some teams. They will work mostly off-chain because transactions per second on-chain are not enough for the volume of usage we expect on Filecoin. These order books build on the basic deal-flow on-chain. These order books will arrive in their own development trajectory – most likely around or soon after the mainnet launch.

Why does Filecoin mining work best on AMD?

Currently, Filecoin’s Proof of Replication (PoRep) prefers to be run on AMD processors. See this description of Filecoin sealing for more information. More accurately, it runs much slower on Intel CPUs. It runs competitively fast on some ARM processors, like the ones in newer Samsung phones, but they lack the RAM to seal the larger sector sizes. The main reason that we see this benefit on AMD processors is due to their implementation of the SHA hardware instructions.

What do storage providers have to do to change a committed capacity (CC) sector into a “real-data” sector?

Storage providers will publish storage deals that they will upgrade the CC sector with, announce to the chain that they are doing an upgrade, and prove to the chain that a new sector has been sealed correctly. We expect to evolve and make this cheaper and more attractive over time after the mainnet launch.

What does “terminating a sector” mean?

When a committed capacity sector is added to the chain, it can upgrade to a sector with deals, extend its lifetime, or terminate through either faults or voluntary actions. While we don’t expect this to happen very often on mainnet, a storage provider may deem it rational to terminate their promise to the network and their clients, and accept a penalty for doing so.

Does the committed capacity sector still need to be sealed before it upgrades to one with real data?

For the first iteration of the protocol, yes. We have plans to make it cheaper and more economically attractive after mainnet with no resealing required and other perks.

What’s the minimum time period for the storage contract between the provider and the buyer?

The minimum duration for a deal is set in the storage provider’s ask. There’s also a practical limitation because sectors have a minimum duration (currently 180 days).

After I made a deal with a storage provider and sent my data to them, how exactly is the data supposed to be recoverable and healable if that storage provider goes down?

Automatic repair of faulted data is a feature we’ve pushed off until after the mainnet launch. For now, the way to ensure resiliency is to store your data with multiple storage providers, to gain some level of redundancy. If you want to learn more about how we are thinking about repair in the future, here are some notes.

How do I know that my storage provider will not charge prohibitively high costs for data retrieval?

To avoid extortion, always ensure you store your data with a fairly decentralized set of storage providers (and note: it’s pretty difficult for a storage provider to be sure they are the only person storing a particular piece of data, especially if you encrypt the data).

Storage providers currently provide a ‘dumb box’ interface and will serve anyone any data they have. Maybe in the future, storage providers will offer access control lists (ACLs) and logins and such, but that requires that you trust the storage provider. The recommended (and safest) approach here is to encrypt data you don’t want others to see yourself before storing it.

How do you update data stored on Filecoin?

We have some really good ideas around ‘warm’ storage (that is mutable and provable) that we will probably implement in the near future. But for now, your app will have to treat Filecoin as an append-only log. If you want to change your data, you just write new data.

‘Warm’ storage can be done with a small amount of trust, where you make a deal with a storage provider with a start date quite far in the future. The storage provider can choose to store your data in a sector now (but they won’t get paid for proving it until the actual start date), or they can hold it for you (and even send you proofs of it on request), and you can then send them new data to overwrite it, along with a new storage deal that overwrites the previous one.

There’s a pretty large design space here, and we can do a bunch of different things depending on the levels of trust involved, the price sensitivity, and the frequency of updates clients desire.
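In code terms, the append-only pattern simply means the application keeps its own pointer to the newest version of a piece of data. A purely illustrative client-side sketch (the class and the placeholder CIDs below are invented):

```python
class AppendOnlyDataset:
    """Illustrative client-side pattern for 'updating' data on an
    append-only store: write a new version, keep the full history."""
    def __init__(self):
        self._versions = []  # content identifiers of every stored version

    def write(self, cid: str):
        self._versions.append(cid)  # a new deal, never an overwrite

    @property
    def latest(self) -> str:
        return self._versions[-1]

    def history(self):
        return list(self._versions)

doc = AppendOnlyDataset()
doc.write("bafy...v1")
doc.write("bafy...v2")  # an "update" is just a newer version
print(doc.latest)  # bafy...v2
```

Keeping the whole version list also gives the application a free audit trail, which is often a feature rather than a cost.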

Who will be selected to be verifiers to verify clients on the network?

Allocators, selected through an application process, serve as fiduciaries for the Filecoin network and are responsible for allocating DataCap to clients with valuable storage use cases.

See Filecoin Plus.

Will the existence of Filecoin mining pools lead to centralized storage and away from the vision of distributed storage?

No – Filecoin creates a decentralized storage network in part by massively decreasing the barrier to entry to becoming a storage provider. Even if there were some large pools, anyone can join the network and provide storage with just a modest hardware purchase, and we expect clients to store their files with many diverse storage providers.

Also, note that world location matters for mining: many clients will prefer storage providers in specific regions of the world, so this enables lots of storage providers to succeed across the world, where there is storage demand.

Even though Filecoin will be backed up to our normal IPFS pinning layer, we still need to know how quickly we can access data from the Filecoin network. How fast will retrieval be from the Filecoin network?

If you are retrieving your data from IPFS or a remote pinning layer, retrieval should take on the order of milliseconds to seconds in the worst case. Our latest tests for retrieval from the Filecoin network directly show that a sealed sector holding data takes ~1 hour to unseal. 1-5 hours is our best real-world estimate to go from sector unsealing to delivery of the data. If you need faster data retrieval for your application, we recommend building on IPFS.


Actors

Actors are smart contracts that run on the Filecoin virtual machine (FVM) and are used to manage, query, and update the state of the Filecoin network. Smart contracts are small, self-executing blocks of code.

For those familiar with the Ethereum virtual machine (EVM), actors work similarly to smart contracts. In the Filecoin network, there are two types of actors:

  • Built-in actors: Hardcoded programs written ahead of time by network engineers that manage and orchestrate key subprocesses and subsystems in the Filecoin network.

  • User actors: Code implemented by any developer that interacts with the Filecoin Virtual Machine (FVM).

Built-in actors

Built-in actors are how the Filecoin network manages and updates global state. The global state of the network at a given epoch can be thought of as the set of blocks agreed upon via network consensus in that epoch. This global state is represented as a state tree, which maps an actor to an actor state. An actor state describes the current conditions for an individual actor, such as its FIL balance and its nonce.

In Filecoin, actors trigger a state transition by sending a message. Each block in the chain can be thought of as a proposed global state, where the block selected by network consensus sets the new global state. Each block contains a series of messages and a checkpoint of the current global state after the application of those messages.

The Filecoin Virtual Machine (FVM) is the Filecoin network component in charge of the execution of all actor code.

A basic example of how actors are used in Filecoin is the process by which storage providers prove storage and are subsequently rewarded. The process is as follows:

  1. The StorageMinerActor processes proof of storage from a storage provider.

  2. If the proof is valid, the storage provider is awarded storage power.

  3. The StoragePowerActor accounts for the storage power.

  4. During block validation, the StoragePowerActor state, which includes information on storage power allocated to each storage provider, is read.

  5. Using the state information, the consensus mechanism randomly awards blocks to the storage providers with the most power, and the RewardActor sends FIL to storage providers.
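The five steps above can be condensed into a toy model. The class and field names below echo the actors involved, but the logic is drastically simplified and illustrative only:

```python
import random

class ToyNetwork:
    """Heavily simplified model of the proof -> power -> reward flow."""
    def __init__(self):
        self.power = {}     # StoragePowerActor state: provider -> storage power
        self.balances = {}  # FIL paid out by the RewardActor

    def submit_proof(self, provider, proof_valid, sector_power):
        # StorageMinerActor processes the proof; only a valid proof earns
        # storage power, which the StoragePowerActor accounts for.
        if proof_valid:
            self.power[provider] = self.power.get(provider, 0) + sector_power

    def award_block(self, reward_fil):
        # Consensus: win probability is proportional to a provider's share
        # of total power; the RewardActor then sends FIL to the winner.
        providers = list(self.power)
        weights = [self.power[p] for p in providers]
        winner = random.choices(providers, weights=weights)[0]
        self.balances[winner] = self.balances.get(winner, 0) + reward_fil
        return winner

net = ToyNetwork()
net.submit_proof("f01234", proof_valid=True, sector_power=32)
net.submit_proof("f05678", proof_valid=False, sector_power=64)  # no power granted
net.award_block(reward_fil=20)
print(net.power)  # {'f01234': 32}: only the valid proof earned power
```

The key property the toy preserves is that rewards flow from *proven* storage: invalid proofs contribute nothing to a provider's chance of winning a block.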

Blocks

Each block in the Filecoin chain contains the following:

  • Inline data such as current block height.

  • A pointer to the current state tree.

  • A pointer to the set of messages that, when applied to the network, generated the current state tree.

State tree

A Merkle Directed Acyclic Graph (Merkle DAG) is used to map the state tree and the set of messages. Nodes in the state tree contain information on:

  • Actors, such as their FIL balance, nonce, and a pointer (CID) to actor state data.

  • Messages in the current block.

Messages

Like the state tree, a Merkle Directed Acyclic Graph (Merkle DAG) is used to map the set of messages for a given block. Nodes in the messages may contain information on:

  • The actor the message was sent to

  • The actor that sent the message

  • Target method to call on the actor being sent the message

  • A cryptographic signature for verification

  • The amount of FIL transferred between actors

Actor code

The code that defines an actor in the Filecoin network is separated into different methods. Messages sent to an actor contain information on which method(s) to call and the input parameters for those methods. Additionally, actor code interacts with a runtime object, which contains information on the general state of the network, such as the current epoch, cryptographic signatures, and proof validations.

Like smart contracts in other blockchains, actors must pay a gas fee: a predetermined amount of FIL that offsets the cost (network resources used, etc.) of a transaction. Every actor has a Filecoin balance attributed to it, a state pointer, a code identifier that tells the system what type of actor it is, and a nonce, which tracks the number of messages sent by this actor.
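The interplay of messages, methods, gas, balances, and nonces can be pictured with a toy dispatcher. Nothing below is the real FVM interface; the names and numbers are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ToyActor:
    balance: int          # FIL balance (toy units)
    nonce: int = 0        # count of messages sent by this actor
    state: dict = field(default_factory=dict)

    # Each "method" is just a named function operating on the actor's state.
    def method_set(self, key, value):
        self.state[key] = value

def apply_message(sender: ToyActor, receiver: ToyActor, method: str,
                  params: dict, value: int, gas_fee: int):
    """Debit value + gas from the sender, credit the receiver,
    bump the sender's nonce, and invoke the named method."""
    assert sender.balance >= value + gas_fee, "insufficient funds"
    sender.balance -= value + gas_fee
    sender.nonce += 1
    receiver.balance += value
    getattr(receiver, f"method_{method}")(**params)

alice, bob = ToyActor(balance=1_000), ToyActor(balance=0)
apply_message(alice, bob, "set", {"key": "greeting", "value": "hi"},
              value=100, gas_fee=5)
print(alice.balance, alice.nonce, bob.balance, bob.state)
# 895 1 100 {'greeting': 'hi'}
```

Even this toy shows why the nonce exists: it orders the messages an actor sends and prevents the same message from being applied twice.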

Types of built-in actors

The 11 different types of built-in actors are as follows:

  • CronActor

  • InitActor

  • AccountActor

  • RewardActor

  • StorageMarketActor

  • StorageMinerActor

  • MultisigActor

  • PaymentChannelActor

  • StoragePowerActor

  • VerifiedRegistryActor

  • SystemActor

CronActor

The CronActor sends messages to the StoragePowerActor and StorageMarketActor at the end of each epoch. The messages sent by CronActor indicate to StoragePowerActor and StorageMarketActor how they should maintain the internal state and process deferred events. This system actor is instantiated in the genesis block and interacts directly with the FVM.

InitActor

The InitActor can initialize new actors on the Filecoin network. This system actor is instantiated in the genesis block and maintains a table resolving a public key and temporary actor addresses to their canonical ID addresses. The InitActor interacts directly with the FVM.

AccountActor

The AccountActor is responsible for user accounts. Account actors are not created by the InitActor but by sending a message to a public-key style address. The account actor updates the state tree with a new actor address and interacts directly with the FVM.

RewardActor

The RewardActor manages unminted Filecoin tokens and distributes rewards directly to miner actors, where they are locked for vesting. The reward value used for the current epoch is updated at the end of an epoch. The RewardActor interacts directly with the FVM.

StorageMarketActor

The StorageMarketActor is responsible for processing and managing on-chain deals. This is also the entry point of all storage deals and data into the system. This actor keeps track of storage deals and the locked balances of both the client storing data and the storage provider. When a deal is posted on-chain through the StorageMarketActor, the actor will first check if both transacting parties have sufficient balances locked up and include the deal on-chain. Additionally, the StorageMarketActor holds Storage Deal Collateral provided by the storage provider to collateralize deals. This collateral is returned to the storage provider when all deals in the sector successfully conclude. This actor does not interact directly with the FVM.

StorageMinerActor

The StorageMinerActor is created by the StoragePowerActor and is responsible for storage mining operations and the collection of mining proofs. This actor is a key part of the Filecoin storage mining subsystem, which ensures a storage miner can effectively commit storage to Filecoin and handles the following:

  • Committing new storage

  • Continuously proving storage

  • Declaring storage faults

  • Recovering from storage faults

This actor does not interact directly with the FVM.

MultisigActor

The MultisigActor is responsible for operations involving the Filecoin wallet and represents a group of up to 256 transaction signers. Signers may be external users or the MultisigActor itself. This actor does not interact directly with the FVM.

PaymentChannelActor

The PaymentChannelActor creates and manages payment channels, a mechanism for off-chain microtransactions that Filecoin dApps can reconcile on-chain at a later time with less overhead than a standard on-chain transaction and no gas costs. Payment channels are uni-directional and can be funded by adding to their balance. To create a payment channel and deposit funds, a user calls the PaymentChannelActor. This actor does not interact directly with the FVM.
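A toy model of a uni-directional channel shows the idea: many cheap off-chain voucher updates, then a single on-chain settlement. Real payment channels use signed vouchers and lanes; the class below is purely illustrative:

```python
class ToyPaymentChannel:
    """Uni-directional channel: many off-chain vouchers, one on-chain settle."""
    def __init__(self, deposit: int):
        self.deposit = deposit
        self.best_voucher = 0  # highest cumulative amount authorized so far

    def send_voucher(self, cumulative_amount: int):
        # Off-chain: each voucher supersedes the last; no on-chain transaction.
        assert cumulative_amount <= self.deposit, "exceeds channel funds"
        self.best_voucher = max(self.best_voucher, cumulative_amount)

    def settle(self):
        # On-chain: one transaction redeems the best voucher and
        # refunds the remainder of the deposit to the sender.
        paid = self.best_voucher
        return paid, self.deposit - paid

ch = ToyPaymentChannel(deposit=1_000)
for cumulative in (100, 250, 700):  # three off-chain updates
    ch.send_voucher(cumulative)
print(ch.settle())  # (700, 300): recipient gets 700, sender refunded 300
```

Because vouchers are cumulative, only the latest one matters at settlement time, which is what keeps the on-chain footprint to a single transaction.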

StoragePowerActor

The StoragePowerActor is responsible for keeping track of the storage power allocated to each storage miner and has the ability to create a StorageMinerActor. This actor does not interact directly with the FVM.

VerifiedRegistryActor

The VerifiedRegistryActor is responsible for managing Filecoin Plus clients. This actor can add a verified client to the Filecoin Plus program, remove and reclaim expired DataCap allocations, and manage claims. This actor does not interact directly with the FVM.

SystemActor

For more information on SystemActor, see the source code.

User actors (smart contracts)

A user actor is code defined by any developer that can interact with the FVM, otherwise known as a smart contract.

A smart contract is a small, self-executing block of custom code that runs on other blockchains, like Ethereum. In the Filecoin network, the term is a synonym for user actor. You may see the term smart contract used in tandem with user actor, but there is no difference between the two.

With the FVM, actors can be written in Solidity. In future updates, any language that compiles to WASM will be supported. With user actors, users can create and enforce custom rules for storing and accessing data on the network. The FVM is responsible for managing actors and for ensuring that they are executed correctly and securely.


Software components

Understanding the components of Lotus is necessary for understanding the subsequent sections on sealing and on what it means to build a well-balanced storage provider architecture.

The diagram below shows the major components of Lotus:

The following components are the most important to understand:

  • Lotus daemon

  • Lotus miner

  • Lotus worker

  • Boost

Click here for a compatibility matrix of the different components and the required Golang version.

Lotus daemon

The daemon is a key Lotus component that does the following:

  • Syncs the chain

  • Holds the wallets of the storage provider

The machine running the Lotus daemon must be connected to the public internet for the storage provider to function. See the Lotus documentation for more in-depth information on connectivity requirements.

Syncing the chain

Syncing the chain is a key role of the daemon. It communicates with the other nodes on the network by sending messages, which are, in turn, collected into blocks. These blocks are then collected into tipsets. Your Lotus daemon receives the messages on-chain, enabling you to maintain consensus about the state of the Filecoin network with all the other participants.

Due to the growth in the size of the chain since its genesis, it is not advised for storage providers to sync the entire history of the network. Instead, providers should use the available lightweight snapshots to import the most recent messages. One exception in which a provider would need to sync the entire chain would be to run a blockchain explorer.

Synced chain data should be stored on an SSD; however, faster NVMe drives are strongly recommended. A slow chain sync can lead to delays in critical messages being sent on-chain from your Lotus miner, resulting in the faulting of sectors and the slashing of collateral.

Another important consideration is the size of the file system and available free space. Because the Filecoin chain grows by as much as 50 GB a day, any available space will eventually fill up. It is up to storage providers to manage the size of the chain on disk and prune it as needed. Solutions like SplitStore (enabled by default) and compacting reduce the storage space used by the chain. Compacting involves replacing the built-up chain data with a recent lightweight snapshot.
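To put that growth rate in perspective, a quick back-of-the-envelope check shows how long a disk lasts without pruning. The 50 GB/day figure comes from the paragraph above; FREE_GB is a hypothetical amount of free space, not a recommendation:

```shell
# Days of headroom before unpruned chain data fills the filesystem.
# FREE_GB is an example value; substitute the free space reported by `df`.
FREE_GB=2000
GROWTH_GB_PER_DAY=50
echo "~$((FREE_GB / GROWTH_GB_PER_DAY)) days until full without pruning"
```

With 2 TB free, that leaves roughly 40 days before the disk fills, which is why periodic compacting against a fresh snapshot belongs in routine maintenance.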

Holding wallets

Another key role of the Lotus daemon is to host the Filecoin wallets that are required to run a storage provider (SP). As an SP, you will need a minimum of two wallets: an owner wallet and a worker wallet. A third wallet type, called a control wallet, is required to scale your operations in a production environment.

To keep wallets safe, providers should consider physical access, network access, software security, and secure backups. As with any cryptocurrency wallet, access to the private key means access to your funds. Lotus supports Ledger hardware wallets, whose use is recommended, as well as remote wallets run with lotus-wallet on a separate machine (see remote lotus wallet for instructions). The worker and control wallets cannot be kept on a hardware device because Lotus requires frequent access to them. For instance, Lotus may require access to a worker or control wallet to send WindowPoSt proofs on-chain.

Control wallets

Control wallets are required to scale your operations in a production environment. In production, only using the general worker wallet increases the risk of message congestion, which can result in delayed message delivery on-chain and potential sector faulting, slashing, or lost block rewards. It is recommended that providers create wallets for each subprocess. There are five different types of control wallets a storage provider can create:

  • PoSt wallet

  • PreCommit wallet

  • Commit wallet

  • Publish storage deals wallet

  • Terminate wallet

The lotus-miner also gets an address to which funds can and should be sent. This address can be used to pay any fees and collateral. Withdrawal from this address is only possible with the owner wallet's private key.

Lotus miner

The Lotus miner, often referred to using the daemon naming syntax lotus-miner, is the process that coordinates most of the storage provider activities. It has 3 main responsibilities:

  • Storing sectors and data

  • Scheduling jobs

  • Proving the stored data

Storing sectors and data

Storage Providers on the Filecoin network store sectors. There are two types of sectors that a provider may store:

  • Sealed sectors: these sectors may or may not actually contain data, but they provide capacity to the network, for which the provider is rewarded.

  • Unsealed sectors: used when storing data deals, as retrievals happen from unsealed sectors.

Originally, lotus-miner was the component with storage access. This resulted in lotus-miner hardware using internal disks, directly attached storage shelves like JBODs, Network-Attached-Storage (NAS), or a storage cluster. However, this design introduced a bottleneck on the Lotus miner.

More recently, Lotus has added a more scalable storage access solution in which workers can also be assigned storage access. This removes the bottleneck from the Lotus miner. Low-latency storage access is critical because of the impact on storage-proving processes.

Keeping a backup of your sealed sectors, the cache directory, and any unsealed sectors is crucial. Additionally, you should keep a backup of the sectorstore.json file that lives under your storage path; it is required to restore your system in the event of a failure. You can read more about the sectorstore.json file in the Lotus docs.

It is also imperative to have at least a daily backup of your lotus-miner state. Backups can be made with:

lotus-miner backup 

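One way to satisfy the daily-backup requirement is a cron entry. The following is a sketch only: the /usr/local/bin install path, the /backup target directory, and the 03:00 schedule are all assumptions, and the dated file argument assumes your lotus-miner release accepts an output path:

```shell
# /etc/cron.d/lotus-miner-backup (hypothetical schedule and paths)
# Take a dated miner-state backup every day at 03:00.
0 3 * * * root /usr/local/bin/lotus-miner backup /backup/miner-state-$(date +\%F).bak
```

Keep at least a few days of these files on storage that is independent of the miner itself.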

Scheduling jobs

Another key responsibility of the Lotus Miner is the scheduling of jobs for the sealing pipeline and storage proving.

Storage proving

One of the most important roles of lotus-miner is storage proving. Both the WindowPoSt and WinningPoSt processes are usually handled by the lotus-miner process. For scalability and reliability purposes, it is now also possible to run these proving processes on dedicated servers (proving workers) instead of on the Lotus miner.

The proving processes require low-latency access to sealed sectors, and computing the proving challenge requires a GPU. The resulting zkProof is sent to the chain in a message. Messages must arrive within 30 minutes for WindowPoSt and 30 seconds for WinningPoSt. It is extremely important that providers properly size and configure the proving workers, whether they use just the Lotus miner or separate workers. Additionally, dedicated wallets, described in Control wallets, should be set up for these processes.

Always check if there are upcoming proving deadlines before halting any services for maintenance. For detailed instructions, refer to the Lotus maintenance guide.
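As a sanity check on the 30-minute WindowPoSt bound: Filecoin splits each 24-hour proving period into half-hour deadline windows (48 per day, a figure from the Filecoin spec rather than this page), so a proving worker gets at most one window to compute and land each WindowPoSt message:

```shell
# Number of 30-minute WindowPoSt deadline windows in a 24-hour proving period.
MINUTES_PER_DAY=$((24 * 60))
WINDOW_MINUTES=30
echo "$((MINUTES_PER_DAY / WINDOW_MINUTES)) deadline windows per day"
```

This is why a slow GPU or slow storage path cannot simply "catch up later": the next window opens 30 minutes after the last.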

Lotus worker

The Lotus worker is another important component of the Lotus architecture. There can be - and most likely will be - multiple workers in a single storage provider setup. Assigning specific roles to each worker enables higher throughput, sealing rate, and improved redundancy.

As mentioned above, proving tasks can be assigned to dedicated workers, and workers can also get storage access. The remaining worker tasks encompass running a sealing pipeline, which is discussed in the next section.

Boost

Boost is the market component storage providers use to interact with clients. It is made up of several components (boostd, boostd-data, yugabytedb, booster-http, and so on). Boost works as a deal-taking engine (for deals made by clients or other tools) and serves data retrievals to clients who request a copy of the data over Graphsync, Bitswap, or HTTP.

Boost has become a critical component in the software stack of a storage provider and it is therefore necessary to read the Boost documentation carefully.

Boost requires YugabyteDB as of version 2.0. Plan your deployment so that you understand YugabyteDB concepts well enough to operate it reliably. See the Boost documentation for more details.

Helpful commands

The following commands can help storage providers with their setup.

Backup Lotus miner state

It is imperative to have at least one daily backup of your Lotus miner state. Backups can be made using the following command:

lotus-miner backup

View wallets and funds

You can use the following command to view wallets and their funds:

lotus wallet list

Check storage configuration

Run the following command to check the storage configuration for your Lotus miner instance:

lotus-miner storage list

This command returns information on your sealed space and your scratch space, otherwise known as the cache. These spaces are only available if you have properly configured your Lotus miner by following the steps described in the Lotus documentation.

In some cases it might be useful to check whether the system has access to the storage paths for a certain sector. For instance, to check the storage paths for sector 1, use:

lotus-miner storage find 1

View scheduled jobs

To view the scheduled sealing jobs, run the following:

lotus-miner sealing jobs

View available workers

To see the workers on which the miner can schedule jobs, run:

lotus-miner sealing workers

View proving deadlines

To check if there are upcoming proving deadlines, run the following:

lotus-miner proving deadlines


Filecoin programs and tools

This page covers the various programs and services that storage providers can take part in.

Although it is possible to find your own data storage customers with valuable datasets they want to store, and to have them verified through KYC (Know Your Customer) to create verified deals for Filecoin Plus, there are also programs and platforms that make it easier for storage providers to receive verified deals.

Web3.storage

Web3.storage runs on “Elastic IPFS” as the inbound storage protocol, offering scalability, performance, and reliability as the platform grows. It guarantees users (typically developers) that the platform will always serve their data when they need it. In the backend, the data is uploaded to the Filecoin network for long-term storage.

Filecoin Green

Filecoin Green aims to measure the environmental impacts of Filecoin and verifiably drive them below zero, building infrastructure along the way that allows anyone to make transparent and substantive environmental claims. The team maintains the Filecoin Energy Dashboard and works with storage providers to decarbonize their operations through the Energy Validation Process. Connect with the team on Slack at #fil-green, or via email at [email protected].

Spade

Spade automates the process of renewing storage deals on the Filecoin network, ensuring the longevity of data stored on the blockchain. This is particularly useful for datasets that need to be preserved for extended periods, far beyond the standard deal duration. By using Spade, organizations and individuals can manage and maintain their data storage deals more efficiently, guaranteeing that valuable data remains accessible and secure over time.

Singularity

Singularity is an end-to-end solution for onboarding datasets to Filecoin storage providers, supporting PiB-scale data. It offers modular compatibility with various data preparation and deal-making tools, allowing efficient processing from local or remote storage. Singularity integrates with over 40 storage solutions and introduces inline preparation, which links CAR files to their original data sources, preserving dataset hierarchies. It also supports content distribution and retrieval through multiple protocols and provides push and pull modes for deal making along with robust wallet management features.

Partner tools and programs

Many other programs and tools exist in the Filecoin community, developed by partners or storage providers. We list some examples below.

Akave

Akave is revolutionizing data management with a decentralized, modular solution that combines the robust storage of Filecoin with cutting-edge encryption and easy-to-use interfaces. Read more on the Akave Docs.

CIDGravity

CIDGravity is a software-as-a-service that allows storage providers to handle dynamic pricing and client management. It integrates with deal engines such as Boost.

Swan (Filswan)

Swan is a provider of cross-chain cloud computing solutions. Developers can use its suite of tools to access resources across multiple chains.

Swan Cloud provides decentralized cloud computing solutions for Web3 projects by integrating storage, computing, and payment into one suite.

Open Panda

Open Panda was a platform for data researchers, analysts, students, and enthusiasts to interact with some of the largest open datasets in the world. Data available through the platform was stored on Filecoin, a decentralized storage network composed of thousands of independent storage providers around the world.

Former programs and tools

Here is a comprehensive list of deprecated tools and projects.

Evergreen

Evergreen extended the Slingshot program by aiming to store open datasets forever. Standard deals had a maximum duration of 540 days, which was not long enough for valuable, open datasets that might need to be stored forever. Evergreen used the Spade deal engine, which automatically renewed deals to extend the lifetime of the dataset on-chain.

CO2.Storage

CO2.Storage was a decentralized storage solution for structured data based on content-addressed data schemas. It primarily focused on structured data for environmental assets, such as Renewable Energy Credits, Carbon Offsets, and geospatial datasets, and mapped inputs to base data schemas (IPLD DAGs) for off-chain data (like metadata, images, attestation documents, and other assets) to promote the development of standard data schemas for environmental assets. The project was in alpha; while many features could be considered stable, it had not fully launched. The Filecoin Green team actively worked on this project and welcomed contributions from the community.

Filecoin Tracker

Filecoin Tracker was deprecated on April 20, 2024.

Here are great existing and working Filecoin dashboards that cover similar topics:

  • Starboard

  • Filecoin Dune Daily Metrics

  • Filecoin Pulse (PoC)

Slingshot

Slingshot was a program that united data clients, data preparers, and storage providers in a community to onboard data and share replicas of publicly valuable open datasets. Rather than providing a web interface like Estuary, Slingshot provided a workflow and tools for onboarding large open datasets. The Slingshot Deal Engine provided deals to registered and certified storage providers. The data was prepared and uploaded using a tool called Singularity.

Dataprograms.org

dataprograms.org listed tools, products, and incentive programs designed to drive growth and make data storage on Filecoin more accessible. It was discontinued in April 2024.

Moonlanding

Moon Landing was designed to ramp up storage providers in the Filecoin network by enabling them to serve verified deals at scale.

Filecoin Dataset Explorer

Filecoin Dataset Explorer showcased data stored on the Filecoin network between 2020 and 2022, including telemetry, historical archives, Creative Commons media, entertainment archives, scientific research, and machine learning datasets. It highlighted Filecoin's capability to store large datasets redundantly, ensuring availability from multiple storage providers worldwide. Each dataset was identified by a unique content identifier (CID). The platform aimed to make diverse datasets accessible to users globally.

See also: Legacy Explorer (legacy.datasets.filecoin.io)

Estuary

Estuary was an experimental software platform designed for sending public data to the Filecoin network, facilitating data retrieval from anywhere. It integrated IPFS and Filecoin technologies to provide a seamless end-to-end example for data storage and retrieval. When a file was uploaded, Estuary immediately made multiple storage deals with different providers to ensure redundancy and security. The software automated many aspects of deal making and retrieval, offering tools for managing connections, block storage, and deal tracking. Estuary aimed to simplify the use of decentralized storage networks for developers and users.

Estuary was discontinued in July 2023, and the website shut down in April 2024.

Big Data Exchange

Big Data Exchange was a program that allowed storage providers easy access to Filecoin+ deals through an auction where Storage Providers could bid on datasets by offering to pay clients FIL to choose the bidder as their Storage Provider.


Install & Run Curio

Curio is the core PDP client that coordinates sealing, interacts with Lotus and submits PDP proofs.

System Configuration

Before you proceed with the installation, you should increase the UDP buffer size:

sudo sysctl -w net.core.rmem_max=2097152
sudo sysctl -w net.core.rmem_default=2097152

To make this change persistent across reboots:

echo 'net.core.rmem_max=2097152' | sudo tee -a /etc/sysctl.conf
echo 'net.core.rmem_default=2097152' | sudo tee -a /etc/sysctl.conf

Build Curio

Clone the repository and switch to the PDP branch:

git clone https://github.com/filecoin-project/curio.git
cd curio
git checkout synapse

Curio is compiled for a specific Filecoin network at build time. Choose the appropriate build command:

# For Filecoin Mainnet:
make clean build

# For Calibration Testnet:
make clean calibnet

This step will take a few minutes to complete.

Install and Verify Curio

Run the following to install the compiled binary:

sudo make install

This will place curio in /usr/local/bin.

Verify the installation:

curio --version

Expected output:

# Example output for Mainnet:
curio version 1.24.4+mainnet+git_f954c0a_2025-04-06T15:46:32-04:00

# Example output for Calibration:
curio version 1.24.4+calibnet+git_f954c0a_2025-04-06T15:46:32-04:00

Guided Setup

Curio provides a utility to help you set up a new miner interactively. Run the following command:

curio guided-setup

1️⃣ Select "Create a new miner"

Use the arrow keys to navigate the guided setup menu and select "Create a new miner".

2️⃣ Enter Your YugabyteDB Connection Details

If you used the default installation steps from this guide, the following values should work:

  • Host: 127.0.0.1

  • Port: 5433

  • Username: yugabyte

  • Password: yugabyte

  • Database: yugabyte

You can verify these settings by running the following command from the Yugabyte directory:

./bin/yugabyted status

After selecting "Continue to connect and update schema", Curio will automatically create the required tables and schema in the database.

3️⃣ Set Wallet Addresses

For this step, use the two BLS wallets you created earlier with Lotus:

  • Use wallet 1 for the Owner Address

  • Use wallet 2 for the Worker Address

  • Use wallet 1 again for the Sender Address

These addresses must match the Lotus wallets created earlier.

You can display your Lotus wallets at any time by running:

lotus wallet list

4️⃣ Choose Sector Size

Choose sector size:

  • 64 GiB

💡 Selecting a sector size is required during the Curio guided setup, but PDP itself doesn’t use sectors. Proof set sizes in PDP are arbitrary and fully flexible.

5️⃣ Create Miner Actor

Review the information to ensure all inputs are correct. Then select "Continue to verify the addresses and create a new miner actor" to proceed.

This step may take a few minutes to complete as Curio pushes the message and waits for it to land on-chain.

Once the actor is created, Curio will:

  • Register your miner ID

If the guided setup fails after creating the miner actor, run the following command to complete the installation:

curio config new-cluster <miner ID>

6️⃣ Telemetry (Optional)

You’ll be asked whether to share anonymised or signed telemetry with the Curio team to help improve the software.

Select your preference and continue.

7️⃣ Save Database Configuration

At the final step of the guided setup, you’ll be prompted to choose where to save your database configuration file.

Use the arrow keys to select a location. A common default is:

/home/your-username/curio.env

Once selected, setup will complete, and the miner configuration will be stored.

8️⃣ Launch the Curio Web GUI

To explore the Curio interface visually, start the GUI layer:

curio run --layers=gui

Then, open your browser and go to:

http://127.0.0.1:4701

This will launch the Curio web GUI locally.
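To keep Curio running across reboots, a provider might wrap the run command in a systemd unit. This is a hypothetical sketch, not a unit shipped with Curio: the User value, the optional EnvironmentFile line (pointing at the database configuration file chosen during guided setup), and the layer selection are all assumptions you must adapt:

```ini
[Unit]
Description=Curio (GUI layer)
After=network-online.target
Wants=network-online.target

[Service]
# Binary path from `sudo make install` above; adjust --layers to taste.
ExecStart=/usr/local/bin/curio run --layers=gui
# Hypothetical: load DB connection settings saved by the guided setup.
EnvironmentFile=-/home/your-username/curio.env
Restart=on-failure
User=curio

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now and confirm the web GUI answers on http://127.0.0.1:4701.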

Metamask setup

MetaMask is a popular browser extension that allows users to interact with blockchain applications. This guide shows you how to configure MetaMask to work with the Filecoin network.

Using ChainID

ChainID.network is a website that lets users easily connect their wallets to EVM-compatible blockchains. ChainID is the simplest way to add the Filecoin network to your MetaMask wallet.

  1. Navigate to chainid.network.

  2. Search for Filecoin Mainnet.

  3. Click Connect Wallet.

  4. Click Approve when prompted to Allow this site to add a network.

  5. Click Switch network when prompted by MetaMask.

  6. Open MetaMask from the browser extensions tab.

  7. You should see Filecoin listed at the top.

You can now use MetaMask to interact with the Filecoin network.

  1. Navigate to chainid.network.

  2. Search for Filecoin Calibration.

  3. Click Connect Wallet.

  4. Click Approve when prompted to Allow this site to add a network.

  5. You may be shown a warning that you are connecting to a test network. If prompted, click Accept.

  6. Click Switch network when prompted by MetaMask.

  7. Open MetaMask from the browser extensions tab. You should see Filecoin Calibration listed at the top.

You can now use MetaMask to interact with the Filecoin network.

  1. Navigate to chainid.network.

  2. Search for Filecoin Local testnet.

  3. Click Connect Wallet.

  4. Click Approve when prompted to Allow this site to add a network.

  5. You may be shown a warning that you are connecting to a test network. If prompted, click Accept.

  6. Click Switch network when prompted by MetaMask.

  7. Open MetaMask from the browser extensions tab. You should see Filecoin Local testnet listed at the top.

You can now use MetaMask to interact with the Filecoin network.

Manual process

If you can't or don't want to use ChainID, you can add the Filecoin network to your MetaMask manually.

Prerequisites

Before we get started, you’ll need the following:

  • A Chromium-based browser, or Firefox.

  • A browser with MetaMask installed.

Steps

The process for configuring MetaMask to use Filecoin is fairly simple but has some very specific variables that you must copy exactly.

  1. Open your browser and open the MetaMask plugin. If you haven’t opened the MetaMask plugin before, you’ll be prompted to create a new wallet. Follow the prompts to create a wallet.

  2. Click the user circle and select Settings.

  3. Select Networks.

  4. Click Add a network.

  5. Scroll down and click Add a network manually.

  6. Enter the following information into the fields:

Filecoin Mainnet:

  • Network name: Filecoin

  • New RPC URL: one of https://api.node.glif.io/rpc/v1, https://filecoin.chainup.net/rpc/v1, or https://rpc.ankr.com/filecoin

  • Chain ID: 314

  • Currency symbol: FIL

Filecoin Calibration testnet:

  • Network name: Filecoin Calibration testnet

  • New RPC URL: either https://api.calibration.node.glif.io/rpc/v1 or https://filecoin-calibration.chainup.net/rpc/v1

  • Chain ID: 314159

  • Currency symbol: tFIL

Filecoin Local testnet:

  • Network name: Filecoin Local testnet

  • New RPC URL: http://localhost:1234/rpc/v1

  • Chain ID: 31415926

  • Currency symbol: tFIL
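These fields map directly onto the params of MetaMask's wallet_addEthereumChain RPC method (EIP-3085), which is what ChainID.network calls under the hood. For mainnet, the request's params object would look roughly like this; note the chain ID is hex-encoded (314 = 0x13a):

```json
{
  "chainId": "0x13a",
  "chainName": "Filecoin",
  "rpcUrls": ["https://api.node.glif.io/rpc/v1"],
  "nativeCurrency": { "name": "Filecoin", "symbol": "FIL", "decimals": 18 }
}
```

Entering the values manually in the MetaMask settings form produces the same result as submitting this request.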

  7. Pick one block explorer from the Networks section, and enter the URL into the Block explorer (optional) field.

  8. Review the values in the fields and click Save.

  9. The Filecoin network should now be shown in your MetaMask window.

  10. Done!

You can now use MetaMask to interact with the Filecoin network.

Ledger hardware wallet

MetaMask is compatible with the Ledger hardware wallet. There are 2 options for Ledger apps that support Filecoin:

  • Filecoin Ledger App - compatible with MetaMask or the Glif.io wallet

  • Ethereum Ledger App - currently deprecated for Filecoin as of v1.15.0 (previous versions will work) until Ledger releases their upcoming Dynamic Networks feature

Note on Filecoin EVM vs Filecoin Native addresses

Note that MetaMask supports Filecoin EVM addresses that follow the Ethereum 0x format (see this section for more info on address types). To use native Filecoin address types that begin with f, you can use:

  • Glif.io wallet (also compatible with the Filecoin Ledger App),

  • Ledger Live and the Filecoin Ledger App or

  • Filecoin MetaMask Wallet installable from the right menu in Metamask under Snaps

Some exchanges only support specific address types (see this table on FilecoinTl;dr for more info). Which address types are best to use may depend on your use case and goals.

Install the Ledger app

Follow these instructions to connect your Filecoin addresses within MetaMask to your Ledger wallet. This guide assumes you have Ledger Live and MetaMask installed on your computer.

Before you can connect MetaMask to your Ledger, you must install the Filecoin Ledger App on your Ledger device.

  1. Open Ledger Live and navigate to My Ledger.

  2. Connect your Ledger device and unlock it.

  3. Confirm that you allow My Ledger to access your Ledger device. You can do that by clicking both buttons on your Ledger device simultaneously.

  4. Go back to Ledger Live on your computer.

  5. In My Ledger, head over to App catalog and search for Filecoin.

  6. Click Install.

For more details on the official Filecoin Ledger app, check out the Ledger documentation.

Enable expert-mode

MetaMask requires that the Filecoin app on your Ledger device is set to Expert mode.

  1. Open the Filecoin app on your Ledger device.

    A Ledger with the Filecoin app open.
  2. Use the buttons on your device to navigate to Expert mode.

    A Ledger showing the expert mode option.
  3. Press both buttons simultaneously to enable Expert mode.

Connect to MetaMask

Once you have installed the Filecoin app on your Ledger device and enabled expert mode, you can connect your device to MetaMask.

  1. Open your browser and open the MetaMask extension.

  2. In the Accounts menu, select Add hardware wallet.

    MetaMask with the 'Add hardware wallet' option highlighted.
  3. Select Ledger

    MetaMask showing the available hardware wallet options.
  4. A list of accounts should appear. Select a 0x... account.

    MetaMask showing multiple accounts from a Ledger device.
  5. Done!

That's it! You've now successfully connected your Ledger device to MetaMask. When you submit any transactions through MetaMask using this account, the Filecoin Ledger app will prompt you for a confirmation on the Ledger device.

You may see a blind signing warning on your Ledger device. This is expected, and is the reason why Expert mode must be enabled before you can interact with the Filecoin Ledger app.

A Ledger device showing a blind signing warning.


Spin up a lite-node

Lite-nodes are a simplified node option that allows developers to perform lightweight tasks on a local node. This page covers how to spin up a lite node on your local machine.

In this guide, we will use the Lotus Filecoin implementation to install a lite-node on macOS and Ubuntu. For other Linux distributions, check out the Lotus documentation. To run a lite-node on Windows, install WSL with Ubuntu on your system and follow the Ubuntu instructions below.

Prerequisites

Lite-nodes have relatively lightweight hardware requirements. Your machine should meet the following hardware requirements:

  1. At least 2 GiB of RAM

  2. A dual-core CPU.

  3. At least 4 GiB of storage space.
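On Linux (including WSL), you can check those minimums with a quick preflight sketch; the commands are Linux-specific conveniences, not from the Lotus docs, and simply report what the machine has:

```shell
# Report CPU cores, total RAM, and free disk space in the current directory.
# Compare against the minimums above (2 cores, 2 GiB RAM, 4 GiB free).
cores=$(nproc)
mem_gib=$(awk '/MemTotal/ {printf "%.1f", $2 / 1048576}' /proc/meminfo)
free_gib=$(df -BG --output=avail . | tail -n 1 | tr -dc '0-9')
echo "cores=${cores} ram=${mem_gib}GiB free=${free_gib}GiB"
```

macOS users can get the same numbers from `sysctl hw.ncpu hw.memsize` and `df -g` instead.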

To build the lite-node, you’ll need some specific software. Run the following command to install the software prerequisites:

  1. Ensure you have XCode and Homebrew installed.

  2. Install the following dependencies:

  1. Install the following dependencies:

  2. Install Go and add /usr/local/go/bin to your $PATH variable:

  3. Install Rust, choose the standard installation option, and source the ~/.cargo/env config file:

Pre-build

Before we can build the Lotus binaries, we need to follow a few pre-build steps. MacOS users should select their CPU architecture from the tabs:

  1. Clone the repository and move into the lotus directory:

  2. Retrieve the latest Lotus release version:

    This should output something like:

    v1.33.0
  3. Using the value returned from the previous command, checkout to the latest release branch:

  4. Done! You can move on to the Build section.

  1. Clone the repository and move into the lotus directory:

  2. Retrieve the latest Lotus release version:

    This should output something like:

    v1.33.0
  3. Using the value returned from the previous command, checkout to the latest release branch:

  4. Create the necessary environment variables to allow Lotus to run on M1 architecture:

  5. Done! You can move on to the Build section.

  1. Clone the repository and move into the lotus directory:

    git clone https://github.com/filecoin-project/lotus.git
    cd lotus
  2. Retrieve the latest Lotus release version:

    git tag -l 'v*' | grep -v '-' | sort -V -r | head -n 1

    This should output something like:

    v1.33.0
  3. Using the value returned from the previous command, checkout to the latest release branch:

    git checkout v1.33.0
  4. If your processor was released later than an AMD Zen or Intel Ice Lake CPU, enable SHA extensions by adding these two environment variables. If in doubt, ignore this command and move on to the next section.

    export RUSTFLAGS="-C target-cpu=native -g"
    export FFI_BUILD_FROM_SOURCE=1
  5. Done! You can move on to the Build section.
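If you're unsure whether your CPU has SHA extensions, on Linux you can check for the sha_ni flag before exporting those variables (x86-only; this convenience check is an addition, not from the Lotus docs):

```shell
# SHA extensions appear as the "sha_ni" flag in /proc/cpuinfo on x86_64.
if grep -qw sha_ni /proc/cpuinfo; then
  echo "SHA extensions available"
else
  echo "SHA extensions not detected"
fi
```

If the flag is missing, skip the export step and proceed with the default build.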

Build the binary

The last thing we need to do to get our node set up is to build the package. The command you need to run depends on which network you want to connect to:

  1. Back up or delete any existing Lotus configuration files on your system (the command below moves them aside):

    mv ~/.lotus ~/.lotus-backup
  2. Make the Lotus binaries and install them:

    make clean all
    sudo make install
  3. Once the installation finishes, query the Lotus version to ensure everything is installed successfully and for the correct network:

    lotus --version

    This will output something like:

    lotus version 1.33.0+mainnet+git.1ff3b360b
  1. Move any existing Lotus configuration out of the way (this backs it up rather than deleting it):

    mv ~/.lotus ~/.lotus-backup
  2. Make the Lotus binaries and install them:

    make clean && make calibrationnet
    sudo make install
  3. Once the installation finishes, query the Lotus version to ensure everything is installed successfully and for the correct network:

    lotus --version

    This will output something like:

    lotus version 1.33.0+calibnet+git.1ff3b360b

Start the node

Let's start the lite-node by connecting to a remote full-node. We can use the public full-nodes from glif.io:

  1. Create an environment variable called FULLNODE_API_INFO and set it to the WebSocket address of the node you want to connect to. At the same time, start the Lotus daemon with the --lite flag:

    FULLNODE_API_INFO=wss://wss.node.glif.io/apigw/lotus lotus daemon --lite

    This will output something like:

    2023-01-26T11:18:54.251-0400    INFO    main    lotus/daemon.go:219     lotus repo: /Users/johnny/.lotus
    2023-01-26T11:18:54.254-0400    WARN    cliutil util/apiinfo.go:94      API Token not set and requested, capabilities might be limited.
    ...
  2. The Lotus daemon will continue to run in this terminal window. All subsequent commands we use should be done in a separate terminal window.

  1. Create an environment variable called FULLNODE_API_INFO and set it to the WebSocket address of the node you want to connect to. At the same time, start the Lotus daemon with the --lite flag:

    FULLNODE_API_INFO=wss://wss.calibration.node.glif.io/apigw/lotus lotus daemon --lite

    This will output something like:

    2023-01-26T11:18:54.251-0400    INFO    main    lotus/daemon.go:219     lotus repo: /Users/johnny/.lotus
    2023-01-26T11:18:54.254-0400    WARN    cliutil util/apiinfo.go:94      API Token not set and requested, capabilities might be limited.
    ...
  2. The Lotus daemon will continue to run in this terminal window. All subsequent commands we use should be done in a separate terminal window.

Expose the API

To send JSON-RPC requests to our lite-node, we need to expose the API.

  1. Open ~/.lotus/config.toml and uncomment ListenAddress on line 6:

    [API]
      # Binding address for the Lotus API
      #
      # type: string
      # env var: LOTUS_API_LISTENADDRESS
      ListenAddress = "/ip4/127.0.0.1/tcp/1234/http"
    
      # type: string
      # env var: LOTUS_API_REMOTELISTENADDRESS
      # RemoteListenAddress = ""
    ...
  2. Open the terminal window where your lite-node is running and press CTRL + c to close the daemon.

  3. In the same window, restart the lite-node:

    FULLNODE_API_INFO=wss://wss.node.glif.io/apigw/lotus lotus daemon --lite

    This will output something like:

    2023-01-26T11:18:54.251-0400    INFO    main    lotus/daemon.go:219     lotus repo: /Users/johnny/.lotus
    2023-01-26T11:18:54.254-0400    WARN    cliutil util/apiinfo.go:94      API Token not set and requested, capabilities might be limited
    ...
  4. The Lotus daemon will continue to run in this terminal window. All subsequent commands we use should be done in a separate terminal window.

  1. Open ~/.lotus/config.toml and uncomment ListenAddress on line 6:

    [API]
      # Binding address for the Lotus API
      #
      # type: string
      # env var: LOTUS_API_LISTENADDRESS
      ListenAddress = "/ip4/127.0.0.1/tcp/1234/http"
    
      # type: string
      # env var: LOTUS_API_REMOTELISTENADDRESS
      # RemoteListenAddress = ""
    
    ...
  2. Open the terminal window where your lite-node is running and press CTRL + c to close the daemon.

  3. In the same window, restart the lite-node:

    FULLNODE_API_INFO=wss://wss.calibration.node.glif.io/apigw/lotus lotus daemon --lite

    This will output something like:

    2023-01-26T11:18:54.251-0400    INFO    main    lotus/daemon.go:219     lotus repo: /Users/johnny/.lotus
    2023-01-26T11:18:54.254-0400    WARN    cliutil util/apiinfo.go:94      API Token not set and requested, capabilities might be limited.
    ...
  4. The Lotus daemon will continue to run in this terminal window. All subsequent commands we use should be done in a separate terminal window.

The lite-node is now set up to accept local JSON-RPC requests! However, we don't have an authorization key, so we won't have access to privileged JSON-RPC methods.

Create a key

To access privileged JSON-RPC methods, like creating a new wallet, we need to supply an authentication key with our curl requests.

  1. Create a new admin token and store it in a new LOTUS_ADMIN_KEY environment variable:

    export LOTUS_ADMIN_KEY=$(lotus auth create-token --perm "admin")
    echo $LOTUS_ADMIN_KEY

    This will output something like:

    eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.um-LqY7g-SDOsMheDRbQ9JIaFzus_Pan0J88VQ6ZLVE
  2. Keep this key handy. We're going to use it in the next section.
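As a quick offline sanity check (not a required step), you can decode the token's middle segment to see the permissions it grants; a Lotus API token is a JWT whose payload is base64url-encoded JSON:

```shell
# Decode the payload segment of the example token above. JWTs strip base64
# padding and use the URL-safe alphabet, so restore both before decoding.
seg='eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0'
pad="$seg"
while [ $(( ${#pad} % 4 )) -ne 0 ]; do pad="${pad}="; done
perms=$(printf '%s' "$pad" | tr '_-' '/+' | base64 -d)
echo "$perms"
```

For the admin token above, this prints {"Allow":["read","write","sign","admin"]}.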

Send requests

Let's run a couple of commands to see if the JSON-RPC API is set up correctly.

  1. First, let's grab the head of the Filecoin network chain:

    curl -X POST '127.0.0.1:1234/rpc/v0' \
    -H 'Content-Type: application/json' \
    --data '{"jsonrpc":"2.0","id":1,"method":"Filecoin.ChainHead","params":[]}' \
    | jq 

    This will output something like:

    {
      "jsonrpc": "2.0",
      "result": {
        "Cids": [
          {
            "/": "bafy2bzacead2v2y6yob7rkm4y4snthibuamzy5a5iuzlwvy7rynemtkdywfuo"
          },
          {
            "/": "bafy2bzaced4zahevivrcdoefqlh2j45sevfh5g3zsw6whpqxqjig6dxxf3ip6"
          },
    ...
  2. Next, let's try to create a new wallet. Since this is a privileged method, we need to supply our auth key eyJhbGc...:

    curl -X POST '127.0.0.1:1234/rpc/v0' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.um-LqY7g-SDOsMheDRbQ9JIaFzus_Pan0J88VQ6ZLVE' \
    --data '{"jsonrpc":"2.0","id":1,"method":"Filecoin.WalletNew","params":["secp256k1"]}' \
    | jq

    This will output something like:

    {
      "id": 1,
      "jsonrpc": "2.0",
      "result": "f1vuc4eu2wgsdnce2ngygyzuxky3aqijqe7gj5qqa"
    }

    The result field is the address of the wallet we just created. The private key is stored within our lite-node.

  3. Set the new address as the default wallet for our lite-node. Remember to replace the Bearer token with our auth key eyJhbGc... and the "params" value with the wallet address, f1vuc4..., returned from the previous command:

    curl -X POST '127.0.0.1:1234/rpc/v0' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.um-LqY7g-SDOsMheDRbQ9JIaFzus_Pan0J88VQ6ZLVE' \
    --data '{"jsonrpc":"2.0","id":1,"method":"Filecoin.WalletSetDefault","params":["f1vuc4eu2wgsdnce2ngygyzuxky3aqijqe7gj5qqa"]}' \
    | jq 

    This will output something like:

    {
      "id": 1,
      "jsonrpc": "2.0",
      "result": null
    }
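If you find yourself sending many requests, a tiny helper can build the JSON-RPC envelope so each curl call only specifies the method and params. This is a convenience sketch, not part of the walkthrough: Filecoin.WalletBalance is a real Lotus method, but the address and the LOTUS_ADMIN_KEY variable are stand-ins for your own values:

```shell
# Build a JSON-RPC 2.0 request body for the Lotus API.
rpc_payload() {
  printf '{"jsonrpc":"2.0","id":1,"method":"%s","params":%s}' "$1" "$2"
}

payload=$(rpc_payload "Filecoin.WalletBalance" '["f1vuc4eu2wgsdnce2ngygyzuxky3aqijqe7gj5qqa"]')
echo "$payload"

# The payload can then be sent to the lite-node, hypothetically:
#   curl -X POST '127.0.0.1:1234/rpc/v0' \
#     -H 'Content-Type: application/json' \
#     -H "Authorization: Bearer $LOTUS_ADMIN_KEY" \
#     --data "$payload" | jq
```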

Next steps

You should now have a local lite-node connected to a remote full-node with an admin API key! You can use this setup to continue playing around with the JSON-RPC, or start building your applications on Filecoin!

ERC-20 quickstart

In this quickstart tutorial we’ll walk through how to deploy your first smart contract to the Filecoin network.

We’re going to install a browser-based wallet called MetaMask, create a new wallet address, supply some test currency to that wallet, and then use a browser-based development environment called Remix to deploy a smart contract to the Filecoin network. We’re going to create an ERC-20 token in this quickstart. The ERC-20 standard underpins a massive array of tokens across multiple blockchains, primarily Ethereum.

If you’re an Ethereum developer, check out the FEVM Hardhat kit.

Accounts and assets

We’re going to be using MetaMask, a cryptocurrency wallet that lives in your browser, making it very easy for users to interact with web3-based sites!

Create a wallet

Before we can interact with the Filecoin network, we need funds. But before we can get any funds, we need somewhere to put them!

  1. Open your browser and visit the MetaMask website.

  2. Install the wallet by clicking the Download button. MetaMask is available for Brave, Chrome, Edge, Firefox, and Opera.

  3. Once you have installed MetaMask, it will open a Get started window.

  4. Click Create a new wallet.

  5. Enter a password to secure your MetaMask wallet. You will need to enter this password every time you use the wallet.

  6. Follow the prompts until you get to the Secret Recovery Phrase window. Read the information about what this recovery phrase is on this page.

  7. Eventually you should get to the Wallet creation success page!

  8. Once you’ve done that, you should have your account set up!

Switch networks

You may notice that we are currently connected to the Ethereum Mainnet. We need to point MetaMask to the Filecoin network, specifically the Calibration testnet. We’ll use a website called Chainlist to give MetaMask the information it needs quickly.

  1. Go to chainlist.org.

  2. Enable the Testnets toggle and enter Filecoin into the search bar.

  3. Scroll down to find the Filecoin – Calibration testnet.

  4. In MetaMask click Next.

  5. Click Connect.

  6. Click Approve when prompted to Allow this site to add a network.

  7. Click Switch network when prompted by MetaMask.

  8. Open MetaMask from the browser extensions tab:

  9. You should see the Filecoin Calibration testnet listed at the top.

Nice! Now we’ve got the Filecoin Calibration testnet set up within MetaMask. You’ll notice that our MetaMask window shows 0 tFIL. Test filecoin (tFIL) is FIL that has no value in the real world; developers use it for testing. We’ll grab some tFIL next.

Get some funds

  1. In your browser, open MetaMask and copy your address to your clipboard:

  2. Go to faucet.calibration.chainsafe-fil.io and click Send Funds.

  3. Paste your address into the address field, and click Send Funds.

  4. The faucet will show a transaction ID. You can copy this ID into a Calibration testnet block explorer to view your transaction. After a couple of minutes, you should see some tFIL transferred to your address.

That’s all there is to it! Getting tFIL is easy!

Contract creation

The development environment we’re going to be using is called Remix, viewable at remix.ethereum.org. Remix is an incredibly sophisticated tool, and there’s a lot you can play around with! In this tutorial however, we’re going to stick to the very basics. If you want to learn more, check out the Remix documentation.

Create a workspace

In Remix, workspaces are where you can create a contract, or group of contracts, for each project. Let’s create a new workspace to create our new ERC-20 token.

  1. Open remix.ethereum.org.

  2. Open the dropdown menu and click create a new workspace.

  3. In the Choose a template dropdown, select ERC20.

  4. Under Customize template > Features, check the Mintable box.

  5. Enter a fun name for your token in the Workspace name field. Something like CorgiCoin works fine.

  6. Click OK to create your new workspace.

Customize the contract

The contract template we’re using is pretty simple. We just need to modify a couple of variables.

  1. Click the compiler icon to open the compiler panel. Update the compiler version by selecting 0.8.20 from the compiler dropdown.

  2. Under the contract directory, click MyToken.sol.

  3. In the editor panel, replace MyToken with whatever you’d like to name your token. In this example, we’ll use CorgiCoin.

  4. On the same line, replace the second string with whatever you want the symbol of your token to be. In this example, we’ll use CRG.

That’s all we need to change within this contract. You can see on line 4 that this contract is importing another contract from @openzeppelin for us, meaning that we can keep our custom token contract simple.

Compile

  1. Click the green play symbol at the top of the workspace to compile your contract. You can also press CMD + s on MacOS or CTRL + s on Linux and Windows.

  2. Remix automatically fetches the three import contracts from the top of our .sol contract. You can see these imported contracts under the .deps directory. You can browse the contracts there, but Remix will not save any changes you make.

Deploy

Now that we’ve successfully compiled our contract, we need to deploy it somewhere! This is where our previous MetaMask setup comes into play.

  1. Click the Deploy tab from the left.

  2. Under the Environment dropdown, select Injected Provider - MetaMask.

  3. MetaMask will open a new window confirming that you want to connect your account to Remix.

  4. Click Next:

  5. Click Connect to connect your tFIL account to Remix.

  6. Back in Remix, under the Account field, you’ll see that it says something like 0x11F... (5 ether). This value is 5 tFIL, but Remix doesn’t support the Filecoin network, so it doesn’t understand what tFIL is. This isn’t a problem; it’s just a little quirk of using Remix.

  7. Under the Contract dropdown, ensure the contract you created is selected.

  8. Gather your MetaMask account address and populate the deploy field in Remix.

  9. Click Deploy.

  10. MetaMask will open a window and ask you to confirm the transaction. Scroll down and click Confirm to have MetaMask deploy the contract.

  11. Back in Remix, a message at the bottom of the screen shows that the creation of your token is pending.

  12. Wait around 90 seconds for the deployment to complete.

On the Filecoin network, a new set of blocks, also called a tipset, is created every thirty seconds. When deploying a contract, the transaction needs to be received by the network, and then the network needs to confirm the contract. This process takes around two to three tipsets, or roughly 60 to 90 seconds.

Use your contract

Now that we’ve compiled and deployed the contract, it’s time to actually interact with it!

Mint your tokens

Let’s call a method within the deployed contract to mint some tokens.

  1. Back in Remix, open the Deployed Contracts dropdown, within the Deploy sidebar tab.

  2. Expand the mint method. You must fill in two fields here: to and amount.

  3. The to field specifies which address these initial tokens should be sent to. Open MetaMask, copy your address, and paste it into this field.

  4. The amount field expects a value in the token’s smallest unit. Like attoFIL, 1 token is equal to 1,000,000,000,000,000,000 base units. So if you wanted to mint 100 tokens, you would enter 100 followed by 18 zeros: 100000000000000000000.

  5. Click Transact.

  6. MetaMask will open a window and ask you to confirm the transaction:

Again, you must wait for the network to process the transaction, which should take about 90 seconds. You can move on to the next section while you’re waiting.
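Counting 18 zeros by hand is error-prone. As a convenience (not part of the tutorial), a small shell helper can build the base-unit string for you:

```shell
# Append 18 zeros to a whole-token amount. String concatenation is used
# because $((...)) arithmetic would overflow 64-bit integers once the
# amount reaches 10 tokens (10 * 10^18 > 2^63).
to_base_units() {
  printf '%s%018d' "$1" 0
}

amount=$(to_base_units 100)
echo "$amount"
```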

Add to MetaMask

Currently, MetaMask has no idea what our token is or what it even does. We can fix this by explicitly telling MetaMask the address of our contract.

  1. Go back to Remix and open the Deploy sidebar tab.

  2. Under Deployed Contracts, you should see your contract address at the top. Click the copy icon to copy the address to your clipboard:

  3. Open MetaMask, select Assets, and click Import your tokens:

  4. In the Token contract address field, paste the contract address you just copied from Remix and then click Add custom token. MetaMask should autofill the rest of the information based on what it can find from the Filecoin network.

  5. Click Import token:

  6. You should now be able to see that you have 100 of your tokens within your MetaMask wallet!

Share your tokens

Having a bunch of tokens in your personal MetaMask is nice, but why not send some tokens to a friend? Your friend needs to create a wallet in MetaMask as we did in the Create a wallet and Switch networks sections. They will also need to import your contract deployment address like you did in the Add to MetaMask section. Remember, you need to pay gas for every transaction that you make! If your friend tries to send some of your tokens to someone else but can’t, it might be because they don’t have any tFIL.


Ways to contribute

So you want to contribute to Filecoin and the ecosystem? Here is a quick overview of the areas you can contribute to and how to get started.

Ways to contribute

Code

Filecoin and its sister-projects are big, with lots of code written in multiple languages. We always need help writing and maintaining code, but it can be daunting to just jump in. We use the label Help Wanted on features or bug fixes that people can help out with. They are an excellent place for you to start contributing code.

The biggest and most active repositories we have today are:

  • filecoin-project/venus

  • filecoin-project/lotus

  • filecoin-project/rust-fil-proofs

If you want to start contributing to the core of Filecoin, those repositories are a great place to start. But the Help Wanted label exists in several related projects:

  • IPFS

  • libp2p

  • IPLD

  • Multiformats

Documentation

Filecoin is a huge project and undertaking, and with lots of code comes the need for lots of good documentation! We need a lot more help to write the awesome docs the project needs. If writing technical documentation is your area, any and all help is welcome!

Before contributing to the Filecoin docs, please read these quick guides; they’ll save you time and help keep the docs accurate and consistent!

  1. Style and formatting guide

  2. Writing guide

If you have never contributed to an open-source project before, or just need a refresher, take a look at the contribution tutorial.

Community

If interacting with people is your favorite thing to do in this world, join the Filecoin chat and discussion forums to say hello, meet others who share your goals, and connect with other members of the community. You should also consider joining Filecoin Slack.

Build Applications

Filecoin is designed for you to integrate into your own applications and services.

Get started by looking at the list of projects currently built on Filecoin. Build anything you think is missing! If you’re unsure about something, you can join the chat and discussion forums to get help or feedback on your specific problem/idea. You can also join a Filecoin Hackathon, apply for a Filecoin Developer Grant or apply to the Filecoin accelerator program to support the development of your project.

  • Filecoin Hackathons

  • Filecoin Developer Grants

  • Filecoin Accelerator Program

Protocol Design

Filecoin is ultimately about building better protocols, and the community always welcomes ideas and feedback on how to improve those protocols.

  • filecoin-project/specs

Research

Finally, we see Protocol Labs as a research lab, where YOUR ideas can become technologies that have a real impact on the world. If you’re interested in contributing to our research, please reach out to [email protected] for more information. Include what your interests are so we can make sure you get to work on something fun and valuable.

Writing guide

This guide explains things to keep in mind when writing for Filecoin’s documentation. While the grammar, formatting, and style guide lets you know the rules you should follow, this guide will help you to properly structure your writing and choose the correct tone for your audience.

Walkthroughs

The purpose of a walkthrough is to tell the user how to do something. They do not need to convince the reader of something or explain a concept. Walkthroughs are a list of steps the reader must follow to achieve a process or function.

The vast majority of documentation within the Filecoin documentation project falls under the Walkthrough category. Walkthroughs are generally quite short, have a neutral tone, and teach the reader how to achieve a particular process or function. They present the reader with concrete steps on where to go, what to type, and things they should click on. There is little to no conceptual information within walkthroughs.

Goals

Use the following goals when writing walkthroughs:

| Goal | Keyword | Explanation |
| --- | --- | --- |
| Audience | General | Easy for anyone to read with minimal effort. |
| Formality | Neutral | Slang is restricted, but standard casual expressions are allowed. |
| Domain | Technical | Acronyms and tech-specific language are used and expected. |
| Tone | Neutral | Writing contains little to no emotion. |
| Intent | Instruct | Tell the reader how to do something. |

Function or process

The end goal of a walkthrough is for the reader to achieve a very particular function. Installing the Filecoin Desktop application is an example. Following this walkthrough isn’t going to teach the reader much about working with the decentralized web or what Filecoin is. Still, by the end, they’ll have the Filecoin Desktop application installed on their computer.

Short length

Since walkthroughs cover one particular function or process, they tend to be quite short. The estimated reading time of a walkthrough is somewhere between 2 and 10 minutes. Most of the time, the most critical content in a walkthrough is presented in a numbered list. Images and GIFs can help the reader understand what they should be doing.

If a walkthrough is converted into a video, that video should be no longer than 5 minutes.

Walkthrough structure

Walkthroughs are split into three major sections:

  1. What we’re about to do.

  2. The steps we need to do.

  3. Summary of what we just did, and potential next steps.

Conceptual articles

Articles are written with the intent to inform and explain something. These articles don’t contain any steps or actions that the reader has to perform right now.

These articles are vastly different in tone when compared to walkthroughs. Some topics and concepts can be challenging to understand, so creative writing and interesting diagrams are highly sought-after for these articles. Whatever writers can do to make a subject more understandable, the better.

Article goals

Use the following goals when writing conceptual articles:

| Goal | Keyword | Explanation |
| --- | --- | --- |
| Audience | Knowledgeable | Requires a certain amount of focus to understand. |
| Formality | Neutral | Slang is restricted, but standard casual expressions are allowed. |
| Domain | Any | Usually technical, but depends on the article. |
| Tone | Confident and friendly | The reader must feel confident that the writer knows what they’re talking about. |
| Intent | Describe | Tell the reader why something does the thing that it does, or why it exists. |

Article structure

Articles are separated into five major sections:

  1. Introduction to the thing we’re about to explain.

  2. What the thing is.

  3. Why it’s essential.

  4. What other topics it relates to.

  5. Summary review of what we just read.

Tutorials

When writing a tutorial, you’re teaching a reader how to achieve a complex end-goal. Tutorials are a mix of walkthroughs and conceptual articles. Most tutorials will span several pages, and contain multiple walkthroughs within them.

Take the hypothetical tutorial Get up and running with Filecoin, for example. This tutorial will likely have the following pages:

  1. A brief introduction to what Filecoin is.

  2. Choose and install a command line client.

  3. Understanding storage deals.

  4. Import and store a file.

Pages 1 and 3 are conceptual articles, describing particular design patterns and ideas to the reader. All the other pages are walkthroughs instructing the user how to perform one specific action.

When designing a tutorial, keep in mind the walkthroughs and articles that already exist, and note down any additional content items that would need to be completed before creating the tutorial.

Grammar and formatting

Here are some language-specific rules that the Filecoin documentation follows. If you use a writing service like Grammarly, most of these rules are turned on by default.

American English

While Filecoin is a global project, the fact is that American English is the most commonly used style of English today. With that in mind, when writing content for the Filecoin project, use American English spelling. The basic rules for converting other styles of English into American English are:

  1. Swap the s for a z in words like categorize and pluralize.

  2. Remove the u from words like color and honor.

  3. Swap tre for ter in words like center.

The Oxford comma

In a list of three or more items, follow each item except the last with a comma ,:

| Use | Don’t use |
| --- | --- |
| One, two, three, and four. | One, two, three and four. |
| Henry, Elizabeth, and George. | Henry, Elizabeth and George. |

References to Filecoin

As a proper noun, the name “Filecoin” (capitalized) should be used only to refer to the overarching project, to the protocol, or to the project’s canonical network:

Filecoin [the project] has attracted contributors from around the globe! Filecoin [the protocol] rewards contributions of data storage instead of computation! Filecoin [the network] is currently storing 50 PiB of data!

The name can also be used as an adjective:

The Filecoin ecosystem is thriving! I love contributing to Filecoin documentation!

When referring to the token used as Filecoin’s currency, the name FIL is preferred. It is alternatively denoted by the Unicode symbol for an integral with a double stroke ⨎:

  • Unit prefix: 100 FIL.

  • Symbol prefix: ⨎ 100.

The smallest denomination of FIL is the attoFIL (10^-18 FIL).

The collateral for this storage deal is 5 FIL. I generated ⨎100 as a storage provider last month!

Examples of discouraged usage:

Filecoin rewards storage providers with Filecoin. There are many ways to participate in the filecoin community. My wallet has thirty filecoins.

Consistency in the usage of these terms helps keep these various concepts distinct.

References to Lotus

Lotus is the main implementation of Filecoin. As such, it is frequently referenced in the Filecoin documentation. When referring to the Lotus implementation, use a capital L. A lowercase l should only be used when referring to the Lotus executable commands such as lotus daemon. Lotus executable commands should always be within code blocks:

1. Start the Lotus daemon:

   ```shell
   lotus daemon
   ```

2. After your Lotus daemon has been running for a few minutes, use `lotus` to check the number of other peers that it is connected to in the Filecoin network:

   ```shell
   lotus net peers
   ```

Acronyms

If you have to use an acronym, spell the full phrase first and include the acronym in parentheses () the first time it is used in each document. Exception: This generally isn’t necessary for commonly-encountered acronyms like IPFS, unless writing for a stand-alone article that may not be presented alongside project documentation.

Virtual Machine (VM), Decentralized Web (DWeb).

Formatting

How the Markdown syntax looks, and code formatting rules to follow.

Syntax

The Filecoin Docs project follows the GitHub Flavoured Markdown syntax for markdown. This way, all articles display properly within GitHub itself.

Rules

We use the rules set out in the VSCode Markdownlint extension. You can import these rules into any text editor like Vim or Sublime. All rules are listed within the Markdownlint repository.

We highly recommend installing VSCode with the Markdownlint extension to help with your writing. The extension shows warnings within your markdown whenever your copy doesn’t conform to a rule.

Style

The following rules explain how we organize and structure our writing. The rules outlined here are in addition to the rules found within the Markdownlinter extension.

Text

The following rules apply to editing and styling text.

Titles

  1. All titles follow sentence structure. Only names and places are capitalized, along with the first letter of the title. All other letters are lower-case:

## This is a title

### Only capitalize names and places

### The capital city of France is Paris
  2. Every article starts with a front-matter title and description:

---
title: Example article
description: This is a brief description that shows up in link teasers in services like Twitter and Slack.
---

## This is a subtitle

Example body text.

In the above example title: serves as a <h1> or # tag. There is only ever one title of this level in each article.

  3. Titles do not contain punctuation. If you have a question within your title, rephrase it as a statement:

<!-- This title is wrong. -->
## What is Filecoin?

<!-- This title is better. -->
## Filecoin explained

Bold text

Double asterisks ** are used to define boldface text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.

In the **Login** window, enter your email into the **Username** field and click **Sign in**.

Italics

Underscores _ are used to define italic text. Style the names of things in italics, except input fields or buttons:

Here are some American things:

- The _Spirit of St Louis_.
- The _White House_.
- The United States _Declaration of Independence_.

Try entering them into the **American** field and clicking **Accept**.

Quotes or sections of quoted text are styled in italics and surrounded by double quotes ":

In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_

Code blocks

Tag code blocks with the syntax of the code they are presenting:

    ```javascript
    console.log(error);
    ```

Output from command-line actions can be displayed by adding another codeblock directly after the input codeblock. Here’s an example telling the user to run go version and then the output of that command in a separate codeblock immediately after the first:

    ```shell 
    go version
    ```

    ```plaintext
    go version go1.19.7 darwin/arm64
    ```

Command-line examples can be truncated with three periods `...` to remove extraneous information:

    ```shell
    lotus-miner info
    ```

    ```plaintext
    Miner: t0103
    Sector Size: 16.0 MiB
    ...
    Sectors:  map[Committing:0 Proving:0 Total:0]
    ```
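The tagging rule above can be checked mechanically. As a sketch (the `untagged_fences` name is ours, not part of any existing linter), the snippet below reports the line numbers of opening code fences that carry no syntax tag:

```python
def untagged_fences(markdown: str) -> list[int]:
    """Return line numbers of opening code fences with no language tag."""
    missing = []
    in_fence = False
    for number, line in enumerate(markdown.splitlines(), start=1):
        stripped = line.strip()
        if stripped.startswith("```"):
            # A bare ``` that opens a block has no syntax tag.
            if not in_fence and stripped == "```":
                missing.append(number)
            in_fence = not in_fence
    return missing
```

Note this simple toggle assumes fences are balanced and not nested, which holds for well-formed articles.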

### Inline code tags

Surround directories, file names, and version numbers with inline code tags `` ` ``.

Version `1.2.0` of the program is stored in `~/code/examples`. Open `exporter.exe` to run the program.

### List items

All list items follow sentence structure. Only names and places are capitalized, along with the first letter of the list item. All other letters are lowercase:

  1. Never leave Nottingham without a sandwich.

  2. Brian May played guitar for Queen.

  3. Oranges.

List items end with a period `.`, or a colon `:` if the list item has a sub-list:

  1. Charles Dickens novels:

    1. Oliver Twist.

    2. Nicholas Nickleby.

    3. David Copperfield.

  2. J.R.R. Tolkien books:

    1. The Hobbit.

    2. The Silmarillion.

    3. Letters from Father Christmas.

### Unordered lists

Use the dash character `-` for unnumbered list items:

- An apple.
- Three oranges.
- As many lemons as you can carry.
- Half a lime.

### Special characters

Whenever possible, spell out the name of the special character, followed by an example of the character itself within inline code tags:

Use the dollar sign `$` to enter debug-mode.

### Keyboard shortcuts

When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:

Press `ctrl` + `c` to copy the highlighted text.

The plus symbol `+` stays outside of the code tags.

### Images

The following rules and guidelines define how to use and store images.

#### Alt text

All images contain alt text so that screen-reading programs can describe the image to users with limited sight:

![Screenshot of an image being uploaded through the Filecoin command line.](filecoin-image-upload-screen.png)
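As a rough check for this rule, the hypothetical snippet below (the `images_missing_alt` helper is an illustration, not an existing tool) finds Markdown images whose alt text is empty:

```python
import re

# Matches Markdown images of the form ![alt text](path).
IMAGE = re.compile(r"!\[(?P<alt>[^\]]*)\]\([^)]*\)")

def images_missing_alt(markdown: str) -> list[str]:
    """Return the image references whose alt text is empty or whitespace."""
    return [
        match.group(0)
        for match in IMAGE.finditer(markdown)
        if not match.group("alt").strip()
    ]
```

Running it over an article surfaces every `![](...)` image that a screen reader could not describe.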
