Particle
Everything starts with a data structure called a particle. It originates on some peer in the network and travels through the network along a predefined path, triggering function execution along the way. All particles have the following structure.
interface Particle {
// AIR script
script: string
// Script execution data
data: string
// Origin peer's public key, encoded as multihash
init_peer_id: string
// Origin peer's signature
signature: string
// Creation timestamp, in seconds
timestamp: number
// Time to live, in seconds
ttl: number
// Particle identifier, uuid
id: string
}
The contents of the `script` field define the execution of a particle – its path, the functions it triggers on peers, and so on. The state of the execution, proofs, the function call history, and call results are stored in the `data` field.
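To make the structure concrete, here is a minimal sketch of constructing a particle and checking its expiry against `timestamp + ttl`. The field values (peer id, script, uuid) are illustrative placeholders, and `makeParticle`/`isExpired` are hypothetical helpers, not part of any Fluence SDK:

```typescript
interface Particle {
  script: string;       // AIR script
  data: string;         // script execution data
  init_peer_id: string; // origin peer's public key
  signature: string;    // origin peer's signature
  timestamp: number;    // creation time, seconds
  ttl: number;          // time to live, seconds
  id: string;           // particle identifier, uuid
}

function makeParticle(script: string, initPeerId: string): Particle {
  return {
    script,
    data: "",                          // execution data starts empty
    init_peer_id: initPeerId,
    signature: "",                     // signed by the origin peer in practice
    timestamp: Math.floor(Date.now() / 1000),
    ttl: 60,
    id: "00000000-0000-0000-0000-000000000000", // a real uuid in practice
  };
}

// A particle is discarded once its time to live has elapsed.
function isExpired(p: Particle, nowSeconds: number): boolean {
  return nowSeconds > p.timestamp + p.ttl;
}
```

The `ttl` bound is what keeps an undeliverable particle from circulating in the network forever.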
Network topology
One could depict the path of a particle like this:


You can see a set of nodes connected in a circle-like Kademlia network. One client (firefox icon) sends a particle to the network. The particle travels through several nodes and winds up on the other client (chrome icon).
Such an execution path could be expressed as the following script.
(seq
; go from the first client to its relay node
(call firefox_relay ("op" "identity") [])
(seq
; go to Node A
(call node_a ("op" "identity") [])
(seq
; go to Node B
(call node_b ("op" "identity") [])
(seq
; go to Node C
(call node_c ("op" "identity") [])
(seq
; go to the relay node of the other client
(call chrome_relay ("op" "identity") [])
; go to the other client
(call chrome ("op" "identity") [])
)
)
)
)
)
`("op" "identity")` basically means "do nothing". That makes sense here, since this script describes only a network path; no actual execution is involved. You can read more about AIR scripts and how to trigger actual execution here.
The main takeaway here is that `script` defines the network topology that the particle will travel.
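The nested `(seq …)` chain above follows a mechanical pattern, so a pure routing script can be generated from an ordered list of peer ids. A sketch (`buildRoute` is a hypothetical helper for illustration, not part of any Fluence SDK):

```typescript
// Hypothetical helper: build an AIR script that routes a particle
// through the given peers in order, using ("op" "identity") no-op hops.

function hop(peerId: string): string {
  return `(call ${peerId} ("op" "identity") [])`;
}

function buildRoute(peerIds: string[]): string {
  if (peerIds.length === 0) {
    throw new Error("route needs at least one peer");
  }
  if (peerIds.length === 1) {
    return hop(peerIds[0]);
  }
  // (seq <first hop> <route through the remaining peers>)
  const [first, ...rest] = peerIds;
  return `(seq ${hop(first)} ${buildRoute(rest)})`;
}
```

For example, `buildRoute(["firefox_relay", "node_a", "chrome"])` produces the same shape as the script above, minus the comments and indentation.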
Services and function execution
Each peer in the network can define its API in terms of services and functions. If we look at the structure of the AIR [`call`](doc:instructions#call-execution) instruction, we see that it takes a pair of arguments such as `("dht" "put")`. These are the service and function identifiers; they define what code should be executed by the target peer.
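One way to picture the `(service, function)` pair is as a two-level lookup on the target peer. The registry shape below is an assumption for illustration, not the actual Fluence node implementation:

```typescript
// Hypothetical dispatch table: service id -> function id -> handler.
// Illustrates how a (service, function) pair selects code on a peer.

type Handler = (args: unknown[]) => unknown;

const services: Record<string, Record<string, Handler>> = {
  op: {
    // "do nothing": used for pure routing hops
    identity: (args) => args,
  },
  dht: {
    // placeholder handler for the ("dht" "put") example above
    put: ([key, value]) => `stored ${String(key)}=${String(value)}`,
  },
};

function dispatch(service: string, fn: string, args: unknown[]): unknown {
  const handler = services[service]?.[fn];
  if (!handler) {
    throw new Error(`unknown function: ("${service}" "${fn}")`);
  }
  return handler(args);
}
```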


Peers are free to define their own services and functions, or to host WebAssembly services written by others. The Fluence protocol only requires cloud peers to adhere to a list of predefined built-in services and functions.
You can learn more about creating and hosting WebAssembly services in the overview section, and more on how to create these services in the Getting Started section.
WebAssembly runtime
Each peer in the Fluence network should be able to run WebAssembly programs. Cloud peers achieve that through FCE – the Fluence Compute Engine – while browser peers delegate the work to the browser's engine (e.g., V8).
WebAssembly is used for (but not limited to) two main purposes:
- Interpreting AIR scripts; you can find the interpreter on GitHub
- Running services
The AIR interpreter is a single-module program, so it is easy to run. With services, it gets a little more complex. A single service can consist of several WebAssembly modules, and these modules can call each other's functions. For that to work, the modules must be linked, and information about the types of functions and structures must be shared between them. FCE uses Interface Types for this. You can read more about it [here](doc:module-linking-and-interface-types).
However, browsers don't support Interface Types yet, so browser-based peers must resort to single-module services.
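For the single-module case that browser peers are limited to, loading a service boils down to the standard WebAssembly JS API. A sketch with a tiny hand-assembled module exporting `add` (the bytes are a minimal textbook example, not a Fluence service – a real service module would be compiled from a language like Rust):

```typescript
// Sketch: instantiating a single WebAssembly module with the standard
// JS API, as a browser peer would for a single-module service.

const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Synchronous instantiation is fine for a module this small;
// larger modules would use WebAssembly.instantiate (async).
const wasmModule = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(wasmModule, {});
const add = instance.exports.add as (a: number, b: number) => number;
```

Because a module only exposes plain numeric exports like `add` here, sharing richer types between modules is exactly the gap that Interface Types fill on FCE.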
The key takeaway here is that every peer in the Fluence network is WebAssembly-enabled.


Every peer has a WebAssembly runtime