4. Service Composition And Reuse With Aqua
In the previous three sections, you got a taste of using Aqua with browsers and of creating and deploying a service. In this section, we discuss how to compose an application from multiple distributed services using Aqua. In Fluence, we don't use JSON-RPC or REST endpoints to address and execute a service; we use Aqua.
Recall that Aqua is a purpose-built distributed-systems and peer-to-peer programming language that resolves (Peer Id, Service Id) tuples to facilitate service execution on the host node without developers having to worry about transport or network routing. With the Aqua VM available on every Fluence peer-to-peer node, Aqua lets developers ergonomically locate and execute distributed services.

Composition With Aqua

A service consists of one or more WebAssembly (Wasm) modules that may be linked at runtime. These dependencies are specified by a blueprint, which is the basis for creating a unique service id once the blueprint has been deployed and initiated on our chosen host. See Figure 1.
When we deploy our service, as demonstrated in section two, the service is "out there" on the network, and we need a way to locate and execute it if we want to utilize the service as part of our application.
Luckily, the (Peer Id, Service Id) tuple we obtain from the service deployment process contains all the information Aqua needs to locate and execute the specified service instance.
Let's create a Wasm module with a single function that adds one to an input in the adder directory:
```rust
use marine_rs_sdk::marine;

#[marine]
pub fn add_one(input: u64) -> u64 {
    input + 1
}
```
For our purposes, we deploy that module as a service to three hosts: Peer 1, Peer 2, and Peer 3. Use the instructions provided in section two to create the module and deploy the service to three peers of your choosing. See 4-composing-services-with-aqua/adder for the code and data/distributed_service.json for the (Peer Id, Service Id) tuples already deployed to three network peers.
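The tuples in data/distributed_service.json follow the same shape as the payloads passed to fldist below. A minimal TypeScript sketch of parsing that shape (the interface name is ours, and the exact file layout may differ):

```typescript
// Assumed shape of the deployed-service tuples, mirroring the fldist payloads.
interface NodeServiceTuple {
  node_id: string;    // Peer Id of the host node
  service_id: string; // Service Id obtained from deployment
}

// Inline sample; in practice you would read data/distributed_service.json.
const raw = `[
  {
    "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt",
    "service_id": "7b2ab89f-0897-4537-b726-8120b405074d"
  }
]`;

const tuples: NodeServiceTuple[] = JSON.parse(raw);
console.log(tuples[0].service_id); // the id Aqua needs to resolve the service
```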
Once we have the services deployed to their respective hosts, we can use Aqua to compose an admittedly simple application by combining each service into a workflow where the (Peer Id, Service Id) tuples facilitate the routing to, and execution of, each service. Also, recall that in the Fluence peer-to-peer programming model the client need not, and for the most part should not, be involved in managing intermediate results. Instead, results are "forward chained" to the next service as specified in the Aqua workflow.
Using our add_one service across all three deployments and starting with an input parameter value of one, we expect a final result of four given sequential service execution.
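Conceptually, the sequential workflow folds the input through three instances of the same function. A local TypeScript sketch of just the arithmetic, with no networking involved:

```typescript
// Local stand-in for the deployed add_one service.
const addOne = (x: number): number => x + 1;

// Sequential composition: each intermediate result is forwarded to the next
// service instance; the client only ever sees the final value.
const services = [addOne, addOne, addOne]; // three deployed instances
const result = services.reduce((acc, svc) => svc(acc), 1);
console.log(result); // 4
```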
The underlying Aqua script may look something like this (see the aqua-scripts directory):
```aqua
-- aqua-scripts/adder.aqua

-- service interface for Wasm module
service AddOne:
    add_one: u64 -> u64

-- convenience struct for (Peer Id, Service Id) tuples
data NodeServiceTuple:
    node_id: string
    service_id: string

func add_one_three_times(value: u64, ns_tuples: []NodeServiceTuple) -> u64:
    on ns_tuples!0.node_id:
        AddOne ns_tuples!0.service_id
        res1 <- AddOne.add_one(value)

    on ns_tuples!1.node_id:
        AddOne ns_tuples!1.service_id
        res2 <- AddOne.add_one(res1)

    on ns_tuples!2.node_id:
        AddOne ns_tuples!2.service_id
        res3 <- AddOne.add_one(res2)
    <- res3
```
Let's give it a whirl! Using the already deployed services or, even better, your own deployed services, let's compile our Aqua script in the 4-composing-services-with-aqua directory:
```sh
aqua -i aqua-scripts -o compiled-aqua -a
```
We can now use fldist to run the above Aqua script compiled to compiled-aqua/adder.add_one_three_times.air:
```sh
fldist run_air -p compiled-aqua/adder.add_one_three_times.air -d '{
  "value": 5,
  "ns_tuples": [
    {
      "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt",
      "service_id": "7b2ab89f-0897-4537-b726-8120b405074d"
    },
    {
      "node_id": "12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA",
      "service_id": "e013f18a-200f-4249-8303-d42d10d3ce46"
    },
    {
      "node_id": "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi",
      "service_id": "dbaca771-f0a6-4d1e-9af7-5b49368ffa9e"
    }
  ]
}' --generated
```
Since we start with a value of 5 and increment it three times, we expect an 8, which we get:
```
[
  8
]
```
Of course, we can drastically change our application logic by changing the execution flow of our workflow composition. In the above example, we executed each of the three services once in sequence. Alternatively, we could also execute them in parallel or some combination of sequential and parallel execution arms.
Reusing our deployed services with a different execution flow may look like the following:
```aqua
-- service interface for Wasm module
service AddOne:
    add_one: u64 -> u64

-- convenience struct for (Peer Id, Service Id) tuples
data NodeServiceTuple:
    node_id: string
    service_id: string

-- our app as defined by the workflow expressed in Aqua
-- note: MyOp, an interface to the node's built-in "op" identity service,
-- is assumed to be declared elsewhere in the script
func add_one_par(value: u64, ns_tuples: []NodeServiceTuple) -> []u64:
    res: *u64
    for ns <- ns_tuples par:
        on ns.node_id:
            AddOne ns.service_id
            res <- AddOne.add_one(value)
    MyOp.identity(res!2) --< flatten the stream variable by joining on its third element
    <- res               --< return the final results [value + 1, value + 1, value + 1, ...] to the client
```
Unlike the sequential execution model, this example returns an array where each item is the incremented input value, captured by the stream variable res. That is, for a starting value of five (5), we obtain [6, 6, 6], assuming our NodeServiceTuple array provides three distinct (Peer Id, Service Id) tuples.
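The par fan-out can be modeled locally with promises: every branch receives the same input, and the joined results accumulate into an array, much like the stream variable res. A minimal TypeScript sketch (local stand-ins only; on the real network, arrival order is not guaranteed):

```typescript
// Local stand-in for a deployed add_one instance.
const addOne = async (x: number): Promise<number> => x + 1;

// Parallel composition: each branch gets the same starting value, and the
// joined results are collected into an array, like the stream variable `res`.
async function addOnePar(value: number, instances: number): Promise<number[]> {
  const branches = Array.from({ length: instances }, () => addOne(value));
  return Promise.all(branches); // join once every branch has answered
}

addOnePar(5, 3).then((res) => console.log(res)); // [ 6, 6, 6 ]
```

Note that Promise.all preserves submission order, while results from actual network peers may arrive in any order; that is why the workflow joins on the stream with res!2 before returning.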
Running the script with fldist:
```sh
fldist run_air -p compiled-aqua/adder.add_one_par.air -d '{
  "value": 5,
  "ns_tuples": [
    {
      "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt",
      "service_id": "7b2ab89f-0897-4537-b726-8120b405074d"
    },
    {
      "node_id": "12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA",
      "service_id": "e013f18a-200f-4249-8303-d42d10d3ce46"
    },
    {
      "node_id": "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi",
      "service_id": "dbaca771-f0a6-4d1e-9af7-5b49368ffa9e"
    }
  ]
}' --generated
```
We get the expected result:
```
[
  [
    6,
    6,
    6
  ]
]
```
We can improve on our business logic and change our input arguments to make parallelization a little more useful. Let's extend our data struct and update the workflow:
```aqua
-- aqua-scripts/adder.aqua

data ValueNodeService:
    node_id: string
    service_id: string
    value: u64 --< add value

func add_one_par_alt(payload: []ValueNodeService) -> []u64:
    res: *u64
    for vns <- payload par: --< parallelized run
        on vns.node_id:
            AddOne vns.service_id
            res <- AddOne.add_one(vns.value)
    MyOp.identity(res!2)
    <- res
```
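Locally, the per-tuple variant can be sketched the same way: each parallel branch increments its own payload value. The interface mirrors the Aqua ValueNodeService struct; everything else is a local stand-in:

```typescript
// Mirrors the Aqua ValueNodeService data struct.
interface ValueNodeService {
  node_id: string;
  service_id: string;
  value: number; // the per-branch input
}

// Local stand-in for a deployed add_one instance.
const addOne = async (x: number): Promise<number> => x + 1;

// Each branch increments its own value; the joined array corresponds to the
// stream `res`. On the network, items may arrive in any order.
async function addOneParAlt(payload: ValueNodeService[]): Promise<number[]> {
  return Promise.all(payload.map((vns) => addOne(vns.value)));
}
```

For payload values [5, 10, 15] this yields [6, 11, 16] in payload order locally; the deployed version may return the same items in any order, since the branches complete independently.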
And we can run it from the fldist command line:
```sh
fldist run_air -p compiled-aqua/adder.add_one_par_alt.air -d '{
  "payload": [
    {
      "value": 5,
      "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt",
      "service_id": "7b2ab89f-0897-4537-b726-8120b405074d"
    },
    {
      "value": 10,
      "node_id": "12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA",
      "service_id": "e013f18a-200f-4249-8303-d42d10d3ce46"
    },
    {
      "value": 15,
      "node_id": "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi",
      "service_id": "dbaca771-f0a6-4d1e-9af7-5b49368ffa9e"
    }
  ]
}' --generated
```
Given our input values [5, 10, 15], we get the expected incremented values [6, 11, 16], although, since the services execute in parallel, the items may arrive in any order:
```
[
  [
    11,
    16,
    6
  ]
]
```
Alternatively, we can run our Aqua scripts with a TypeScript client. In the client-peer directory:
```sh
npm i
npm run start
```
Which of course gives us the expected results:
```
created a Fluence client 12D3KooWGve35kvMQ8USbmtRoMCzxaBPXSbqsZxfo6T8gBAV6bzy with relay 12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA
add_one to 5 equals 6
add_one sequentially equals 8
add_one parallel equals [ 6, 6, 6 ]
add_one parallel alt equals [ 11, 6, 16 ]
```

Summary

This section illustrated how Aqua allows developers to locate and execute distributed services by merely providing a (Peer Id, Service Id) tuple and the associated data. From an Aqua user's perspective, there are no JSON-RPC or REST endpoints, just topology tuples that are resolved on peers of the network. Moreover, we saw how the Fluence peer-to-peer workflow model facilitates a different request-response model than commonly encountered in traditional client-server applications. That is, instead of returning each service result to the client, Aqua allows us to forward the (intermediate) result to the next service, peer-to-peer style.
Furthermore, we explored how different Aqua execution flows, e.g. sequential vs. parallel, and data models allow developers to compose drastically different workflows and applications reusing already deployed services. For more information on Aqua, please see the Aqua book, and for more information on Fluence development, see the developer docs.