# Hydro Deploy
Hydro comes with a built-in, Rust-native deployment system, Hydro Deploy, which allows you to deploy your Hydro app to a variety of platforms. With Hydro Deploy, you can spin up complex services with just a few lines of Rust!

We have actually been using Hydro Deploy in all of the examples so far, just without any special configuration:
```rust
use std::sync::Arc;

use hydro_deploy::Deployment;
use hydro_deploy::gcp::GcpNetwork;
use hydro_lang::location::{Location, NetworkHint};
use tokio::sync::RwLock;
use tokio_util::codec::LinesCodec;

#[tokio::main]
async fn main() {
    let gcp_project = std::env::args()
        .nth(1)
        .expect("Expected GCP project as first argument");

    let mut deployment = Deployment::new();
    let vpc = Arc::new(RwLock::new(GcpNetwork::new(&gcp_project, None)));

    let flow = hydro_lang::compile::builder::FlowBuilder::new();
    let process = flow.process();
    let external = flow.external::<()>();

    let (port, input, output) =
        process.bind_single_client::<_, _, LinesCodec>(&external, NetworkHint::Auto);
    output.complete(hydro_template::echo_capitalize(input));

    let nodes = flow
        .with_process(
            &process,
            deployment
                .GcpComputeEngineHost()
                .project(gcp_project.clone())
                .machine_type("e2-micro")
                .image("debian-cloud/debian-11")
                .region("us-west1-a")
                .network(vpc.clone())
                .add(),
        )
        .with_external(&external, deployment.Localhost())
        .deploy(&mut deployment);

    deployment.deploy().await.unwrap();

    let raw_port = nodes.raw_port(port);
    let server_port = raw_port.server_port().await;
    println!("Please connect a client to port {:?}", server_port);

    deployment.start_ctrl_c().await.unwrap();
}
```
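Once the deployment is up and prints its port, you can exercise the echo service with any line-based TCP client. A sketch, assuming the binary takes the GCP project name as its first argument, and with a hypothetical project name, server address, and port:

```shell
# Launch the deployment (project name is a placeholder)
cargo run -- my-gcp-project

# In another terminal, connect a line-based client to the printed address
# and port; each line you type is handled by echo_capitalize and echoed back
nc <server-address> 4000
```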
TODO(mingwei): Explain the details/nuances of the example and simple configurations.
## TrybuildHost

`TrybuildHost` provides additional options for compiling your Hydro app, such as setting `rustflags`, enabling features, or setting up performance profiling.
```rust
.with_process(
    &leader,
    TrybuildHost::new(create_host(&mut deployment))
        .rustflags(rustflags)
        .additional_hydro_features(vec!["runtime_measure".to_string()])
    // ...
```
TODO(mingwei)
## Performance Profiling

Hydro Deploy also supports performance profiling with flamegraphs, which visualize which parts of your code take the most time to execute.

`TrybuildHost` provides a `tracing` method that will automatically generate a flamegraph after your app has run:
```rust
.tracing(
    TracingOptions::builder()
        .perf_raw_outfile("leader.perf.data")
        .samply_outfile("leader.profile")
        .fold_outfile("leader.data.folded")
        .flamegraph_outfile("leader.svg")
        .frequency(frequency)
        .setup_command(hydro_deploy::rust_crate::tracing_options::DEBIAN_PERF_SETUP_COMMAND)
        .build(),
),
```
The `TracingOptions` builder exposes several options, such as the sampling frequency, the output file names, and optionally a setup command to run before profiling starts. The `setup_command` can be used to install the profiling tool; here we use the provided `DEBIAN_PERF_SETUP_COMMAND`, which installs `perf` and sets the kernel parameters needed to enable tracing.
You may need to use `TrybuildHost` to set the following `rustflags`, which produce an optimized build that still retains the debug symbols needed for detailed profiling:

```rust
rustflags = "-C opt-level=3 -C codegen-units=1 -C strip=none -C debuginfo=2 -C lto=off";
```
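These flags can be supplied through the `rustflags` method on `TrybuildHost`, as in the earlier snippet (a fragment, not a complete program; `leader` and `create_host` are assumed from that example):

```rust
let rustflags = "-C opt-level=3 -C codegen-units=1 -C strip=none -C debuginfo=2 -C lto=off";

// ...
.with_process(
    &leader,
    TrybuildHost::new(create_host(&mut deployment)).rustflags(rustflags),
)
// ...
```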
Different platforms require particular configuration to enable CPU profiling. Each platform uses a different tool to collect CPU profiling data, but Hydro Deploy will automatically process the resulting traces and download the resulting flamegraph:
- Linux: `perf` for CPU profiling
- macOS: `samply` for CPU profiling
- Windows: `samply` for CPU profiling
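The corresponding tool must be available on the host being profiled (unless a `setup_command` installs it for you). As a sketch, with package names that are assumptions to adjust for your system:

```shell
# Linux (Debian-family): install perf
sudo apt-get install -y linux-perf

# macOS / Windows: install samply from crates.io
cargo install samply
```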
For example, on GCP Linux machines, you may need to include an additional linker flag:

```rust
rustflags = "-C opt-level=3 -C codegen-units=1 -C strip=none -C debuginfo=2 -C lto=off -C link-args=--no-rosegment";
```