
Preface

NADI is currently under active development. As such, it does not have a stable API yet, and many of the concepts explained in this book might not work yet.

If you still want to use it for your projects, please do so with the knowledge that the API might change in the next versions, and you might have to keep your code updated until the system is stable. If you have any problems with the program, or would like some new features, please open a GitHub issue; we will try to accommodate it if it fits within the scope of the program.

Acknowledgements

Thank you to everyone who has been consistently testing this software throughout its development and providing feedback, especially the members of the Water System Analysis Lab at the University of Cincinnati.

Funding

Grant: #W912HZ-24-2-0049
Title: Advanced Software Tools for Network Analysis and Data Integration (NADI)
Investigators: Ray, Patrick
Period: 09-30-2024 – 09-29-2025
Sponsor: U.S. Army Corps of Engineers
Project: 74263.03, Hold Level: Federal

What is NADI System?

The Network Analysis and Data Integration (NADI) System is a system of programs made to make network-based data analysis easier and more accessible.

It consists of multiple tools that perform two important functions: network detection and network analysis. The first part is done through the Geographic Information (GIS) Tool, while the second part is done using a Domain Specific Programming Language (DSPL) called the NADI Task system.

Nadi Workflow

Why use NADI System?

Hydrologic modeling involves the integration of diverse data to simulate complex (and often poorly understood) hydrological processes. The analysis of complex hydrological processes often requires domain-specific calculations, and the visual representation requires the creation of custom maps and plots. Both can be repetitive and error-prone processes, diverting time from data interpretation and scientific inquiry. Efficient methods are needed to automate these tasks, allowing researchers to focus on higher-level analysis and translation of their findings.

The current solution to that problem is to either use general purpose programming languages like Python, R, Julia, etc., or use domain-specific software packages to increase the reliability of the tasks. Domain Specific Programming Languages (DSPLs) like the NADI Task system provide better syntax for domain-specific tasks, while also being general purpose enough for users to extend them for their use cases. The NADI System aims to be the software framework that connects those two by integrating with various software packages and providing an intuitive way to do network-based data analysis.

Some example functionality of NADI system includes:

  • Detection of upstream/downstream relationships from stream network,
  • Network based programming using an extensible custom programming language,
  • Interactive plots and reports generation,
  • Import/export from/to various GIS data formats, etc.

Network Based Data Analysis

If you have data that are network based, as in the case of data related to points along a river, NADI provides a text representation of the network that can be created manually with any text editor, or through the NADI GIS tool.
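
For example, a small network file describing three points on a river could look like this (the node names here are made up for illustration; the input -> output edge format is explained in the Core Concepts chapter):

headwater-gage -> midstream-dam
midstream-dam -> outlet-gage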

Task System

The Domain Specific Programming Language (DSPL) developed for network analysis in NADI makes network analysis simple and intuitive, so it is easier to understand, interpret, and catch mistakes. The NADI IDE also has network visualization tools built in that can help you visualize the network attributes for visual analysis.

For example, implementing “cumulative sum of streamflow” in nadi:

node<inputsfirst>.cum_sf = node.streamflow + sum(inputs.streamflow);

Trying to do this in Python requires making sure input nodes are run before the output node, so you might have to write a recursive algorithm like this:

def cum_sf(node):
	node.cum_sf = node.streamflow + sum([cum_sf(i) for i in node.inputs()])
	return node.cum_sf

cum_sf(network.outlet())

A common mistake people might make is to write a simple loop like this:

for node in network.nodes():
    node.cum_sf = node.streamflow + sum(
	    [i.streamflow for i in node.inputs()]
	)

This doesn’t make sure input nodes are run before their output node, and it can error out when some variables are not present. NADI provides special syntax for cases like this where you can make sure variables exist before running something.
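
For instance, a rough sketch (assuming the `?` existence check, shown later with output._? and template defaults, also works inside a node selection condition):

# hypothetical: run the function only on nodes where `streamflow` exists
node(streamflow?) render("{_NAME} has streamflow data")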

Extensibility

NADI has two types of plugin systems: users can write their own analysis in any programming language and have it interact with NADI through attributes, or they can write it in Rust and have even more direct interaction.

Who this book is for

This book has sections explaining the concepts of the NADI system, its development notes, a user guide, and a developer guide.

Hence it can be useful for people who:

  • Want to understand the concepts used in NADI,
  • Want to use NADI system for their use case,
  • Want to develop plugins for NADI,
  • Want to contribute to the NADI system packages, etc.

Although not its main purpose, the book also includes resources and links to other materials related to Rust concepts, Geographic Information System (GIS) concepts, hydrology concepts, etc. that readers could potentially benefit from.

How to use this book

You can read this book sequentially to understand the concepts used in the NADI system, and then go through the reference sections for the specific use cases you want to get into the details of.

If you are in a hurry, but this is your first time reading this book, at least read the Core Concepts, then refer to the section you are interested in, or to the Learn by Examples chapter.

Code Blocks

The code blocks contain example code for various languages; the most common are string template, task, and Rust code.

String templates and tasks have custom syntax highlighting that is intended to make it easier for the reader to understand the different semantic blocks.

For task scripts/functions, if relevant to the topic, a Results block might follow immediately, showing the results of the execution.

For example:

network load_file("./data/mississippi.net")
node[ohio] render("{_NAME:case(title)} River")

Results:

{
  ohio = "Ohio River"
}

Task and Rust code blocks might also include lines that are needed to get the results, but are hidden because they are irrelevant to the discussion. In those cases you can use the eye icon on the top right side of the code block to make them visible. Similarly, use the copy icon to copy the visible code to the clipboard.

String Template Syntax Highlight

The syntax highlighting in this book marks any unknown transformers, making mistakes easy to detect.

This shows var = {var:unknown()}, {_var:case(title)}

Besides this, the syntax highlighting can help you detect the variable parts (within {}), lisp expressions (within =()), or commands (within $()) in the template.

Note: commands are disabled, so they won’t run during the template rendering process. But if you are rendering a template to run as a command, then they will be executed during that process.
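
For example, a template mixing the three parts might look like the sketch below; the attribute names are made up, and whether expressions and commands can be mixed inline like this is an assumption based on the descriptions above:

Station: {name?"unknown"}, drainage area: {basin_area?:f(2)}
Large basin: =(> (st+num 'basin_area) 1000)
Rendered on host: $(hostname)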

How to Cite

The sections below show you a bibliography entry in ASCE format, and BibTeX format that you can copy.

Journal Papers: TODO

The papers are currently still being worked on, and will be added here when they are published.

This book

You can cite the link to this book as follows. Make sure to replace the accessed date with today’s date.

Atreya, G. 2025. “Network Analysis and Data Integration (NADI).” Accessed May 1, 2025. https://nadi-system.github.io/.

@misc{PrefaceNetworkAnalysis,
  title = {Network {{Analysis}} and {{Data Integration}} ({{NADI}})},
  author = {Atreya, Gaurav},
  year = {2025},
  url = {https://nadi-system.github.io/},
  urldate = {2025-05-02}
}

Works using Nadi System

Atreya, G., G. Mandavya, and P. Ray. 2024. “Which came first? Streamgages or Dams: Diving into the History of Unaltered River Flow Data with a Novel Analytical tool.” H51L-0865.

@inproceedings{atreyaWhichCameFirst2024,
  title = {Which Came First? {{Streamgages}} or {{Dams}}: {{Diving}} into the {{History}} of {{Unaltered River Flow Data}} with a {{Novel Analytical}} Tool},
  shorttitle = {Which Came First?},
  booktitle = {{{AGU Fall Meeting Abstracts}}},
  author = {Atreya, Gaurav and Mandavya, Garima and Ray, Patrick},
  year = {2024},
  month = dec,
  volume = {2024},
  pages = {H51L-0865},
  urldate = {2025-06-03},
  annotation = {ADS Bibcode: 2024AGUFMH51L.0865A}
}

Network Analysis and Data Integration (NADI)

NADI is a group of software packages that facilitate network analysis and data analysis on data related to networks/nodes.

NADI System consists of:

| Tool | Description |
|------|-------------|
| NADI GIS | Geographic Information (GIS) Tool for Network Detection |
| NADI Task System | Domain Specific Programming Language |
| NADI Plugins | Plugins that provide the functions in the Task System |
| NADI library | Rust and Python library to use in your programs |
| NADI CLI | Command Line Interface to run NADI Tasks |
| NADI IDE | Integrated Development Environment to write/run NADI Tasks |

The GitHub repositories containing the source code:

| Repo | Tool |
|------|------|
| nadi-gis | Nadi GIS |
| nadi-system | Nadi CLI/ IDE/ Core |
| nadi-plugins-rust | Sample Plugins |
| nadi-book | Source for this Nadi Book |

NADI GIS

Geographic Information (GIS) Tool for Network Detection. The main purpose of the NADI GIS is to find the network connectivity between a set of points using a stream network (which can be developed from elevation models, or downloaded from national databases).

NADI GIS can be used as a terminal command or a QGIS plugin; refer to the installation section for how to install it.

NADI Task System

The Task System is a Domain Specific Programming Language (DSL) designed for river network analysis. This is the core of the network analysis, and it is included when you install NADI as a library, CLI, or GUI.

NADI Plugins

The functions available to call in the task system come from plugins. There are many internal plugins with core functions already available, and users can load their own plugins for other functions.

Refer to the plugins section of the book for more details on how to use plugins, how to write them and what to keep in mind while using them.

NADI libraries

Rust and Python libraries to use in your programs. The Rust library nadi_core is available to download/use from cargo with the command cargo add nadi_core.

The Python library requires you to clone the repo and build it with maturin (for now). The future plan includes publishing it on PyPI.
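
A rough sketch of that build (assuming the Python bindings are built from the cloned nadi-system repository; the exact crate/subdirectory is not documented here):

pip install maturin
git clone https://github.com/Nadi-System/nadi-system
cd nadi-system   # the Python bindings may live in a subdirectory; adjust as needed
maturin develop --release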

Rust Libraries

If you are not writing your own rust programs or plugins, you can skip this section.

There are three rust libraries:

| Library | Use |
|---------|-----|
| nadi_core | Core library with data types, and plugin structure |
| nadi_plugin | Rust procedural macro library to write nadi plugins |
| string_template_plus | Library for string templates with variables |
Everything is loaded by nadi_core so you don’t need to load them separately.

NADI Python

When using NADI from the Python library, you only have access to the nadi data types (Node, Network, etc.) and the plugin functions. This is enough for most cases, as the Python language syntax, variables, loops, etc. give you a lot of flexibility in how to do your own analysis. The Python module is structured as follows:

nadi [contains Node, Network, etc]
 +-- functions
 | +-- node [contains node functions]
 | +-- network [contains network functions]
 | +-- env [contains env functions]
 +-- plugins
   +-- <plugin> [each plugin will be added here]
   | +-- node [contains node functions]
   | +-- network [contains network functions]
   | +-- env [contains env functions]
   +-- <next-plugin> and so on ...

The functions are available directly through the functions submodule, or through each plugin in the plugins submodule. An example Python script looks like this:

import nadi
import nadi.functions as fn

net = nadi.Network("data/ohio.network")
for node in net.nodes:
    try:
        _ = int(node.name)
        node.is_usgs = True
        print(fn.node.render(node, "Node {_NAME} is USGS Site"))
    except ValueError:
        node.is_usgs = False

This code shows how to load a network, how to loop through the nodes, and how to use Python logic or nadi functions for each node and assign attributes.

More detail on how to use NADI from python will be explained in NADI Python chapter.

NADI CLI

Command Line Interface to run NADI Tasks.

It can run nadi task files, syntax highlight them for verification, and generate markdown documentation for the plugins. The documentation included in this book (Function List and each plugin’s page, like the Attributes Plugin attrs) is generated with it. The documentation for each plugin function comes from its docstrings in the code; please refer to the how to write plugins section of the book for details on that.

The available options are shown below.

Usage: nadi [OPTIONS] [TASK_FILE]

Arguments:
  [TASK_FILE]  Tasks file to run; if `--stdin` is also provided this runs before stdin

Options:
  -C, --completion <FUNC_TYPE>  list all functions and exit for completions [possible values: node, network, env]
  -c, --fncode <FUNCTION>       print code for a function
  -f, --fnhelp <FUNCTION>       print help for a function
  -g, --generate-doc <DOC_DIR>  Generate markdown doc for all plugins and functions
  -l, --list-functions          list all functions and exit
  -n, --network <NETWORK_FILE>  network file to load before executing tasks
  -p, --print-tasks             print tasks before running
  -s, --show                    Show the tasks file, do not do anything
  -S, --stdin                   Use stdin for the tasks; reads the whole stdin before execution
  -t, --task <TASK_STR>         Run given string as task before running the file
  -h, --help                    Print help
  -V, --version                 Print version
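
For example, the following invocations use only the options listed above (the file and directory names are placeholders):

# run a tasks file against a network file
nadi -n data/mississippi.net analysis.tasks
# run a single task string without a file
nadi -t 'network count()'
# generate markdown documentation for all loaded plugins
nadi --generate-doc docs/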

NADI IDE

NADI Integrated Development Environment (IDE) is a Graphical User Interface (GUI) for the users to write/ run NADI Tasks.

As seen in the image below, the IDE consists of multiple components arranged in a tiling manner. You can drag them around to build your own layout. When you start the IDE, it suggests some layouts and what to open. You can use the buttons on the top right of each pane to:

  • change pane type
  • vertically split current pane
  • horizontally split current pane
  • fullscreen current pane/ restore layout if it’s fullscreen
  • close current pane

Screenshot of NADI IDE

It has the following components:

Text Editor

Open text files, edit and save them.

It comes with syntax highlighting for most languages, and custom highlighting for task and network files.

For task files, it can also show function signatures at the top so you can write tasks easily, knowing what arguments the function needs and what the default values are.

While open inside the IDE, it can also run the tasks by sending them to the terminal, or search the help documentation on functions. Hover over the buttons on the top row to see what each button does, and the keyboard shortcut for it.

Terminal

The terminal is there so you can run NADI in an interactive session. The Read Eval Print Loop (REPL) of NADI is mostly meant to be used inside the IDE to evaluate the tasks from the editor, but you can open it independently as well.

Function Help

This is a GUI with the list of all available plugin functions. You can expand the sidebar on the left to search and browse functions. You can filter by type of function (node, network, env) with the buttons. When you click a function, you can read its documentation on the right side.

The capabilities of the iced GUI libraries are limited right now, so you cannot select or copy text from the help. Please refer to the documentation online to do that, or generate the documentation locally using the nadi-cli tool.

Network Viewer

This is a pane where the network is visualized. It is a very basic visualization to see the connections and is not optimized for drawing. Please avoid using this pane (making it visible) with large networks, as it takes a lot of computation to draw it each frame.

Attribute Browser

When you click on a node in the Network Viewer, this pane opens/updates to show the attributes of that node. There is no way to edit the attributes from here, which is an intentional design choice, as attributes should be assigned from tasks so that they are reproducible. For temporary assignments use the terminal.

SVG Viewer

This is a basic utility that can open an SVG file from disk and visualize it. You can click the refresh button to re-read the same file. This is intended as a quick way to check the SVG saved/exported from tasks. It is not a full-fledged SVG renderer, so open the files in image viewers or browsers to see how they really look.

Trivia

  • Nadi means river in Nepali (and probably in many South Asian languages).
  • The first prototype of NADI was "Not Available Data Integration", as it was meant to be an algorithm to fill data gaps using network information, but it was made more generic for many network-related analyses.

Installation

The Nadi System is a suite of software packages, each with a different installation method. The packages are planned to be uploaded to crates.io or pypi.org later to make them easier to install.

For now, you can either get the binaries compiled for different operating systems from the Releases page of the GitHub repo, or get the source code using git and build the packages using cargo.

Downloading Binaries

Go to the repo of each component and refer to the releases section for binaries of different versions.

To set up the nadi-system to load the plugins, you have to place them inside the directory included in the NADI_PLUGIN_DIRS environmental variable. Refer to your operating system’s documentation on how to set environmental variables.
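
For example, on Linux or MacOS you could add a line like the following to your shell startup file (the path is just a placeholder); on Windows, set the same variable through the Environment Variables dialog or the setx command:

# bash/zsh: point NADI to the folder containing the compiled plugin files
export NADI_PLUGIN_DIRS="$HOME/nadi/plugins"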

The binaries should be able to run directly without extra steps. If you get a security warning because the binaries are not signed, you might have to ignore it.

Building from Source

This is currently the preferred way of installing nadi-system (and nadi-gis for Linux and MacOS). Although it involves a few more steps, it makes sure the compiled program is compatible with your OS.

Prerequisites

The prerequisites for building from source are:

  • git [Optional]: to clone the repo; you can also directly download a zip from GitHub
  • cargo: To build the binaries from source.
  • gdal [Optional]: Only for nadi_gis binary and plugin.

To install git, refer to the instructions for your operating system from the official page.

For cargo, follow the instructions to install the Rust toolchain for your operating system from the official page.

Installing gdal can be a little complicated on Windows. For Linux, use your package manager to install the gdal and/or gdal-dev packages. Mac users can install gdal using homebrew. For Windows, follow the instructions from the official website; after installation you might have to make some changes to environmental variables to let cargo know where your gdal binaries/header files are for the compilation to be successful. More details are provided in the NADI GIS section.

If you use Linux or Mac (with homebrew), then the installation of prerequisites should be easy. But if you are not confident about setting up gdal to compile nadi_gis, use the binaries provided in the previous steps.

NADI System

Building the nadi-system repository will produce the binaries for nadi, nadi-ide, nadi-help, nadi-editor, etc. nadi is the command line interface to run nadi tasks, parse/validate syntax, etc., while nadi-ide is the program to graphically develop nadi tasks and run them.

Assuming you have git and cargo,

git clone https://github.com/Nadi-System/nadi-system
cd nadi-system
cargo build --release

To run one of the binaries from the nadi system, use the command cargo run with the binary name.

For example, the following will run the nadi-ide:

cargo run --release --bin nadi-ide

The compiled binaries will be saved in the target/release directory; you can copy and distribute them. The binaries do not need any other files to run.

Plugin files, if present in the system, are automatically loaded from the NADI_PLUGIN_DIRS environmental variable. Look at the section on installing the plugins below.

Note: all programs will compile and run on Windows, Linux, and MacOS, while only nadi-cli and mdbook-nadi will run on Android (Termux). nadi-ide and family need GUI libraries that are not available for Android (Termux) yet.

NADI GIS

NADI GIS uses gdal to read/write GIS files, so it needs to be installed. Please refer to gdal installation documentation for that.

Windows

First, download compiled gdal from here:

  • https://www.gisinternals.com/sdk.php

Then download clang from here:

  • https://github.com/llvm/llvm-project/releases

Extract them into folders, and then set environmental variables to point to them:

  • GDAL_VERSION: Version of gdal e.g. ‘3.10.0’
  • LIBCLANG_PATH: Path to the lib directory of clang
  • GDAL_HOME: Path to the gdal that has the subdirectories like bin, lib, etc.

You can also follow the errors from the rust compilers as you compile to set the correct variables.

Finally you can get the source code and compile nadi-gis with the following command:

git clone https://github.com/Nadi-System/nadi-gis
cd nadi-gis
cargo build --release

This will generate the nadi-gis binary and the gis.dll plugin in the target/release folder. They need to be run alongside the gdal shared libraries (.dlls): place the binaries in the same folder as the dlls from gdal and run them. The same applies to using the gis.dll plugin from nadi, nadi-ide, etc.; those binaries should be run with gdal’s dlls available to be able to load the gis plugin.

Linux and Mac

Assuming you have git, cargo, and gdal installed in your system you can build it like this:

git clone https://github.com/Nadi-System/nadi-gis
cd nadi-gis
cargo build --release --features bindgen

The bindgen feature will link the nadi-gis binary with the gdal from your system, so you do not have to distribute gdal with the binary for your OS.

If you do not have gdal installed on your system, you can still build nadi-gis without the bindgen feature. This will still require gdal to be available and distributed alongside the binary at runtime.

cargo build --release

QGIS Plugin

The nadi-gis repo also contains a QGIS plugin that can be installed to run it through QGIS. The plugin will use the nadi-gis binary in your PATH if available. The repo also contains the nadi plugin that can be loaded into the nadi system to import/export GIS files into/from the system.

You can download the zip file for the plugin from the releases page and use the “Install from Zip” option in the QGIS plugins tab, or copy the nadi directory inside qgis to your QGIS Python plugin directory.

Refer to the QGIS plugins page for more instructions. In the future, we plan to publish the plugin so that you can simply add it from QGIS without downloading it from here.

Nadi GIS Plugin

The nadi plugin in this repo provides functions to import attributes and geometries from GIS files, and to export them into GIS files.

Nadi Plugins

Out of the two types of plugins, the executable plugins are just simple commands; they do not need to be installed alongside nadi. Just make sure the executables that you are using from nadi can be found in the PATH. A simple way to verify that is to try to run them from the terminal and see if they work.

The compiled plugins can be loaded by setting the NADI_PLUGIN_DIRS environmental variable. The variable should be the path to the folder containing the nadi plugins (in .dll, .so, or .dylib format for Windows, Linux, and Mac respectively). You can write your own plugins based on our examples and compile them.

Officially available plugins are in the nadi-plugins-rust repository.

Assuming you have git and cargo,

git clone https://github.com/Nadi-System/nadi-plugins-rust
cd nadi-plugins-rust
cargo build --release

The plugins will be inside the target/release directory. Copy them to the NADI_PLUGIN_DIRS directory for nadi to load them.

You can take any one of the plugins as an example to build your own, or follow the plugin development instructions from the plugins chapter.

Nadi GIS

Nadi GIS is available as a CLI tool and a QGIS plugin. The CLI tool has the following functions:

Usage: nadi-gis [OPTIONS] <COMMAND>

Commands:
  nid      Download the National Inventory of Dams dataset
  usgs     Download data from USGS NHD+
  layers   Show list of layers in a GIS file
  check    Check the stream network to see outlet, branches, etc
  order    Order the streams, adds order attribute to each segment
  network  Find the network information from streams file between points
  help     Print this message or the help of the given subcommand(s)

Options:
  -q, --quiet  Don't print the stderr outputs
  -h, --help   Print help

The important functions are:

  • Download NID and USGS NHD+ data,
  • Check stream network for validity of DAG (Directed Acyclic Graph) required for NADI,
  • Stream ordering for visual purposes,
  • Network detection between points of interest using the stream network

You can use the help command on any of the subcommands for more help. For example, getting the usgs subcommand’s help with nadi-gis help usgs gives us:

Download data from USGS NHD+

Usage: nadi-gis usgs [OPTIONS] --site-no <SITE_NO>

Options:
  -s, --site-no <SITE_NO>
          USGS Site number (separate by ',' for multiple)

  -d, --data <DATA>
          Type of data (u/d/t/b/n)
          
          [upstream (u), downstream (d), tributaries (t), basin (b), nwis-site (n)]
          
          [default: b]

  -u, --url
          Display the url and exit (no download)

  -v, --verbose
          Display the progress

  -o, --output-dir <OUTPUT_DIR>
          [default: .]

  -h, --help
          Print help (see a summary with '-h')

NADI QGIS

The QGIS plugin for nadi has a subset of the CLI functionality. It can be accessed from the Processing Toolbox.

QGIS Processing Toolbox

You can run the tools from there and use the layers in QGIS as inputs. The QGIS plugin will first try to find the nadi-gis binary on your PATH and use it; if it is not found, it will try to use the binary provided with the plugin. It is preferred to have nadi-gis available in the PATH and running without errors.

Example

The examples here are given using both the QGIS plugin and the CLI tool. The CLI tool is great for quickly running things and doing things in batch, while the QGIS plugin is better for visualization and manual fixes using other GIS tools.

Using QGIS Plugin

First, the data is downloaded through the Download USGS Data tool. As shown in the screenshot below, input the USGS site ID and the data type you want to download.

QGIS Download

You will need tributaries for the upstream stream network, and nwis-site to download the USGS NWIS sites upstream of the location. We will use those two for the example. If you have national data from other sources, you can use the basin polygon to crop them.

The Stream Order tool is mostly for visual purposes. The figure below shows the result of stream ordering on the right compared to the raw download on the left.

Stream Order Result

After you have the streams (tributaries), you can use the Check Streams tool to see if there are any errors. It will give all the nodes and their categories; you can filter them to see if the network has branches, or if it has more than one outlet. The figure below shows the branches with red dots. If we zoom in, we can see how the bifurcation on the stream is detected, and how the stream order calculation is confused there.

Check Streams Result

Note: Clicking the checkbox for different node types in the Layers tab seems to be glitchy in my QGIS, so you might have to go to Symbology to turn the different node types on/off here. If it's a bug from the plugin code being incorrect, I'll fix it later.

The Find Connections tool will find the connections between the points using the stream network. The results below show the tool being run on the NWIS points.

Find Connections Result

If we select the simplify option, it will only save the start and end points of each connection instead of the whole stream.

Find Connections Result Alt

Of course you can run Stream Order on the results to get a more aesthetically pleasing result.

Find Connections Result with Order

Using CLI

An example of running nadi-gis using the CLI follows these steps:

Download data

We’ll download the streamlines and the NWIS Sites from USGS for station 03217200 (Ohio River at Portsmouth, OH).

nadi-gis usgs -s 03217200 -d n -d t -o output/

This will download two files:

output/03217200_nwis-site.json  output/03217200_tributaries.json

Now we can use check command to see if there are any problems with the streams.

nadi-gis check output/03217200_tributaries.json

That gives us the following output:

Invalid Streams File: Branches (826)
* Outlet: 1
* Branch: 826
* Confluence: 30321
* Origin: 29591

We can generate a GIS file to locate the branches and see if those are significant. Refer to the help for check or use the QGIS plugin.

And to find the connections, we use network subcommand like this:

nadi-gis network -i output/03217200_nwis-site.json output/03217200_tributaries.json

Output:

Outlet: 3221 (-82.996916801, 38.727624498) -> None
3847 -> 3199
2656 -> 2644
399 -> 1212
2965 -> 3942
2817 -> 6236
5708 -> 4733
2631 -> 5741
201 -> 2101
2066 -> 2317
3770 -> 1045
... and so on

Since this is not as useful, we can use the flags in the network subcommand to use a different id, and save the results to a network file.

First we can use layers subcommand to see the available fields in the file:

nadi-gis layers output/03217200_nwis-site.json -a

which gives us:

03217200_nwis-site
  - Fields:
    + "type" (String)
    + "source" (String)
    + "sourceName" (String)
    + "identifier" (String)
    + "name" (String)
    + "uri" (String)
    + "comid" (String)
    + "reachcode" (String)
    + "measure" (String)
    + "navigation" (String)

Using comid as the id for points, and saving the results:

nadi-gis network -i output/03217200_nwis-site.json output/03217200_tributaries.json -p comid -o output/03217200.network

The output/03217200.network file will have the connections like:

15410797 -> 15411587
6889212 -> 6890126
8980342 -> 10220188
19440469 -> 19442989
19390000 -> 19389366
6929652 -> 6929644
... and so on

Make sure you use a field whose values are unique and are valid identifiers in the NADI system.

Core Concepts

This section contains a brief explanation of core concepts. Refer to the Reference section for the full details on each.

Node

A Node is a point in a network. A Node can have multiple input nodes and only one output node. A Node can also have multiple attributes identifiable by their unique names, along with timeseries values also identifiable by their names.

If you understand graph theory, then a node in a nadi network is the same as a node in a graph.

Network

A Network is a collection of nodes. The network can also have attributes associated with it. The connection information is stored within the nodes themselves, but the Network keeps the nodes ordered based on their connection information, so that when you loop through the nodes from first to last, you will always find an output node before its input nodes.

A condition on a nadi network is that it can only be a directed graph with a tree structure.

Example Network file:

# network consists of edges where input node goes to output node
# each line is of the format: input -> output
tenessee -> ohio
# if your node name has characters outside of a-zA-Z_, you need to
# quote them as strings
ohio -> "lower-mississippi"
"upper-mississippi" -> "lower-mississippi"
missouri -> "lower-mississippi"
arkansas -> "lower-mississippi"
red -> "lower-mississippi"

The given network can be visualized using svg_save function.

network load_file("./data/mississippi.net")
network command("mkdir -p output")
network svg_save(
   "./output/network-mississippi.svg",
	label="[{INDEX}] {_NAME:repl(-, ):case(title)}",
	bgcolor="gray"
)

Results:

You can assign different graphical properties through node properties.

network load_file("./data/mississippi.net")
node[red].visual.nodecolor = "red";
node[ohio].visual.linecolor = "blue";
node[ohio].visual.linewidth = 3;
node["upper-mississippi", red].visual.nodesize = 8;
node[red].visual.nodeshape = "triangle";
node["upper-mississippi"].visual.nodeshape = "ellipse:0.5";
network svg_save(
   "./output/network-mississippi-colors.svg",
   label="[{INDEX}] {_NAME:repl(-, ):case(title)}",
	bgcolor="gray"
)

Results:

Attributes

Attributes are TOML-like values. They can be one of the following types:

| Type Name | Rust Type | Description |
|-----------|-----------|-------------|
| Bool | bool | Boolean values (true or false) |
| String | RString | Quoted string values |
| Integer | i64 | Integer values (numbers) |
| Float | f64 | Float values (numbers with decimals) |
| Date | Date | Date (yyyy-mm-dd formatted) |
| Time | Time | Time (HH:MM, HH:MM:SS formatted) |
| DateTime | DateTime | Date and Time separated by a space or T |
| Array | RVec<Attribute> | List of any attribute values |
| Table | AttrMap | Key-value pairs of any attribute values |

You can write attributes directly into the task system to assign them or use them in functions. You can also load attributes from a file into the node/network.
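
For example, the different attribute types can be assigned directly in a task file (a sketch using the assignment syntax from the Task examples; the attribute names are arbitrary, and the array literal follows the TOML-like syntax described above):

network.name = "Ohio Basin";
node.outlet_is_gage = true;
node.mean_streamflow = 123456.0;
node.streamflow_start = 1930-06-07;
env.thresholds = [0.1, 0.5, 0.9];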

Example Attribute File:

river = "Ohio River"
outlet = "Smithland Lock and Dam"
outlet_is_gage = true
outlet_site_no = ""
streamflow_start = 1930-06-07
mean_streamflow = 123456.0
obs_7q10 = 19405.3
nat_7q10 = 12335.9
num_dams_gages = 2348

String Template

String templates are strings with dynamic components that can be rendered for each node based on the node attributes.

A simple template can be like below:

Hi, my name is {name}, my address is {address?"N/A"}.
I wrote this document on {%A}, exact date: {%Y-%m-%d}.

Results (with: name=John; address=123 Road, USA):

Hi, my name is John, my address is 123 Road, USA.
I wrote this document on Wednesday, exact date: 2025-06-04.

With more complicated templates, we would be able to generate documents with text and images based on the node attributes as well.

For example the following template can be used to generate a table.

| Name             | Index   |
|------------------|---------|
<!-- ---8<--- -->
| {_NAME:case(up)} | {INDEX} |
<!-- ---8<--- -->
network load_file("./data/mississippi.net");
network echo(render_template("./data/example.template"))

Results:

| Name | Index |
|------|-------|
| LOWER-MISSISSIPPI | 0 |
| UPPER-MISSISSIPPI | 1 |
| MISSOURI | 2 |
| ARKANSAS | 3 |
| RED | 4 |
| OHIO | 5 |
| TENESSEE | 6 |

Of course, there are better ways to generate a table than this, but this shows how flexible the template system is.

Task

The Task system acts like a scripting language for the nadi system. A Task consists of getting/evaluating/setting attributes in the environment, network, or nodes. The values that can be evaluated are expressions that consist of literal values, variables, or function calls, which can be environment, node, or network functions. Functions are unique by their names, and can have default values if users do not pass all arguments.

The code examples throughout this book that are used to generate network diagrams, tables, etc. are run using the task system.

Here is an example contents of a task file:

# sample .tasks file which is like a script with functions
node<inputsfirst> print_attrs("uniqueID")
node show_node()
network save_graphviz("/tmp/test.gv")
node<inputsfirst>.cum_val = node.val + sum(inputs.cum_val);

node[WV04113,WV04112,WV04112] print_attr_toml("testattr2")
node render("{NAME} {uniqueID} {_Dam_Height_(Ft)?}")
node list_attr("; ")
# some functions can take variable number of inputs
network calc_attr_errors(
    "Dam_Height_(Ft)",
    "Hydraulic_Height_(Ft)",
    "rmse", "nse", "abserr"
)
node sum_safe("Latitude")
node<inputsfirst> render("Hi {SUM_ATTR}")
# multiple line for function arguments
network save_table(
	"test.table",
	"/tmp/test.tex",
	true,
	radius=0.2,
	start = 2012-09-20,
	end = 2012-09-23 12:04
	)
node.testattr = 2
node set_attrs_render(testattr2 = "{testattr:calc(+2)}")
node[WV04112] render("{testattr} {testattr2}")

# here we use a complicated template that can do basic logic handling
node set_attrs_render(
    testattr2 = "=(if (and (st+has 'Latitude) (> (st+num 'Latitude) 39)) 'true 'false)"
)
# same thing can be done if you need more flexibility in variable names
node load_toml_string(
    "testattr2 = =(if (and (st+has 'Latitude) (> (st+num 'Latitude) 39)) 'true 'false)"
)
# selecting a list of nodes to run a function
node[
	# comment here?
    WV04113,
    WV04112
] print_attr_toml("testattr2")
# selecting a path
node[WV04112 -> WV04113] render("=(> 2 3)")

Node Function

A node function runs on each node. It takes arguments and keyword arguments.

For example, the following node function takes multiple attribute names and prints them. The signature of the node function is print_attrs(*args).

network load_file("./data/mississippi.net")
node print_attrs("INDEX", name=false)

Results:

INDEX = 0
INDEX = 1
INDEX = 2
INDEX = 3
INDEX = 4
INDEX = 5
INDEX = 6

Only the INDEX is printed, as the nodes do not have any other attributes (and the node name is suppressed with name=false).

Selective Execution

You can selectively run only a few nodes, or change the order in which the nodes are executed.

Given this network:

Network Diagram

Inverse Order

network load_file("./data/mississippi.net")
node<inverse> print_attrs("NAME")

Results:

NAME = "tenessee"
NAME = "ohio"
NAME = "red"
NAME = "arkansas"
NAME = "missouri"
NAME = "upper-mississippi"
NAME = "lower-mississippi"

List of Nodes

network load_file("./data/mississippi.net")
node[tenessee,"lower-mississippi"] print_attrs("NAME")

Results:

NAME = "tenessee"
NAME = "lower-mississippi"

Path of Nodes

network load_file("./data/mississippi.net")
node[tenessee -> "lower-mississippi"] print_attrs("NAME")

Results:

NAME = "tenessee"
NAME = "ohio"
NAME = "lower-mississippi"

As we can see in the diagram, the path from tenessee to lower mississippi includes the ohio node.

Network Function

A network function runs on the network as a whole. It takes arguments and keyword arguments.

For example, the following network function takes a file path as input to save the network in graphviz format:

save_graphviz(
	outfile [PathBuf],
	name [String] = "network",
	global_attrs [String] = "",
	node_attr [Option < Template >],
	edge_attr [Option < Template >]
)

Note that if the arguments have default values or are optional, you do not need to provide them.

For example, you can simply call the above function like this:

network load_file("./data/mississippi.net")
network save_graphviz("./output/test.gv")
network clip()
# the path link are relative to /src
network echo("./output/test.gv")

Results:

digraph network {

"upper-mississippi" -> "lower-mississippi"
"missouri" -> "lower-mississippi"
"arkansas" -> "lower-mississippi"
"red" -> "lower-mississippi"
"ohio" -> "lower-mississippi"
"tenessee" -> "ohio"
}

With extra commands you can also convert it into an image

network load_file("./data/mississippi.net")
network save_graphviz("./output/test.gv")
network command("dot -Tsvg ./output/test.gv -o ./output/test.svg")
network clip()
# the link path needs to be relative to this file
network echo("../output/test.svg")

Results:

Further Reading

If you need help on any function, use help as a task. You can use help node or help network for specific help. You can also browse through the function help window in the nadi-ide for help on each function.

help node render

Results:

node render (template: '& Template', safe: 'bool' = false)
Render the template based on the node attributes
# Arguments
- `template: & Template` String template to render
- `safe: bool` [def = false] if render fails keep it as it is instead of exiting


For more details on the template system. Refer to the String
Template section of the NADI book.

Or you can use nadi --fnhelp <function> using the nadi-cli.
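
For example, to get function help from the command line (the exact output may differ from the task-level help above):

nadi --fnhelp render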

Now that you have an overview of the nadi system’s data structures, we’ll jump into the software structure and how to set up and use the system.

If you want more details on any of the data structures, refer to the Developer’s references, or the library documentation.

Learn by Examples

| Topic | Learn About |
|-------|-------------|
| Attributes | Setting and getting attributes |
| Control Flow | Control flow: if, else, while loops, etc. |
| Connections | Loading and modifying connections |
| Counting | Counting nodes in a network, conditionally |
| Cumulative | Calculating network cumulative sums |
| Import Export | Importing and exporting multiple data formats |
| String Template | Using string templates to do various things |

Attributes

There are 3 kinds of attributes in nadi: environment, network, and node attributes. As their names suggest, environment attributes are general attributes available in the current context, network attributes are associated with the currently loaded network, and node attributes are associated with each node.

nadi has special syntax where you can get/set attributes for multiple nodes at once.

network load_str("a -> b\n b -> d\n c -> d\n");
# environmental attribute
env.someattr = 123;
env.other = 1998-12-21;
env array(someattr, other)
# network attribute
network.someattr = true;
network.someattr
# node attributes
node.someattr = "string val";

node.someattr

Results:

[123, 1998-12-21]
true
{
  d = "string val",
  c = "string val",
  b = "string val",
  a = "string val"
}

As you saw with the array function, the variables used are inferred as attributes of the current env/network/node task.

You can use attributes from outside of the current task type in some cases:

  • env/network variables can be used anywhere
  • node variables are only valid in node tasks
  • node tasks have special variable types like inputs and output

network load_str("a -> b\n b -> d\n c -> d\n");
# environmental attribute
env.someattr = 123;
env.other = 1998-12-21;
# network attribute
network.someattr = true;

# using network attr in env task
env array(network.someattr, other)

# using nodes in network task
network nodes.NAME

Results:

[true, 1998-12-21]
["d", "c", "b", "a"]

Similarly inputs:

network load_str("a -> b\n b -> d\n c -> d\n");

node inputs.NAME

Results:

{
  d = ["b", "c"],
  c = [],
  b = ["a"],
  a = []
}

Refer to the network diagram below to verify the output are correct:

network load_str("a -> b\n b -> d\n c -> d\n");
network svg_save("./output/attrs-simp.svg")

Results:

Control Flow

Tasks have some basic control flow required to write programs: if-else branches and while loops.

Conditional (If-Else) Blocks

There are two kinds of if-else branches. One is at the expression level, which means there have to be both if and else branches, as it expects a return value. The following example shows an expression with an if-else block.

env.newvar = if (12 > 90) {"yes"} else {"no"};
env.newvar

Results:

"no"

Trying to do it without an else block will result in a parse error; for example, the code below is invalid:

env.newvar = if (12 > 90) {"yes"};
env.newvar

That’s when you can use the if-else block at the task level. This can be an if block only, as the execution blocks are tasks instead of expressions.

Here, since the condition is false, the task inside the block is never executed; hence env.newvar is empty.

if (12 > 90) {
	env.newvar = "yes";
}
env.newvar

*Error*:

EvalError: Attribute not found

While Loop

The while loop runs the tasks inside the block repeatedly while the condition is satisfied. There is an arbitrary iteration limit of 1,000,000 for now, just in case people write an infinite loop.

env.somevar = 1;
while (somevar < 10) {
	env.somevar
	env.somevar = env.somevar + 1;
}

Results:

1
2
3
4
5
6
7
8
9

This can be used to repeat a set of tasks for various reasons.

If your tasks take a long time to run, note that the while loop needs to run completely before the output can be processed and displayed, so even if your output is not printed yet, the loop is running. This will be fixed in a future version of the program.

Connections

Connections between the nodes are the most important part of nadi. You can load networks from a file or a string. The network is a simple multiline text with one edge (input -> output) on each line. Comments starting with # are supported.

Default is Empty Network

Tasks are run by default with an empty network, so you might still be able to work with network attributes, but there will be no nodes. Also note that when you load a network, it replaces the old one, including the attributes.

network.someattr = 1234;
network.someattr

Results:

1234

But we can see the nodes are not there,

network count()
network nodes.NAME

Results:

0
[]

Trying to run node functions on the empty network means nothing is run

node render("{NAME}")

Results:


Loading Network from String

Here assume we have a network consisting of nodes of dams and gages like the following where dam nodes start with d and gages with g:

network load_str("
d1 -> d2
d3 -> g2
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
network svg_save(
   "./output/simple-count.svg",
	label="[{INDEX}] {_NAME}"
)

Results:

Loading Network from a File

we can load a network from a file:

network load_file("./data/mississippi.net");
network svg_save(
   "./output/ex-network-conn.svg",
	label="[{INDEX}] {_NAME}"
)

Results:

Modifying the network

You can modify the network after loading it as well. The example below extracts just the nodes that are dams. Compare this with the previous network to see how the connections are retained during the subsets.

network load_str("
d1 -> d2
d3 -> g2
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
node.is_dam = NAME match "^d[0-9]+";
network subset(nodes.is_dam);
network svg_save(
   "./output/simple-count-subset.svg",
	label="[{INDEX}] {_NAME}"
)

Results:

This can be useful when you want to remove nodes that do not satisfy some selection criteria for your analysis without having to redo the network detection part.

Counting Nodes

Here assume we have a network consisting of nodes of dams and gages like the following where dam nodes start with d and gages with g:

network load_str("
d1 -> d2
d3 -> g2
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
network svg_save(
   "./output/simple-count.svg",
	label="[{INDEX}] {_NAME}"
)

Results:

Simply counting the number of nodes, or of certain types of nodes, in a network is done through the count function.

network load_str("
d1 -> d2
d3 -> g2
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
node.g_node = NAME match "^g[0-9]+";
network count()
network count(nodes.g_node)
network count(nodes.g_node) / count()

Results:

7
3
0.42857142857142855

When you call a network function, you get one output, while a node function will give you an output for each node, as here:

network load_str("
d1 -> d2
d3 -> g2
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
node.g_node = NAME match "^g[0-9]+";
node.g_node

Results:

{
  g3 = true,
  d4 = false,
  g2 = true,
  d3 = false,
  g1 = true,
  d2 = false,
  d1 = false
}

Always be careful that a node function is run for all the nodes separately; if you are running something that does not use any variables from the node, then you can use a network function or an environment function to get the results.

Counting the number of nodes upstream of each node gives us the order of the nodes.

network load_str("
d1 -> d2
d3 -> g2
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
node<inputsfirst>.nodes_us = 1 + sum(inputs.nodes_us);
network svg_save(
   "./output/simple-count-1.svg",
	label="{_NAME} = {nodes_us}"
)

Results:

We can add a condition and count the nodes that satisfy that condition only. Like counting the number of dams upstream of each node (including the node).

network load_str("
d1 -> d2
d3 -> g2
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
node.is_dam = NAME match "^d[0-9]+";
node<inputsfirst>.dams_us = int(is_dam) + sum(inputs.dams_us);
network svg_save(
   "./output/simple-count-2.svg",
	label="{_NAME} = {dams_us}"
)

Results:

You can similarly count the number of gages downstream. Here we need a conditional, unlike in the previous cases, as not all nodes have an output. In the case of inputs, a leaf node would have no inputs, but sum([]) would still give a valid output of 0. For nodes without an output node, however, the variable type output fails with a NoOutputNode error, so we add a conditional check to avoid that.

network load_str("
d1 -> d2
d3 -> g2
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
node.is_gage = NAME match "^g[0-9]+";
node<outputfirst>.gages_ds = int(is_gage) + if (output._?) {
	output.gages_ds
	} else {
	0
};
network svg_save(
   "./output/simple-count-3.svg",
	label="{_NAME} = {gages_ds}"
)

Results:

Here the condition (output._?) checks whether the node has an output by checking for the dummy variable _, which is present in all nodes/networks.

Cumulative Sum

Here we can use the stream ordering formula to calculate the stream order for each node:

network load_str("
d1 -> d2
d3 -> g1
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
node<inputsfirst>.stream_ord = max(inputs.stream_ord, 1) + int(count(inputs._?) > 1);
network svg_save(
   "./output/cumulative-1.svg",
	label="{_NAME} = {stream_ord}"
)

Results:

The first part takes the maximum order of the input nodes, then the second part int(count(inputs._?) > 1) checks if there is more than one input, adding one to the order when multiple streams combine into one. You can use the function inputs_count() instead of count(inputs._?) to do the same thing, as shown below.
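
A sketch of the same calculation using inputs_count():

node<inputsfirst>.stream_ord = max(inputs.stream_ord, 1) + int(inputs_count() > 1);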

That is the core of the NADI Task System: you can write functions that have their own logic and then load them into the system. You can then use the syntax and the network-based analysis methods of NADI with those functions.

And of course, we can visualize the different order of streams for easier understanding.

network load_str("
d1 -> d2
d3 -> g1
d2 -> g1
g1 -> d4
g2 -> d4
d4 -> g3
");
node<inputsfirst>.stream_ord = max(inputs.stream_ord, 1) + int(count(inputs._?) > 1);
node.visual.linewidth = stream_ord / 2;
node(stream_ord == 1).visual.linecolor = "green";
node(stream_ord == 2).visual.linecolor = "blue";
node(stream_ord == 3).visual.linecolor = "red";
network svg_save(
   "./output/cumulative-2.svg",
	label="{_NAME} = {stream_ord}"
)

Results:

Import Export Files

Similar to how you can load network files, you can load attributes from files as well. Direct load of TOML format is supported from the internal plugins, while you might need external plugins for other formats.

The load_attrs function takes a template and reads a different file for each node to load the attributes from.

network load_file("data/ohio.network")
node attributes.load_attrs("data/attrs/{_NAME}.toml")
network svg_save(
  "output/ohio-import-export.svg",
  label="{_NAME} (A = {basin_area?:f(2)})",
  height=700,
  bgcolor="gray"
)

Results:

You can use the render function to see if the files being loaded are correct. Here we can see the examples for the first 4 nodes:

network load_file("data/ohio.network")
node(INDEX<4) render("data/attrs/{_NAME}.toml")

Results:

{
  smithland = "data/attrs/smithland.toml",
  golconda = "data/attrs/golconda.toml",
  old-shawneetown = "data/attrs/old-shawneetown.toml",
  mountcarmel = "data/attrs/mountcarmel.toml"
}

You can also read attributes from a string, so you can combine that with files.from_file and load it.

network load_file("data/ohio.network")
env.somevalue = attributes.parse_attrmap(
	files.from_file("data/attrs/smithland.toml")
);
env.somevalue.basin_area
env.somevalue.length

Results:

371802.16
1675.95

You can export csv files

network load_file("data/ohio.network")
node attributes.load_attrs("data/attrs/{_NAME}.toml")
network table.save_csv("output/ohio-export.csv", ["NAME", "basin_area", "length"])
network command("cat output/ohio-export.csv | head", echo=true)

Results:

$ cat output/ohio-export.csv | head
NAME,basin_area,length
"smithland",371802.16,1675.95
"golconda",370942.26,1701.32
"old-shawneetown",363656.85,1772.27
"mountcarmel",74359.92,1918.08
"jt-myers",277962.45,1791.07
"evansville",275482.9,1878.29
"calhoun",18540.88,1992.5
"newburgh",253065.62,1903.58
"cannelton",249382.5,1993.72

GIS Files

The examples below require the `gis` external plugin from `nadi-gis` repository to work. Make sure you have the plugin file in the directory in your `NADI_PLUGIN_DIRS` environmental variable.

First, we make GIS files by exporting. The image below shows the resulting points (red) from the shapefile and connections (black) from the Geopackage file when we visualize them in QGIS (with a background of terrain and the Ohio River tributaries).

network load_file("data/ohio.network")
node attributes.load_attrs("data/attrs/{_NAME}.toml")
node.geometry = render("POINT ({lon} {lat})");
network gis.gis_save_nodes(
  "output/ohio-nodes.shp",
  "geometry",
  {
    NAME = "String",
	basin_area = "Float",
	length = "Float"
  }
)
# Exporting the edges
network gis.gis_save_connections(
  "output/ohio-connections.gpkg",
  "geometry"
)

Results:

The geometry attributes should be WKT String.

Now we are using the generated GIS files to load the network and the attributes:

network gis.gis_load_network("output/ohio-connections.gpkg", "start", "end")
network gis.gis_load_attrs("output/ohio-nodes.shp", "NAME")

network svg_save(
  "output/ohio-from-gis.svg",
  label="{_NAME} (A = {basin_area?:f(2)}; L = {length:f(1)})",
  height=700,
  bgcolor="gray"
)

Results:

As we can see, the plugins make it easier to interoperate with a lot of different data formats. Here the GIS plugin will support any file types supported by gdal. Similarly, other formats can be supported by writing plugins.

String Templates

Nadi Extension Capabilities

The Nadi System can be extended for custom use cases in the following ways:

All Plugin Functions

All the functions available on this instance of nadi, are listed here.

Env Functions

| Plugin | Function | Help |
|--------|----------|------|
| attributes | float_div | map values from the attribute based on the given table |
| attributes | float_transform | map values from the attribute based on the given table |
| core | int | make an int from the value |
| files | from_file | Reads the file contents as string |
| debug | echo | Echo the string to stdout or stderr |
| core | min | Minimum of the variables |
| core | sum | Sum of the variables |
| core | concat | Concat the strings |
| logic | ifelse | Simple if else condition |
| logic | lt | Greater than check |
| core | max | Minimum of the variables |
| core | day | day from date/datetime |
| core | append | append a value to an array |
| core | count_str | Get a count of unique string values |
| core | float | make a float from value |
| core | month | month from date/datetime |
| attributes | parse_attr | Set node attributes based on string templates |
| debug | debug | Print the args and kwargs on this function |
| attributes | strmap | map values from the attribute based on the given table |
| core | year | year from date/datetime |
| core | length | length of an array or hashmap |
| core | unique_str | Get a list of unique string values |
| debug | clip | Echo the ---8<--- line for clipping syntax |
| debug | sleep | sleep for given number of milliseconds |
| files | to_file | Writes the string to the file |
| logic | gt | Greater than check |
| logic | eq | Greater than check |
| logic | and | Boolean and |
| logic | or | boolean or |
| logic | not | boolean not |
| core | prod | Product of the variables |
| core | array | make an array from the arguments |
| attributes | parse_attrmap | Set node attributes based on string templates |
| regex | str_match | Check if the given pattern matches the value or not |
| regex | str_replace | Replace the occurances of the given match |
| core | count | Count the number of true values in the array |
| core | type_name | Type name of the arguments |
| core | isna | check if a float is nan |
| core | str | make a string from value |
| core | isinf | check if a float is +/- infinity |
| attributes | float_mult | map values from the attribute based on the given table |
| core | attrmap | make an array from the arguments |
| core | max_num | Minimum of the variables |
| files | exists | Checks if the given path exists |
| logic | all | check if all of the bool are true |
| regex | str_filter | Check if the given pattern matches the value or not |
| core | min_num | Minimum of the variables |
| regex | str_find | Find the given pattern in the value |
| logic | any | check if any of the bool are true |
| regex | str_find_all | Find all the matches of the given pattern in the value |
| regex | str_count | Count the number of matches of given pattern in the value |
| render | render | Render the template based on the node attributes |

Node Functions

| Plugin | Function | Help |
|--------|----------|------|
| series | sr_to_array | Make an array from the series |
| timeseries | ts_list | List all timeseries in the node |
| attributes | print_attrs | Print the given node attributes if present |
| series | sr_dtype | Type name of the series |
| timeseries | ts_print | Print the given timeseries values in csv format |
| dams | count_node_if | Count the number of nodes upstream at each point that satisfies a certain condition |
| streamflow | check_negative | Check the given streamflow timeseries for negative values |
| errors | calc_ts_error | Calculate Error from two timeseries values in the node |
| attributes | set_attrs_ifelse | if else condition with multiple attributes |
| series | set_series | set the following series to the node |
| errors | calc_ts_errors | Calculate Error from two timeseries values in the node |
| datafill | datafill_experiment | |
| timeseries | ts_count | Number of timeseries in the node |
| attributes | first_attr | Return the first Attribute that exists |
| command | run | Run the node as if it’s a command if inputs are changed |
| timeseries | ts_dtype | Type name of the timeseries |
| render | render | Render the template based on the node attributes |
| attributes | load_toml_render | Set node attributes based on string templates |
| series | sr_list | List all series in the node |
| series | sr_mean | Type name of the series |
| timeseries | ts_len | Length of the timeseries |
| print_node | print_node | Print the node with its inputs and outputs |
| files | exists | Checks if the given path exists when rendering the template |
| core | inputs_count | Count the number of input nodes in the node |
| series | sr_sum | Sum of the series |
| attributes | get_attr | Retrieve attribute |
| attributes | has_attr | Check if the attribute is present |
| series | sr_len | Length of the series |
| attributes | print_all_attrs | Print all attrs in a node |
| core | output_attr | Get attributes of the output node |
| command | command | Run the given template as a shell command. |
| attributes | set_attrs | Set node attributes |
| datafill | load_csv_fill | |
| dams | min_year | Propagate the minimum year downstream |
| core | inputs_attr | Get attributes of the input nodes |
| core | has_outlet | Node has an outlet or not |
| attributes | set_attrs_render | Set node attributes based on string templates |
| attributes | load_attrs | Loads attrs from file for all nodes based on the given template |
| series | sr_count | Number of series in the node |

Network Functions

| Plugin | Function | Help |
|---|---|---|
| connections | subset | Take a subset of network by only including the selected nodes |
| print_node | print_attr_csv | Print the given attributes in csv format with first column with node name |
| table | table_to_markdown | Render the Table as a rendered markdown |
| connections | load_str | Load the given string into the network |
| render | render_template | Render a File template for the nodes in the whole network |
| connections | load_file | Load the given file into the network |
| command | parallel | Run the given template as a shell command for each node in the network in parallel. |
| render | render_nodes | Render each node of the network and combine to same variable |
| timeseries | series_csv | Write the given nodes to csv with given attributes and series |
| table | save_csv | Save CSV |
| attributes | set_attrs | Set network attributes |
| render | render | Render from network attributes |
| graphviz | save_graphviz | Save the network as a graphviz file |
| visuals | set_nodesize_attrs | Set the node size of the nodes based on the attribute value |
| visuals | svg_save | Exports the network as a svg |
| command | command | Run the given template as a shell command. |
| core | count | Count the number of nodes in the network |
| timeseries | ts_print_csv | Save timeseries from all nodes into a single csv file |
| gis | gis_load_attrs | Load node attributes from a GIS file |
| html | export_map | Exports the network as a HTML map |
| attributes | set_attrs_render | Set network attributes based on string templates |
| fancy_print | fancy_print | Fancy print a network |
| datafill | save_experiments_csv | Write the given nodes to csv with given attributes and experiment results |
| gis | gis_load_network | Load network from a GIS file |
| errors | calc_attr_error | Calculate Error from two attribute values in the network |
| gis | gis_save_nodes | Save GIS file of the nodes |
| gis | gis_save_connections | Save GIS file of the connections |
| connections | save_file | Save the network into the given file |
| connections | load_edges | Load the given edges into the network |
| gnuplot | plot_timeseries | Generate a gnuplot file that plots the timeseries data in the network |

Executable Plugins

Executable plugins are programs that can be called from the terminal. The node command function, the network command function, and their families in the command plugin have the capacity to run external programs through the command line.

The inputs to the program are given through command line arguments, while the outputs of the program are read through its standard output. This can be used to call different (or the same) commands for each node, with arguments dependent on their attributes.

The output from the programs is taken by reading their stdout (standard output). Any line starting with the nadi:var: prefix is considered a communication attempt with Nadi. Currently, you can set attribute values by providing key=val pairs after the prefix. The node function will set them for the current node, and the network function will set them for the network. Furthermore, in a network function, you can add one more section after the prefix to set node attributes. For example, nadi:var:node1:value=12 will set the value attribute to 12 in the node named node1 in the current network.

The executable plugins or commands are language agnostic; as long as the command is available to run from the parent shell, it will be run.

The following section shows example programs written in different languages that can interact with nadi in this way.

Python

Here is an example Python script that can be called from nadi for each node. This script just reads a CSV file and passes the attributes to nadi, but users can write more complicated programs.

The first part imports libraries and gets the arguments from nadi. The code below reads one string as a command-line argument and saves it into the station variable.

import sys
import pandas as pd

try:
    station = sys.argv[1]
except IndexError:
    print("Give station")
    exit(1)

Then we can use any Python logic with any libraries to do what we want. Here it reads the CSV and extracts values based on the station name. This is just an example; you could load different CSV files for each station and do a lot of analysis before sending those variables to nadi.

import sys
import pandas as pd

try:
    station = sys.argv[1]
except IndexError:
    print("Give station")
    exit(1)
 
df = pd.read_csv(f"data/streamflow/{station}.csv", header=None)
sf = df.iloc[:, 4]
sf.index = pd.to_datetime(df.iloc[:, 2])
sf = sf.resample('1d').mean()

Once we have our variables from the analysis, we can simply print them with the nadi:var: prefix so that nadi knows they are the variables it should read and load into each node.

import sys
import pandas as pd

try:
    station = sys.argv[1]
except IndexError:
    print("Give station")
    exit(1)

df = pd.read_csv(f"data/streamflow/{station}.csv", header=None)
sf = df.iloc[:, 4]
sf.index = pd.to_datetime(df.iloc[:, 2])
sf = sf.resample('1d').mean()
print("nadi:var:sf_mean=", float(sf.mean()))

for year, flow in sf.groupby(sf.index.year).mean().items():
    print(f"nadi:var:sf_year_{year}={flow}")

Now we can call this script from inside the nadi task system as follows, assuming the Python file is saved as streamflow.py.

node command("python streamflow.py {_NAME}")

If you want to know what the template will be rendered as, use the render function, and if you want to check whether the file exists, you can use the exists function.
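
For example, a quick check before running the command could look like this (a sketch; the CSV path here is hypothetical):

# preview what the command template renders to for each node
node render("python streamflow.py {_NAME}")
# check that the per-node input file exists (hypothetical path)
node exists("data/streamflow/{_NAME}.csv")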

RScript

Similar to most programming languages, R can also read command line arguments when run with the Rscript command instead of R.

For example, if you save the following script in a file called test.r and run it with the command Rscript test.r some args 2, you get the output [1] "some" "args" "2".

args <- commandArgs(trailingOnly = TRUE)
print(args)

So you can use the same method as in Python to pass arguments, do the analysis, and pass results back using the cat function in R, as shown below. The cat function avoids printing the [1]-style indices to stdout.

cat(sprintf("nadi:var:this_val=%d\n", 1200))

Compiled Plugins

As it is not possible to foresee all the use cases in advance, the nadi software can be easily extended (easy being a relative term) to account for different use cases.

The program can load compiled shared libraries (.dll on Windows, .so on Linux, and .dylib on macOS). Since they are shared libraries compiled into binaries, any programming language capable of producing them can be used. So far, the nadi_core library is available for Rust only. Using it, plugins can be written and their functions made available from the system.

Nadi core automatically loads:

  • internal plugins if the functions feature is enabled when compiling nadi_core,
  • external plugins in the directories listed in the NADI_PLUGIN_DIRS environment variable. The plugins must be compiled using the same nadi_core version and must have the same internal ABI for data types.

The syntax for functions in plugins is the same for internal and external plugins, while the way to register the plugin differs slightly.

The difference between internal and external plugins is that internal plugins are compiled with nadi_core and ship with the program, while external plugins are compiled separately and loaded through dynamic libraries.

The methods for writing the plugins are the same, except at the top level: to export plugins, you use the [nadi_core::nadi_plugin::nadi_plugin] macro for external plugins and [nadi_core::nadi_plugin::nadi_internal_plugin] for internal ones.

In the next sections we will go into detail about how to write plugins and load them in nadi.

Internal Plugins

Internal plugins come with the nadi system. They are only modified between the different versions of Nadi.

The internal plugins provide core functionality of the Task system like data conversion, parsing network/attribute files, logical operations, template rendering, etc.

Future planned internal plugin functions can be found in the nadi-futures repository, which is itself an external plugin.

External Plugins

External plugins are separate programs that compile to a shared library. The shared library carries information about the name of the plugin and the functions that are available, as well as the compiled code required to run those functions.

You have to use the nadi_core library and the macros available there to make the plugins. Although it might be possible to write it without the macros (an example is provided), it is strongly discouraged. The example only serves as a way to demonstrate the inner working of the external plugins.

Some examples of external plugins are given in the nadi-plugins-rust repository.

An example of a complex external plugin can be found in the gis plugin from the nadi-gis repository.

Steps to create a Plugin

The nadi CLI tool can generate a plugin template. Simply run the nadi command with the --new-plugin flag.

nadi --new-plugin <plugin-name>

This will create a directory named after the plugin, containing Cargo.toml and src/lib.rs with some sample code for plugin functions. You can then edit them as per your needs.

The generated files using nadi --new-plugin sample look something like this:

Cargo.toml:

[package]
name = "sample"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]

# make sure you use the same version of nadi_core, your nadi-system is in
[dependencies]
abi_stable = "0.11.3"
nadi_core = "0.7.0"

src/lib.rs:

use nadi_core::nadi_plugin::nadi_plugin;

#[nadi_plugin]
mod sample {
    use nadi_core::prelude::*;

    /// The macros imported from nadi_plugin read the rust function you
    /// write and use that as a base to write more code internally that
    /// will be compiled into the shared libraries. This means it'll
    /// automatically get the argument types, documentation, mutability,
    /// etc. For more details on what they can do, refer to nadi book.
    use nadi_core::nadi_plugin::{env_func, network_func, node_func};

    /// Example Environment function for the plugin
    ///
    /// You can use markdown format to write detailed documentation for the
    /// function you write. This will be available from nadi-help.
    #[env_func(pre = "Message: ")]
    fn echo(message: String, pre: String) -> String {
        format!("{}{}", pre, message)
    }

    /// Example Node function for the plugin
    #[node_func]
    fn node_name(node: &NodeInner) -> String {
        node.name().to_string()
    }

    /// Example Network function for the plugin
    ///
    /// You can also write docstrings for the arguments. This syntax is not
    /// valid rust syntax, but our macro will read those docstrings, save
    /// them and then remove them so that rust does not get confused. This means
    /// you do not have to write separate documentation for functions.
    #[network_func]
    fn node_first_with_attr(
        net: &Network,
        /// Name of the attribute to search
        attrname: String,
    ) -> Option<String> {
        for node in net.nodes() {
            let node = node.lock();
            if node.attr_dot(&attrname).is_ok() {
                return Some(node.name().to_string());
            }
        }
        None
    }
}

The plugin can be compiled with the cargo build or cargo build --release command; it'll generate the shared library in the target/debug or target/release folder. You can simply copy it to a directory in NADI_PLUGIN_DIRS for it to be loaded.

Functions

Plugin functions are very close to normal rust functions, with extra syntax for the function arguments, and limited function argument and return types.

Function Types

There are 3 function types:

  • environment
  • node
  • network

The macros used for each function type are available from nadi_core::nadi_plugin. All the macros take an optional list of key = value pairs that act like default arguments for the function when it is called from the task system.

These macros will read the rust function and generate the necessary plugin code, function signature, and documentation, and will even save the original code so that users can browse it through nadi-help.

Function Arguments

There are 5 types of function arguments, denoted by the following attributes:

| Macro attr | Type | Supported Types |
|---|---|---|
|  | Node/Network | & / &mut NodeInner/Network |
|  | Normal arguments | T: FromAttribute |
| #[relaxed] | Relaxed arguments | T: FromAttributeRelaxed |
| #[args] | Positional Arguments List | &[Attribute] |
| #[kwargs] | Keyword Arguments AttrMap | &AttrMap |

Users cannot provide the Node/Network argument for node/network functions, as it is provided automatically based on the context.

Furthermore, there are required and optional arguments. Users can omit arguments that are of type Option<T> or that have a default value in the macro (e.g. safe = false in the code below).

For now, function arguments other than the Node or Network cannot be mut. They can be references of T if T satisfies the trait constraints; for example, instead of Vec<String> you can use &[String]. But because the function context is evaluated for each node/network, there is no optimization gained by using references.

Return Types

A function's return can be empty, an attribute value, or an error. When a function returns an error, the execution is halted. When it doesn't return a value and an assignment is performed, that errors as well.

The return type of the function should implement Into<FunctionRet>; refer to the documentation for nadi_core::functions::FunctionRet to see what types implement it. You can also implement it for your own types.

You can simply use any type that satisfies the trait requirement mentioned above as a function return, and the nadi macros will convert it automatically for you.
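
As a small sketch of how a return value ends up in the task system (using the core int function documented later, and the assignment syntax shown in the internal plugin examples):

# the return value of a function can be assigned to an attribute
node.val_int = int("42")
# if the called function returned nothing, this assignment would error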

Verbosity

In future versions, functions will also get a flag that tells them how verbose they can be. This will also come with a way to pass progress and other information while the function is still running.

Examples

Refer to nadi_core and the other plugin repositories for sample plugin function code, as they are always up to date with the current version.

Here is an example containing the render function, which is available for all function types.

    /// Render the template based on the given attributes
    ///
    /// For more details on the template system. Refer to the String
    /// Template section of the NADI book.
    #[env_func(safe = false)]
    fn render(
        /// String template to render
        template: &Template,
        #[kwargs] keyval: &AttrMap,
        /// if render fails keep it as it is instead of exiting
        safe: bool,
    ) -> Result<String, String> {
        let text = if safe {
            keyval
                .render(template)
                .unwrap_or_else(|_| template.original().to_string())
        } else {
            keyval.render(template).map_err(|e| e.to_string())?
        };
        Ok(text)
    }
    /// Render the template based on the node attributes
    ///
    /// For more details on the template system. Refer to the String
    /// Template section of the NADI book.
    #[node_func(safe = false)]
    fn render(
        node: &NodeInner,
        /// String template to render
        template: &Template,
        /// if render fails keep it as it is instead of exiting
        safe: bool,
    ) -> Result<String, String> {
        let text = if safe {
            node.render(template)
                .unwrap_or_else(|_| template.original().to_string())
        } else {
            node.render(template).map_err(|e| e.to_string())?
        };
        Ok(text)
    }
    /// Render from network attributes
    #[network_func(safe = false)]
    fn render(
        network: &Network,
        /// Path to the template file
        template: &Template,
        /// if render fails keep it as it is instead of exiting
        safe: bool,
    ) -> Result<String, String> {
        let text = if safe {
            network
                .render(template)
                .unwrap_or_else(|_| template.original().to_string())
        } else {
            network.render(template).map_err(|e| e.to_string())?
        };
        Ok(text)
    }

Environment Functions

Environment functions are like normal functions in programming languages: they take arguments and run code. In Nadi, environment functions can be called from any scope. For example, if a node function and an environment function share the same name, then in a node task the node function is called, but in a network task the env function is called.
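
As a sketch with the render function (which exists as an env, node, and network function, as shown in the Examples section above; the template path here is hypothetical):

# in a node task, the node function render is used
node render("{_NAME}")
# in a network task, the network render is used; if no network function
# of that name existed, the env function would be called instead
network render("./data/report.template")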

Environment functions are denoted in the plugins with the #[env_func] macro. All the arguments such a function takes need to be provided by the user or through default values.

Here is an example of an environment function as written in a plugin.

    /// Boolean and
    #[env_func]
    fn and(
        /// List of attributes that can be cast to bool
        #[args]
        conds: &[Attribute],
    ) -> bool {
        let mut ans = true;
        for c in conds {
            ans = ans && bool::from_attr_relaxed(c).unwrap();
        }
        ans
    }

This function can be called inside the task system in different contexts as follows:

env and(true, 12)
env.something = false
env and(something, true) == (something & true)

network and(what?, and(true, true))

Results:

true
true
false

Node Functions

Node functions are run for each node in the network (or a selected group of nodes). Hence, they take &NodeInner or &mut NodeInner as the first argument, depending on the purpose of the function. Immutable functions can be called from anywhere, while mutable functions can only be called once, at the outermost layer of the task.

Other arguments and the return types for node functions are the same as for environment functions.
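
A small sketch of the difference, using node functions documented later in this book:

# immutable node function: can be used inside an expression
node.n_inputs = inputs_count()
# mutable node function: called on its own at the outermost layer
node set_attrs(checked = true)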

Network Functions

Network functions, like node functions, take &Network or &mut Network as the first argument. They have the same restrictions as env/node functions for the arguments and return types.

Example Usage

Ohio River Streamflow Routing Project

The Network for the flow routing is as follows:

network load_file("./data/ohio.network")
network svg_save(label="{_NAME}", outfile = "./output/ohio.svg", height=1000)

Results:

Making Tables

network load_file("./data/ohio.network")
node load_attrs("./data/attrs/{_NAME}.toml")
network clip()
# ^Ind => =(+ (st+num 'INDEX) 1)
<Node ID => {_NAME}
<Title => {_description:case(title):repl(Ky,KY):repl(In,IN):repl(Wv,WV):repl(Oh,OH)?}
>Latitude => {lat:f(4)}
>Longitude => {lon:f(4)}

Results:

| Node ID | Title | Latitude | Longitude |
|---|---|---:|---:|
| smithland | OHio River at Smithland Dam Smithland KY | 37.1584 | -88.4262 |
| golconda | OHio River at Dam 51 at Golconda, Il | 37.3578 | -88.4825 |
| old-shawneetown | OHio River at Old Shawneetown, Il-KY | 37.6919 | -88.1333 |
| mountcarmel | Wabash River at Mt. Carmel, Il | 38.3983 | -87.7564 |
| jt-myers | OHio River at Uniontown Dam, KY | 37.7972 | -87.9983 |
| evansville | OHio River at Evansville, IN | 37.9723 | -87.5764 |
| calhoun | Green River at Lock 2 at Calhoun, KY | 37.5339 | -87.2639 |
| newburgh | Newburgh | 37.9309 | -87.3722 |
| cannelton | OHio River at Cannelton Dam at Cannelton, IN | 37.8995 | -86.7055 |
| shepherdsville | Salt River at Shepherdsville, KY | 37.9851 | -85.7175 |
| mcalpine | OHio River at Louisville, KY | 38.2803 | -85.7991 |
| lockport | Kentucky River at Lock 2 at Lockport, KY | 38.4390 | -84.9633 |
| markland | OHio River at Markland Dam Near Warsaw, KY | 38.7748 | -84.9644 |
| milford | Little Miami River at Milford OH | 39.1714 | -84.2980 |
| catawba | Licking River at Catawba, KY | 38.7103 | -84.3108 |
| hamilton | Great Miami River at Hamilton OH | 39.3912 | -84.5722 |
| perintown | East Fork Little Miami River at Perintown OH | 39.1370 | -84.2380 |
| brookville | Whitewater River at Brookville, IN | 39.4075 | -85.0129 |
| meldahl | Meldahl | 38.7972 | -84.1705 |
| higby | Scioto River at Higby OH | 39.2123 | -82.8638 |
| greenup | Greenup | 38.6468 | -82.8608 |
| grayson | Little Sandy River at Grayson, KY | 38.3301 | -82.9393 |
| ashland | OHio River at Ashland, KY | 38.4812 | -82.6365 |
| branchland | Guyandotte River at Branchland, WV | 38.2209 | -82.2026 |
| rc-byrd | Rc-Byrd | 38.6816 | -82.1883 |
| charleston | Kanawha River at Charleston, WV | 38.3715 | -81.7021 |
| racine | OHio River at Racine Dam, WV | 38.9167 | -81.9121 |
| belleville | OHio River at Belleville Dam, WV | 39.1190 | -81.7424 |
| mcconnelsville | Muskingum River at McConnelsville OH | 39.6451 | -81.8499 |
| athens | Hocking River at Athens OH | 39.3290 | -82.0876 |
| elizabeth | Little Kanawha River at Palestine, WV | 39.0590 | -81.3896 |
| willow-island | Willow-Island | 39.3605 | -81.3204 |
| hannibal | Hannibal | 39.6671 | -80.8653 |
| pike-island | OHio River at Martins Ferry, OH | 40.1051 | -80.7084 |
| new-cumberland | New-Cumberland | 40.5277 | -80.6276 |
| montgomery | Montgomery | 40.6486 | -80.3855 |
| beaverfalls | Beaver River at Beaver Falls, PA | 40.7634 | -80.3151 |
| dashields | OHio River at Sewickley, PA | 40.5492 | -80.2056 |
| emsworth | Emsworth | 40.5043 | -80.0889 |
| natrona | Allegheny River at Natrona, PA | 40.6153 | -79.7184 |
| elizabeth2 | Monongahela River at Elizabeth, PA | 40.2623 | -79.9012 |
| sutersville | Youghiogheny River at Sutersville, PA | 40.2402 | -79.8067 |

Nadi style table with network information:

network load_file("./data/ohio.network")
node load_attrs("./data/attrs/{_NAME}.toml")
network clip()
network echo("../output/ohio-table.svg")
# ^Ind => =(+ (st+num 'INDEX) 1)
<Node ID => {_NAME}
<Title => {_description:case(title):repl(Ky,KY):repl(In,IN):repl(Wv,WV):repl(Oh,OH)?}
>Latitude => {lat:f(4)}
>Longitude => {lon:f(4)}

*Error*:

network function: "table_to_svg" not found

Generating Reports

To generate a report, we write this template:


## Ohio River Routing Project

<!-- ---8<---:[smithland]: -->
Our basin Outlet is at {_description:case(title):repl(Ky,KY)} with the total basin area {basin_area:f(1)} acre-ft.
<!-- ---8<--- -->

The lower part of the Ohio basin are specifically important to us. Those are:
| ID      | Basin Area   | Length to Outlet |
|---------|-------------:|-----------------:|
<!-- ---8<---:[greenup -> smithland]: -->
| {_NAME} | {basin_area:f(1)} | {length:f(2)}  |
<!-- ---8<--- -->


We used 4 locks and dams in the ohio river as representative locks and dams as below:

<!-- ---8<---:["willow-island",racine,markland,smithland]: -->
- {_NAME:repl(-, ):case(title)?}

  ![](../data/{_NAME}.svg)
<!-- ---8<--- -->

This makes the table only for the main-stem Ohio:

network load_file("./data/ohio.network")
node load_attrs("./data/attrs/{_NAME}.toml")
network clip()
network render("./data/ohio-report.template")

Results:
“./data/ohio-report.template”

Analysing Timeseries

Looking at Data Gaps

Counting the gaps in a CSV file with data for all the nodes is easy. Let's look at the top 5 nodes with data gaps.

network load_file("./data/ohio.network")
network clip()
network csv_count_na(
	"./data/ts/observed.csv",
	sort=true,
	head = 5
)

*Error*:

----8<----

network function: "csv_count_na" not found

Running it for two timeseries and comparing them based on network information, we can see that the downstream parts have more missing data in the natural timeseries.

network load_file("./data/ohio.network")
network csv_count_na("./data/ts/observed.csv", outattr = "observed_missing")
network csv_count_na("./data/ts/natural.csv", outattr = "natural_missing")
network table_to_svg(
	template="
<Node=> {_NAME}
>Observed => {observed_missing}
>Natural => {natural_missing}
",
	outfile="./output/natural-gaps.svg"
)
network clip()
network echo("
<center>
Number of Missing Days in Timeseries Data

 ![](../output/natural-gaps.svg)
</center>
")

*Error*:

network function: "csv_count_na" not found

Visualizing Data Gaps

To look at the temporal distribution of the gaps, we can use this function.

network load_file("./data/ohio.network")
network csv_count_na("./data/ts/natural.csv", outattr = "nat_na")
network csv_data_blocks_svg(
	csvfile="./data/ts/natural.csv",
	outfile="./output/natural-blocks.svg",
	label="{_NAME} ({=(/ (st+num 'nat_na) 365.0):f(1)} yr)"
)
network clip()
network echo("../output/natural-blocks.svg")

*Error*:

network function: "csv_count_na" not found
network load_file("./data/ohio.network")
network csv_count_na("./data/ts/observed.csv", outattr = "obs_na")
network csv_data_blocks_svg(
	csvfile="./data/ts/observed.csv",
	outfile="./output/observed-blocks.svg",
	label="{_NAME} ({obs_na})"
)
network clip()
network echo("../output/observed-blocks.svg")

*Error*:

network function: "csv_count_na" not found

Internal Plugins

There are some plugins that are provided with the nadi_core library. They are part of the library, so users can directly use them.

For example, in the following tasks file, the highlighted functions are available from the core plugins. Other functions need to be loaded from plugins.

# sample .tasks file which is like a script with functions
node<inputsfirst> print_attrs("uniqueID")
node show_node()
network save_graphviz("/tmp/test.gv")
node<inputsfirst>.cum_val = node.val + sum(inputs.cum_val);

node[WV04113,WV04112,WV04112] print_attr_toml("testattr2")
node render("{NAME} {uniqueID} {_Dam_Height_(Ft)?}")
node list_attr("; ")
# some functions can take variable number of inputs
network calc_attr_errors(
    "Dam_Height_(Ft)",
    "Hydraulic_Height_(Ft)",
    "rmse", "nse", "abserr"
)
node sum_safe("Latitude")
node<inputsfirst> render("Hi {SUM_ATTR}")
# multiple line for function arguments
network save_table(
	"test.table",
	"/tmp/test.tex",
	true,
	radius=0.2,
	start = 2012-19-20,
	end = 2012-19-23 12:04
	)
node.testattr = 2
node set_attrs_render(testattr2 = "{testattr:calc(+2)}")
node[WV04112] render("{testattr} {testattr2}")

# here we use a complicated template that can do basic logic handling
node set_attrs_render(
    testattr2 = "=(if (and (st+has 'Latitude) (> (st+num 'Latitude) 39)) 'true 'false)"
)
# same thing can be done if you need more flexibility in variable names
node load_toml_string(
    "testattr2 = =(if (and (st+has 'Latitude) (> (st+num 'Latitude) 39)) 'true 'false)"
)
# selecting a list of nodes to run a function
node[
	# comment here?
    WV04113,
    WV04112
] print_attr_toml("testattr2")
# selecting a path
node[WV04112 -> WV04113] render("=(> 2 3)")

Env Functions

strmap

env attributes.strmap(
    attr: '& str',
    attrmap: '& AttrMap',
    default: 'Option < Attribute >'
)

Arguments

  • attr: '& str' => Value to transform the attribute
  • attrmap: '& AttrMap' => Dictionary of key=value to map the data to
  • default: 'Option < Attribute >' => Default value if key not found in attrmap

map values from the attribute based on the given table

parse_attr

env attributes.parse_attr(toml: '& str')

Arguments

  • toml: '& str' => String to parse into attribute

Set node attributes based on string templates

parse_attrmap

env attributes.parse_attrmap(toml: 'String')

Arguments

  • toml: 'String' => String to parse into attribute

Set node attributes based on string templates

float_transform

env attributes.float_transform(value: 'f64', transformation: '& str')

Arguments

  • value: 'f64' => value to transform
  • transformation: '& str' => transformation function, can be one of log/log10/sqrt

Transform the float value using the given function (log/log10/sqrt)
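
A usage sketch, following the function-call syntax from the task examples earlier in this book:

# log10 of 100.0 gives 2.0
env float_transform(100.0, "log10")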

float_div

env attributes.float_div(value1: 'f64', value2: 'f64')

Arguments

  • value1: 'f64' => numerator
  • value2: 'f64' => denominator

Divide the first float value by the second

float_mult

env attributes.float_mult(value1: 'f64', value2: 'f64')

Arguments

  • value1: 'f64' => first value to multiply
  • value2: 'f64' => second value to multiply

Multiply the two float values

Node Functions

load_attrs

node attributes.load_attrs(filename: 'PathBuf')

Arguments

  • filename: 'PathBuf' => Template for the filename to load node attributes from

Loads attrs from file for all nodes based on the given template

Arguments

  • filename: Template for the filename to load node attributes from
  • verbose: print verbose message

The template will be rendered for each node, and the filename from the rendered template will be used to load the attributes.
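
For example, as used in the Ohio example earlier in this book:

# node "smithland" loads ./data/attrs/smithland.toml, and so on for each node
node load_attrs("./data/attrs/{_NAME}.toml")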

Errors

The function will error out in following conditions:

  • Template for filename is not given,
  • The template couldn’t be rendered,
  • There was error loading attributes from the file.

print_all_attrs

node attributes.print_all_attrs()

Arguments

Print all attrs in a node

No arguments and no errors, it’ll just print all the attributes in a node with node::attr=val format, where,

  • node is node name
  • attr is attribute name
  • val is attribute value (string representation)

print_attrs

node attributes.print_attrs(*attrs, name: 'bool' = false)

Arguments

  • *attrs =>
  • name: 'bool' = false =>

Print the given node attributes if present

Arguments

  • attrs,… : list of attributes to print
  • name: Bool for whether to show the node name or not

Error

The function will error if

  • list of arguments are not String
  • the name argument is not Boolean

The attributes will be printed in key=val format.

set_attrs

node attributes.set_attrs(**attrs)

Arguments

  • **attrs => Key value pairs of the attributes to set

Set node attributes

Use this function to set the node attributes of all nodes, or a select few nodes using the node selection methods (path or list of nodes)

Error

The function should not error.

Example

Following will set the attribute a2d to true for all nodes from A to D

node[A -> D] set_attrs(a2d = true)

get_attr

node attributes.get_attr(attr: '& str', default: 'Option < Attribute >')

Arguments

  • attr: '& str' => Name of the attribute to get
  • default: 'Option < Attribute >' => Default value if the attribute is not found

Retrieve attribute

has_attr

node attributes.has_attr(attr: '& str')

Arguments

  • attr: '& str' => Name of the attribute to check

Check if the attribute is present

first_attr

node attributes.first_attr(attrs: '& [String]', default: 'Option < Attribute >')

Arguments

  • attrs: '& [String]' => attribute names
  • default: 'Option < Attribute >' => Default value if not found

Return the first Attribute that exists

set_attrs_ifelse

node attributes.set_attrs_ifelse(cond: 'bool', **values)

Arguments

  • cond: 'bool' => Condition to check
  • **values => key = [val1, val2]: key is set to val1 if cond is true, else to val2

if else condition with multiple attributes

set_attrs_render

node attributes.set_attrs_render(**kwargs)

Arguments

  • **kwargs => key value pair of attribute to set and the Template to render

Set node attributes based on string templates

load_toml_render

node attributes.load_toml_render(toml: '& Template', echo: 'bool' = false)

Arguments

  • toml: '& Template' => String template to render and load as TOML string
  • echo: 'bool' = false => Print the rendered toml or not

Set node attributes based on string templates

Network Functions

set_attrs

network attributes.set_attrs(**attrs)

Arguments

  • **attrs => key value pair of attributes to set

Set network attributes

Arguments

  • key=value - Kwargs of attr = value

set_attrs_render

network attributes.set_attrs_render(**kwargs)

Arguments

  • **kwargs => Kwargs of attr = String template to render

Set network attributes based on string templates

Node Functions

command

node command.command(
    cmd: '& Template',
    verbose: 'bool' = true,
    echo: 'bool' = false
)

Arguments

  • cmd: '& Template' => String Command template to run
  • verbose: 'bool' = true => Show the rendered version of command, and other messages
  • echo: 'bool' = false => Echo the stdout from the command

Run the given template as a shell command.

Run any command in the shell. The standard output of the command will be consumed and if there are lines starting with nadi:var: and followed by key=val pairs, it’ll be read as new attributes to that node.

For example if a command writes nadi:var:name="Joe" to stdout, then the for the current node the command is being run for, name attribute will be set to Joe. This way, you can write your scripts in any language and pass the values back to the NADI system.

It will also print out the new values or changes from old values, if verbose is true.
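
A minimal sketch of the round trip (the echo shell command here is only an illustration):

# every selected node runs the command; the printed line sets visited=true on that node
node command("echo nadi:var:visited=true")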

Errors

The function will error if,

  • The command template cannot be rendered,
  • The command cannot be executed,
  • The attributes from command’s stdout cannot be parsed properly

run

node command.run(
    command: '& str',
    inputs: '& str',
    outputs: '& str',
    verbose: 'bool' = true,
    echo: 'bool' = false
)

Arguments

  • command: '& str' => Node Attribute with the command to run
  • inputs: '& str' => Node attribute with list of input files
  • outputs: '& str' => Node attribute with list of output files
  • verbose: 'bool' = true => Print the command being run
  • echo: 'bool' = false => Show the output of the command

Run the node as if it’s a command if inputs are changed

This function will not run a command node if all its outputs are newer than its inputs. This is useful for networks where each node is a task with input files and output files.
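
A sketch of a possible setup; the attribute names cmd, ins, and outs are hypothetical and would hold the command string and the input/output file lists (for example loaded with load_attrs):

# only re-run nodes whose outputs are out of date with respect to their inputs
node run(command = "cmd", inputs = "ins", outputs = "outs")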

Network Functions

parallel

network command.parallel(
    cmd: '& Template',
    _workers: 'i64' = 4,
    verbose: 'bool' = true,
    echo: 'bool' = false
)

Arguments

  • cmd: '& Template' => String Command template to run
  • _workers: 'i64' = 4 => Number of workers to run in parallel
  • verbose: 'bool' = true => Print the command being run
  • echo: 'bool' = false => Show the output of the command

Run the given template as a shell command for each node in the network in parallel.

Warning

Currently there is no way to limit the number of parallel processes, so please be careful with this command if you have a very large number of nodes.

command

network command.command(
    cmd: 'Template',
    verbose: 'bool' = true,
    echo: 'bool' = false
)

Arguments

  • cmd: 'Template' => String Command template to run
  • verbose: 'bool' = true => Print the command being run
  • echo: 'bool' = false => Show the output of the command

Run the given template as a shell command.

Run any command in the shell. The standard output of the command will be consumed and if there are lines starting with nadi:var: and followed by key=val pairs, it’ll be read as new attributes to that node.

See node command.command for more details, as they share the same implementation.

Network Functions

load_file

network connections.load_file(file: 'PathBuf', append: 'bool' = false)

Arguments

  • file: 'PathBuf' => File to load the network connections from
  • append: 'bool' = false => Append the connections in the current network

Load the given file into the network

This replaces the current network with the one loaded from the file.

load_str

network connections.load_str(contents: '& str', append: 'bool' = false)

Arguments

  • contents: '& str' => String containing Network connections
  • append: 'bool' = false => Append the connections in the current network

Load the given string into the network

This replaces the current network with the one loaded from the given string.

load_edges

network connections.load_edges(edges: '& [(String, String)]', append: 'bool' = false)

Arguments

  • edges: '& [(String, String)]' => String containing Network connections
  • append: 'bool' = false => Append the connections in the current network

Load the given edges into the network

This replaces the current network with the one built from the given edges.

subset

network connections.subset(filter: '& [bool]', keep: 'bool' = true)

Arguments

  • filter: '& [bool]' =>
  • keep: 'bool' = true => Keep the selected nodes (false = removes the selected)

Take a subset of network by only including the selected nodes

save_file

network connections.save_file(
    file: 'PathBuf',
    quote_all: 'bool' = true,
    graphviz: 'bool' = false
)

Arguments

  • file: 'PathBuf' => Path to the output file
  • quote_all: 'bool' = true => quote all node names; if false, doesn’t quote valid identifier names
  • graphviz: 'bool' = false => wrap the network into a valid graphviz file

Save the network into the given file

For more control on graphviz file writing use save_graphviz from graphviz plugin instead.

Env Functions

count

env core.count(vars: '& [bool]')

Arguments

  • vars: '& [bool]' =>

Count the number of true values in the array

type_name

env core.type_name(value: 'Attribute', recursive: 'bool' = false)

Arguments

  • value: 'Attribute' => Argument to get type
  • recursive: 'bool' = false => Recursively check types for array and table

Type name of the arguments

isna

env core.isna(val: 'f64')

Arguments

  • val: 'f64' =>

check if a float is nan

isinf

env core.isinf(val: 'f64')

Arguments

  • val: 'f64' =>

check if a float is +/- infinity

float

env core.float(value: 'Attribute', parse: 'bool' = true)

Arguments

  • value: 'Attribute' => Argument to convert to float
  • parse: 'bool' = true => parse string to float

make a float from value

str

env core.str(value: 'Attribute', quote: 'bool' = false)

Arguments

  • value: 'Attribute' => Argument to convert to string
  • quote: 'bool' = false => quote it if it’s literal string

make a string from value

int

env core.int(
    value: 'Attribute',
    parse: 'bool' = true,
    round: 'bool' = true,
    strfloat: 'bool' = false
)

Arguments

  • value: 'Attribute' => Argument to convert to int
  • parse: 'bool' = true => parse string to int
  • round: 'bool' = true => round float into integer
  • strfloat: 'bool' = false => parse string first as float before converting to int

make an int from the value
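
A usage sketch:

# parse a string into an integer
env int("42")
# round a float into an integer (round = true is the default)
env int(3.7)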

array

env core.array(*attributes)

Arguments

  • *attributes => List of attributes

make an array from the arguments

attrmap

env core.attrmap(**attributes)

Arguments

  • **attributes => name and values of attributes

make an attrmap from the arguments

append

env core.append(array: 'Vec < Attribute >', value: 'Attribute')

Arguments

  • array: 'Vec < Attribute >' => List of attributes
  • value: 'Attribute' =>

append a value to an array

length

env core.length(value: '& Attribute')

Arguments

  • value: '& Attribute' => Array or a HashMap

length of an array or hashmap

year

env core.year(value: 'Attribute')

Arguments

  • value: 'Attribute' => Date or DateTime

year from date/datetime

month

env core.month(value: 'Attribute')

Arguments

  • value: 'Attribute' => Date or DateTime

month from date/datetime

day

env core.day(value: 'Attribute')

Arguments

  • value: 'Attribute' => Date or DateTime

day from date/datetime

min_num

env core.min_num(vars: 'Vec < Attribute >', start: 'Attribute' = Integer(9223372036854775807))

Arguments

  • vars: 'Vec < Attribute >' =>
  • start: 'Attribute' = Integer(9223372036854775807) =>

Minimum of the variables

Starts with an integer for type purposes; since the maximum float is larger than the maximum int, it'll be incorrect for very large numbers

max_num

env core.max_num(vars: 'Vec < Attribute >', start: 'Attribute' = Integer(-9223372036854775808))

Arguments

  • vars: 'Vec < Attribute >' =>
  • start: 'Attribute' = Integer(-9223372036854775808) =>

Maximum of the variables

Starts with an integer for type purposes; since the maximum float is larger than the maximum int, it'll be incorrect for very large numbers

min

env core.min(vars: 'Vec < Attribute >', start: 'Attribute')

Arguments

  • vars: 'Vec < Attribute >' =>
  • start: 'Attribute' =>

Minimum of the variables

max

env core.max(vars: 'Vec < Attribute >', start: 'Attribute')

Arguments

  • vars: 'Vec < Attribute >' =>
  • start: 'Attribute' =>

Maximum of the variables

sum

env core.sum(vars: 'Vec < Attribute >', start: 'Attribute' = Integer(0))

Arguments

  • vars: 'Vec < Attribute >' =>
  • start: 'Attribute' = Integer(0) =>

Sum of the variables

prod

env core.prod(vars: 'Vec < Attribute >', start: 'Attribute' = Integer(1))

Arguments

  • vars: 'Vec < Attribute >' =>
  • start: 'Attribute' = Integer(1) =>

Product of the variables

unique_str

env core.unique_str(vars: 'Vec < String >')

Arguments

  • vars: 'Vec < String >' =>

Get a list of unique string values

count_str

env core.count_str(vars: 'Vec < String >')

Arguments

  • vars: 'Vec < String >' =>

Get a count of unique string values

concat

env core.concat(*vars, join: '& str' = "")

Arguments

  • *vars =>
  • join: '& str' = "" =>

Concat the strings

Node Functions

inputs_count

node core.inputs_count()

Arguments

Count the number of input nodes in the node

inputs_attr

node core.inputs_attr(attr: 'String' = "NAME")

Arguments

  • attr: 'String' = "NAME" => Attribute to get from inputs

Get attributes of the input nodes

has_outlet

node core.has_outlet()

Arguments

Node has an outlet or not

output_attr

node core.output_attr(attr: 'String' = "NAME")

Arguments

  • attr: 'String' = "NAME" => Attribute to get from inputs

Get attributes of the output node

Network Functions

count

network core.count(vars: 'Option < Vec < bool > >')

Arguments

  • vars: 'Option < Vec < bool > >' =>

Count the number of nodes in the network

Env Functions

sleep

env debug.sleep(time: 'u64' = 1000)

Arguments

  • time: 'u64' = 1000 =>

sleep for given number of milliseconds

debug

env debug.debug(*args, **kwargs)

Arguments

  • *args => Function arguments
  • **kwargs => Function Keyword arguments

Print the args and kwargs on this function

This function will just print out the args and kwargs the function is called with. This is for debugging purposes to see if the args/kwargs are identified properly. And can also be used to see how the nadi system takes the input from the function call.

echo

env debug.echo(
    line: 'String',
    error: 'bool' = false,
    newline: 'bool' = true
)

Arguments

  • line: 'String' => line to print
  • error: 'bool' = false => print to stderr instead of stdout
  • newline: 'bool' = true => print newline at the end

Echo the string to stdout or stderr

This simply echoes anything given to it. This can be used in combination with nadi tasks that create files (image, text, etc). The echo function can be called to get the link to those files back to the stdout.

Also useful for nadi preprocessor.

clip

env debug.clip(error: 'bool' = false)

Arguments

  • error: 'bool' = false => print in stderr instead of in stdout

Echo the ----8<---- line for clipping syntax

This function is a utility function for the generation of nadi book. This prints out the ----8<---- line when called, so that mdbook preprocessor for nadi knows where to clip the output for displaying it in the book.

This makes it easier to only show the relevant parts of the output in the documentation instead of having the user see output of other unrelated parts which are necessary for generating the results.

Example

Given the following tasks file:

net load_file("...")
net load_attrs("...")
net clip()
net render("{_NAME} {attr1}")

The clip function’s output will let the preprocessor know that only the parts after that are relevant to the user. Hence, it’ll discard outputs before that during documentation generation.

Env Functions

ifelse

env logic.ifelse(
    cond: 'bool',
    iftrue: 'Attribute',
    iffalse: 'Attribute'
)

Arguments

  • cond: 'bool' => Attribute that can be cast to bool value
  • iftrue: 'Attribute' => Output if cond is true
  • iffalse: 'Attribute' => Output if cond is false

Simple if else condition
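
A usage sketch, combining it with the comparison functions below (nested calls work the same way as in the env function examples earlier):

env ifelse(gt(10, 2), "big", "small")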

gt

env logic.gt(a: '& Attribute', b: '& Attribute')

Arguments

  • a: '& Attribute' => first attribute
  • b: '& Attribute' => second attribute

Greater than check

lt

env logic.lt(a: '& Attribute', b: '& Attribute')

Arguments

  • a: '& Attribute' => first attribute
  • b: '& Attribute' => second attribute

Less than check

eq

env logic.eq(a: '& Attribute', b: '& Attribute')

Arguments

  • a: '& Attribute' => first attribute
  • b: '& Attribute' => second attribute

Equality check

and

env logic.and(*conds)

Arguments

  • *conds => List of attributes that can be cast to bool

Boolean and

or

env logic.or(*conds)

Arguments

  • *conds => List of attributes that can be cast to bool

boolean or

not

env logic.not(cond: 'bool')

Arguments

  • cond: 'bool' => attribute that can be cast to bool

boolean not

all

env logic.all(vars: '& [bool]')

Arguments

  • vars: '& [bool]' =>

check if all of the bool are true

any

env logic.any(vars: '& [bool]')

Arguments

  • vars: '& [bool]' =>

check if any of the bool are true

Env Functions

str_filter

env regex.str_filter(attrs: 'Vec < String >', pattern: 'Regex')

Arguments

  • attrs: 'Vec < String >' => attribute to check for pattern
  • pattern: 'Regex' => Regex pattern to match

Check if the given pattern matches the value or not

str_match

env regex.str_match(attr: '& str', pattern: 'Regex')

Arguments

  • attr: '& str' => attribute to check for pattern
  • pattern: 'Regex' => Regex pattern to match

Check if the given pattern matches the value or not

You can also use match operator for this
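
A usage sketch (the pattern is written as a string and treated as a regex):

# true: the value ends with "land"
env str_match("smithland", "land$")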

str_replace

env regex.str_replace(
    attr: '& str',
    pattern: 'Regex',
    rep: '& str'
)

Arguments

  • attr: '& str' => original string
  • pattern: 'Regex' => Regex pattern to match
  • rep: '& str' => replacement string

Replace the occurrences of the given match

str_find

env regex.str_find(attr: '& str', pattern: 'Regex')

Arguments

  • attr: '& str' => attribute to check for pattern
  • pattern: 'Regex' => Regex pattern to match

Find the given pattern in the value

str_find_all

env regex.str_find_all(attr: '& str', pattern: 'Regex')

Arguments

  • attr: '& str' => attribute to check for pattern
  • pattern: 'Regex' => Regex pattern to match

Find all the matches of the given pattern in the value

str_count

env regex.str_count(attr: '& str', pattern: 'Regex')

Arguments

  • attr: '& str' => attribute to check for pattern
  • pattern: 'Regex' => Regex pattern to match

Count the number of matches of given pattern in the value

Env Functions

render

env render.render(
    template: '& Template',
    **keyval,
    safe: 'bool' = false
)

Arguments

  • template: '& Template' => String template to render
  • **keyval =>
  • safe: 'bool' = false => if render fails keep it as it is instead of exiting

Render the template based on the node attributes

For more details on the template system. Refer to the String Template section of the NADI book.

Node Functions

render

node render.render(template: '& Template', safe: 'bool' = false)

Arguments

  • template: '& Template' => String template to render
  • safe: 'bool' = false => if render fails keep it as it is instead of exiting

Render the template based on the node attributes

For more details on the template system. Refer to the String Template section of the NADI book.

Network Functions

render

network render.render(template: '& Template', safe: 'bool' = false)

Arguments

  • template: '& Template' => Path to the template file
  • safe: 'bool' = false => if render fails keep it as it is instead of exiting

Render from network attributes

render_nodes

network render.render_nodes(
    template: '& Template',
    safe: 'bool' = false,
    join: '& str' = "\n"
)

Arguments

  • template: '& Template' => Path to the template file
  • safe: 'bool' = false => if render fails keep it as it is instead of exiting
  • join: '& str' = "\n" => String to join the render results

Render each node of the network and combine to same variable

render_template

network render.render_template(template: 'PathBuf')

Arguments

  • template: 'PathBuf' => Path to the template file

Render a File template for the nodes in the whole network

Write the file with templates for input variables in the same way you write string templates. It's useful for markdown files, as the curly brace syntax won't be used for much else there; do be careful about that. The program will replace those templates with their values when you run it with inputs.

It'll repeat the same template for each node and render them. If you want only a portion of the file repeated for each node, enclose it between lines containing ---8<--- at both the start and the end. The lines containing the clip syntax will be ignored; ideally you can put them in comments.

You can also use ---include:<filename>[::line_range] syntax to include a file, the line_range syntax, if present, should be in the form of start[:increment]:end, you can exclude start or end to denote the line 1 or last line (e.g. :5 is 1:5, and 3: is from line 3 to the end)

Arguments

  • template: Path to the template file
  • outfile [Optional]: Path to save the template file, if none it’ll be printed in stdout

Node Functions

sr_count

node series.sr_count()

Arguments

Number of series in the node

sr_list

node series.sr_list()

Arguments

List all series in the node

sr_dtype

node series.sr_dtype(name: '& str', safe: 'bool' = false)

Arguments

  • name: '& str' => Name of the series
  • safe: 'bool' = false => Do not error if series doesn't exist

Type name of the series

sr_len

node series.sr_len(name: '& str', safe: 'bool' = false)

Arguments

  • name: '& str' => Name of the series
  • safe: 'bool' = false => Do not error if series doesn't exist

Length of the series

sr_mean

node series.sr_mean(name: '& str')

Arguments

  • name: '& str' => Name of the series

Mean of the series

sr_sum

node series.sr_sum(name: '& str')

Arguments

  • name: '& str' => Name of the series

Sum of the series

set_series

node series.set_series(
    name: '& str',
    value: 'Attribute',
    dtype: '& str'
)

Arguments

  • name: '& str' => Name of the series to save as
  • value: 'Attribute' => Argument to convert to series
  • dtype: '& str' => type

set the following series to the node

sr_to_array

node series.sr_to_array(name: '& str', safe: 'bool' = false)

Arguments

  • name: '& str' => Name of the series
  • safe: 'bool' = false => Do not error if series doesn't exist

Make an array from the series

Network Functions

save_csv

network table.save_csv(
    path: '& Path',
    fields: '& [String]',
    filter: 'Option < Vec < bool > >'
)

Arguments

  • path: '& Path' =>
  • fields: '& [String]' =>
  • filter: 'Option < Vec < bool > >' =>

Save CSV

table_to_markdown

network table.table_to_markdown(
    table: 'Option < PathBuf >',
    template: 'Option < String >',
    outfile: 'Option < PathBuf >',
    connections: 'Option < String >'
)

Arguments

  • table: 'Option < PathBuf >' => Path to the table file
  • template: 'Option < String >' => String template for table
  • outfile: 'Option < PathBuf >' => Path to the output file
  • connections: 'Option < String >' => Show connections column or not

Render the Table as a rendered markdown

Error

The function will error out if,

  • error reading the table file,
  • error parsing table template,
  • neither one of table file or table template is provided,
  • error while rendering markdown (caused by error on rendering cell values from templates)
  • error while writing to the output file

Node Functions

ts_count

node timeseries.ts_count()

Arguments

Number of timeseries in the node

ts_list

node timeseries.ts_list()

Arguments

List all timeseries in the node

ts_dtype

node timeseries.ts_dtype(name: '& str', safe: 'bool' = false)

Arguments

  • name: '& str' => Name of the timeseries
  • safe: 'bool' = false => Do not error if timeseries doesn't exist

Type name of the timeseries

ts_len

node timeseries.ts_len(name: '& str', safe: 'bool' = false)

Arguments

  • name: '& str' => Name of the timeseries
  • safe: 'bool' = false => Do not error if timeseries doesn't exist

Length of the timeseries

ts_print

node timeseries.ts_print(
    name: '& String',
    header: 'bool' = true,
    head: 'Option < i64 >'
)

Arguments

  • name: '& String' => name of the timeseries
  • header: 'bool' = true => show header
  • head: 'Option < i64 >' => number of head rows to show (all by default)

Print the given timeseries values in csv format

TODO

  • save to file instead of showing with outfile: Option<PathBuf>

Network Functions

ts_print_csv

network timeseries.ts_print_csv(
    name: 'String',
    head: 'Option < usize >',
    nodes: 'Option < HashSet < String > >'
)

Arguments

  • name: 'String' => Name of the timeseries to save
  • head: 'Option < usize >' => number of head rows to show (all by default)
  • nodes: 'Option < HashSet < String > >' => Include only these nodes (all by default)

Save timeseries from all nodes into a single csv file

TODO:

  • error/not on unequal length
  • error/not on no timeseries, etc.
  • output to file: PathBuf

series_csv

network timeseries.series_csv(
    filter: 'Vec < bool >',
    outfile: 'PathBuf',
    attrs: 'Vec < String >',
    series: 'Vec < String >'
)

Arguments

  • filter: 'Vec < bool >' =>
  • outfile: 'PathBuf' => Path to the output csv
  • attrs: 'Vec < String >' => list of attributes to write
  • series: 'Vec < String >' => list of series to write

Write the given nodes to csv with given attributes and series

Network Functions

set_nodesize_attrs

network visuals.set_nodesize_attrs(
    attrs: '& [f64]',
    minsize: 'f64' = 4.0,
    maxsize: 'f64' = 12.0
)

Arguments

  • attrs: '& [f64]' => Attribute values to use for size scaling
  • minsize: 'f64' = 4.0 => minimum size of the node
  • maxsize: 'f64' = 12.0 => maximum size of the node

Set the node size of the nodes based on the attribute value

svg_save

network visuals.svg_save(
    outfile: '& Path',
    label: 'Template' = Template { original: "{_NAME}", parts: [Var("_NAME", "")] },
    x_spacing: 'u64' = 25,
    y_spacing: 'u64' = 25,
    offset: 'u64' = 10,
    twidth: 'f64' = 9.0,
    width: 'u64' = 500,
    height: 'u64' = 240,
    bgcolor: 'Option < String >',
    page_width: 'Option < u64 >',
    page_height: 'Option < u64 >'
)

Arguments

  • outfile: '& Path' =>
  • label: 'Template' = Template { original: "{_NAME}", parts: [Var("_NAME", "")] } =>
  • x_spacing: 'u64' = 25 =>
  • y_spacing: 'u64' = 25 =>
  • offset: 'u64' = 10 =>
  • twidth: 'f64' = 9.0 => on average how many units each text character takes (used for auto-calculating the width of the page since we don't have Cairo)
  • width: 'u64' = 500 =>
  • height: 'u64' = 240 =>
  • bgcolor: 'Option < String >' =>
  • page_width: 'Option < u64 >' =>
  • page_height: 'Option < u64 >' =>

Exports the network as a svg

External Plugins

This section showcases functions from external plugins developed alongside the NADI project for various reasons.

The plugins listed here can be installed with following steps:

  • clone the repository of external plugins,
  • compile it locally with cargo,
  • move all generated dynamic libraries to the nadi plugin directory.

Node Functions

count_node_if

node dams.count_node_if(count_attr: '& str', cond: 'bool')

Arguments

  • count_attr: '& str' =>
  • cond: 'bool' =>

Count the number of nodes upstream at each point that satisfies a certain condition

min_year

node dams.min_year(yearattr: '& str', write_var: '& str' = "MIN_YEAR")

Arguments

  • yearattr: '& str' =>
  • write_var: '& str' = "MIN_YEAR" =>

Propagate the minimum year downstream

Node Functions

load_csv_fill

node datafill.load_csv_fill(
    name: 'String',
    file: 'Template',
    timefmt: 'String',
    columns: '(String, String)',
    method: 'DataFillMethod' = Linear,
    dtype: 'String' = "Floats"
)

Arguments

  • name: 'String' => Name of the timeseries
  • file: 'Template' => Template of the CSV file for the nodes
  • timefmt: 'String' => date time format; if you only have dates but the format string includes a time component, it will panic
  • columns: '(String, String)' => Names of date column and value column
  • method: 'DataFillMethod' = Linear => Method to use for data filling: forward/backward/linear
  • dtype: 'String' = "Floats" => DataType to load into timeseries

datafill_experiment

node datafill.datafill_experiment(
    name: 'String',
    file: 'Template',
    ratio_var: 'String',
    columns: 'Option < (String, String) >',
    experiments: 'usize' = 10,
    samples: 'usize' = 100
)

Arguments

  • name: 'String' => Prefix for name of the series to save metrics on
  • file: 'Template' => Template of the CSV file for the nodes
  • ratio_var: 'String' => Variable to use for inputratio/outputratio methods
  • columns: 'Option < (String, String) >' => Names of date column and value column
  • experiments: 'usize' = 10 => Number of experiments to run
  • samples: 'usize' = 100 => Number of samples on each experiment

Network Functions

save_experiments_csv

network datafill.save_experiments_csv(
    outfile: 'PathBuf',
    attrs: 'Vec < String >',
    prefix: 'String',
    errors: 'Vec < String >',
    filter: 'Option < Vec < bool > >'
)

Arguments

  • outfile: 'PathBuf' => Path to the output csv
  • attrs: 'Vec < String >' => list of attributes to write
  • prefix: 'String' => Prefix
  • errors: 'Vec < String >' => list of errors to write
  • filter: 'Option < Vec < bool > >' =>

Write the given nodes to csv with given attributes and experiment results

Node Functions

calc_ts_error

node errors.calc_ts_error(
    ts1: '& str',
    ts2: '& str',
    error: '& str' = "rmse"
)

Arguments

  • ts1: '& str' => Timeseries value to use as actual value
  • ts2: '& str' => Timeseries value to be used to calculate the error
  • error: '& str' = "rmse" => Error type, one of rmse/nrmse/abserr/nse

Calculate Error from two timeseries values in the node

It calculates the error between two timeseries values from the node
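
For example, a sketch that calculates the NSE between two node timeseries; the names observed and simulated are placeholders:

node calc_ts_error("observed", "simulated", error="nse")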

calc_ts_errors

node errors.calc_ts_errors(
    ts1: '& String',
    ts2: '& String',
    errors: '& [String]'
)

Arguments

  • ts1: '& String' => Timeseries value to use as actual value
  • ts2: '& String' => Timeseries value to be used to calculate the error
  • errors: '& [String]' => Error types to calculate, one of rmse/nrmse/abserr/nse

Calculate Error from two timeseries values in the node

It calculates the error between two timeseries values from the node.

Network Functions

calc_attr_error

network errors.calc_attr_error(
    attr1: 'String',
    attr2: 'String',
    error: 'String' = "rmse"
)

Arguments

  • attr1: 'String' => Attribute value to use as actual value
  • attr2: 'String' => Attribute value to be used to calculate the error
  • error: 'String' = "rmse" => Error type, one of rmse/nrmse/abserr/nse

Calculate Error from two attribute values in the network

It calculates the error using two attribute values from all the nodes.
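
For example, using the attribute names from the example node attribute file in the Data Structure section, the RMSE between the two 7Q10 estimates could be calculated like this:

network calc_attr_error("nat_7q10", "orsanco_7q10", error="rmse")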

Network Functions

fancy_print

network fancy_print.fancy_print()

Arguments

Fancy print a network

Network Functions

plot_timeseries

network gnuplot.plot_timeseries(
    csvfile: 'Template',
    datecol: '& str',
    datacol: '& str',
    outfile: '& Path',
    timefmt: '& str' = "%Y-%m-%d",
    config: '& GnuplotConfig' = GnuplotConfig { outfile: None, terminal: None, csv: false, preamble: "" },
    skip_missing: 'bool' = false
)

Arguments

  • csvfile: 'Template' =>
  • datecol: '& str' =>
  • datacol: '& str' =>
  • outfile: '& Path' =>
  • timefmt: '& str' = "%Y-%m-%d" =>
  • config: '& GnuplotConfig' = GnuplotConfig { outfile: None, terminal: None, csv: false, preamble: "" } =>
  • skip_missing: 'bool' = false =>

Generate a gnuplot file that plots the timeseries data in the network
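
A minimal sketch of a call; the file paths and column names here are placeholders, and the remaining arguments are left at their defaults:

# per-node CSV files with date and flow columns, gnuplot script as output
network plot_timeseries("data/{_NAME}.csv", "date", "flow", "output/streamflow.gp")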

This plugin uses cairo to draw graphics. So far it has only been tested on Linux, but it should also work on Mac. Compiling it on Windows might need additional steps that are not documented here.

Node Functions

attr_fraction_svg

node graphics.attr_fraction_svg(
    attr: '& str',
    outfile: '& Template',
    color: '& AttrColor',
    height: 'f64' = 80.0,
    width: 'f64' = 80.0,
    margin: 'f64' = 10.0
)

Arguments

  • attr: '& str' =>
  • outfile: '& Template' =>
  • color: '& AttrColor' =>
  • height: 'f64' = 80.0 =>
  • width: 'f64' = 80.0 =>
  • margin: 'f64' = 10.0 =>

Create an SVG file with the given network structure

Network Functions

csv_load_ts

network graphics.csv_load_ts(
    file: 'PathBuf',
    name: 'String',
    date_col: 'String' = "date",
    timefmt: 'String' = "%Y-%m-%d",
    data_type: 'String' = "Floats"
)

Arguments

  • file: 'PathBuf' => Input CSV file path to read (should have a column with node names for all nodes)
  • name: 'String' => Name of the timeseries
  • date_col: 'String' = "date" => Date column name
  • timefmt: 'String' = "%Y-%m-%d" => date-time format; if the data only has dates but the format string includes time, it will panic
  • data_type: 'String' = "Floats" => Type of the data to cast into

Load a timeseries from a single CSV file into the nodes of a network

csv_count_na

network graphics.csv_count_na(
    file: 'PathBuf',
    outattr: 'Option < String >',
    sort: 'bool' = false,
    skip_zero: 'bool' = false,
    head: 'Option < i64 >'
)

Arguments

  • file: 'PathBuf' => Input CSV file path to read (should have a column with node names for all nodes)
  • outattr: 'Option < String >' => Output attribute to save the NA count to; if empty, print to stdout
  • sort: 'bool' = false => show the nodes with larger gaps on top, only applicable while printing
  • skip_zero: 'bool' = false => skip nodes with zero missing values
  • head: 'Option < i64 >' => show at most this number of nodes

Count the number of NA values in a CSV file for each node in a network
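
For example, a sketch (with a placeholder file path) that saves the per-node count of missing values into an NA_COUNT attribute:

network csv_count_na("data/observed.csv", outattr="NA_COUNT")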

csv_data_blocks_svg

network graphics.csv_data_blocks_svg(
    csvfile: 'PathBuf',
    outfile: 'PathBuf',
    label: 'Template',
    date_col: 'String' = "date",
    config: 'NetworkPlotConfig' = NetworkPlotConfig { width: 250.0, height: 300.0, delta_x: 20.0, delta_y: 20.0, offset: 30.0, radius: 3.0, fontsize: 16.0, fontface: FontFace { inner: Shared { inner: 0x64a7356cd4c0 } } },
    blocks_width: 'f64' = 500.0,
    fit: 'bool' = false
)

Arguments

  • csvfile: 'PathBuf' =>
  • outfile: 'PathBuf' =>
  • label: 'Template' =>
  • date_col: 'String' = "date" =>
  • config: 'NetworkPlotConfig' = NetworkPlotConfig { width: 250.0, height: 300.0, delta_x: 20.0, delta_y: 20.0, offset: 30.0, radius: 3.0, fontsize: 16.0, fontface: FontFace { inner: Shared { inner: 0x64a7356cd4c0 } } } =>
  • blocks_width: 'f64' = 500.0 =>
  • fit: 'bool' = false =>

Draw the data blocks with arrows in timeline

export_svg

network graphics.export_svg(
    outfile: 'PathBuf',
    config: 'NetworkPlotConfig' = NetworkPlotConfig { width: 250.0, height: 300.0, delta_x: 20.0, delta_y: 20.0, offset: 30.0, radius: 3.0, fontsize: 16.0, fontface: FontFace { inner: Shared { inner: 0x64a7356cd4c0 } } },
    fit: 'bool' = false,
    label: 'Option < Template >',
    highlight: '& [usize]' = []
)

Arguments

  • outfile: 'PathBuf' =>
  • config: 'NetworkPlotConfig' = NetworkPlotConfig { width: 250.0, height: 300.0, delta_x: 20.0, delta_y: 20.0, offset: 30.0, radius: 3.0, fontsize: 16.0, fontface: FontFace { inner: Shared { inner: 0x64a7356cd4c0 } } } =>
  • fit: 'bool' = false =>
  • label: 'Option < Template >' =>
  • highlight: '& [usize]' = [] =>

Create an SVG file with the given network structure

table_to_svg

network graphics.table_to_svg(
    outfile: 'PathBuf',
    table: 'Option < PathBuf >',
    template: 'Option < String >',
    config: 'NetworkPlotConfig' = NetworkPlotConfig { width: 250.0, height: 300.0, delta_x: 20.0, delta_y: 20.0, offset: 30.0, radius: 3.0, fontsize: 16.0, fontface: FontFace { inner: Shared { inner: 0x64a7356cd4c0 } } },
    fit: 'bool' = false,
    highlight: '& [String]' = []
)

Arguments

  • outfile: 'PathBuf' =>
  • table: 'Option < PathBuf >' =>
  • template: 'Option < String >' =>
  • config: 'NetworkPlotConfig' = NetworkPlotConfig { width: 250.0, height: 300.0, delta_x: 20.0, delta_y: 20.0, offset: 30.0, radius: 3.0, fontsize: 16.0, fontface: FontFace { inner: Shared { inner: 0x64a7356cd4c0 } } } =>
  • fit: 'bool' = false =>
  • highlight: '& [String]' = [] =>

Create an SVG file with the given network structure

Network Functions

save_graphviz

network graphviz.save_graphviz(
    outfile: '& Path',
    name: '& str' = "network",
    global_attrs: '& str' = "",
    node_attr: 'Option < & Template >',
    edge_attr: 'Option < & Template >'
)

Arguments

  • outfile: '& Path' => Path to the output file
  • name: '& str' = "network" => Name of the graph
  • global_attrs: '& str' = "" =>
  • node_attr: 'Option < & Template >' =>
  • edge_attr: 'Option < & Template >' =>

Save the network as a graphviz file

Network Functions

export_map

network html.export_map(
    outfile: '& Path',
    template: 'Template',
    pagetitle: '& str' = "NADI Network",
    nodetitle: 'Template' = Template { original: "{_NAME}", parts: [Var("_NAME", "")] },
    connections: 'bool' = true
)

Arguments

  • outfile: '& Path' =>
  • template: 'Template' =>
  • pagetitle: '& str' = "NADI Network" =>
  • nodetitle: 'Template' = Template { original: "{_NAME}", parts: [Var("_NAME", "")] } =>
  • connections: 'bool' = true =>

Exports the network as an HTML map

This plugin uses GDAL to read/write GIS files. It can be compiled easily on Linux and Mac by installing GDAL as a prerequisite, but on Windows that step might be complicated. Please refer to the GDAL documentation for how to install it on Windows, or use the DLLs provided in the plugin repository.

Network Functions

gis_load_network

network gis.gis_load_network(
    file: 'PathBuf',
    source: 'String',
    destination: 'String',
    layer: 'Option < String >',
    ignore_null: 'bool' = false
)

Arguments

  • file: 'PathBuf' => GIS file to load (can be any format GDAL can understand)
  • source: 'String' => Field in the GIS file corresponding to the input node name
  • destination: 'String' => Field in the GIS file corresponding to the output node name
  • layer: 'Option < String >' => layer of the GIS file, first one picked by default
  • ignore_null: 'bool' = false => Ignore feature if it has fields with null value

Load network from a GIS file

Loads the network from a gis file containing the edges in fields
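
A sketch of loading a network from a hypothetical GeoPackage whose layer has start and end fields holding the source and destination node names:

network gis_load_network("data/streams.gpkg", "start", "end", layer="edges")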

gis_load_attrs

network gis.gis_load_attrs(
    file: 'PathBuf',
    node: 'String',
    layer: 'Option < String >',
    geometry: 'String' = "GEOM",
    ignore: 'String' = "",
    sanitize: 'bool' = true,
    err_no_node: 'bool' = false
)

Arguments

  • file: 'PathBuf' => GIS file to load (can be any format GDAL can understand)
  • node: 'String' => Field in the GIS file corresponding to node name
  • layer: 'Option < String >' => layer of the GIS file, first one picked by default
  • geometry: 'String' = "GEOM" => Attribute to save the GIS geometry in
  • ignore: 'String' = "" => Field names separated by comma, to ignore
  • sanitize: 'bool' = true => sanitize the name of the fields
  • err_no_node: 'bool' = false => Error if all nodes are not found in the GIS file

Load node attributes from a GIS file

The function reads a GIS file in any format (CSV, GPKG, SHP, JSON, etc) and loads their fields as attributes to the nodes.
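
For example, a sketch (with a placeholder file name) that loads station attributes keyed on a stn field:

network gis_load_attrs("data/stations.gpkg", "stn")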

gis_save_connections

network gis.gis_save_connections(
    file: 'PathBuf',
    geometry: 'String',
    driver: 'Option < String >',
    layer: 'String' = "network",
    filter: 'Option < Vec < bool > >'
)

Arguments

  • file: 'PathBuf' =>
  • geometry: 'String' =>
  • driver: 'Option < String >' =>
  • layer: 'String' = "network" =>
  • filter: 'Option < Vec < bool > >' =>

Save GIS file of the connections

gis_save_nodes

network gis.gis_save_nodes(
    file: 'PathBuf',
    geometry: 'String',
    attrs: 'HashMap < String, String >' = {},
    driver: 'Option < String >',
    layer: 'String' = "nodes",
    filter: 'Option < Vec < bool > >'
)

Arguments

  • file: 'PathBuf' =>
  • geometry: 'String' =>
  • attrs: 'HashMap < String, String >' = {} =>
  • driver: 'Option < String >' =>
  • layer: 'String' = "nodes" =>
  • filter: 'Option < Vec < bool > >' =>

Save GIS file of the nodes

Node Functions

print_node

node print_node.print_node()

Arguments

Print the node with its inputs and outputs

Network Functions

print_attr_csv

network print_node.print_attr_csv(*args)

Arguments

  • *args =>

Print the given attributes in csv format with first column with node name
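
For example, using the attribute names from the example node attribute file in the Data Structure section:

network print_attr_csv("stn", "nat_7q10", "lock")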

Node Functions

check_negative

node streamflow.check_negative(ts_name: '& str')

Arguments

  • ts_name: '& str' => Name of the timeseries with streamflow data

Check the given streamflow timeseries for negative values
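
For example, assuming the timeseries name from the example node attribute file in the Data Structure section:

node check_negative("streamflow")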

Data Structure

This section will describe the data structures associated with NADI system in brief.

For more accurate and up-to-date details on the data structures and their available methods, look at the API reference of nadi_core on docs.rs.

Node

A node is a point with attributes and timeseries. Nodes can be any points as long as they are on the network and connect to each other.

The attributes can be in any format. There is a special attribute type, timeseries, provided by the system to deal with timeseries data, but users are free to make their own attributes and write plugins + functions that work with those attributes.

Since attributes are loaded from a TOML file, simple attributes can be stored and parsed from strings, moderately complex ones can be saved as a combination of arrays and tables, and more complex ones can be saved in separate files whose paths are stored as node attributes.

Here is an example node attribute file. Here we have string, float, int and boolean values, as well as an example CSV timeseries:

stn="smithland"
nat_7q10=12335.94850131619
orsanco_7q10=16900
lock=true

[ts.csv]
streamflow = {path="data/smithland.csv", datetime="date", data="flow"}

Network

A network is a collection of nodes with connection information. The connection information is saved in the nodes themselves (the inputs and output variables), but it is assigned from the network.

The nadi system (lit. river system) is designed for the connections between points along a river. Out of the different types of river networks possible, it can only handle a non-branching tributary system, where each point can have zero to multiple inputs but only one output. Overall the system should have a single output point. There can be branches in the river itself in the physical sense, as long as they converge before the next point of interest. There cannot be node points that have more than one path to reach another node in the representative system.

Network files are simple text files with each edge on one line. Node names can be words with alphanumeric characters plus the additional character _, similar to how rust identifiers work. Node names can also be quoted strings; in that case any characters are supported inside the quotes.

Here is an example network file,

cannelton -> newburgh
newburgh -> evansville
evansville -> "jt-myers"
# comments are supported
"jt-myers" -> "old-shawneetown"
"old-shawneetown" -> golconda
markland -> mcalpine
golconda -> smithland

Drawing it out:

network load_file("./data/mississippi.net")
network svg_save(
   "./output/mississippi.svg",
	label="[{INDEX}] {_NAME:repl(-, ):case(title)}"
)
network clip()
# the link path needs to be relative to this file
network echo("../output/mississippi.svg")

Results:

There are also plans to support importing connections from the DOT format (graphviz package).

A network file without any connections can be written with one node per line, but such networks can only call sequential functions, not input-dependent ones.

Depending on the use case, NADI can probably be applied to other systems that are similar to a river system. Even without the connection information, functions that are independent of each other can be run in sequential order.

Timeseries

A timeseries is a series of values at a regular interval. It can hold integers, floats, booleans, strings, arrays, and tables.

For timeseries that are not in a format NADI can understand, the path to the data can be provided as a node attribute, and plugin functions can be written to use that path to load the timeseries for the node.

String Templates

The templating system comes from an external library developed by the author. The library can be modified if there are specific needs for this project.

The template system is feature rich, allowing formatting, simple string transformations, and arithmetic calculations based on the variables (node attributes in this case). This can be used to generate file paths and similar strings based on node attributes, as well as to format the cell values for exported tables, figures, etc.

The template library is also available for Rust, C and C++, but all the interactions with the templates will be done through the nadi interface, so that is not required.

Documentation on the template system can be found on the string_template_plus library page.

A brief explanation of the template system is given below.

Template Parts

Templates have variables, time formats, expressions, and commands (disabled by default);

Hi, my name is {name}, my address is {address?"N/A"}.
Current time is {%H} hour  {%M} minutes.

Results (with: name=John; address=123 Road, USA):

Hi, my name is John, my address is 123 Road, USA.
Current time is 16 hour  31 minutes.

Optional Variables

Variables can be chained in an optional way, so the first one that’s found will be used (e.g. {nickname?name} will render nickname if it’s present, else name);

Hi, I am {nickname?name}, my address is {address?"N/A"}.

Results (with: name=John; nickname=J; address=123 Road, USA):

Hi, I am J, my address is 123 Road, USA.

String Literal

String literals (quoted strings) used in place of variables are rendered directly; e.g. {address?"N/A"} will render N/A if address is not present;

Hi, I am {nickname?name}, my address is {address?"N/A"}.

Results (with: name=John):

Hi, I am John, my address is N/A.

Transformers

Variables can have optional transformers which transform the string based on their rules, (e.g. float transformer will truncate the float, upcase will make the string UPPERCASE, etc.);

Hi, I am {nickname?name:case(up)}, my address is {address?"N/A"}.

Results (with: name=Joe):

Hi, I am JOE, my address is N/A.

Time formats

Time formats are replaced by the formatted current time (e.g. {%Y} becomes the current year);

Today is {%B %d} of the year {%Y}.

Results (with: name=John):

Today is June 04 of the year 2025.

Lisp Expressions

Expressions are lisp expressions that are evaluated, and their results are used. The lisp expressions can also access any variables and do any supported operations (e.g. (+ 1 1) in lisp will become 2);

guess my age(x) if: (x + 21) * 4 = =(* (+ (st+num 'age) 21) 4).

Results (with: age=20):

guess my age(x) if: (x + 21) * 4 = 164.

NADI Specific options

Besides the points above, and specific to the nadi system, any node template will have all the variables from node attributes available as strings. For string variables, their name can be used to access the quoted string form, while their name with an underscore prefix gives the unquoted raw string (e.g. if we have the attribute name="smithland", then {name} will render to "smithland", while {_name} will render to smithland).

The nadi system uses templates in a variety of places, and plugin functions also sometimes take templates for file paths, strings, and such things. Look at the help string of the function to see if it takes a String or a Template type.

For example render is a function that takes a template and prints it after rendering it for each node.

network load_file("./data/mississippi.net")
node[ohio] set_attrs(river="the Ohio River", streamflow=45334.12424343)
node[ohio,red] render(
	"(=(+ 1 (st+num 'INDEX))th node) {_NAME:case(title)}
	River Flow = {streamflow:calc(/10000):f(3)?\"NA\"} x 10^4"
)

Results:

{
  ohio = "(6th node) Ohio\n\tRiver Flow = 4.533 x 10^4",
  red = "(5th node) Red\n\tRiver Flow = NA x 10^4"
}

As seen in the above example, you can render variables, transform them, and use basic calculations.

Or you can use lisp syntax to do more complex calculations. Refer to the Nadi Extension Capabilities section for more info on how to use lisp in string templates.

network load_file("./data/mississippi.net")
node[ohio] set_attrs(river="the Ohio River", streamflow=45334.12424343)
node[ohio] render(
	"{_river:case(title)} Streamflow
	from lisp = {=(/ (st+num 'streamflow) 1000):f(2)} x 10^3 cfs"
)

Results:

{
  ohio = "The Ohio River Streamflow\n\tfrom lisp = 45.33 x 10^3 cfs"
}

Some Complex Examples

Optional variables and a command; note that commands can have variables inside them:

hi there, this {is?a?"test"} for $(echo a simple case {that?} {might} be "possible")

Results (with: might=may):

hi there, this test for $(echo a simple case  may be possible)

Optional variables with transformers inside command.

Hi {like?a?"test"} for $(this does {work:case(up)} now) (yay)

Results (with: work=Fantastic Job):

Hi test for $(this does FANTASTIC JOB now) (yay)

If you need to use { and } in a template, you can escape them. Following template shows how LaTeX commands can be generated from templates.

more {formatting?} options on {%F} and
\\latex\{command\}\{with {variable}\}, should work.

Results (with: command=Error;variable=Var):

more  options on 2025-06-04 and
\latex{command}{with Var}, should work.

This just combined a lot of different things from above:

let's try {every:f(2)?and?"everything"}
for $(a complex case {that?%F?} {might?be?not?found} be "possible")

see $(some-command --flag "and the value" {problem})
=(+ 1 2 (st+num 'hithere) (st+num "otherhi"))
{otherhi?=(1+ pi):f(4)}

*Error*:

None of the variables ["might", "be", "not", "found"] found

This shows the error for the first template part that errors out, even though {problem} would also error later; so while fixing problems in string templates, you might have to give it multiple tries.

Advanced String Template with LISP

Nadi template strings are useful when you want to represent a node-specific string or file path in a network. They are not as advanced as formatted strings in Python, but they can be used for complex situations with the current functionality.

The most important extension capability of the string template is the embedded lisp system.

As we know, templates can render variables, and have some capacity of transforming them:

{name:case(title):repl(-, )} River Streamflow = {streamflow} cfs

Results (with: name=Ohio; streamflow=12000):

Ohio River Streamflow = 12000 cfs

But for numerical operation, the transformers capabilities are limited as they are made for strings.

With lisp, we can add more logic to our templates.

{name:case(title):repl(-, )} River Streamflow is =(
	if (> (st+num 'streamflow) 10000)
	'Higher 'Lower
) than the threshold of 10^5 cfs.

Results (with: name=Ohio; streamflow=12000):

Ohio River Streamflow is Higher than the threshold of 10^5 cfs.

The available lisp functions are also limited, but the syntax itself gives us better arithmetic and logical calculations.

Note

As the template string can get complicated, and the parsing is done through regex, it is not perfect. If you come across any parsing problems, please raise an issue at the string_template_plus GitHub repo.

Commands

Note that running commands within the templates is disabled for now.

echo today=$(date +%Y-%m-%d) {%Y-%m-%d}

Results (with: ):

echo today=$(date +%Y-%m-%d) 2025-06-04

But if you are writing a command template to run in bash, it will still be executed correctly, since the syntax is similar.

network command("echo today=$(date +%Y-%m-%d) {%Y-%m-%d}")

Results:

$ echo today=$(date +%Y-%m-%d) 2025-06-04

Here, although the $(date +%Y-%m-%d) portion was not rendered during the template rendering process, the command was still valid and was executed.

Tables

Tables are data types with column headers and value templates. Tables can be rendered/exported to CSV, JSON, and LaTeX formats; other formats can be added later. Although tables are not exposed to the plugin system, functions to export different table formats can be written as network functions.

A sample table file, with a left-aligned name column for the stations in title case, along with center- and right-aligned numeric columns:

network load_file("./data/mississippi.net")
<Name => {_NAME:repl(-, ):case(title)}
^Ind => =(+ (st+num 'INDEX) 1)
>Order => {ORDER}
^Level => {LEVEL}
# something is wrong with the set_level algorithm
# Ohio - tenessee should be level 1, and missouri/yellowstone should be 0

Results:

| Name              | Ind | Order | Level |
|:------------------|:---:|------:|:-----:|
| Lower Mississippi |  1  |     7 |   0   |
| Upper Mississippi |  2  |     1 |   1   |
| Missouri          |  3  |     1 |   1   |
| Arkansas          |  4  |     1 |   1   |
| Red               |  5  |     1 |   1   |
| Ohio              |  6  |     2 |   0   |
| Tenessee          |  7  |     1 |   0   |

Here the part before => is the column header and the part after is the template. A < or > at the beginning of the line makes the column left- or right-aligned; columns are center-aligned (^) by default.

Exporting the table as SVG instead of markdown gives us a better network diagram.

network load_file("./data/mississippi.net")
network echo("../output/example-table2.svg")
<Name => {_NAME:repl(-, ):case(title)}
^Ind => =(+ (st+num 'INDEX) 1)
>Order => {ORDER}
^Level => {LEVEL}

*Error*:

network function: "table_to_svg" not found

An SVG table can also be generated from a table file, using the task system like this:

network load_file("./data/mississippi.net")
network table_to_svg(
	table = "./data/sample.table",
	# either table = "path/to/table", or template = "table template"
	outfile = "./output/example-table.svg",
	config = {fontsize = 16, delta_y = 20, fontface="Noto Serif"}
)
network clip()
# the link path needs to be relative to this file
network echo("../output/example-table.svg")

*Error*:

network function: "table_to_svg" not found

File Templates

File templates are templates that use string templates, but they are a whole file that can be used to generate rendered text files.

File templates also have sections which can be repeated for different nodes, with corresponding syntax.

Following template will render a markdown table with headers and all the name and index of the nodes.

| Node | Index |
|------|-------|
<!-- ---8<--- -->
| {_NAME} | {INDEX} |
<!-- ---8<--- -->

Tasks

A task is a function call that the system performs. The function call can be a node function or a network function. The function can have arguments and keyword arguments that determine its behavior. Node functions are called on one node at a time, while network functions are called with the whole network at once.

Currently tasks are performed one after another. The functions that any task can use can be internal functions provided by the library or the external functions provided by the plugins.

A sample tasks file is shown below:

node print_attrs()
network save_graphviz("/tmp/test.gv", offset=1.3, url="{_NAME}")
node savedss(
	"natural",
	"test.dss",
	"/OHIO-RIVER/{_NAME}/01Jan1994/01Jan2012/1Day/NATURAL/"
	)
node check_sf("sf")
node.inputsfirst route_sf("observed")
node render("Node {NAME} at index {INDEX}")

Here each line corresponds to one task, and if it is a node task, it will be called for each node (in sequential order by default). The line with node.inputsfirst will call that function on the input nodes before the current node; such functions can only be called for a network with an output node.

Please note that although the strings in the examples are highlighted as if they were string templates (for readability), they are just normal strings that functions take as inputs. Whether they are used as templates or not depends on the individual function; refer to its help to see if it takes a Template or String type.

Node Functions

Node functions are functions that take a node and the function context, and do some operations on it. They take a mutable reference to the node, hence they can access all node attributes, inputs, outputs, their attributes, and timeseries.

Node functions can be run from the system for all the nodes in the network in different orders.

Currently the task system only supports running node functions for all nodes in the following 6 ways,

  • Sequential order,
  • Reverse order,
  • Run input nodes before the current node (recursively),
  • Run output node before the current node (recursively),
  • Run a list of nodes, and
  • Run on a path between two nodes (inclusive).

Depending on the way a function works, it might be required to run in a particular order. For example, a function that counts the number of dams upstream of each point might have to be run inputs first, so that you can accumulate the count as you move downstream, as sketched below.
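
As a sketch using the count_node_if function from the dams plugin documented earlier: run inputs first, it accumulates a count down the network. Here the condition is simply true (counting all upstream nodes), and the attribute name is a placeholder:

# count upstream nodes at each point, running input nodes first
node.inputsfirst count_node_if("UPSTREAM_COUNT", true)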

Network Functions

Network functions are functions that take the network as a mutable reference and run on it.

Some examples of network functions:

  • List all the nodes with their inputs/outputs,
  • Check if any node has some attribute larger than its output node's,
  • Export the node attributes as a single CSV file,
  • Export the nodes to a LaTeX file using TikZ to draw the network,
  • Calculate RMSE, MSE, etc. errors between two attribute values for all nodes,
  • Generate an interactive HTML/PDF with network information and some other template, etc.

Developer Notes

This section contains my notes as I develop the NADI system. Kind of like a dev blog.

The software package will consist of multiple components. It is planned to be designed in such a way that users can add their own functionality and extend it with ease.

In keeping with Free and Open Source Software (FOSS) principles, the plugin system will make it easy to extend the software's functionality and share it between users, as well as to develop in-house functionality for niche use cases.

Motivation

As hydrologists, we often deal with data related to points in a river. Since most analyses require doing the same things at multiple points, the initial data cleaning phase can be automated.

We spend the beginning phase of every project preparing the data for analysis. Combined with the time spent visualizing the data, it is a significant chunk of our time.

Data visualization influences decision making by stakeholders, and can save time by making any problems obvious from the very beginning. For example, showing the quality of data (continuity for time series), interactive plots to compare data in different locations/formats, etc. can help people understand their data better.

Besides plots, the example below shows how simply adding a column with a connection visual can immediately make it easier to understand the relationships between the data points in a river. Without it, people need to be familiar with the names of the data points and their locations, or consult a separate image/map to understand the relationships.

Table with Connection Information

The inspiration for making this software package comes from many years of struggling with doing the same things again and again in projects like these, and the motivation to make something generic that can be used for a plethora of projects in the future.

Why Rust?

Rust1 is an open source programming language that claims to be fast and memory-efficient, able to power performance-critical services. Rust is also able to integrate with other programming languages.

Rust provides a memory-safe way to do modern programming. The White House published a recent press release2 about the need for memory safe languages in future software. The report3 has the following sentence about the Rust language.

At this time, the most widely used languages that meet all three properties are C and C++, which are not memory safe programming languages. Rust, one example of a memory safe programming language, has the three requisite properties above, but has not yet been proven in space systems.

The results of the Stack Overflow survey4 show that Rust has been a top choice for developers who want to use a new technology for the past 8 years, and the analysis also shows that Rust is a language that generates a desire to use it once you get to know it.


  1. https://www.rust-lang.org/

  2. https://www.whitehouse.gov/oncd/briefing-room/2024/02/26/press-release-technical-report/

  3. https://www.whitehouse.gov/wp-content/uploads/2024/02/Final-ONCD-Technical-Report.pdf

  4. https://survey.stackoverflow.co/2023/#technology-admired-and-desired

Writing this Book

I’m used to emacs’s org-mode, where you can evaluate code, show the output, and all those things. Like markdown on steroids.

mdbook seems to have some of that functionality as well. Though I think emacs’s extension through elisp is a lot more flexible and easier to extend, mdbook supporting custom preprocessors and renderers means we can extend it too.

In the process of writing this book, I made the following things.

Syntax Highlight for NADI specific syntax

mdbook uses highlight.js to syntax-highlight its code blocks. Since the nadi system has a lot of its own syntax for string templates, the task system, the table system, the network system, etc., I wanted syntax highlighting for those things. The attribute files are a subset of the TOML format, so we already have highlighting for them; everything else needed custom code.

Following the comments in this github issue led me to a workaround for custom syntax highlighting. I don’t know for how long it will work, but it works well for now.

Basically I am using the custom JS feature of mdbook like:

[output.html]
additional-js = ["theme/syntax-highlight.js"]

This inserts the custom highlighting code. For example, the syntax highlighting for the network text is added like this:

// network connections comments and node -> node syntax
hljs.registerLanguage("network", (hljs) => ({
    name: "Network",
    aliases: [ 'net' ],
    contains: [
	hljs.QUOTE_STRING_MODE,
	hljs.HASH_COMMENT_MODE,
	{
	    scope: "meta",
	    begin: '->',
	    className:"built_in",
	},
    ]
}));

The syntax for network is really simple, for others (task, table, string-template, etc) refer to the theme/syntax-highlight.js file in the repository for this book.

After registering all the languages, you re-initialize the highlight.js:

hljs.initHighlightingOnLoad();

mdbook-nadi preprocessor

Instead of just showing the syntax of how to use the task system, I wanted to also show readers the output of the examples. So I started by writing some elisp code to run the selected text and copy the output to the clipboard so that I could paste it into an output block. It was really easy in emacs.

The following code takes the selection, saves it in a temporary tasks file, runs it, and then puts the output in the clipboard so I can paste it manually.

(defun nadi-run-tasks (BEG END)
  "Run the selected region as a NADI tasks file and copy the output."
  (interactive "r")
  (let ((tasks-file (make-temp-file "tasks-")))
    (write-region BEG END tasks-file)
    ;; run the tasks file with nadi and capture its stdout
    (let ((output (shell-command-to-string (format "nadi %s" tasks-file))))
      (message output)
      (kill-new output)
      (delete-file tasks-file))))

But this is a manual process with only a bit of automation. So I wanted a better solution, and that’s where the mdbook preprocessor comes in.

With the mdbook-nadi preprocessor, I can extract the code blocks, run them, and insert the output just below the code block.

Once I had a working prototype for this, I also started adding support for rendering string templates, and generating tables along with the task system.

String templates

For string templates, write the templates in stp blocks like the one below, which will get syntax highlighting.

Hi my name is {name}.

If you add run into it, it’ll run the template with any key=val pairs provided after run.

Basically writing the following in the mdbook markdown:

```stp run name=John
Hi my name is {name}.
```

Will become:


Hi my name is {name}.

Results (with: name=John):

Hi my name is John.

Tasks

For tasks, similarly write a block with task as the language. You can use the ! character at the start of a line to hide it from view; use it for essential code that is needed for the results but is not the current focus. When you add run, it’ll run and show the output.

```task run
!network load_file("data/mississippi.net")
node render("Node {NAME}")
```

network load_file("data/mississippi.net")
node render("Node {NAME}")

Results:

{
  lower-mississippi = "Node \"lower-mississippi\"",
  upper-mississippi = "Node \"upper-mississippi\"",
  missouri = "Node \"missouri\"",
  arkansas = "Node \"arkansas\"",
  red = "Node \"red\"",
  ohio = "Node \"ohio\"",
  tenessee = "Node \"tenessee\""
}

Tables

The implementation for tables is a little weird right now, but it works; we need to be able to load the network and perform actions before showing a table.

So the current implementation takes the hidden lines (marked with !) and runs them through the task system, with the additional task of rendering the table at the end.

Example:

```table run markdown
!network load_file("./data/mississippi.net")
<Name => {_NAME:repl(-, ):case(title)}
^Ind => =(+ (st+num 'INDEX) 1)
>Order => {ORDER}
```

Becomes:


network load_file("./data/mississippi.net")
<Name => {_NAME:repl(-, ):case(title)}
^Ind => =(+ (st+num 'INDEX) 1)
>Order => {ORDER}

Results:

| Name              | Ind | Order |
|:------------------|:---:|------:|
| Lower Mississippi |  1  |     7 |
| Upper Mississippi |  2  |     1 |
| Missouri          |  3  |     1 |
| Arkansas          |  4  |     1 |
| Red               |  5  |     1 |
| Ohio              |  6  |     2 |
| Tenessee          |  7  |     1 |

I’d like to refine this further.

Tasks can be used to generate markdown in the same way as the tables can:

For example task run of this:

network load_file("./data/mississippi.net")
network table_to_markdown(template="
<Name => {_NAME:repl(-, ):case(title)}
^Ind => =(+ (st+num 'INDEX) 1)
>Order => {ORDER}
")

Results:

| Name              | Ind | Order |
|:------------------|:---:|------:|
| Lower Mississippi |  1  |     7 |
| Upper Mississippi |  2  |     1 |
| Missouri          |  3  |     1 |
| Arkansas          |  4  |     1 |
| Red               |  5  |     1 |
| Ohio              |  6  |     2 |
| Tenessee          |  7  |     1 |

If you do task run markdown then:

network load_file("./data/mississippi.net")
network table_to_markdown(template="
<Name => {_NAME:repl(-, ):case(title)}
^Ind => =(+ (st+num 'INDEX) 1)
>Order => {ORDER}
")

Results:

| Name              | Ind | Order |
|:------------------|:---:|------:|
| Lower Mississippi |  1  |     7 |
| Upper Mississippi |  2  |     1 |
| Missouri          |  3  |     1 |
| Arkansas          |  4  |     1 |
| Red               |  5  |     1 |
| Ohio              |  6  |     2 |
| Tenessee          |  7  |     1 |

Which means it can be used for other things:

network load_file("./data/mississippi.net");
network echo("**Details about the Nodes:**")
network echo(render_nodes("
=(+ (st+num 'INDEX) 1). {_NAME:repl(-, ):case(title)} River
"))

Results:
Details about the Nodes:

  1. Lower Mississippi River

  2. Upper Mississippi River

  3. Missouri River

  4. Arkansas River

  5. Red River

  6. Ohio River

  7. Tenessee River

You can also use the same method to insert images like this, at the end of your tasks, so that the image generated by the tasks can be inserted here.

# do some tasks
network echo("Some other output form your tasks")
network clip()
network echo("../images/ohio-low.svg")

Results:

Optimization Algorithms

We can have input variables to change and output variables to optimize, but how do we decide what function to run to calculate the output variable?

One simple idea is to take a command template to run. We change the input variables, run the command for each node or the network, and that command updates the output variable that we optimize for.

We might need an option to call other functions in this case; then maybe we can just pass the name of the function.

A more complex idea could be to add support for loop syntax in the task system.

Interactive Plots

An experiment using the cairo graphics library shows that a PDF can be produced directly from the network information without using LaTeX as an intermediate. This functionality, although not as complete as the one in the example, has been exposed as an internal network function for now. Further functionality related to this idea could be embedding network information in simple plots, or generating the whole plot alongside the network information.

It might be a good idea to make several functions that can export the interactive plots to LaTeX, PDF, PNG, SVG, HTML, etc. separately, instead of a single format.

LaTeX and HTML will be easier due to their text nature; for the others I might have to spend some more time experimenting with cairo.