<h1 align="center"> BioNix </h1>

BioNix is a tool for reproducible bioinformatics that unifies workflow
engines, package managers, and containers. It is implemented as a
lightweight library on top of the [Nix](https://nixos.org/nix/)
deployment system.

BioNix is currently a work in progress, so documentation is sparse.
Please get in touch for more information, for help, or to contribute
(see the bottom of this page).

## Installation

BioNix requires no dependencies beyond [Nix](http://nixos.org/nix),
which can be installed with:
```sh
curl https://nixos.org/nix/install | sh
```
If you do not have root access, a variety of [rootless
install](https://nixos.wiki/wiki/Nix_Installation_Guide#Installing_without_root_permissions)
options are available.

API docs can be generated by executing `nix build` in the `doc`
directory and viewing `result/OEBPS/index.html`.

## Examples

Several examples are available in `./examples/`. The main example is
in `./examples/default.nix` and can be built by running `nix build` in
`./examples/`. This sample pipeline aligns reads using
[`bwa mem`](https://github.com/lh3/bwa), preprocesses the alignments
using [`samtools`](http://www.htslib.org/), and calls variants using
[`platypus`](https://github.com/andyrimmer/Platypus).

See the documentation in `./examples/README.md` for more detail about
this pipeline and the other examples. A sketch of how such a pipeline
composes is given after the list below.

- The pipeline itself is specified in `examples/call.nix` and
  `examples/default.nix`.
- The BioNix wrapper to run `platypus` is in
  `tools/platypus-callVariants.nix`.
- The Nix expression for the `platypus` software itself can be found in
  [nixpkgs](https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/science/biology/platypus/default.nix).
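
To give a flavour of how these pieces fit together, the sketch below
shows the general shape of such a pipeline: each wrapper is a Nix
function that takes configuration options and then the output of the
previous stage. The attribute names (`bwa.align`, `samtools.sort`,
`platypus.callVariants`) and their arguments here are assumptions made
for illustration; `examples/call.nix` contains the real definitions.

```nix
# Illustrative sketch only: the wrapper names and arguments below are
# assumptions; see examples/call.nix and tools/ for the real interfaces.
{ bionix }:

with bionix;

let
  ref = ./ref.fa;             # hypothetical reference genome
  sample = {
    input1 = ./sample_1.fq;   # hypothetical paired-end reads
    input2 = ./sample_2.fq;
  };
in
# Align the reads, sort the alignment, then call variants on the result.
platypus.callVariants { }
  [ (samtools.sort { } (bwa.align { inherit ref; } sample)) ]
```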
  
## Constructing workflows

Writing workflows requires some familiarity with the Nix
programming language and deployment system. Good introductions can be
found [here](https://learnxinyminutes.com/docs/nix/) and
[here](https://ebzzry.io/en/nix/).

The recommended way to learn how to construct workflows is to study
the provided examples. Thanks to the flexibility of Nix, workflows can
be structured in different ways to suit their intended purpose, and the
examples illustrate some of the ways one might approach various
problems.

For constructing tool wrappers, take a look at the existing wrappers in
the `./tools/` directory. The wrappers for BWA are a good starting
point.
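
As a rough picture of their shape, a wrapper is typically a function
from tool configuration to a function from upstream inputs to a
derivation. The sketch below is schematic only: the `stage` helper and
the attribute and argument names are assumptions made for illustration,
so consult the wrappers under `./tools/` for the actual interface.

```nix
# Schematic sketch of a tool wrapper; `bionix.stage`, `ref`, `flags`, and
# the input attribute names are assumptions made for illustration.
{ bionix, bwa, samtools }:

{ ref, flags ? "" }:   # tool configuration
input:                 # output of the upstream stage (e.g. FASTQ files)

bionix.stage {
  name = "bwa-mem";
  buildInputs = [ bwa samtools ];
  buildCommand = ''
    bwa mem ${flags} ${ref} ${input.input1} ${input.input2} \
      | samtools view -b - > $out
  '';
}
```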

## HPC execution

BioNix supports submitting jobs to computing queues instead of building
them directly with the Nix build engine. The two supported engines are
Slurm and PBS, represented by the `slurm` and `qsub` entries in the root
BioNix tree; each is a function that takes an attribute set of default
parameters and returns a new tree of tools. Simply use tools out of
these trees to submit jobs, and specify resource requirements as
ordinary configuration options to the tools (a sketch follows the
parameter list below).

The following resource parameters can be specified:

- *ppn*: The number of cores to request;
- *mem*: The amount of memory to request (GB);
- *walltime*: A string defining the maximum walltime.
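
For example, a minimal sketch of requesting resources for a single
stage might look like the following; the `slurm` entry and the `ppn`,
`mem`, and `walltime` options are as described above, while `bwa.align`
and its arguments are hypothetical placeholders.

```nix
# Sketch only: `bwa.align` and its inputs are placeholders; the resource
# options (ppn, mem, walltime) are passed like any other configuration.
{ bionix }:

let
  hpc = bionix.slurm { };   # a tree of tools whose stages submit to Slurm
in
hpc.bwa.align {
  ref = ./ref.fa;           # hypothetical tool configuration
  ppn = 8;                  # request 8 cores
  mem = 32;                 # request 32 GB of memory
  walltime = "12:00:00";    # maximum walltime
} { input1 = ./reads_1.fq; input2 = ./reads_2.fq; }
```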

Because job submission relies on side effects, sandboxed builds cannot
be used and must be disabled (`--option sandbox false` with `nix-build`,
or `--no-sandbox` with `nix build`).

### Slurm specifics

Slurm jobs are submitted by executing the `salloc` binary on the
cluster. By default this is assumed to be `/usr/bin/salloc`; if that is
not the case on your cluster, specify the path to `salloc` via the
`salloc` parameter.
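
For instance, a minimal sketch of pointing the wrapper at a
non-default location (the path below is only an example):

```nix
{ bionix }:

# A tree of tools whose stages are submitted via a non-default salloc.
# The path below is hypothetical; use the location on your cluster.
bionix.slurm { salloc = "/opt/slurm/bin/salloc"; }
```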

When launching the build, it is important that the `TMPDIR`
environment variable points to a location on shared storage (i.e., one
available from all nodes), as this location is used for temporary files
while stages execute.

### PBS specifics

The PBS wrapper is considerably more complicated because initiating
interactive processes is not as reliable as Slurm's `salloc`.
Consequently, jobs are submitted via non-interactive queue submissions,
and the queue is polled to determine when each submitted job has
completed.

The path to the PBS executables (i.e., `qsub` and `qstat`) must be
given in the `qsubPath` attribute. Furthermore, a temporary directory
that is shared across all nodes must be specified in `tmpDir`.
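
A minimal sketch of configuring the PBS tree with both attributes (the
paths shown are only examples):

```nix
{ bionix }:

# A tree of tools that submits stages through PBS. Both paths are
# examples only; use the locations appropriate for your cluster.
bionix.qsub {
  qsubPath = "/opt/pbs/bin";        # path to the qsub and qstat executables
  tmpDir = "/shared/scratch/tmp";   # temporary directory visible to all nodes
}
```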

## Distributed execution

Nix supports distributing builds across a collection of remote
machines. See the
[manual](https://nixos.org/nix/manual/#chap-distributed-builds) and
[wiki](https://nixos.wiki/wiki/Distributed_build) for more information.

## Getting help and contributing

For general questions, issues, and
[contributing](https://git-send-email.io), please
[email](mailto:bionix@cua0.org) or [subscribe
to](mailto:bionix+subscribe@cua0.org) our mailing list. You may also
chat with us at [#bionix:cua0.org](http://matrix.to/#/#bionix:cua0.org).