each step of the way, the agent reassesses its plan of action and self-corrects, allowing for informed decision-making.
- Jul 2025
-
www.ibm.com
-
-
What is an agent? Read more in detail.
-
-
genomebiology.biomedcentral.com
-
scaffold information generated by Bambus 2 allows us to integrate multiple sources of information and obtain more accurate annotations of the resulting assembly
-
Unlike MetAMOS, SmashCommunity only supports a small set of assembly and analysis tools
-
simply links together the individual analysis tools
-
provide additional functionality made possible by the integration of different analyses
Need to understand details of this: What specific integration does MetAMOS really do?
-
compare its performance to other software tools
-
'Assembly mode', which requires larger amounts of RAM and starts from raw read data
-
We intended to encourage users to tailor MetAMOS to the biological questions they want to answer, not the inverse
-
customize their own pipelines by combining the modules they deem necessary
-
Ruffus [29]) to track inputs/outputs/states and checkpoint while running through computationally intensive analyses.
-
INSTALL script. This will automatically configure the pipeline to run within the user's environment and also fetch all required data
data => databases?
-
initPipeline is mainly involved with creating a project environment, and describing input files
-
runPipeline takes a project directory as the input and will initiate execution of the entire MetAMOS pipeline
-
MetAMOS pipeline ends by generating an interactive, HTML summary
-
assembly statistics and estimated abundance information
-
-
link.springer.com
-
A multi-centre study evaluating the use of nanopore 16S for clinical microbial detection using shared mock samples (looking for consistency, LODs, etc.?)
This study does nanopore on 16S. Compares two bioinformatic pipelines and uses Emu
Todd: Emu holding its own against a commercial tool, fewer species classified (likely DB issue) but better precision wrt discriminating species
-
Only shortcoming is that the Emu pipeline (GMS-16S) classified fewer species
-
Todd says this is likely a database issue.
-
Can be fixed when implementing #SOMAteM?
-
Check methods for details on the Emu pipeline: “Bioinformatic data analysis and identification of pathogen”
-
Evaluation of two bioinformatic pipelines: 1928-16S and GMS-16S
The performance of two separate bioinformatic pipelines were compared: the commercial 16S pipeline developed by 1928 Diagnostics (1928-16S) and the gms_16S bioinformatics analysis pipeline that uses the EMU classification tool (GMS-16S). Overall, 1928-16S identified a higher number of species in comparison to GMS-16S (Supplementary FigS2, Supplementary file 2 and 3). However, significant differences were observed at species level, particularly for Streptococcus and Staphylococcus. GMS-16S demonstrated high accuracy of species level classification, effectively discriminating S. intermedius from S. anginosus in sample G4, as well as separating S. aureus from Staphylococcus argenteus in sample Q3 (Fig. 3a). GMS-16S also more accurately classified members of the Enterobacteriaceae family (Q7, Q5), and was able to identify Serratia marcescens at species level with greater precision in sample Q1 compared to 1928-16S. Conversely, 1928-16S classified a larger proportion of reads as C. acnes in sample G6 (laboratory k), whereas GMS-16S distributed the reads between C. acnes and the closely related C. namnetense.
<annotations in Public group>
-
-
commercial 16S bioinformatic pipeline from 1928 Diagnostics (1928-16S) was evaluated and compared with the open-sourced gms_16S pipeline that is based on the EMU classification tool (GMS-16S).
Emu is more accurate; Todd is happy :)
- more annotations in Public group
-
-
www.nature.com
-
RapidONT, a workflow designed for cost-effective and accessible WGS-based pathogen analysis
Includes both a lab protocol and bioinformatic pipeline
-
routine clinical adoption of WGS is hindered by factors such as high costs, technical complexity, and the requirement for bioinformatics expertise for data analysis
-
user-friendly web-based platform Pathogenwatch, which facilitates species identification, molecular typing, and antimicrobial resistance (AMR) prediction
Checkout this web-gui tool. Claims "minimal bioinformatic expertise"
-
-
-
anvi’o empowers its users to navigate through ‘omics data without imposing rigid workflows.
Using a Nextflow backbone would make our workflow more rigid, right?
-
-
www.nature.com
-
users still needing considerable expertise to interpret the results.
-
inter-dependencies of the data types and the various data formats that need to ‘talk’ to each other.
-
-
www.biorxiv.org
-
Assembly graphs produced by different tools from the same data may differ significantly, posing a challenge to tools for downstream processing tasks
This could be a useful tool to integrate post assemblies if it improves compatibility with subsequent tools such as plasmid binning in #SOMAteM
(not relevant, since this paper solves this issue) How can the LLM help solve this by suggesting the correct downstream tool or by converting outputs to be compatible?
-
-
academic.oup.com
-
choice of the right algorithm for a given dataset has become difficult due to numerous comparative reports on these different assemblers [88, 89]
What does the choice of algorithm depend on?
-
most widely used assemblers are MegaHit, metaSPAdes, RayMeta and IDBA-UD
-
The aim of this work is to review the most important workflows for 16S rRNA sequencing and shotgun and long-read metagenomics
-
assembly, binning, annotation and visualization
-
best-practice protocols
-
major advantage of De Bruijn graphs is that assembled reads contain fewer errors and errors can be easily corrected prior to assembly
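The error-correction point above can be made concrete with a toy sketch (my own, not from any of the tools reviewed): build a De Bruijn graph from reads and drop low-multiplicity k-mers, which are likely sequencing errors, before assembly. The reads and cutoff are illustrative.

```python
from collections import Counter, defaultdict

def debruijn(reads, k, min_count=2):
    """Build a De Bruijn graph from reads; k-mers seen fewer than
    min_count times are treated as sequencing errors and dropped."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    graph = defaultdict(set)
    for kmer, n in counts.items():
        if n >= min_count:                    # simple error filtering
            graph[kmer[:-1]].add(kmer[1:])    # (k-1)-mer nodes, k-mer edges
    return graph

reads = ["ATGGCGT", "GGCGTGC", "ATGGCGA"]     # "ATGGCGA" carries an error
g = debruijn(reads, k=4, min_count=2)         # erroneous GCGA edge is gone
```

The singleton k-mer from the erroneous read never enters the graph, which is the sense in which errors are "easily corrected prior to assembly".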
-
-
www.biorxiv.org
-
Refer to the original/live annotation in Zotero/note
This tool does something very similar to omi and has lot of desirable qualities + evaluation methods we can learn from. #omi-relevance
What it can do
SpatialAgent employs adaptive reasoning and dynamic tool integration, allowing it to adjust to new datasets, tissue types, and biological questions. It processes multimodal inputs, incorporates external databases, and supports human-in-the-loop interactions, enabling both fully automated and collaborative discovery
tasks such as gene panel design, cell and tissue annotation, and pattern inference in cell-cell communication and pathway analysis
-
-
amos.sourceforge.net
-
A Modular, Open-Source whole genome assembler.
AMOS
-
-
www.cell.com
-
human mtDNA is an extranuclear molecule of ∼16.5 kilobases
-
mtDNA is an informative matrilineal uniparental marker that can be used to trace the ancestry of an individual
-
scanned all the assembled contigs from each sample for human mtDNA by running homology search (i.e., BLASTn)
-
Positive cases were derived from stool, oral, and skin samples
-
it is now considered mandatory to remove human DNA (or RNA) sequencing reads before depositing metagenomes in public repositories
-
Human DNA could be considered personal identifying information
-
there is not a consensus on which version of the human reference genome to use for human DNA decontamination
-
studies that reported exclusion of only reads where both paired-end reads are mapped still detected mtDNA
-
small, circular nature of the mitochondrial genome allows reads to span the start and end positions, leading to incomplete exclusion of mtDNA
-
nuclear DNA is linear and much larger, so this approach effectively removes most nuclear DNA reads, leaving only minimal traces
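The circular-junction effect described above can be sketched in a few lines (a toy exact-match "mapper", not the actual decontamination pipelines): a read spanning the mtDNA start/end junction fails against the linear reference but is found when the reference is doubled, one common trick for circular genomes.

```python
def maps_linear(read, ref):
    """Naive exact 'mapping' against a linear reference."""
    return read in ref

def maps_circular(read, ref):
    """Doubling the reference catches reads that span the circular
    start/end junction (assumes the read is shorter than the ref)."""
    return read in ref + ref

mt = "ACGTTGCAAC"                  # toy circular 'mitochondrial genome'
junction_read = mt[-4:] + mt[:4]   # read spanning the end->start junction
```

Here `maps_linear(junction_read, mt)` is False while `maps_circular(junction_read, mt)` is True, which is exactly why linear-reference mapping leaves residual mtDNA reads behind.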
-
exclude more off-target reads
-
using single-end mapping
-
extensive validation of the available pipelines is still required.
-
-
www.biorxiv.org
-
MADRe, a modular and scalable pipeline for long-read strain-level metagenomic classification, enhanced with Metagenome Assembly-Driven Database Reduction.
-
contig-to-reference mapping reassignment based on an expectation-maximization algorithm for database reduction,
EM method similar to EMU?
-
mapping-based tools such as MetaMaps [24], PathoScope2 [25], EMU [26] and MORA [27], which rely on read alignments and reassignment algorithms, offer higher precision at a greater computational cost.
-
Kraken2, perform well at the species level
-
The implementation of the EM algorithm in MADRe is inspired by PathoScope2 [46] and EMU [26].
-
range of metagenomic classification tools have been developed, which can be broadly categorized into marker-based, DNA-to-protein and DNA-to-DNA approaches, as described in [4].
-
K-mer-based tools such as Kraken2 [14], KrakenUniq [15], Bracken [16], Centrifuge [17], CLARK/CLARKS [18, 19], Ganon [20, 21], Taxor [22], and Sylph [23] are known for their speed and scalability to large databases, but often trade precision for speed
This whole paragraph has good knowledge that could be incorporated into LLM-RAG; we could ask the user about their need for speed vs. accuracy!
-
MADRe achieves high precision and strain-level resolution while maintaining lower memory usage and runtime compared to existing tools
-
-
www.biorxiv.org
-
assembly tools remain prone to large-scale errors caused by repeats in the genome, leading to inaccurate detection of AMR gene content
-
we present Amira, a tool to detect AMR genes directly from unassembled long-read sequencing data
-
the fact that multiple consecutive genes lie within a single read to construct gene-space de Bruijn graphs where the k-mer alphabet is the set of genes in the pan-genome of the species under study
-
reads corresponding to different copies of AMR genes can be effectively separated based on the genomic context of the AMR genes, and used to infer the nucleotide sequence of each copy
-
compare the number of fully (>90%) present genes with good read support by Amira and Flye with AMRFinderPlus
-
quantifying the improvement in recall when handling heterogeneous data.
-
-
-
We present Autocycler, a command-line tool for generating accurate bacterial genome assemblies by combining multiple alternative long-read assemblies of the same genome
-
Autocycler builds a compacted De Bruijn graph from the input assemblies, clusters and filters contigs, trims overlaps and resolves consensus sequences by selecting the most common variant at each locus
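The final consensus step ("selecting the most common variant at each locus") can be illustrated with a per-column majority vote (my sketch; it assumes the alternative assemblies are already aligned and equal length, which Autocycler's graph machinery actually takes care of):

```python
from collections import Counter

def consensus(assemblies):
    """Per-locus majority vote across alternative assemblies of the
    same genome -- a toy stand-in for 'most common variant at each
    locus' over pre-aligned, equal-length sequences."""
    return "".join(Counter(col).most_common(1)[0][0]
                   for col in zip(*assemblies))

asm = ["ATGCCGT", "ATGACGT", "ATGCCGT"]   # middle assembly has one variant
consensus(asm)  # -> "ATGCCGT"
```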
-
-
www.nextflow.io
-
“module scripts” (or “modules” for short), which are Nextflow scripts that can be “included” by other scripts
-
help you organize a large pipeline into multiple smaller files and take advantage of modules created by others
-
To migrate this code to DSL2, you need to move all of your channel logic throughout the script into a workflow definition
seqscreen was written in DSL1; needs to be migrated (Todd)
-
-
www.healthcareittoday.com
-
initial goal of tinybio was to remove the barrier to entry for running bioinformatics packages
-
Scientists spend a significant proportion of their time transforming and structuring data for analysis
Useful to cite in introduction?
-
driving the development of community-centric tools on Seqera.io, empowering scientists worldwide to leverage modern software capabilities on demand
-
removing barriers to entry to bioinformatics
-
steep learning curve that prevents newcomers from getting started fast
-
-
seqera.io
-
meet scientists at every stage of their work
-
Suggesting
-
Generating Nextflow code
-
Asking bioinformatics questions
-
contextually relevant answers
-
Beyond just a chat interface
-
ability to test their code in the interface
This might not be a big achievement: The CLI also includes their linter if that's what is being used here.
-
Programmed with a deep understanding of Nextflow, common bioinformatics tools, and the overarching scientific community.
by "overarching scientific community" do you mean some discussions on the nf-core forums?
-
extensive testing with scientists
-
able to identify the root cause of errors, help troubleshoot, and suggest edits
-
has deep knowledge of the errors
What could be the source of this knowledge? - Maybe a human in the loop training with automated code gen + linter use? - Grazing on forums?
-
ability to pair with bioinformatics test data and generate local test scripts
-
generate and run unit tests.
This is quite useful!
-
not only give you the initial conversion, but also run the stages of the code that it generates with sample data and iteratively correct any code that yields runtime errors
-
convert a pipeline from Bash/CWL/WDL to Nextflow
use cases
-
AI can be a powerful tool for helping scientists dig into results and more quickly identify interesting patterns
Touch on this for introduction
-
key to figure out how we can get the right context on your pipeline results
-
Seqera AI – a bioinformatics agent purpose-built for the scientific lifecycle
Seqera AI can:
- suggest pipelines (tested and validated)
- answer bioinformatics questions with context
- generate Nextflow code + validate/self-correct (when would someone use this?)
Context retrieved:
- can retrieve context for writing and testing Nextflow code
- context of pipeline results to aid interpretation
Source: summarized from the text below
-
native integration with MultiQC where you can enable automatic, in-line analysis of MultiQC reports
so it elaborates the reports?
-
fully extensible endpoint in Seqera AI, so that any bioinformatics tool can build their own AI integration.
explore more
-
-
academic.oup.com
-
threatens to compound this problem owing to the ease with which massive volumes of synthetic data can be generated
-
importance of improved educational programs aimed at biologists and life scientists that emphasize best practices in data engineering
-
increased theoretical and empirical research on data provenance, error propagation, and on understanding the impact of errors on analytic pipelines
-
we focus specifically on concerns that lie at the interface of biological data and computational inference with the goal of inspiring increased research and educational activities in this space
-
-
www.nature.com
-
how to best benefit from recent advances in AI and how to generate, format and disseminate data to enable future breakthroughs in AI-guided drug discovery
-
-
nf-co.re
-
it supports both short and long reads
-
-
academic.oup.com
-
When given well-crafted instructions, these chatbots hold the potential to significantly augment bioinformatics education and research
-
Crafting effective prompts can be challenging
-
role prompting that assigns a role to the chatbot, few-shot prompting that provides relevant examples, and chatbot self-reflection that improves responses based on task feedbacks
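The strategies listed (role prompting + few-shot examples) can be sketched as simple prompt assembly; this is a generic illustration, not the paper's actual templates, and the role/example text here is made up.

```python
def build_prompt(role, examples, question):
    """Combine role prompting with few-shot examples into one prompt
    string (generic sketch; real templates will differ)."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"You are {role}.\n\n{shots}\n\nQ: {question}\nA:"

p = build_prompt(
    "an expert bioinformatician",
    [("What does FASTQ store?", "Reads plus per-base quality scores.")],
    "What does a BAM file store?",
)
```

Self-reflection would then feed the chatbot's answer back in as context for a revision round.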
-
domain-specific knowledge.
-
-
academic.oup.com
-
In addition, varying study designs will require project-specific statistical analyses.
how is this addressed? - helpful for #SOMAteM
-
Hecatomb’s design philosophy recognizes that there are no “perfect” databases or search algorithms
-
Instead, Hecatomb relies on providing a compiled and rich set of data for search result evaluation
-
Hecatomb and Conda handle the installation of all dependencies
-
use of isolated Conda environments for Hecatomb minimizes package version conflicts, minimizes overhead when rebuilding environments for updated dependencies, and allows maintenance and customization of different Hecatomb versions.
-
While Hecatomb is a Snakemake pipeline, it uses the Snaketool command line interface to make running the pipeline as simple as possible [95]. Snaketool populates required file paths and configuration files, allowing Hecatomb to be configured and run with a simple command
-
-
seqera.io
-
An opt-in feature for now, strict syntax enables consistent behavior between the Nextflow CLI and language server, and enables numerous new features
-
more actionable error messages
-
output on the terminal highlighting exactly where the problem lies
-
-
nextflow.io
-
This new specification enables more specific error reporting, ensures more consistent code, and will allow the Nextflow language to evolve independently of Groovy.
-
strict syntax will eventually become the only way to write Nextflow code, and new language features will be implemented only in the strict syntax
-
prepare for the strict syntax
-
assignments are allowed only as statements:
-
use higher-order functions, such as the each method, instead of for and while loops
-
use if-else statements
-
environment variables
-
Use a multi-line string
-
Any Groovy code can be moved into the lib directory, which supports the full Groovy language.
-
For Groovy code that is complicated or if it depends on third-party libraries, it may be better to create a plugin
-
-
training.nextflow.io
-
local executor is very useful for workflow development and testing purposes
-
Nextflow provides an abstraction between the workflow’s functional logic and the underlying execution system (or runtime)
-
-
bmcbioinformatics.biomedcentral.com
-
its cost-effectiveness and lower data requirements compared to metagenomic whole-genome sequencing (WGS)
-
-
-
interactive analysis applications
-
namely Jupyter, RStudio, VS Code, and Xpra).
-
-
merenlab.org
-
You should always be suspicious of your metabolic reconstructions, and particularly when you are using short reads where you have partial matches.
snippets of wisdom
Read annotations in Public group
-
-
www.nature.com
-
omi feature idea: minor CLI tools - not pipelines
-
Thought process: what does this tool need as input? An MSA.
-
Can this CLI tool make the MSA as well if the user tells it stuff? That’s too specialized -- would be nice to make an LLM tool like omi for that though
-
I think omi can beat seqera AI and chatGPT in this space where we identify and wrap essential CLI tools to be run by text prompts
-
Leave the nextflow part to seqera AI :: if it’s good enough for running pipelines
-
-
-
www.nature.com
-
Nextflow, a workflow management system that uses Docker technology for the multi-scale handling of containerized computation
-
found that multi-scale containerization, which makes it possible to bundle entire pipelines, subcomponents and individual tools into their own containers, is essential for numerical stability
-
The dataflow model is superior to alternative solutions based on a Make-like approach, such as Snakemake [16], in which computation involves the pre-estimation of all computational dependencies, starting from the expected results up until the input raw data
-
requires a directed acyclic graph (DAG), whose storage requirement is a limiting factor for very large computations.
-
the top to bottom processing model used by Nextflow follows the natural flow of data analysis, it does not require a DAG
-
Although the graphical user interface (GUI) in Galaxy offers powerful support for de novo pipeline implementation by non-specialists, it also imposes a heavy development burden because any existing and validated third-party pipeline must be re-implemented and re-parameterized using the GUI.
-
-
nf-co.re
-
analysis pipeline for assembly, binning and annotation of metagenomes.
-
-
www.nature.com
-
we present Emu, an approach that uses an expectation–maximization algorithm to generate taxonomic abundance profiles from full-length 16S rRNA reads.
-
-
seqera.io
-
Chat sessions100 per month
-
-
nf-co.re
-
It is a good idea to specify the pipeline version when running the pipeline on your data.
-
-
nf-co.re
-
If you are the only person to be running this pipeline, you can create a local config file and use this.
-
Configuration parameters are loaded one after another and overwrite previous values. Hardcoded pipeline defaults are first, then the user’s home directory, then the work directory, then every -c file in the order supplied, and finally command line --<parameter> options.
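That precedence order ("loaded one after another and overwrite previous values") behaves like successive dictionary updates. A minimal sketch, with illustrative parameter names rather than real pipeline settings:

```python
def resolve(*layers):
    """Later layers overwrite earlier ones, mirroring Nextflow's order:
    pipeline defaults < home config < work-dir config < -c files
    (in the order supplied) < command-line --<parameter> options."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

cfg = resolve(
    {"max_cpus": 4, "genome": "GRCh38"},   # hardcoded pipeline defaults
    {"max_cpus": 8},                       # user's home-directory config
    {"genome": "GRCh37"},                  # a -c custom.config file
    {"max_cpus": 16},                      # --max_cpus 16 on the CLI
)
# cfg -> {"max_cpus": 16, "genome": "GRCh37"}
```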
-
-
nf-co.re
-
If you wish to repeatedly use the same parameters for multiple runs, rather than specifying each flag in the command, you can specify these in a params file. Pipeline settings can be provided in a yaml or json file via -params-file <file>.
-
Differential abundance analyses of relative abundances from microbial community profiling are plagued by multiple issues that aren't fully solved yet, but some approaches seem promising
-
Profiles can give configuration presets for different compute environments.
-
-
academic.oup.com
-
Furthermore, we present an update on NanoPlot and NanoComp from the NanoPack tools (De Coster et al. 2018).
-
Improvements to NanoPlot and NanoComp are, among code optimizations, the generation of additional plots, using dynamic HTML plots from the Plotly library, and enabling further exploration by the end users
-
Chopper is a tool that combines the utility of NanoFilt and NanoLyse, for filtering sequencing reads based on quality, length, and contaminating sequences, delivers a 7-fold speed up compared to the Python implementation, making use of the Rust-Bio library
-
-
www.biomedcentral.com
-
Call for papers - Application of large language models in genome analysis
-
Submission Deadline: 28 November 2025
-
-
nf-co.re
-
For Nextflow DSL2 nf-core pipelines - parameters defined in the parameter block in custom.config files WILL NOT override defaults in nextflow.config! Please use -params-file in yaml or json format in these cases:
-
Please only use Conda as a last resort, i.e., when it’s not possible to run the pipeline with Docker or Singularity.
-
-
www.biorxiv.org
-
Magnet is a whole-genome read-mapping-based method that provides detailed presence and absence calls for bacterial genomes
-
Lemur is a marker-gene-based method
-
methods explicitly designed for long reads tend to perform better.
-
experimental evaluation focused primarily on precision and recall
-
important to evaluate scalability and fitness for execution in low-resource environments such as laptops and tablet computers
-
long-read technologies offer potential for portable and streaming sequence analysis
-
Both methods require a FASTQ file containing sequencing reads as input.
-
Several new tools have recently been developed to leverage long-reads for taxonomic profiling
Long-read taxonomic profiling approaches:
- k-mer based: Kraken 2, Sourmash
- read mapping to an index: Centrifuger, MetaMaps, ...
- marker genes: Melon, PhyloSift, ...
-
Lemur additionally requires a marker gene (MG) database, whereas Magnet requires a (ideally small) set of genomes
-
Our results indicate that Lemur can efficiently process large datasets within minutes to hours in limited computational resource settings.
-
can improve precision by detecting and filtering out many false positive calls
-
Lemur and Magnet have limitations that vary by use case. Reliance on bacterial marker genes necessarily implies it cannot generalize to viral genome classification
-
reliance on the marker genes makes it less sensitive than alternatives like Kraken 2 or MetaMaps, which use all long reads and complete genomes.
-
our study focused on taxonomic profiling and binary presence and absence metrics for taxa
-
The EM algorithm begins by initializing F (t) to the uniform distribution and initializing P (r|t) for each read and taxon pair (r, t).
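The initialization described here (uniform F(t), precomputed P(r|t)) fits a tiny generic EM loop. This is my own sketch of the read-reassignment idea shared by Emu, PathoScope2, and Lemur, not any of their actual code; `likelihood` stands in for alignment-derived P(r|t) values.

```python
def em_abundance(likelihood, n_iter=50):
    """Toy EM for read-to-taxon abundance estimation.
    likelihood[r][t] = P(read r | taxon t), assumed precomputed."""
    taxa = len(likelihood[0])
    F = [1.0 / taxa] * taxa                   # uniform init of F(t)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each taxon for each read
        resp = []
        for row in likelihood:
            w = [f * p for f, p in zip(F, row)]
            s = sum(w)
            resp.append([x / s for x in w])
        # M-step: re-estimate abundances from the responsibilities
        F = [sum(r[t] for r in resp) / len(resp) for t in range(taxa)]
    return F

# two taxa: reads 0 and 1 map only to taxon 0; read 2 maps to both equally
lik = [[1.0, 0.0], [1.0, 0.0], [0.5, 0.5]]
F = em_abundance(lik)   # the ambiguous read gets pulled toward taxon 0
```

The ambiguous read is progressively reassigned to the taxon the unambiguous reads support, which is why these mapping-based tools gain precision over plain k-mer counting.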
-
The goal of Magnet is to detect and remove potential false positives by performing competitive read alignment leveraging all of the reads mapped against the entire reference genome
-
As input, Magnet requires reads as well as a taxonomic abundance profile (estimated from the input reads e.g. using Lemur)
-
Lastly, Magnet marks species as present or absent.
-
Lightweight tools for taxonomic profiling: presence/absence + abundance estimation
-
Lemur: marker-gene based; uses EM (similar to Emu) - takes raw reads and creates an abundance estimate
-
Magnet: whole-genome; maps reads to reference genomes - takes the abundance estimate + raw reads and removes false-positive calls using a threshold (ANI, mapping quality) on alignment to representative genomes from clustering
-
-
-
academic.oup.com
-
Multiple genome alignment, the process of identifying nucleotides across multiple genomes which share a common ancestor
-
We introduce a partitioning option to Parsnp, which allows the input to be broken up into multiple parallel alignment processes which are then combined into a final alignment
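The partition/align/combine scheme can be sketched generically; `align_partition` below is a hypothetical stand-in (Parsnp's real per-partition step is a full core-genome alignment, and its combine step merges alignments, not lists).

```python
from concurrent.futures import ThreadPoolExecutor

def align_partition(genomes):
    # Hypothetical stand-in for Parsnp's per-partition alignment step.
    return sorted(genomes)

def partitioned_align(genomes, n_parts=2):
    """Split inputs into partitions, process each in parallel, then
    combine the per-partition results into one final output."""
    parts = [genomes[i::n_parts] for i in range(n_parts)]
    with ThreadPoolExecutor(max_workers=n_parts) as ex:
        results = list(ex.map(align_partition, parts))
    return [g for part in results for g in part]

combined = partitioned_align(["g3", "g1", "g4", "g2"])
```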
-
-
www.biorxiv.org
-
improves our previous method, MHG-Finder, by utilizing a guide tree to significantly improve scalability and provide more informative biological results
-
Whole-genome alignments play a crucial role in downstream analyses in comparative genomic studies
-
A maximal homologous group, or MHG, is defined as a maximal set of maximum-length sequences whose evolutionary history is a single tree
-
-
journals.plos.org
-
processes such as horizontal gene transfer or gene duplication and loss may disrupt this homology by recombining only parts of genes, causing gene fission or fusion
-
-
genomebiology.biomedcentral.com
-
Read annotations in public group
-
-
www.biorxiv.org
-
Structural variants (SVs), genomic alterations of 10 base pairs or more, play a pivotal role in driving evolutionary processes and maintaining genomic heterogeneity within bacterial populations
-
Bacterial genome dynamics
-
encompassing a single metagenome coassembly graph constructed from all samples in a series
-
log fold change in graph coverage between subsequent samples is then calculated to call SVs
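A toy version of that calling step (my sketch; rhea operates on coassembly-graph node coverages, and its actual thresholds and normalization differ):

```python
import math

def call_svs(cov_a, cov_b, threshold=1.0, eps=1.0):
    """Flag graph nodes whose coverage log2 fold change between two
    subsequent samples exceeds a threshold."""
    calls = []
    for node, (a, b) in enumerate(zip(cov_a, cov_b)):
        lfc = math.log2((b + eps) / (a + eps))   # pseudocount avoids log(0)
        if abs(lfc) >= threshold:
            calls.append((node, lfc))
    return calls

# node 1 gains coverage, node 3 loses it between the two time points
svs = call_svs([10, 10, 10, 40], [10, 42, 9, 4])
```

Positive calls correspond to sequence gained between samples, negative calls to sequence lost, which is how graph coverage shifts stand in for SVs without a reference.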
-
show rhea to outperform existing methods for SV and horizontal gene transfer (HGT) detection in two simulated mock metagenomes
-
innovative approach leverages raw read patterns rather than references or MAGs to include all sequencing reads in analysis
-
studying SVs across diverse and poorly characterized microbial communities
-
Recent work utilizes coassembly graphs for metagenomes to decompose strain diversity into haplotypes (30), but to the best of our knowledge, this is the first time coassembly graph patterns have been used for automated detection of SVs in a metagenome series.
-
In isolate genomics, the goal of SV detection is relatively straightforward: detect long genomic differences between a sequence and reference genome that can be classified as an insertion, deletion, inversion, duplication, translocation, or any combination
-
-
nf-co.re
-
You can install modules from nf-core/modules in your pipeline using nf-core modules install. A module installed this way will be installed to the ./modules/nf-core/modules directory.
-
-
www.nextflow.io
-
Nextflow automatically creates and activates the Conda environment(s) given the dependencies specified by each process.
-
The use of Conda recipes specified using the conda directive needs to be enabled explicitly in the pipeline configuration file (i.e. nextflow.config):
-
conda.enabled = true
-
-
www.nextflow.io
-
conda.useMicromamba
-
Uses the micromamba binary instead of conda to create Conda environments
-
-
www.nextflow.io
-
Any channel in the workflow can be assigned to an output, including process and subworkflow outputs. This approach is intended to replace the publishDir directive.
I guess this is to publish important files and exclude intermediate ones?
-
-
academic.oup.com
-
assembly
-
challenging in complex environmental samples consisting of hundreds to thousands of populations
-
Mapler is a metagenome assembly and evaluation pipeline
-
Hi-Fi long read
means PacBio: long and accurate reads
-
novel metrics assessing the diversity that remains uncaptured by the assembly process
-
-
training.nextflow.io
-
output: path "${greeting}-output.txt" script: """ echo '$greeting' > '$greeting-output.txt'
why is there a repetition?
-
You can think of view() as a debugging tool, like a print() statement in Python
-
We prefer to be explicit to aid code clarity, as such the $it syntax is discouraged and will slowly be phased out of the Nextflow language.
-
We are using an operator closure here - the curly brackets.
-
-
www.nextflow.io
-
a process will emit value channels if it is invoked with all value channels, including simple values which are implicitly wrapped in a value channel.
-