Gregg Randolph : please see /project/gtl/data/raw/ALF1/16S/tfmergedreads , where I made mergereads.nf and teton.config, and copied trim_merge.pl to trim_mergecab.pl (initially I copied it because I didn't have permissions to run the original, but I then found I needed to make some changes, which are in the *cab.pl version). You run the Nextflow script with module load nextflow followed by nextflow run -bg mergereads.nf -c teton.config (see inside mergereads.nf for other ways of running it, e.g., not in the background).

I first tried this on a single pair of input files, and one of the vsearch steps in the middle failed because the inputs were too small. I then ran it on all of the input: the Nextflow script completes, but one of the vsearch steps appears to produce no output. It might be that some of the input files are genuinely too small. Please have a look and see what you can figure out. I can see that the trimming step is working (you can see this in output/trimmed/ ), but we're not getting the other files (in unmerged/ , joined/ , and in output/ itself).

Logs and other files for debugging will typically be made automatically by Nextflow in work/ . I remove this folder between jobs so that I can see which output comes from which run. For example, see work/ff/d9576360a5295e6c92a0183e485944/.command.log and the neighboring files with ls -al work/ff/d9576360a5295e6c92a0183e485944/ . In this case I think vsearch is silently failing to write anything. Importantly, you can see the command that was being run, which is useful for debugging: see work/ff/d9576360a5295e6c92a0183e485944/.command.sh .

I think a next step would be to try the commands with some sample data (an R1 and R2 fastq file) that we expect should work. Right now each job requests 1 hour from SLURM; I suspect the real requirement will be much smaller and we can tailor it down.
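
To check whether some of the inputs really are too small, a quick read count on the trimmed files would tell us. A minimal sketch, assuming the files in output/trimmed/ are uncompressed .fastq (swap in zcat if they turn out to be gzipped):

    # Count reads per trimmed file (a FASTQ record is 4 lines).
    for f in output/trimmed/*.fastq; do
      printf '%s\t%d\n' "$f" $(( $(wc -l < "$f") / 4 ))
    done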
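
For poking at the failing vsearch task, everything Nextflow needs to rerun it sits in that task's work directory (the hash below is from my last run and will differ each time):

    cd work/ff/d9576360a5295e6c92a0183e485944/
    ls -al                # .command.sh, .command.log, .command.err, .exitcode, plus staged inputs
    cat .exitcode         # non-zero means the task itself failed; 0 with empty output fits a silent vsearch failure
    cat .command.err      # stderr from the task, where vsearch usually reports problems
    less .command.sh      # the exact command Nextflow ran
    bash .command.run     # rerun the task in place with the same staging and environment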
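
For trying the merge by hand on an R1/R2 pair we expect to work, the authoritative command is whatever is in .command.sh; as a rough sketch, a standalone vsearch merge looks something like the call below (the file names are placeholders, and the module name is a guess at how vsearch is provided on Teton):

    module load vsearch            # assumption: adjust to however vsearch is actually available
    vsearch --fastq_mergepairs sample_R1.fastq \
            --reverse sample_R2.fastq \
            --fastqout sample_merged.fastq \
            --fastqout_notmerged_fwd sample_notmerged_R1.fastq \
            --fastqout_notmerged_rev sample_notmerged_R2.fastq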