Helianthus bioinformatics
Raw reads
We received four files with sequence reads. Two of these contain the 1x100bp reads, one per lane, because two lanes were used on the instrument. The other two exist because CU unnecessarily ran indexing reads on the fragments; I deleted those nonsense files. The two files with the raw reads of interest (in /project/microbiome/data/seq/HMAX1/rawreads) are:
WyomingPool_L1_S1_L001_R1_001.fastq.gz (22 GB) – 457,726,974 reads (109 GBytes uncompressed)
WyomingPool_L2_S2_L002_R1_001.fastq.gz (22 GB) – 450,678,667 reads (107 GBytes uncompressed)
I used unpigz.sh to decompress the fastq files, because our parser does not read from gzipped files.
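For the record, the decompression amounts to something like the following (this loop is my reconstruction, not the actual contents of unpigz.sh; unpigz is pigz's decompressor, equivalent to pigz -d):
# decompress both lane files in place; -p sets the thread count,
# though decompression in pigz is largely single-threaded
for f in /project/microbiome/data/seq/HMAX1/rawreads/WyomingPool*.fastq.gz; do
  unpigz -p 4 "$f"
done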
Demultiplexing
In /project/microbiome/analyses/gtl/HMAX1
I removed extraneous spaces in the file that maps MIDs to individual identifiers (Hmax1Demux.csv). Also, the original Hmax1Demux.csv didn't follow the scheme we have used for GBS: MIDname, MID, sample id. So, I made a fixed version (now we have Hmax1Demux_fixed.csv):
sed -E 's/^([[:alnum:]-]+),([[:alnum:]-]+),([[:alnum:]-]+).*/\3,\1,\2/' Hmax1Demux.csv > Hmax1Demux_fixed.csv
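To confirm the reordering on a single row (values taken from the diff further below; the original column order was MID, sample id, MIDname):
echo 'CGATATAG,MN-ABR-BLA-9-L,C01' | sed -E 's/^([[:alnum:]-]+),([[:alnum:]-]+),([[:alnum:]-]+).*/\3,\1,\2/'
# prints: C01,CGATATAG,MN-ABR-BLA-9-L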
This demux key indicated that there were several samples that were technical replicates, but the user also reported one error, which we corrected by hand (see diff below) to make Hmax1Demux_fixed2.csv (this file also lacks a final newline at the end, but I verified this is not a problem):
196c196
< C01,CGATATAG,MN-ABR-BLA-9-L
---
> C01,CGATATAG,MN-ABR-BLA-10-L
Demultiplexing on the two files in parallel took more than the two days I initially allocated to it (in part because of the ~10% of the data that do not match our MIDs, because we did not filter contaminants). So I broke the data into 228 parts (each with 16 million lines, i.e., 4 million reads) and ran 228 jobs in parallel. I repeated this when we learned of the one error in the demux key.
mkdir /gscratch/buerkle/data/HMAX1
cd /gscratch/buerkle/data/HMAX1
# concatenate both lane files and split into 16-million-line chunks,
# numbered with three-digit suffixes (WyomingPool_HMAX1_000.fastq, ...)
cat /project/microbiome/data/seq/HMAX1/rawreads/WyomingPool* | split -l 16000000 -d --suffix-length=3 --additional-suffix=.fastq - WyomingPool_HMAX1_
mkdir rawreads
mv WyomingPool_HMAX1_* rawreads/
# submits one barcode-parsing job per chunk:
/project/microbiome/analyses/gtl/HMAX1/demultiplex/run_parsebarcodes_onSplitInput.pl
Note that I did not do separate contaminant filtering (which I did for Penstemon), because the parsing code and other downstream steps should knock out contaminants. I can double-check this.
I modified the script from 16S/ITS work that splits fastq files, based on information in their info lines, into different files. It is /project/microbiome/analyses/gtl/HMAX1/demultiplex/splitFastq_manyInputfiles_gbs.pl and is run with run_splitFastq_gbs.sh, in the same directory. Output was initially in /project/microbiome/analyses/gtl/HMAX1/demultiplex/sample_fastq. All of this is now in /project/microbiome/data/seq/HMAX1/demultiplex, so that it is reachable through Globus at /project/microbiome/data/seq/HMAX1/.
Eight individuals were duplicated, with different MIDs. Was this planned? I didn't account for this in the parsing script (the info line only has the individual sample ID, not the MID; I could add the MID back in, but then the replicates would need to be merged). As it is now, all reads for an individual go into one file. There are also four tubes labeled 'BLANK' that will all have been merged (all the reads went into BLANK.GGATCCTT.fq).
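As a rough illustration of what the splitting step does (a simplified sketch, not the actual Perl; I am assuming here that the sample ID is the last whitespace-separated field of the info line, and parsed_reads.fastq is a placeholder name):
# route each 4-line fastq record to a per-sample file named <sampleID>.fq
awk 'NR % 4 == 1 { id = $NF } { print >> (id ".fq") }' parsed_reads.fastq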
On Sep 12, 2022 I moved 8 files that contained no lines into empty/ with:
wc -l *.gz | awk '{if ($1 == "0") print $2 }' | xargs mv -t empty/
Dec 16, 2021
Compressed all sample_fastq/ files with pigz, using sbatch /project/microbiome/data/seq/HMAX1/demultiplex/run_pigz.sh. Moved the fastq for all four blank samples (data are all in one file because names are collapsed; noted above) to a subfolder (/project/microbiome/data/seq/HMAX1/demultiplex/sample_fastq/blanks), to get them out of the way. Started denovo assembly in /gscratch/buerkle/data/HMAX1/denovo.
Completed the first step for dDocent and am running cd-hit for 92%, 96% and 98% minimum match (see the sketch below). Initially I didn't give these enough wall time, and in the reruns I bumped the number of cores up to 16.
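The clustering step amounts to something like the following (a sketch; uniq_seqs.fasta stands in for the dDocent first-step output, and only the identity thresholds are from the actual runs):
# cluster candidate loci at 92/96/98% identity; -T = threads, -M 0 = no memory cap
for c in 0.92 0.96 0.98; do
  cd-hit-est -i uniq_seqs.fasta -o reference_${c}.fasta -c ${c} -T 16 -M 0
done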
Feb 4, 2022
Summarized the denovo assemblies: 98% (46,971,194 contigs), 96% (27,839,279 contigs), 92% (18,493,729 contigs). Um, that's a lot. Previously, reads were aligned to the Helianthus annuus genome (v1.0); this was in "Testing for evolutionary change in restoration: A genomic comparison between ex situ, native, and commercial seed sources of Helianthus maximiliani". We will use the v2.0 annuus genome. It is in /project/evolgen/data/public/genomes/helianthus/GCF_002127325.2_HanXRQr2.0-SUNRISE_genomic.fna.gz
Assembly
Sep 10, 2022
Working in /project/microbiome/data/seq/HMAX1/assem and assembling all reads in /project/microbiome/data/seq/HMAX1/demultiplex/sample_fastq/ against /project/evolgen/data/public/genomes/helianthus/GCF_002127325.2_HanXRQr2.0-SUNRISE_genomic.fna.
Ran
bwa index -a bwtsw GCF_002127325.2_HanXRQr2.0-SUNRISE_genomic.fna
by hand on an interactive node (it took roughly one hour). Commands are in 0_assem.nf; run this with
nextflow run -bg 0_assem.nf -c teton.config
These jobs use
module load swset/2018.05 gcc/7.3.0 bwa/0.7.17 samtools/1.12
as specified in teton.config in this directory (bwa is version 0.7.17-r1188). Output is in /project/microbiome/data/seq/HMAX1/assem/sambam/. I gave each job 60 minutes, which was unnecessarily long but conservative; the longest-running jobs I could see were under 20 minutes, and all 468 input files moved through in about 30 minutes total. I removed the duplicative sam and unsorted bam files with
rm -f *.sam *[^d].bam
(the [^d] spares the sorted bam files, whose names end in d before .bam), saving ~270 GB of space.
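Per sample, the pipeline in 0_assem.nf amounts to something like this (a sketch under my assumptions about file naming; the actual nextflow process may differ in detail):
REF=/project/evolgen/data/public/genomes/helianthus/GCF_002127325.2_HanXRQr2.0-SUNRISE_genomic.fna
# align one sample's reads, convert to bam, and coordinate-sort
bwa mem -t 4 $REF sample.fq.gz > sample.sam
samtools view -b sample.sam > sample.bam
samtools sort -o sample.sorted.bam sample.bam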
Variant calling
Following the steps from the GitHub repository zgompert/DimensionsExperiment (code and overview for the DoB common garden experiment).
Built bcftools version 1.16 and installed it in /project/evolgen/bin/. bcftools needed the reference genome in bgzip (BGZF) format, not gzip, so I now simply have an uncompressed reference genome, which I have reindexed. Completed this step with something like:
sbatch --account=evolgen --time=1-00:00 --nodes=1 --mem=8G --mail-type=END 0_call_variants.sh
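The core of 0_call_variants.sh is presumably an mpileup/call pipeline along these lines (a hedged sketch of the usual bcftools workflow; bams.txt, a list of the sorted bam files, and the output name are placeholders):
# joint variant calling across all samples with the multiallelic caller
bcftools mpileup -f GCF_002127325.2_HanXRQr2.0-SUNRISE_genomic.fna -b bams.txt | bcftools call -m -v -Ov -o hmax_variants.vcf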
(This took 12 hours and 40 minutes and 552 MB of RAM; I asked for 120 GB, which likely gave me the whole node and made it a bit faster.) Filtered the vcf with 1_filter_variants.sh, which contains notes on the criteria that I used (these could be altered to suit). This set is based on a fairly tight set of criteria (it matches what we used for a recent paper), which could be modified as needed. Note that currently there is no explicit minor allele frequency filtering (though see the sketch below). There are 5016 sites in hmax_variants_filtered.vcf
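If minor allele frequency filtering is wanted later, it can be layered on with bcftools (the 5% threshold here is purely illustrative, not a project decision):
# drop sites with minor allele frequency below 5%
bcftools view -q 0.05:minor hmax_variants_filtered.vcf -o hmax_variants_filtered_maf05.vcf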
To do:
Summarize the parse report files in /gscratch with some code to iterate over all the individual reports and get an overall count.
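A starting point for this (the file pattern and report format are assumptions; adjust to however the parse reports record their counts):
# hypothetical: sum a per-file read count (first field of line 1) across reports
total=0
for f in /gscratch/buerkle/data/HMAX1/parsereport_*; do
  n=$(awk 'NR==1 {print $1}' "$f")
  total=$((total + n))
done
echo "total parsed reads: $total"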