nf-core/phaseimpute
A bioinformatics pipeline to phase and impute genetic data
Introduction
This document describes the output produced by the pipeline. Most of the plots are taken from the MultiQC report, which summarises results at the end of the pipeline.
The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.
Pipeline overview
Panel preparation outputs `--steps panelprep`
This step of the pipeline performs QC of the reference panel data and produces the files required for imputation (`--steps impute`).
It has the following optional modes:
- `--normalize` - Normalize the reference panel with `bcftools norm` and remove multiallelic sites. It also allows removing samples using `--remove_samples`.
- `--compute_freq` - Compute allele frequencies with `vcffixup`.
- `--phase` - Phase the reference panel with `SHAPEIT5`.
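As an illustration, the first two modes can be approximated with standalone commands. This is a sketch only: the file names are placeholders and the exact options used by the pipeline may differ.

```shell
# Join split records, then keep only biallelic sites
# (approximately what --normalize does)
bcftools norm -m +any panel.vcf.gz -Oz -o panel_joined.vcf.gz
bcftools view -m2 -M2 panel_joined.vcf.gz -Oz -o panel_normalized.vcf.gz

# Recompute the AC/AF/NS INFO tags with vcflib's vcffixup
# (approximately what --compute_freq does)
vcffixup panel_normalized.vcf.gz > panel_fixup.vcf
bgzip panel_fixup.vcf
tabix -p vcf panel_fixup.vcf.gz
```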
The pipeline will produce the following outputs:
- Normalize reference panel - Remove multiallelic sites from the reference panel and compute allele frequencies if needed.
- Convert - Convert the reference panel to `.hap` and `.legend` files.
- Posfile - Produce a `.tsv` file with the list of positions to genotype for the different tools.
- Chromosomes chunks - Create chunks of the reference panel.
- CSV - Collect the `.csv` files produced by this step.
The directory structure from `--steps panelprep` is:

```
├── panel
├── haplegend
├── sites
├── chunks
│   ├── glimpse1
│   └── glimpse2
└── csv
```

Panel directory
Output files
- `prep_panel/panel/`
  - `*.vcf.gz`: The reference panel VCF files after all the preprocessing steps are completed.
  - `*.tbi`: The index file for the prepared reference panel.
A directory containing the reference panel per chromosome after preprocessing. The files are normalized if the flag `--normalize` is used (`_normalized` suffix), have their allele frequencies computed if the flag `--compute_freq` is used (`_fixup` suffix), and are phased if the flag `--phase` is used (`_phased` suffix).
Haplegend directory
Output files
- `prep_panel/haplegend/`
  - `*.hap`: a `.hap` file for the reference panel containing the genotypes.
  - `*.legend*`: a `.legend` file for the reference panel containing the variant information.
  - `*.samples`: a `.samples` file for the reference panel containing the sample information.
`bcftools convert` aids in the conversion of VCF files to `.hap` and `.legend` files. A `.samples` file is also generated. Once you have generated the hap and legend files for your reference panel, you can skip the reference preparation steps and directly submit these files for imputation. The hap and legend files can be used as input files with the `--tools quilt` option.
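Assuming a per-chromosome panel VCF, the conversion can be sketched as follows (the `panel_chr21` prefix and file name are placeholders):

```shell
# Convert a phased panel VCF to hap/legend/samples files with bcftools
bcftools convert --haplegendsample panel_chr21 panel_chr21.vcf.gz
# Produces panel_chr21.hap.gz, panel_chr21.legend.gz and panel_chr21.samples
```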
Sites directory
Output files
- `prep_panel/sites/`
  - `*.vcf.gz`: A VCF file with biallelic SNPs only.
  - `*.csi`: Index file of the VCF file.
`bcftools query` produces VCF (`*.vcf.gz`) files per chromosome. These QCed VCF files can be gathered into a CSV file and used with all the tools in `--steps impute` using the flag `--panel`.
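A sites file of this kind can be sketched with standard bcftools commands; the file names are placeholders and the pipeline's exact filters may differ:

```shell
# Keep only biallelic SNPs from the prepared panel
bcftools view -m2 -M2 -v snps panel_chr21.vcf.gz -Oz -o sites_chr21.vcf.gz
bcftools index sites_chr21.vcf.gz   # writes a .csi index by default

# A TSV of positions can then be extracted with bcftools query
bcftools query -f '%CHROM\t%POS\t%REF,%ALT\n' sites_chr21.vcf.gz | bgzip > sites_chr21.tsv.gz
```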
Chunks directory
Output files
- `prep_panel/chunks/`
  - `*.txt`: Text file containing the chunks obtained after running `GLIMPSE1_CHUNK`.
`GLIMPSE1_CHUNK` defines the chunks in which imputation will be performed. For further reading, see the GLIMPSE1 documentation. Once you have generated the chunks for your reference panel, you can skip the reference preparation steps and directly submit this file for imputation.
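A minimal sketch of the chunking step, using the GLIMPSE1 `GLIMPSE_chunk` tool; the region, window/buffer sizes and file names below are placeholders, not the pipeline's actual settings:

```shell
# Split a chromosome into overlapping imputation chunks
GLIMPSE_chunk --input sites_chr21.vcf.gz --region chr21 \
  --window-size 2000000 --buffer-size 200000 \
  --output chunks_chr21.txt
```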
CSV directory
Output files
- `prep_panel/csv/`
  - `chunks_glimpse1.csv`: A CSV file containing the list of chunks obtained for each chromosome and panel.
  - `panel.csv`: A CSV file containing the final phased and prepared panel files for each chromosome and input panel.
  - `posfile.csv`: A CSV file containing the final list of panel positions, in VCF and TSV files, for each chromosome and input panel.
Imputation outputs `--steps impute`
The results from `--steps impute` will have the following directory structure:
```
├── batch
├── csv
├── glimpse1/glimpse2/quilt/stitch
│   ├── concat/
│   └── samples/
└── stats
```

Output files
- `imputation/batch/`
  - `all.batchi.id.txt`: List of sample names processed in the i-th batch.
- `imputation/csv/`
  - `impute.csv`: A single CSV file containing the path to a VCF file and its index for each imputed sample, with the corresponding tool.
- `imputation/[glimpse1,glimpse2,quilt,stitch]/`
  - `concat/all.batch*.vcf.gz`: The concatenated VCF files of all imputed samples, by batch.
  - `concat/all.batch*.vcf.gz.tbi`: The index files for the concatenated imputed VCF files.
  - `samples/*.vcf.gz`: A VCF file for each imputed sample.
  - `samples/*.vcf.gz.tbi`: The index file of each imputed VCF file.
- `imputation/stats/`
  - `*.<tool>.bcftools_stats.txt`: The statistics of the imputed target VCF file produced by `BCFTOOLS_STATS`.
`bcftools concat` will produce a single VCF file from a list of imputed VCF files in chunks.
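A minimal sketch of this concatenation, assuming per-chunk files matching the placeholder pattern `imputed_chunk_*.vcf.gz`:

```shell
# Gather the per-chunk imputed VCFs into one file, then index it
ls imputed_chunk_*.vcf.gz > chunk_list.txt
bcftools concat -f chunk_list.txt -Oz -o all.batch0.vcf.gz
bcftools index -t all.batch0.vcf.gz   # writes the .tbi index
```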
Simulation outputs `--steps simulate`
The results from `--steps simulate` will have the following directory structure:

```
├── csv
└── samples
```

Output files
- `simulation/csv/`
  - `simulate.csv`: Samplesheet listing all downsampled target alignment files.
- `simulation/samples/`
  - `*.depth_*x.bam`: An alignment file from the target file downsampled to the desired depth.
  - `*.bam.csi`: The corresponding index of the alignment file.
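Downsampling of this kind can be sketched with samtools; the subsampling fraction and seed below are placeholders, and whether the pipeline uses exactly these options is an assumption:

```shell
# Subsample reads to reach a target depth: fraction = desired depth / original depth.
# In -s 42.25, "42" is the random seed and ".25" keeps ~25% of the reads.
samtools view -bs 42.25 target.bam > target.depth_1x.bam
samtools index -c target.depth_1x.bam   # -c writes a .csi index
```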
Validation outputs `--steps validate`
The results from `--steps validate` will have the following directory structure:

```
├── concat
├── samples
└── stats
```

Output files
- `validation/concat/`
  - `all.truth.vcf.gz`: The concatenated VCF file of all truth samples.
  - `all.truth.vcf.gz.tbi`: The index file of the concatenated truth VCF file.
- `validation/samples/`
  - `*.vcf.gz`: A VCF file for each truth sample.
  - `*.vcf.gz.tbi`: The index file of each truth VCF file.
- `validation/stats/`
  - `*.truth.bcftools_stats.txt`: The statistics of the truth target VCF file produced by `BCFTOOLS_STATS`.
  - `*.P<panel name>_T<imputation tool>_SNP.txt`: Concordance metrics of the SNP variants obtained with `GLIMPSE2_CONCORDANCE`.
  - `AllSamples.txt`: Aggregation of the above `GLIMPSE2_CONCORDANCE` output across samples and tools.
Reports
Reports contain useful metrics and pipeline information for the different modes.
- MultiQC - Aggregate report describing results and QC from the whole pipeline.
- Pipeline information - Report metrics generated during the workflow execution.
MultiQC
Output files
- `multiqc/`
  - `multiqc_report.html`: a standalone HTML file that can be viewed in your web browser.
  - `multiqc_data/`: directory containing parsed statistics from the different tools used in the pipeline.
  - `multiqc_plots/`: directory containing static images from the report in various formats.
MultiQC is a visualization tool that generates a single HTML report summarising all samples in your project. Most of the pipeline QC results are visualised in the report and further statistics are available in the report data directory.
Results generated by MultiQC collate pipeline QC from supported tools e.g. FastQC. The pipeline has special steps which also allow the software versions to be reported in the MultiQC output for future traceability. For more information about how to use MultiQC reports, see http://multiqc.info.
Pipeline information
Output files
- `pipeline_info/`
  - Reports generated by Nextflow: `execution_report.html`, `execution_timeline.html`, `execution_trace.txt` and `pipeline_dag.dot`/`pipeline_dag.svg`.
  - Reports generated by the pipeline: `pipeline_report.html`, `pipeline_report.txt` and `software_versions.yml`. The `pipeline_report*` files will only be present if the `--email`/`--email_on_fail` parameters are used when running the pipeline.
  - Parameters used by the pipeline run: `params.json`.
  - Report generated by `nf-co2footprint`: `co2footprint_trace.txt`, `co2footprint_summary.txt` and `co2footprint_report.html`.
Nextflow provides excellent functionality for generating various reports relevant to the running and execution of the pipeline. This will allow you to troubleshoot errors with the running of the pipeline, and also provide you with other information such as launch commands, run times and resource usage.