From ZENBU documentation wiki
Revision as of 19:13, 14 October 2012 by Nicolas.bertin (talk | contribs) (Example)



The NormalizeRPKM processing module extends NormalizePerMillion. It normalizes a feature's expression by both the total expression in the experiment and the cumulative length of the feature's subfeatures, recomputing the expression level as &lt;datatype&gt; per million per 1000 basepairs (RPKM). If the Experiment has no total count, normalization falls back to subfeature total length alone (&lt;datatype&gt; per 1000 basepairs).
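The arithmetic behind this normalization can be sketched as follows. This is an illustrative Python function, not ZENBU code; the function name `rpkm` and its parameters are assumptions chosen to mirror the description above:

```python
def rpkm(count, total_count, subfeature_length_bp):
    """Normalize a raw expression value to RPKM.

    count: expression of the feature (e.g. read count)
    total_count: total expression in the experiment
    subfeature_length_bp: cumulative length of the feature's subfeatures
    """
    per_kb = count / (subfeature_length_bp / 1000.0)
    if total_count:
        # full RPKM: per kilobase, per million total counts
        return per_kb / (total_count / 1_000_000.0)
    # no Experiment total count: per-kilobase normalization only
    return per_kb

# 500 reads on a 2 kb gene model in a 10-million-read experiment
print(rpkm(500, 10_000_000, 2000))  # → 25.0
```

Note the fallback branch: when the experiment total is unavailable, only the length term is applied, matching the behaviour described above.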


  • &lt;category_filter&gt; : specifies which subfeature categories are included when calculating the total subfeature length
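As a sketch, restricting the length calculation to one subfeature category might look like the fragment below; the category name "exon" and the element placement are assumptions based on the parameter description above, not confirmed ZENBU syntax:

```xml
<spstream module="NormalizeRPKM">
	<category_filter>exon</category_filter>
</spstream>
```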


This script uses a Proxy / TemplateCluster pair to collate expression onto Gencode V10 gene models. The expression is then normalized with the NormalizeRPKM module, and the script finishes with CalcFeatureSignificance so that the Features can be displayed via score-coloring.

	<datastream name="gencode" output="full_feature">
		<source id="D71B7748-1450-4C62-92CB-7E913AB12899::19:::FeatureSource"/>
	</datastream>
	<spstream module="TemplateCluster">
		<spstream module="Proxy" name="gencode"/>
	</spstream>
	<spstream module="NormalizeRPKM"/>
	<spstream module="CalcFeatureSignificance"/>

Here is a ZENBU view showing this script in use, displayed at locus hg19::chr8:128746973..128755020.