# About R

If you haven't done so already, you can check out our [R crash course](https://bioinformatics-core-shared-training.github.io/r-crash-course/) for an overview of why R is great and some of the basic syntax. Here are some great examples of people that have used R to do cool stuff with data:

- [Facebook](http://blog.revolutionanalytics.com/2010/12/analysis-of-facebook-status-updates.html)
- [Google](http://blog.revolutionanalytics.com/2009/05/google-using-r-to-analyze-effectiveness-of-tv-ads.html)
- [Microsoft](http://blog.revolutionanalytics.com/2014/05/microsoft-uses-r-for-xbox-matchmaking.html) (who recently [invested](http://blogs.microsoft.com/blog/2015/01/23/microsoft-acquire-revolution-analytics-help-customers-find-big-data-value-advanced-statistical-analysis/) in a commercial provider of R)
- The [New York Times](http://blog.revolutionanalytics.com/2011/03/how-the-new-york-times-uses-r-for-data-visualization.html)
- [Buzzfeed](http://blog.revolutionanalytics.com/2015/12/buzzfeed-uses-r-for-data-journalism.html) use R for some of their serious articles and have made the code [publicly available](https://buzzfeednews.github.io/2016-04-federal-surveillance-planes/analysis.html)
- The [New Zealand Tourist Board](https://mbienz.shinyapps.io/tourism_dashboard_prod/) have R running in the background of their website
- [Airbnb](https://medium.com/airbnb-engineering/using-r-packages-and-education-to-scale-data-science-at-airbnb-906faa58e12d)
- [The BBC](https://github.com/BBC-Data-Unit/music-festivals) makes code available for some of their stories (e.g. gender bias in music festivals)

R's role in promoting reproducible research cannot be overstated.

![](images/rep-research-nyt.png)

Two biostatisticians (later termed '*Forensic Bioinformaticians*') from M.D. Anderson used R extensively during their re-analysis and investigation of a clinical prognostication paper from Duke.
The subsequent [scandal](https://www.youtube.com/watch?v=W5sZTNPMQRM) put reproducible research at the forefront of everyone's mind. Keith Baggerly's talk on the subject is highly recommended.

## About the practicals for this workshop

- The traditional way to enter R commands is via the Terminal, or using the console in RStudio (bottom-left panel when RStudio opens for the first time).
- However, for this course we will use a relatively new feature called *R notebooks*.
- An R notebook mixes plain text with R code
    + we use the terms notebook and markdown interchangeably; they are pretty much the same thing

*Markdown* is a very simple way of writing a template to produce a pdf, HTML or Word document. The compiled version of this document is available online and is more convenient to browse [here](https://bioinformatics-core-shared-training.github.io/cruk-autumn-school-2017/Day1/bioc-intro.nb.html).

- "Chunks" of R code can be added using the *Insert* option from the toolbar, or the CTRL + ALT + I shortcut
- Each line of R can be executed by clicking on the line and pressing CTRL and ENTER
- Or you can press the green triangle on the right-hand side of the chunk to run everything in the chunk
- Try this now!
- The code might have different options which dictate how the output is displayed in the compiled document (e.g. HTML)
    + e.g. you might see `eval = FALSE` or `echo = FALSE`
    + you don't have to worry about this if stepping through the markdown interactively

```{r}
print("Hello World")
```

*This will be displayed in italic*

**This will be displayed in bold**

- this
- is
- a
- list
    + this is a *sub-list*

You can also add hyperlinks, images and tables. More help is available through RStudio: **Help -> Markdown Quick Reference**.

To create markdown files for your own analysis: File -> New File -> R Markdown...
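To make the chunk options mentioned above concrete, here is a sketch of what a chunk header with an option looks like in the markdown source (using the standard knitr `eval` option; the code inside is just a placeholder):

````
```{r eval=FALSE}
## this chunk is displayed in the compiled document, but not run
mean(1:10)
```
````

The options go inside the `{r ...}` header, separated by commas.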
# About the Bioconductor project

![](http://bioconductor.org/images/logo_bioconductor.gif)

Established in 2001 as a means of distributing tools for the analysis and comprehension of high-throughput genomic data in R. Initially focused on microarrays, but now has many tools for NGS data

- primary processing of NGS data is nearly always done in other languages
    + R is extensively used for visualisation and interpretation once the data are in a manageable form

On the [Bioconductor website](http://www.bioconductor.org), you will find

- Installation instructions
- [Course Materials](http://bioconductor.org/help/course-materials/)
- [Support forum](https://support.bioconductor.org/)
    + a means of communicating with developers and power-users
- [Example datasets](http://bioconductor.org/packages/release/BiocViews.html#___ExperimentData)
- [Annotation Resources](http://bioconductor.org/packages/release/BiocViews.html#___AnnotationData)
- Conferences
    + upcoming conference in Cambridge - [December 2017](https://bioconductor.github.io/EuroBioc2017/)

For this session, we will introduce the Bioconductor project as a means of analysing high-throughput data.

## Installing a package

All Bioconductor software packages are listed under

- bioconductor.org -> Install -> Packages -> Analysis *software* packages
- e.g. the [edgeR landing page](http://bioconductor.org/packages/release/bioc/html/edgeR.html)
- installation instructions are given, which involve running the `biocLite` command
    + this will also install any additional dependencies
- you only need to run this procedure once for each version of R

```{r eval=FALSE}
## You don't need to run this; edgeR should already be installed for the course
source("http://www.bioconductor.org/biocLite.R")
biocLite("edgeR")
```

Once installed, a Bioconductor package can be loaded in the usual way with the `library` function. All packages are required to have a *vignette*, which gives detailed instructions on how to use the package and the workflow of commands.
Some packages such as `edgeR` have very comprehensive *user guides* with lots of use-cases.

```{r message=FALSE,eval=FALSE}
library(edgeR)
vignette("edgeR")
edgeRUsersGuide()
```

Package documentation can also be accessed via the *Help* tab in RStudio.

## Structures for data analysis

Complex data structures are used in Bioconductor to represent high-throughput data, but we often have simple functions that we can use to access the data. We will use some example data available through Bioconductor to demonstrate how high-throughput data can be represented, and also to review some basic concepts in data manipulation in R.

- the data are from a *microarray* experiment. We will be concentrating on more modern technologies in this class, but most of the R techniques required will be similar
- [experimental data](http://bioconductor.org/packages/release/BiocViews.html#___ExperimentData) packages are available through Bioconductor, and can be installed in the way we just described
    + the package should already be installed on your computer, so you won't need to run this.

```{r eval=FALSE}
## No need to run this - for reference only!
## biocLite("breastCancerVDX")
```

To make the dataset accessible in R, we first need to load the package. If we navigate to the documentation for `breastCancerVDX` in RStudio, we find that it provides an object called `vdx`, which we load into R's memory using the `data` function.

```{r message=FALSE}
library(breastCancerVDX)
data(vdx)
```

The object `vdx` is a representation of a breast cancer dataset that has been converted for use with standard Bioconductor tools.
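Before looking at the real object, here is a small base-R sketch of the kind of structure we are about to work with (toy random numbers, not the real `vdx` values; `eToy` and `metaToy` are made-up names): an expression-style matrix with probes as rows and samples as columns, plus a matching metadata table.

```{r}
## Toy illustration only -- random numbers, not the vdx data
eToy <- matrix(rnorm(12), nrow = 3, ncol = 4,
               dimnames = list(paste0("probe", 1:3),
                               paste0("sample", 1:4)))
metaToy <- data.frame(er = c(0, 1, 1, 0),
                      row.names = colnames(eToy))
dim(eToy)                                  ## 3 probes, 4 samples
all(colnames(eToy) == rownames(metaToy))   ## TRUE: samples line up
```

The real dataset follows the same pattern, just with many more rows and columns.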
The package authors don't envisage that we will want to view the entire dataset at once, so they have provided a number of ways to interact with the data

- typing the name of the object provides a summary
    + how many genes are in the dataset, how many samples, etc.

```{r}
vdx
```

## Accessing expression values

The expression values can be obtained by the `exprs` function:

- remember, `<-` is used for assignment to create a new variable
- the data are stored in a `matrix` in R
    + it is a good idea to check the dimensions using `dim`, `ncol`, `nrow`, etc.

```{r}
eValues <- exprs(vdx)
class(eValues)
dim(eValues)
ncol(eValues)
nrow(eValues)
```

- the row names are the manufacturer-assigned IDs for particular probes
- the column names are the identifiers for each patient in the study
- each entry is a *normalised* log$_2$ intensity value for a particular gene in a given sample
    + we won't talk about normalisation here, but basically the data have been processed to eliminate technical variation
- subsetting a matrix is done using the `[row, column]` notation
    + the function `c` is used to make a one-dimensional *vector*
    + the shortcut `:` can be used to stand for a sequence of consecutive numbers

```{r}
eValues[c(1,2,3),c(1,2,3,4)]
eValues[1:3,1:4]
```

- we can also omit certain rows or columns from the output by prefixing the indices with a `-`

```{r eval=FALSE}
eValues[1:3,-(1:4)]
```

## Simple visualisations

The most basic plotting function in R is `plot`

- using the `plot` function with a vector will plot the values of that vector against the index
    + what do you think is displayed in the plot below?

```{r}
plot(eValues[1,])
## Various aspects of the plot can be controlled by specifying additional arguments
```

- one possible use is to compare the values in a vector with respect to a given factor
- but we don't know the clinical variables in our dataset yet (to come later)
- a boxplot can also accept a matrix or data frame as an argument
- what do you think the following plot shows?
```{r fig.width=12}
boxplot(eValues,outline=FALSE)
```

## Accessing patient data

The *metadata*, or phenotypic data, for the samples is retrieved using the `pData` function.

```{r}
metadata <- pData(vdx)
metadata
```

- individual columns can be accessed using the `$` notation
- *auto-complete* is available in RStudio; once you type the `$` it should give you a list of possible options
- the data are returned as a *vector*, which can be fed into other standard plotting and analysis functions

```{r eval=FALSE}
metadata$samplename
```

******
******
******

## Exercise

- what type of R object is used to store the metadata?
- why is this different to the object used for the expression values?
- use the square bracket notation `[]` to print
    + the first 10 rows of the metadata object, first five columns
    + the last 10 rows of the metadata object, columns 7 to 9
- visualise the distribution of the patient ages using a histogram
- calculate the average age of patients in the study with the `mean` function
    + what do you notice about the result?
    + can you change the arguments to `mean` to get a more meaningful result?

******
******
******

```{r}

```

So far we have been able to print out a subset of our data by specifying a set of numeric indices (e.g. first 10 rows, etc.). Let's say we're interested in high-grade tumours, in which case we might not know in advance which rows these correspond to

- `==`, `>`, `<`, `!=` can be used to make a *logical comparison*
- the result is a `TRUE` or `FALSE` indicating whether each entry satisfies the test condition, or not
    + however, if a particular entry in the vector is `NA`, the resulting logical vector will have `NA` at the same positions

```{r eval=FALSE}
metadata$grade == 3
metadata$age > 70
```

Such a logical vector can then be used for subsetting

- `which` can be used to make sure there are no `NA` values in the logical vectors
    + it gives the numeric indices that correspond to `TRUE`
- here, we don't specify any column indices inside the `[]`
    + R will print all the columns
    + however, don't forget the comma!
    + if you do, R will still try and do something. It almost certainly won't be what you expected

```{r}
metadata[which(metadata$grade == 3),]
```

We can use the same expression to subset the columns of the expression matrix

- why can we do this? Because the *columns* of the expression matrix are in the same order as the *rows* of the metadata
    + don't believe me? see below...
- this isn't a coincidence. The data have been carefully curated to ensure that this is the case
- data stored in online repositories are often organised in this way

```{r eval=FALSE}
colnames(eValues)
rownames(metadata)
colnames(eValues) == rownames(metadata)
all(colnames(eValues) == rownames(metadata))
```

- we can subset the expression data according to our clinical data, and save the result to a file

```{r eval=FALSE}
highGradeExpression <- eValues[,metadata$grade==3]
write.csv(highGradeExpression, file="highGradeTumours.csv")
```

- in fact, we can subset the entire `vdx` object by sample subsets if we wish

```{r}
vdx[,metadata$grade==3]
```

Previously, we used a boxplot to visualise the expression levels of all genes in a given sample to look for trends across the dataset.
Another use for a boxplot is to visualise the expression level of a particular gene with respect to the sample metadata

- we can extract the column of interest with a `$` and use the *formula* syntax
    + `table` in this case will tell us how many observations of each category are present
- R will be clever and convert the variable into a `factor` type if required

```{r}
fac <- metadata$er
table(fac)
boxplot(eValues[1,] ~ fac,
        xlab="ER Status",
        ylab="Expression Level",
        col=c("steelblue","goldenrod"))
```

Performing a test to assess significance follows similar syntax

- `t.test` is the generic function to perform a t-test, and can be adapted to different circumstances
    + e.g. whether or not our data are paired
    + see the help for `t.test` for more details
- for now, we will gloss over testing assumptions on the data, such as requiring a *normal* (Gaussian) distribution, and multiple testing correction

```{r}
t.test(eValues[1,]~fac)
```

## Accessing feature (gene) information

We could be lucky, and the first row in the expression matrix could be our favourite gene of interest! However, this is unlikely to be the case and we will have to figure out which row we want to plot

- we can use another aspect of the `vdx` object: the *feature data*
- there is a handy `fData` function to access these data
- again, this gives us a *data frame*
- this is a pre-built table supplied with the dataset
    + later in the course we will look at using online services and databases for annotation and converting between identifiers

```{r}
features <- fData(vdx)
class(features)
features[1:10,]
```

As we know, gene symbols (the more common gene names) can be accessed using the `$` syntax, returning a vector

```{r eval=FALSE}
features$Gene.symbol
```

We could inspect the data frame manually (e.g. using `View(features)`) and identify the row number corresponding to our gene of interest.
However, as aspiring R programmers, there is a better way

- `==` tests for exact matches
- `match` will return the *index* of the first match
- `grep` can be used for partial matches
- each of the above will give a *vector* that can be used to subset the expression values

```{r}
which(features$Gene.symbol == "BRCA1")
match("BRCA1",features$Gene.symbol)
grep("BRCA1",features$Gene.symbol)
grep("BRCA",features$Gene.symbol)
```

******
******
******

## Exercise

- Verify that the rows of the feature matrix and the expression values are in the same order
- Find the row corresponding to your favourite gene
    + if you don't have one, try `ESR1`
    + if you find multiple matches, pick the first one that occurs
- Does the expression level of this gene appear to be associated with the ER status?

******
******
******

```{r}

```

## Testing all genes for significance

Later in the course we will see how to execute a *differential expression* analysis for RNA-seq data, and discuss some of the issues surrounding this.
For now we will perform a simple two-sample t-test for all genes in our study, and derive a results table

- firstly, we load the `genefilter` package, which has a very convenient function for performing many t-tests in parallel
- `rowttests` will run a t-test for each row in a given matrix and produce an output table
    + `statistic`: the test statistic
    + `dm`: the difference in means
    + `p.value`: the p-value
- `rowttests` expects a *factor* as the second argument, so we have to use `as.factor`
    + as usual, we can get help with `?rowttests`

```{r}
library(genefilter)
tstats <- rowttests(eValues, as.factor(metadata$er))
head(tstats)
hist(tstats$statistic)
?rowttests
```

The rows are ordered in the same way as the input matrix

- to change this to increasing significance we can use the `order` function
- when given a vector, `order` will return a vector of the same length giving the permutation that rearranges that vector into ascending or descending order
    + how can we find out what the default way of ordering is?
```{r}
x <- c(9, 3, 4, 2, 1, 6, 5, 10, 8, 7)
x
order(x)
x[order(x)]
```

- so if we want to order by p-value, we first use `order` on the p-value vector
- this can then be used to rearrange the rows of the table

```{r}
tstats[order(tstats$p.value,decreasing = FALSE),]
```

- note that we could save the result of `order` as a variable and then use it in the subsetting
    + either is fine; it's a matter of personal preference

```{r}
p.order <- order(tstats$p.value,decreasing = FALSE)
tstats[p.order,]
```

However, the table we get isn't immediately useful unless we can recognise the manufacturer probe IDs

- to provide extra annotation to the table, we can *column bind* (`cbind`) the columns of test statistics with those from the feature matrix
- be careful though: we can only do this in cases where the rows are in direct correspondence

```{r}
all(rownames(tstats) == rownames(features))
tstats.annotated <- cbind(tstats, features[,c("Gene.symbol","EntrezGene.ID","Chromosome.location")])
tstats.annotated
```

Now when we order by p-value, the extra columns that we just added allow us to interpret the results more easily

```{r}
tstats.ordered <- tstats.annotated[order(tstats$p.value,decreasing = FALSE),]
tstats.ordered
```

We can also query this table to look for our favourite genes of interest

- `%in%` is a simplified way to perform matches to multiple items in a vector

```{r}
tstats.ordered[grep("ESR1",tstats.ordered$Gene.symbol),]
tstats.ordered[tstats.ordered$Gene.symbol %in% c("ESR1","GATA3","FOXA1"),]
```

******
******
******

## Exercise

- From the annotated table above, select all genes with p-values less than 0.05
- Write this data frame to a `csv` file
- (OPTIONAL) Use the `p.adjust` function to produce a vector of adjusted p-values.
Add this as an extra column to your table of results and write it to a file

******
******
******

```{r}
tstats.ordered$p.value.adjusted <- p.adjust(tstats.ordered$p.value)
tstats.ordered
```

### (Optional) Downloading data from GEO

For those that might be interested, here is an overview of the commands one might use to download data from the Gene Expression Omnibus. The `GEOquery` package is used, and you have to know the *accession* number of the dataset that you want to analyse. Typically this would be stated in a publication in the form *GSE....*

```{r eval=FALSE}
## If not installed already, install GEOquery with
## source("http://www.bioconductor.org/biocLite.R")
## biocLite("GEOquery")
library(GEOquery)
mydata <- getGEO("GSE1729")
```

The `mydata` object that is created is a list in R. This is used because some studies might involve data generated on different platforms, which have separate annotations.

```{r}
length(mydata)
mydata <- mydata[[1]]
mydata
boxplot(exprs(mydata),outline=FALSE)
```

For more detailed information on microarrays, [see our course](http://bioinformatics-core-shared-training.github.io/microarray-analysis/)
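As a footnote to the multiple-testing exercise earlier: `p.adjust` takes a `method` argument, and its default is `"holm"`; `"BH"` gives the widely used Benjamini-Hochberg false discovery rate. A minimal sketch with made-up p-values (not values from our dataset):

```{r}
## made-up p-values for illustration only
pvals <- c(0.001, 0.01, 0.02, 0.04, 0.3)
p.adjust(pvals)                  ## default method is "holm"
p.adjust(pvals, method = "BH")   ## Benjamini-Hochberg adjustment
```

See `?p.adjust` for the full list of available methods.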