In this section we are going to cover the basics of dimensionality reduction. These methods allow us to represent our high-dimensional data (with thousands of genes measured across thousands of cells) in a reduced set of dimensions, both for visualisation and for more efficient downstream analysis.
In dimensionality reduction we attempt to find a way to represent all the information we have in expression space in a form that we can easily interpret. High-dimensional data has several issues. There is a high computational cost to performing analyses on 30,000 genes and 47,000 cells (in the Caron dataset). Humans live in a 3D world; we cannot easily visualise all the dimensions. And then there is sparsity: as the number of dimensions increases, the distances between data points increase but also become more uniform, so near and distant neighbours become harder to tell apart. This causes problems when we try to cluster the data into biologically similar groupings. By applying dimensionality reduction methods we can mitigate these issues enough to make meaningful interpretations.
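The growing uniformity of distances can be seen in a small simulation (an illustrative sketch, not part of the course code): we draw random points in 2 and in 1,000 dimensions and compare the ratio between the farthest and nearest pair of points.

```r
# illustrative sketch: pairwise distances become more uniform as the
# number of dimensions grows (the "curse of dimensionality")
set.seed(42)
dist_ratio <- function(n_dims, n_points = 100) {
  x <- matrix(rnorm(n_points * n_dims), ncol = n_dims)
  d <- dist(x)        # all pairwise Euclidean distances
  max(d) / min(d)     # ratio of farthest to nearest pair
}
dist_ratio(2)      # large ratio: near and far pairs are clearly distinct
dist_ratio(1000)   # ratio close to 1: all pairs look roughly equidistant
```

With two dimensions the nearest pair of points is typically many times closer than the farthest; with 1,000 dimensions the two become nearly indistinguishable, which is why distance-based methods struggle in the full expression space.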
Before we start, let’s load our packages and read our data in.
# Load the libraries we will need for this practical
library(Seurat)
library(tidyverse)
# set default ggplot theme
theme_set(theme_classic())
We will load the Seurat object generated in the Normalisation section, which contains normalised counts for 500 cells per sample. For demonstration purposes we are not using the full dataset here, but you would in your own analyses.
# load the preprocessed Seurat object with 500 cells per sample
seurat_object <- readRDS("RObjects/SCT.500.rds")
Many scRNA-seq analysis procedures involve comparing cells based on their expression values across thousands of genes. Thus, each individual gene represents a dimension of the data (and the total number of genes represents the “dimensionality” of the data). More intuitively, if we had a scRNA-seq data set with only two genes, we could visualise our data in a two-dimensional scatterplot, with each axis representing the expression of a gene and each point in the plot representing a cell. Intuitively, we can imagine the same for 3 genes, represented as a 3D plot. Although it becomes harder to imagine, this concept can be extended to data sets with thousands of genes (dimensions), where each cell’s expression profile defines its location in the high-dimensional expression space.
As the name suggests, dimensionality reduction aims to reduce the number of separate dimensions in the data. This is possible because different genes are correlated if they are affected by the same biological process. Thus, we do not need to store separate information for individual genes, but can instead compress multiple features into a single dimension, e.g., an “eigengene” (Langfelder and Horvath 2007). This reduces computational work in downstream analyses like clustering, as calculations only need to be performed for a few dimensions, rather than thousands. It also reduces noise by averaging across multiple genes to obtain a more precise representation of the patterns in the data. And finally it enables effective visualisation of the data, for those of us who are not capable of visualizing more than 2 or 3 dimensions.
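The idea of compressing correlated genes into a single dimension can be seen in a toy example (a sketch using base R's `prcomp()`, not the course data): two strongly correlated "genes" are almost entirely captured by one principal component.

```r
# illustrative sketch: two correlated "genes" measured across 50 "cells"
set.seed(1)
gene1 <- rnorm(50)
gene2 <- gene1 + rnorm(50, sd = 0.1)   # tracks gene1 with a little noise
pca <- prcomp(cbind(gene1, gene2))
# PC1 captures nearly all the variance, so one dimension suffices
summary(pca)$importance["Proportion of Variance", "PC1"]
```

Because the two genes carry almost the same information, discarding the second dimension (PC2) loses very little.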
Here, we will cover the three methods most commonly used in scRNA-seq analysis: principal components analysis (PCA), t-SNE and UMAP.
Before we go into the details of each method, it is important to mention that while the first method (PCA) can be used for downstream analysis of the data (such as cell clustering), the latter two methods (t-SNE and UMAP) should only be used for visualisation and not for any other kind of analysis.
One of the most used and well-known methods of dimensionality reduction is principal components analysis (PCA). This method performs a linear transformation of the data, such that a set of variables (genes) are turned into new variables called Principal Components (PCs). These principal components combine information across several genes in a way that best captures the variability observed across samples (cells).
Watch the video linked below for more details of how PCA works:
After performing a PCA, there is no data loss, i.e. the total number of variables does not change. Only the fraction of variance captured by each variable differs.
Each PC represents a dimension in the new expression space. The first PC explains the highest proportion of variance possible. The second PC explains the highest proportion of variance not explained by the first PC. And so on: successive PCs each explain a decreasing amount of variance not captured by the previous ones.
The advantage of using PCA is that the total amount of variance explained by the first few PCs is usually enough to capture most of the signal in the data. Therefore, we can exclude the remaining PCs without much loss of information. The stronger the correlation between the initial variables, the stronger the reduction in dimensionality. We will see below how we can choose how many PCs to retain for our downstream analysis.
As discussed in the Preprocessing and QC section, Seurat objects contain a slot that can store representations of our data in reduced dimensions. This is useful as we can keep all the information about our single-cell data within a single object.
The RunPCA() function can be used to run PCA on a Seurat object, and returns an updated version of the single cell object with the PCA result added to the reductions slot.
By default the RunPCA() function will only use the variable features identified in the SCTransform step.
# Run PCA
seurat_object <- RunPCA(seurat_object,
features = VariableFeatures(seurat_object))
## PC_ 1
## Positive: FXYD2, CHI3L2, TRBC1, CASC15, ENSG00000259097, TRBC2, CD3D, BCL11B, TOX, ENSG00000236283
## MAL, CD1E, TMEM132D, ALDH1A2, MIR181A1HG, CCDC26, PCBP3, ENSG00000259345, TSHR, SPRED2
## PTPRM, ITM2A, ELOVL4, LINC01811, ENSG00000236656, ENSG00000271955, GNAQ, SCN2A, CAMK4, PEX5L
## Negative: HBA2, HBB, HBA1, HBD, AHSP, HBM, ALAS2, CD74, CA1, SLC25A37
## BLVRB, HLA-DRA, SLC4A1, GYPB, IFI27, TCL1A, SNCA, GYPA, ENSG00000287092, EBF1
## HEMGN, TCL1B, CD24, HMBS, LINC03000, HLA-DPB1, CD79B, MME, CA2, PSD3
## PC_ 2
## Positive: HBB, HBA2, HBA1, HBD, AHSP, HBM, CA1, ALAS2, BLVRB, SLC4A1
## SLC25A37, GYPB, SNCA, GYPA, IFI27, HEMGN, PRDX2, HMBS, CA2, RHAG
## CD36, ANK1, FECH, SOX6, EPB42, KLF1, RHCE, SELENBP1, BPGM, GMPR
## Negative: ENSG00000287092, MAML3, EBF1, TCL1A, AFF3, CD74, BACH2, TCF4, VPREB1, HLA-DRA
## FLT3, CDK14, PLEKHG1, PSD3, PCDH9, MEF2C, CD79B, ERG, STIM2, PDE4D
## AUTS2, MME, CD24, LARGE1, CD9, DGKD, NIBAN3, ZCCHC7, ENSG00000288101, STK32B
## PC_ 3
## Positive: S100A4, S100A6, TYROBP, SRGN, LYZ, LGALS1, S100A9, FCER1G, S100A8, CST3
## S100A11, FCN1, VCAN, FTL, MNDA, S100A10, CSTA, IFI30, S100A12, ENSG00000257764
## CCDC200, NAMPT, CTSS, CFD, TYMP, COTL1, ANXA1, FOS, MS4A6A, ATP2B1
## Negative: LINC03000, FXYD2, CHI3L2, CASC15, STMN1, TUBA1B, ENSG00000259097, TUBB, TCL1B, H4C3
## DNTT, TOX, H1-0, CACNB2, MDGA2, CALN1, ENSG00000227706, MME, UHRF1, HMGB2
## CD1E, ENSG00000271955, PTPRM, MDK, LINC01811, ENSG00000285534, ENSG00000290032, ENSG00000236283, CD24, CDK6
## PC_ 4
## Positive: MAML3, ENSG00000287092, CCDC26, FLT3, AFF3, PLEKHG1, ENSG00000288101, STIM2, CDK14, LINC-PINT
## LNCAROD, AHSP, FGF13, VPREB1, SLC25A37, GPC6, DGKD, HBD, PLXDC2, HBM
## SFMBT2, MIR181A1HG, TCF4, IL3RA, CA1, ALAS2, SLC4A1, NAV1, IPCEF1, BLVRB
## Negative: LINC03000, TCL1B, CD74, CACNB2, B2M, ENSG00000285534, RPS27, MDGA2, HLA-DRA, ACTB
## LTB, ENSG00000227706, RPL41, STMN1, CD24, MDK, NRG3, CD52, ENSG00000290032, JUNB
## IFITM1, CCN2, HLA-DPB1, PCLO, H1-0, DUSP1, ARHGAP24, CYGB, CD27, KCNN1
## PC_ 5
## Positive: SKAP1, IFITM1, BACH2, JUNB, RPS27, IGKC, TC2N, ZNF331, PCED1B-AS1, SCML4
## IGHM, IL32, NELL2, TRAC, LEPROTL1, ANK3, TNIK, ACSM3, CCL5, RPL34
## B2M, CLEC2D, BTG1, RPL21, LINC-PINT, AFF3, INPP4B, CD247, ANKRD44, CCR7
## Negative: LYZ, CST3, TYROBP, LGALS1, S100A9, S100A8, FCN1, VCAN, FCER1G, S100A4
## CSTA, S100A6, MNDA, S100A11, S100A12, IFI30, CCDC200, ENSG00000257764, H4C3, TUBA1B
## FTL, NAMPT, CFD, FXYD2, CHI3L2, LST1, HMGB2, CXCL8, TYMP, LRMDA
Running this function adds a new reduction called pca to the reductions slot of our Seurat object, containing the PCA representation of our data. The available reductions can be listed using the Reductions() function.
# check the new reductions slot
Reductions(seurat_object)
## [1] "pca"
The output above tells us which genes contribute most, positively and negatively, to each principal component. One of the first things to investigate after doing a PCA is how much variance is explained by each PC. We can check the standard deviation of each PC using the Stdev() function.
# check the variance explained by each PC
Stdev(seurat_object, reduction = "pca")
## [1] 15.897273 15.248416 13.441293 11.362094 10.841569 10.133046 9.024764 7.373261 7.056065 6.859558 6.498357 5.817431 5.727912 5.309042 4.930905 4.691352 4.505406
## [18] 4.320838 4.236679 3.950957 3.748484 3.713057 3.619418 3.604557 3.537215 3.503355 3.435216 3.304214 3.198848 3.158598 3.074507 3.059673 3.000566 2.948486
## [35] 2.891171 2.878351 2.833344 2.810347 2.804543 2.745095 2.739610 2.705372 2.694963 2.626253 2.609604 2.595554 2.555939 2.549913 2.528860 2.496451
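If you want each PC's contribution expressed as a percentage of the variance, you can square the standard deviations and normalise. A small helper function (a sketch; note the normalisation is relative to whichever PCs you pass in, not the total variance in the data):

```r
# convert a vector of PC standard deviations to percent variance explained
sdev_to_pct <- function(sdev) sdev^2 / sum(sdev^2) * 100

# on our object this would be:
# sdev_to_pct(Stdev(seurat_object, reduction = "pca"))
# e.g. applied to the first few values printed above:
round(sdev_to_pct(c(15.90, 15.25, 13.44, 11.36, 10.84)), 1)
```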
The typical way to view this information is using what is known as a “scree plot”. In Seurat the function is called ElbowPlot(), and it plots the standard deviation of each PC, which is a measure of how much variance is explained by that PC. The more variance explained, the more important that PC is for representing the data. We can use this plot to decide how many PCs to retain for downstream analysis.
# plot the variance explained by each PC
ElbowPlot(seurat_object, ndims = 50)
We can see that the first two PCs explain a substantial amount of the variance, and that very little variation is explained beyond 10-15 PCs.
To visualise our cells in the reduced dimension space defined by PC1 and PC2, we can use the DimPlot() function.
# plot the PCA
DimPlot(seurat_object,
reduction = "pca")
The proximity of cells in this plot reflects the similarity of their expression profiles.
We can also plot different PCs, using the dims option:
# plot PC2 and PC3
DimPlot(seurat_object,
reduction = "pca",
dims = c(2, 3))
We can also split the plot by different options using the split.by option. For example, we can split the plot by sample group.
# plot PC1 and PC2 split by sample group
DimPlot(seurat_object,
reduction = "pca",
split.by = "SampleGroup")
The t-Distributed Stochastic Neighbor Embedding (t-SNE) approach addresses the main shortcoming of PCA, which is that it can only capture linear transformations of the original variables (genes). Instead, t-SNE allows for non-linear transformations, while preserving the local structure of the data. This means that neighbourhoods of similar samples (cells, in our case) will appear together in a t-SNE projection, with distinct cell clusters separating from each other.
As you can see below, compared to PCA we get much tighter clusters of samples, with more of the structure in the data captured by the two t-SNE axes than by the first two principal components.
We will not go into the details of the algorithm here, but briefly it involves two main steps:
Calculating a similarity matrix between every pair of samples. This similarity is scaled by a Normal distribution, such that points that are far away from each other are “penalised” with a very low similarity score. The variance of this normal distribution can be thought of as a “neighbourhood” size when computing similarities between cells, and is parameterised by a term called perplexity.
Then, samples are projected on a low-dimensional space (usually two dimensions) such that the similarities between the points in this new space are as close as possible to the similarities in the original high-dimensional space. This step involves a stochastic algorithm that “moves” the points around until it converges on a stable solution. In this case, the similarity between samples is scaled by a t-distribution (that’s where the “t” in “t-SNE” comes from), which is used instead of the Normal to guarantee that points within a cluster are still distinguishable from each other in the 2D-plane (the t-distribution has “fatter” tails than the Normal distribution).
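The first step can be sketched in a few lines of base R (illustrative only; real t-SNE tunes a separate sigma for each point so that the effective neighbourhood size matches the chosen perplexity):

```r
# illustrative sketch of t-SNE's first step: Gaussian similarities
set.seed(1)
x <- matrix(rnorm(10 * 2), ncol = 2)   # 10 points in 2 dimensions
d2 <- as.matrix(dist(x))^2             # squared pairwise distances
sigma <- 1                             # fixed neighbourhood width (toy value)
p <- exp(-d2 / (2 * sigma^2))          # far-away pairs get near-zero similarity
diag(p) <- 0                           # a point is not its own neighbour
p <- p / rowSums(p)                    # conditional probabilities per point
```

Each row of `p` now describes how likely each other point is to be picked as a neighbour; it is these probabilities that the low-dimensional embedding tries to reproduce.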
Watch this video if you want to learn more about how t-SNE works:
There are two important points to remember: the resulting projection depends strongly on the chosen perplexity, and the algorithm is stochastic, so runs with different random seeds can produce different results.
See this interactive article on “How to Use t-SNE Effectively”, which illustrates how changing these parameters can lead to widely different results.
Importantly, because of the non-linear nature of this algorithm, strong interpretations based on how distant different groups of cells are from each other on a t-SNE plot are discouraged, as they are not necessarily meaningful. This is why it is often the case that the x- and y-axis scales are omitted from these plots (as in the example above), as they are largely uninterpretable. Therefore, the results of a t-SNE projection should be used for visualisation only and not for downstream analysis (such as cell clustering).
Similarly to how we did with PCA, there are functions that can run a t-SNE directly on our Seurat object. We will leave this exploration for you to do in the following exercises, but the basic code is very similar to that used with PCA. For example, the following would run t-SNE with default options:
# run t-SNE using default options
seurat_object <- RunTSNE(seurat_object,
reduction = "pca")
# confirm a new reduction was added to the object
Reductions(seurat_object)
## [1] "pca" "tsne"
# plot the t-SNE
DimPlot(seurat_object,
reduction = "tsne")
Notice that by default it colours the cells using the orig.ident column in the meta.data.
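To colour the cells by a different metadata column instead, you can use the group.by argument of DimPlot() (here using the SampleGroup column present in this dataset's metadata):

```r
# colour the t-SNE by sample group rather than orig.ident
DimPlot(seurat_object,
        reduction = "tsne",
        group.by = "SampleGroup")
```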
We want to achieve the following: run the t-SNE using the first 10 PCs, with a perplexity of 30 and a fixed random seed, storing the result under a new name in the Reductions slot of the Seurat object.
The code below shows how you can do this:
seurat_object <- RunTSNE(seurat_object,
reduction = "pca",
dims = 1:10,
perplexity = 30,
seed.use = 123,
reduction.name = "TSNE_perplex30",
reduction.key = "TSNE30_")
DimPlot(seurat_object,
reduction = "TSNE_perplex30")
Answer
# re-run the t-SNE with a different seed
seurat_object <- RunTSNE(seurat_object,
reduction = "pca",
dims = 1:10,
perplexity = 30,
seed.use = 321,
reduction.name = "TSNE_perplex30_seed321",
reduction.key = "TSNE30seed321_")
# plot the new t-SNE
DimPlot(seurat_object,
reduction = "TSNE_perplex30_seed321")
Facet these plots by SampleName to better understand where each marker is mostly expressed
Hint
The function DimPlot() has an option called split.by that allows you to facet the plot by any of the columns in the meta.data slot of your seurat object. You can use this to facet by SampleName. It’s best to specify the number of columns in the facet using the ncol option to make the plot easier to read.
Answer
# plot the t-SNE faceted by SampleName
DimPlot(seurat_object,
reduction = "TSNE_perplex30_seed321",
split.by = "SampleName",
ncol = 4)
Rerun the t-SNE using different perplexity values (for example 5 and 500). Name those reductions as “TSNE_perplex5” and “TSNE_perplex500” respectively.
Answer
First re-run the t-SNE with different perplexity levels (running at perplexity 500 may take a little time).
# re-run the t-SNE with perplexity 5
seurat_object <- RunTSNE(seurat_object,
reduction = "pca",
dims = 1:10,
perplexity = 5,
seed.use = 321,
reduction.name = "TSNE_perplex5",
reduction.key = "TSNE5_")
# re-run the t-SNE with perplexity 500
seurat_object <- RunTSNE(seurat_object,
reduction = "pca",
dims = 1:10,
perplexity = 500,
seed.use = 321,
reduction.name = "TSNE_perplex500",
reduction.key = "TSNE500_")
# visualise all projections using DimPlot
DimPlot(seurat_object, reduction = "TSNE_perplex5")
DimPlot(seurat_object, reduction = "TSNE_perplex30")
DimPlot(seurat_object, reduction = "TSNE_perplex500")
Instead of colouring by SampleName we can colour by expression of known cell markers by using the FeaturePlot() function.
Hint
You can replace what we colour by with any of the gene names in our dataset as they are stored as the rownames in our object. Look at the FeaturePlot() help page to find the correct argument to use.
Answer
E.g. for CD79A:
# colour tsne by CD79A expression
FeaturePlot(seurat_object,
features = "CD79A",
reduction = "TSNE_perplex30")
Some things to note from our data exploration:
Similarly to t-SNE, UMAP performs a non-linear transformation of the data to project it down to lower dimensions. One difference from t-SNE is that this method claims to preserve both local and global structure (i.e. the relative positions of clusters are, most of the time, meaningful). However, it is worth mentioning that there is some debate as to whether this is always the case, as explored in this paper by Chari, Banerjee and Pachter (2021).
Compared to t-SNE, the UMAP visualisation tends to have more compact visual clusters with more empty space between them, and it attempts to preserve more of the global structure. From a practical perspective, UMAP is also much faster than t-SNE, which may be an important consideration for large datasets.
Similarly to t-SNE, since this is a non-linear method of dimensionality reduction, the results of a UMAP projection should be used for visualisation only and not for downstream analysis (such as cell clustering).
Running UMAP is very similar to what we’ve seen already for PCA and t-SNE, only the function name changes:
# run UMAP using the first 10 PCs
seurat_object <- RunUMAP(seurat_object,
reduction = "pca",
dims = 1:10)
## 16:33:40 UMAP embedding parameters a = 0.9922 b = 1.112
## 16:33:40 Read 5500 rows and found 10 numeric columns
## 16:33:40 Using Annoy for neighbor search, n_neighbors = 30
## 16:33:40 Building Annoy index with metric = cosine, n_trees = 50
## 0% 10 20 30 40 50 60 70 80 90 100%
## [----|----|----|----|----|----|----|----|----|----|
## **************************************************|
## 16:33:40 Writing NN index file to temp file /tmp/RtmpAtSWg0/file468e37d56d57
## 16:33:40 Searching Annoy index using 1 thread, search_k = 3000
## 16:33:41 Annoy recall = 100%
## 16:33:41 Commencing smooth kNN distance calibration using 1 thread with target n_neighbors = 30
## 16:33:42 Initializing from normalized Laplacian + noise (using RSpectra)
## 16:33:42 Commencing optimization for 500 epochs, with 215456 positive edges
## 16:33:42 Using rng type: pcg
## 16:33:46 Optimization finished
# confirm a new reduction was added to the object
Reductions(seurat_object)
## [1] "pca" "tsne" "TSNE_perplex30" "TSNE_perplex30_seed321" "TSNE_perplex5" "TSNE_perplex500" "umap"
# visualise the UMAP
DimPlot(seurat_object,
reduction = "umap")
Because UMAP also involves a series of randomisation steps, setting the random-number generator seed (as we did for the t-SNE above) is critical if we want to obtain reproducible results from run to run.
Like t-SNE, UMAP has its own suite of hyperparameters that affect the visualization. Of these, the number of neighbors (n_neighbors) and the minimum distance between embedded points (min_dist) have the greatest effect on the granularity of the output. If these values are too low, random noise will be incorrectly treated as high-resolution structure, while values that are too high will discard fine structure altogether in favor of obtaining an accurate overview of the entire dataset. Again, it is a good idea to test a range of values for these parameters to ensure that they do not compromise any conclusions drawn from a UMAP plot.
See this interactive article that goes into more depth about the underlying methods, and explores the impacts of changing the n_neighbors and min_dist parameters: Understanding UMAP.
Similarly to what we did with t-SNE, we will explore this further in the following exercise.
Our main objectives are to run UMAP using the first 10 PCs, with 30 neighbours and a fixed seed, storing the result as a new reduction (look at the help page for the RunUMAP() function to find the correct arguments to use).
Answer
# run the UMAP with 30 neighbours
seurat_object <- RunUMAP(seurat_object,
reduction = "pca",
dims = 1:10,
seed.use = 123,
n.neighbors = 30,
reduction.name = "UMAP_neighbors30",
reduction.key = "UMAPneighbors30_")
## 16:33:47 UMAP embedding parameters a = 0.9922 b = 1.112
## 16:33:47 Read 5500 rows and found 10 numeric columns
## 16:33:47 Using Annoy for neighbor search, n_neighbors = 30
## 16:33:47 Building Annoy index with metric = cosine, n_trees = 50
## 0% 10 20 30 40 50 60 70 80 90 100%
## [----|----|----|----|----|----|----|----|----|----|
## **************************************************|
## 16:33:47 Writing NN index file to temp file /tmp/RtmpAtSWg0/file468e354ca22d
## 16:33:47 Searching Annoy index using 1 thread, search_k = 3000
## 16:33:48 Annoy recall = 100%
## 16:33:49 Commencing smooth kNN distance calibration using 1 thread with target n_neighbors = 30
## 16:33:49 Initializing from normalized Laplacian + noise (using RSpectra)
## 16:33:49 Commencing optimization for 500 epochs, with 215456 positive edges
## 16:33:49 Using rng type: pcg
## 16:33:53 Optimization finished
Now visualise the resulting UMAP projection.
Answer
# visualise the resulting UMAP projection
DimPlot(seurat_object,
reduction = "UMAP_neighbors30")
Run the UMAP with 5 and 500 neighbours and compare the results. As with the t-SNE, running with higher neighbour values will take more time.
Answer
# run the UMAP with 5 neighbours
seurat_object <- RunUMAP(seurat_object,
reduction = "pca",
dims = 1:10,
seed.use = 123,
n.neighbors = 5,
reduction.name = "UMAP_neighbors5",
reduction.key = "UMAPneighbors5_")
## 16:33:54 UMAP embedding parameters a = 0.9922 b = 1.112
## 16:33:54 Read 5500 rows and found 10 numeric columns
## 16:33:54 Using Annoy for neighbor search, n_neighbors = 5
## 16:33:54 Building Annoy index with metric = cosine, n_trees = 50
## 0% 10 20 30 40 50 60 70 80 90 100%
## [----|----|----|----|----|----|----|----|----|----|
## **************************************************|
## 16:33:54 Writing NN index file to temp file /tmp/RtmpAtSWg0/file468e222b5a00
## 16:33:54 Searching Annoy index using 1 thread, search_k = 500
## 16:33:55 Annoy recall = 100%
## 16:33:55 Commencing smooth kNN distance calibration using 1 thread with target n_neighbors = 5
## 16:33:55 Initializing from normalized Laplacian + noise (using RSpectra)
## 16:33:56 Commencing optimization for 500 epochs, with 31772 positive edges
## 16:33:56 Using rng type: pcg
## 16:33:57 Optimization finished
# run the UMAP with 500 neighbours
seurat_object <- RunUMAP(seurat_object,
reduction = "pca",
dims = 1:10,
seed.use = 123,
n.neighbors = 500,
reduction.name = "UMAP_neighbors500",
reduction.key = "UMAPneighbors500_")
## 16:33:57 UMAP embedding parameters a = 0.9922 b = 1.112
## 16:33:57 Read 5500 rows and found 10 numeric columns
## 16:33:57 Using Annoy for neighbor search, n_neighbors = 500
## 16:33:57 Building Annoy index with metric = cosine, n_trees = 50
## 0% 10 20 30 40 50 60 70 80 90 100%
## [----|----|----|----|----|----|----|----|----|----|
## **************************************************|
## 16:33:58 Writing NN index file to temp file /tmp/RtmpAtSWg0/file468e523550b1
## 16:33:58 Searching Annoy index using 1 thread, search_k = 50000
## 16:34:16 Annoy recall = 100%
## 16:34:16 Commencing smooth kNN distance calibration using 1 thread with target n_neighbors = 500
## 16:34:18 Initializing from normalized Laplacian + noise (using RSpectra)
## 16:34:19 Commencing optimization for 500 epochs, with 892628 positive edges
## 16:34:19 Using rng type: pcg
## 16:34:27 Optimization finished
# visualise all projections using DimPlot
DimPlot(seurat_object, reduction = "UMAP_neighbors5")
DimPlot(seurat_object, reduction = "UMAP_neighbors30")
DimPlot(seurat_object, reduction = "UMAP_neighbors500")
Answer
# plot the two projections
DimPlot(seurat_object, reduction = "TSNE_perplex30")
DimPlot(seurat_object, reduction = "UMAP_neighbors30")
To answer some of those questions:
Key Points:
sessionInfo()
## R version 4.5.1 (2025-06-13)
## Platform: x86_64-pc-linux-gnu
## Running under: Ubuntu 22.04.5 LTS
##
## Matrix products: default
## BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.10.0
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0 LAPACK version 3.10.0
##
## locale:
## [1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8 LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8 LC_MESSAGES=C.UTF-8 LC_PAPER=C.UTF-8
## [8] LC_NAME=C LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C
##
## time zone: Europe/London
## tzcode source: system (glibc)
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] lubridate_1.9.4 forcats_1.0.1 stringr_1.6.0 dplyr_1.1.4 purrr_1.2.0 readr_2.1.5 tidyr_1.3.1 tibble_3.3.0 ggplot2_4.0.1
## [10] tidyverse_2.0.0 Seurat_5.3.1 SeuratObject_5.2.0 sp_2.2-0
##
## loaded via a namespace (and not attached):
## [1] deldir_2.0-4 pbapply_1.7-4 gridExtra_2.3 rlang_1.1.6 magrittr_2.0.4 RcppAnnoy_0.0.22 otel_0.2.0
## [8] spatstat.geom_3.6-0 matrixStats_1.5.0 ggridges_0.5.7 compiler_4.5.1 png_0.1-8 vctrs_0.6.5 reshape2_1.4.5
## [15] pkgconfig_2.0.3 fastmap_1.2.0 labeling_0.4.3 promises_1.5.0 rmarkdown_2.30 tzdb_0.5.0 xfun_0.54
## [22] cachem_1.1.0 jsonlite_2.0.0 goftest_1.2-3 later_1.4.4 spatstat.utils_3.2-0 irlba_2.3.5.1 parallel_4.5.1
## [29] cluster_2.1.8.1 R6_2.6.1 ica_1.0-3 spatstat.data_3.1-9 bslib_0.9.0 stringi_1.8.7 RColorBrewer_1.1-3
## [36] reticulate_1.44.0 spatstat.univar_3.1-4 parallelly_1.45.1 lmtest_0.9-40 jquerylib_0.1.4 scattermore_1.2 Rcpp_1.1.0
## [43] knitr_1.50 tensor_1.5.1 future.apply_1.20.0 zoo_1.8-14 sctransform_0.4.2 timechange_0.3.0 httpuv_1.6.16
## [50] Matrix_1.7-4 splines_4.5.1 igraph_2.2.1 tidyselect_1.2.1 abind_1.4-8 dichromat_2.0-0.1 yaml_2.3.10
## [57] spatstat.random_3.4-2 spatstat.explore_3.5-3 codetools_0.2-20 miniUI_0.1.2 listenv_0.10.0 lattice_0.22-7 plyr_1.8.9
## [64] withr_3.0.2 shiny_1.11.1 S7_0.2.1 ROCR_1.0-11 evaluate_1.0.5 Rtsne_0.17 future_1.67.0
## [71] fastDummies_1.7.5 survival_3.8-3 polyclip_1.10-7 fitdistrplus_1.2-4 pillar_1.11.1 KernSmooth_2.23-26 plotly_4.11.0
## [78] generics_0.1.4 rprojroot_2.1.1 RcppHNSW_0.6.0 hms_1.1.4 scales_1.4.0 globals_0.18.0 xtable_1.8-4
## [85] glue_1.8.0 lazyeval_0.2.2 tools_4.5.1 data.table_1.17.8 RSpectra_0.16-2 RANN_2.6.2 dotCall64_1.2
## [92] cowplot_1.2.0 grid_4.5.1 nlme_3.1-168 patchwork_1.3.2 cli_3.6.5 spatstat.sparse_3.1-0 spam_2.11-1
## [99] viridisLite_0.4.2 uwot_0.2.4 gtable_0.3.6 sass_0.4.10 digest_0.6.38 progressr_0.18.0 ggrepel_0.9.6
## [106] htmlwidgets_1.6.4 farver_2.1.2 htmltools_0.5.8.1 lifecycle_1.0.4 httr_1.4.7 here_1.0.2 mime_0.13
## [113] MASS_7.3-65