1 Introduction

Functional enrichment analysis methods such as gene set enrichment analysis (GSEA) are widely used for analyzing gene expression data. GSEA infers results at the level of gene sets by calculating enrichment scores for predefined sets of genes, and it therefore depends on the availability and accuracy of those gene sets. Because multiple terms may describe a single biological process, terms of gene sets or categories often overlap, which leads to redundancy among the enriched terms. Using deep learning, this package aims to predict enrichment scores for unique tokens or words from the text of gene set names, thereby resolving this overlap issue. Furthermore, a new term can be coined by combining tokens, and its enrichment score can be found by predicting the score of the combined tokens.

Text can be seen as sequential data, either as a sequence of characters or as a sequence of words. A Recurrent Neural Network (RNN) is a type of neural network that operates on sequential data, and it has been applied to a variety of natural language processing tasks such as machine translation. However, RNNs suffer from the long-term dependency problem: they struggle to retain information over long sequences. A Long Short-Term Memory (LSTM) network is a special kind of RNN designed to solve the long-term dependency problem. A bidirectional LSTM consists of two distinct LSTM networks, the forward LSTM and the backward LSTM, which process the sequence in opposite directions. The Gated Recurrent Unit (GRU) is a simplified version of the LSTM with fewer parameters, so the total number of parameters can be greatly reduced for a large neural network. Both LSTM and GRU are known to be successful remedies for the long-term dependency problem. The models here take terms of gene sets as input and enrichment scores as output, and predict enrichment scores of new terms.
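As an illustration of the architecture described above, a bidirectional LSTM regression model can be sketched with the keras package. The layer sizes below (a vocabulary of 1000 tokens, a maximum sequence length of 30, and an embedding dimension of 50) are illustrative assumptions, not fixed defaults.

```r
library(keras)

# A minimal sketch: an embedding layer, a bidirectional LSTM, and a
# single linear output unit that predicts the enrichment score.
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 1000 + 1,      # vocabulary size + 1 for padding
                  output_dim = 50,           # embedding dimension
                  input_length = 30) %>%     # maximum sequence length
  bidirectional(layer_lstm(units = 32)) %>%  # forward + backward LSTM
  layer_dense(units = 1)                     # predicted enrichment score

model %>% compile(optimizer = "adam", loss = "mse")
```

Because the output is a single continuous score, the model is trained as a regression with mean squared error rather than as a classifier.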



2 Example

2.1 Terms of gene sets

2.1.1 GSEA

Consider a simple example. Once GSEA is performed, its result is fed into the algorithm to train the deep learning models.
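A minimal sketch of this GSEA step, assuming the example data bundled with the fgsea package (`examplePathways` and `exampleRanks`; the exact input used here may differ):

```r
library(fgsea)

# Load the example pathways (a named list of gene sets) and the
# example gene-level statistics shipped with fgsea.
data(examplePathways)
data(exampleRanks)

# Permutation-based GSEA; nperm is an illustrative choice.
fgseaRes <- fgseaSimple(pathways = examplePathways,
                        stats    = exampleRanks,
                        nperm    = 10000)

# Inspect the most significant pathways.
head(fgseaRes[order(fgseaRes$pval), ])
```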

##                                                        pathway         pval
##    1:                                     Mitotic Prometaphase 0.0001520219
##    2:                  Resolution of Sister Chromatid Cohesion 0.0001537043
##    3:                                      Cell Cycle, Mitotic 0.0001255020
##    4:                             RHO GTPases Activate Formins 0.0001534684
##    5:                                               Cell Cycle 0.0001227446
##   ---                                                                      
## 1416: Downregulation of SMAD2 3:SMAD4 transcriptional activity 0.0022655188
## 1417:                                  HATs acetylate histones 0.0002779322
## 1418:                   TRAF3-dependent IRF activation pathway 0.0010962508
## 1419:                                     Nephrin interactions 0.0013498313
## 1420:                                  Interleukin-6 signaling 0.0004174494
##              padj         ES       NES nMoreExtreme size
##    1: 0.004481064  0.7253270  2.963541            0   82
##    2: 0.004481064  0.7347987  2.954314            0   74
##    3: 0.004481064  0.5594755  2.751403            0  317
##    4: 0.004481064  0.6705979  2.717798            0   78
##    5: 0.004481064  0.5388497  2.688064            0  369
##   ---                                                   
## 1416: 0.028982313 -0.6457899 -1.984552            9   16
## 1417: 0.006365544 -0.4535612 -1.994238            0   68
## 1418: 0.017020529 -0.7176839 -2.022102            4   12
## 1419: 0.019558780 -0.6880106 -2.025979            5   14
## 1420: 0.008590987 -0.8311374 -2.079276            1    8
##                                      leadingEdge
##    1:   66336,66977,12442,107995,66442,52276,...
##    2:   66336,66977,12442,107995,66442,52276,...
##    3:   66336,66977,12442,107995,66442,12571,...
##    4:   66336,66977,107995,66442,52276,67629,...
##    5:   66336,66977,12442,107995,66442,19361,...
##   ---                                           
## 1416:    66313,20482,20481,17127,17128,83814,...
## 1417: 74026,319190,244349,75560,13831,246103,...
## 1418:   56489,12914,54131,54123,56480,217069,...
## 1419:   109711,14360,20742,17973,18708,12488,...
## 1420:        16194,16195,16451,12402,16452,20848


2.1.2 deep learning and embedding

Since deep learning architectures cannot process characters or words in their raw form, the text needs to be converted to numbers. By default, unigram and bigram sequences are used for tokenization. An integer index is assigned to each token, and each term is then converted to a sequence of integers. Sequences longer than the given maximum length are truncated, whereas shorter sequences are padded with zeros. Keras is a higher-level library built on top of TensorFlow, available in R through the keras package. The input to the Keras embedding layer consists of these integer token indices. The embedding layer, which acts as the first hidden layer of the neural network, maps each integer to a dense vector; these dense vectors are the word embeddings.
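The tokenization and padding described above can be sketched with keras. For simplicity this sketch uses only unigram tokens (the bigram tokens mentioned above are omitted), and the vocabulary size and maximum length are illustrative assumptions.

```r
library(keras)

# A few term names to tokenize (illustrative).
terms <- c("Cell Cycle", "DNA Replication", "Cell Cycle, Mitotic")

# Fit a tokenizer: each distinct word receives an integer index.
tok <- text_tokenizer(num_words = 1000) %>% fit_text_tokenizer(terms)

# Convert each term to a sequence of integer token indices.
seqs <- texts_to_sequences(tok, terms)

# Pad shorter sequences with zeros and truncate longer ones,
# so every sequence has the same fixed length of 30.
x <- pad_sequences(seqs, maxlen = 30)

dim(x)  # 3 terms x 30 positions
```

The resulting integer matrix `x` is what the embedding layer receives as input.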

##            token_term      pred
##    1:    TAK1 complex  2.735628
##    2: DNA Replication  2.693894
##    3:      MAP kinase  2.320575
##    4:      Cell Cycle  2.300612
##    5:     Maintenance  2.245384
##   ---                          
##  996:       Pre mRNAs -1.613444
##  997:   Nuclear event -1.663582
##  998:  cotransporters -1.665559
##  999:             Ion -1.798039
## 1000:            NFkB -1.823667
##                                                                       pathway
## 614                               TGF-beta receptor signaling activates SMADs
## 615                                    Signaling by TGF-beta Receptor Complex
## 624                             Downregulation of TGF-beta receptor signaling
## 1267 TGF-beta receptor signaling in EMT epithelial to mesenchymal transition 
##             pval       padj         ES       NES
## 614  0.042648445 0.21175102 -0.4407843 -1.551889
## 615  0.004676539 0.04614047 -0.4244875 -1.763638
## 624  0.033777574 0.18377071 -0.4868887 -1.591561
## 1267 0.726840855 0.88913295 -0.2872284 -0.788917


2.1.3 Monte Carlo p-value

Deep learning models predict only enrichment scores; p-values for the scores are not provided by the model, so the Monte Carlo p-value method is used within the algorithm. Ordinary Monte Carlo is a simulation technique for approximating the expectation of a function of a random variable when the exact expectation cannot be found analytically, and computing a p-value for a statistical test can easily be accomplished this way. The Monte Carlo p-value method simulates many datasets under the null hypothesis, computes the statistic for each simulated dataset, and then computes the percentile rank of the observed value among these simulated values. The number of tokens used for each simulation is the same as the length of the sequence of the corresponding term. If a new text does not contain any known tokens, its p-value is not available.
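The Monte Carlo p-value idea can be sketched in base R. The null statistic and the observed value below are purely illustrative; the package's actual simulation draws token scores rather than normal variates.

```r
set.seed(1)

# Observed test statistic for a term (illustrative value).
observed <- 2.3

# Simulate the statistic under an assumed null many times;
# here, e.g., the mean score of 3 randomly drawn tokens.
null_stats <- replicate(10000, mean(rnorm(3)))

# One-sided Monte Carlo p-value with the standard +1 correction,
# i.e. the percentile rank of the observed value among simulations.
mc_p <- (sum(null_stats >= observed) + 1) / (length(null_stats) + 1)
mc_p
```

The `+1` in numerator and denominator guarantees a strictly positive p-value even when no simulated statistic exceeds the observed one.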

##                     new_text test_value MC_p_value adj_p_value
## 1 Cell Cycle DNA Replication  3.4307842      0.000       0.000
## 2                 Cell Cycle  2.3006120      0.006       0.012
## 3            DNA Replication  2.6938937      0.002       0.006
## 4                 Cycle Cell  0.7583389      0.278       0.278
## 5            Replication DNA  1.8878999      0.008       0.012
## 6        TGF - beta receptor -1.5659361      0.060       0.072
##        new_text test_value MC_p_value adj_p_value
## 1 datum science  0.0712381         NA          NA


2.1.4 visualization

A visualization of the model architecture can be created.

## Model: "sequential"
## ________________________________________________________________________________
## Layer (type)                        Output Shape                    Param #     
## ================================================================================
## embedding (Embedding)               (None, 30, 50)                  50050       
## ________________________________________________________________________________
## bidirectional (Bidirectional)       (None, 64)                      21248       
## ________________________________________________________________________________
## dense (Dense)                       (None, 1)                       65          
## ================================================================================
## Total params: 71,363
## Trainable params: 71,363
## Non-trainable params: 0
## ________________________________________________________________________________


3 Case Study

The “airway” dataset has four cell lines with two conditions, control and treatment with dexamethasone. Using the package “DESeq2”, differentially expressed genes between control and treated samples are identified from the gene expression data, and the log2 fold change (log2FC) is used as the ranking score for GSEA. For GSEA, GO biological process (GOBP) gene sets for human are obtained from the package “org.Hs.eg.db” by using the package “BiocSet”. GSEA is performed with the package “fgsea”; since “fgsea” accepts a list, the gene sets are converted to a list. Finally, the result of GSEA is fitted to a deep learning model, and enrichment scores of new terms can then be predicted.
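The differential expression and GSEA steps of this workflow can be sketched as follows. The construction of the GOBP gene set list from “org.Hs.eg.db” via “BiocSet” is abbreviated to a placeholder, and the final model-fitting call is package-specific and omitted here.

```r
library(airway)
library(DESeq2)
library(fgsea)

# Differential expression: control vs dexamethasone, adjusting for cell line.
data(airway)
dds <- DESeqDataSet(airway, design = ~ cell + dex)
dds <- DESeq(dds)
res <- results(dds)

# Rank genes by log2 fold change for GSEA.
statistic <- res$log2FoldChange
names(statistic) <- rownames(res)
statistic <- statistic[!is.na(statistic)]

# `gobp_list` is assumed to be a named list of GOBP gene sets whose
# identifiers match names(statistic); its construction from
# org.Hs.eg.db / BiocSet is omitted in this sketch.
# fgseaRes <- fgsea(pathways = gobp_list, stats = statistic)
```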



4 Session information

## R version 4.1.2 (2021-11-01)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 20.04.3 LTS
## 
## Matrix products: default
## BLAS:   /home/biocbuild/bbs-3.14-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.14-bioc/R/lib/libRlapack.so
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_GB              LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
## [1] fgsea_1.20.0 ttgsea_1.2.1 keras_2.7.0 
## 
## loaded via a namespace (and not attached):
##  [1] sass_0.4.0             jsonlite_1.7.2         here_1.0.1            
##  [4] RhpcBLASctl_0.21-247.1 bslib_0.3.1            assertthat_0.2.1      
##  [7] lgr_0.4.3              yaml_2.2.1             slam_0.1-49           
## [10] pillar_1.6.4           lattice_0.20-45        glue_1.5.1            
## [13] reticulate_1.22        digest_0.6.29          RColorBrewer_1.1-2    
## [16] colorspace_2.0-2       htmltools_0.5.2        Matrix_1.4-0          
## [19] tm_0.7-8               rsparse_0.5.0          pkgconfig_2.0.3       
## [22] qdapRegex_0.7.2        textstem_0.1.4         textshape_1.7.3       
## [25] DiagrammeR_1.0.6.1     purrr_0.3.4            scales_1.1.1          
## [28] whisker_0.4            BiocParallel_1.28.3    tibble_3.1.6          
## [31] generics_0.1.1         ggplot2_3.3.5          ellipsis_0.3.2        
## [34] NLP_0.2-1              magrittr_2.0.1         crayon_1.4.2          
## [37] evaluate_0.14          float_0.2-6            stopwords_2.3         
## [40] tokenizers_0.2.1       fansi_0.5.0            SnowballC_0.7.0       
## [43] xml2_1.3.3             koRpus_0.13-8          tools_4.1.2           
## [46] data.table_1.14.2      lifecycle_1.0.1        stringr_1.4.0         
## [49] munsell_0.5.0          compiler_4.1.2         jquerylib_0.1.4       
## [52] lexicon_1.2.1          mlapi_0.1.0            rlang_0.4.12          
## [55] grid_4.1.2             rstudioapi_0.13        rappdirs_0.3.3        
## [58] htmlwidgets_1.5.4      visNetwork_2.1.0       base64enc_0.1-3       
## [61] rmarkdown_2.11         koRpus.lang.en_0.1-4   sylly_0.1-6           
## [64] gtable_0.3.0           DBI_1.1.1              R6_2.5.1              
## [67] gridExtra_2.3          sylly.en_0.1-3         tfruns_1.5.0          
## [70] textclean_0.9.3        knitr_1.36             dplyr_1.0.7           
## [73] tensorflow_2.7.0       fastmap_1.1.0          zeallot_0.1.0         
## [76] utf8_1.2.2             rprojroot_2.0.2        fastmatch_1.1-3       
## [79] text2vec_0.6           syuzhet_1.0.6          stringi_1.7.6         
## [82] parallel_4.1.2         Rcpp_1.0.7             vctrs_0.3.8           
## [85] png_0.1-7              tidyselect_1.1.1       xfun_0.29


