Back to Multiple platform build/check report for BioC 3.16

This page was generated on 2023-04-12 11:04:58 -0400 (Wed, 12 Apr 2023).

Hostname    OS                               Arch (*)   R version                                        Installed pkgs
nebbiolo2   Linux (Ubuntu 20.04.5 LTS)       x86_64     4.2.3 (2023-03-15) -- "Shortstop Beagle"         4502
palomino4   Windows Server 2022 Datacenter   x64        4.2.3 (2023-03-15 ucrt) -- "Shortstop Beagle"    4282
lconway     macOS 12.5.1 Monterey            x86_64     4.2.3 (2023-03-15) -- "Shortstop Beagle"         4310
Click on any hostname to see more info about the system (e.g. compilers).
(*) as reported by 'uname -p', except on Windows and Mac OS X.

CHECK results for baySeq on nebbiolo2


To the developers/maintainers of the baySeq package:
- Please allow up to 24 hours (and sometimes 48 hours) for your latest push to git@git.bioconductor.org:packages/baySeq.git to be reflected in this report. See "How and When does the builder pull? When will my changes propagate?" for more information.
- Make sure to use the following settings in order to reproduce any error or warning you see on this page.

raw results

Package 128/2183: baySeq 2.32.0 (landing page)
Maintainer: Thomas J. Hardcastle
Snapshot Date: 2023-04-10 14:00:05 -0400 (Mon, 10 Apr 2023)
git_url: https://git.bioconductor.org/packages/baySeq
git_branch: RELEASE_3_16
git_last_commit: c6e69d6
git_last_commit_date: 2022-11-01 11:04:42 -0400 (Tue, 01 Nov 2022)

Hostname    OS / Arch                               INSTALL  BUILD  CHECK  BUILD BIN
nebbiolo2   Linux (Ubuntu 20.04.5 LTS) / x86_64     OK       OK     ERROR
palomino4   Windows Server 2022 Datacenter / x64    OK       OK     OK     OK (UNNEEDED, same version is already published)
lconway     macOS 12.5.1 Monterey / x86_64          OK       OK     OK     OK (UNNEEDED, same version is already published)

Summary

Package: baySeq
Version: 2.32.0
Command: /home/biocbuild/bbs-3.16-bioc/R/bin/R CMD check --install=check:baySeq.install-out.txt --library=/home/biocbuild/bbs-3.16-bioc/R/site-library --timings baySeq_2.32.0.tar.gz
StartedAt: 2023-04-10 19:09:37 -0400 (Mon, 10 Apr 2023)
EndedAt: 2023-04-10 19:17:28 -0400 (Mon, 10 Apr 2023)
ElapsedTime: 471.0 seconds
RetCode: 1
Status:   ERROR  
CheckDir: baySeq.Rcheck
Warnings: NA
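The ERROR comes from a condition in `getLikelihoods()` that applies `&&` to a logical vector of length 2 (see the failure report in the command output below: the right-hand operand is `sapply(modelPriorSets, is.null)` over a two-element list). A minimal sketch of the failure mode, with toy data standing in for the vignette's objects:

```r
# `&&` requires length-1 operands. sapply() over a 2-element list returns a
# length-2 logical vector; using it with && is fatal in R >= 4.3, and under
# the _R_CHECK_LENGTH_1_LOGIC2_ check that the builders enable for R 4.2.x.
modelPriorSets <- list(A = 1:100, B = 101:1000)  # as in the vignette call
rhs <- sapply(modelPriorSets, is.null)           # c(A = FALSE, B = FALSE)
stopifnot(length(rhs) == 2)
# TRUE && rhs            # would raise "length > 1 in coercion to logical"
ok <- TRUE && any(rhs)   # reducing with any() keeps the operand length 1
```

Reducing the vector with `any()` (or `all()`, depending on the intended semantics) before it reaches `&&` avoids the coercion entirely.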

Command output

##############################################################################
##############################################################################
###
### Running command:
###
###   /home/biocbuild/bbs-3.16-bioc/R/bin/R CMD check --install=check:baySeq.install-out.txt --library=/home/biocbuild/bbs-3.16-bioc/R/site-library --timings baySeq_2.32.0.tar.gz
###
##############################################################################
##############################################################################


* using log directory ‘/home/biocbuild/bbs-3.16-bioc/meat/baySeq.Rcheck’
* using R version 4.2.3 (2023-03-15)
* using platform: x86_64-pc-linux-gnu (64-bit)
* using session charset: UTF-8
* checking for file ‘baySeq/DESCRIPTION’ ... OK
* checking extension type ... Package
* this is package ‘baySeq’ version ‘2.32.0’
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking whether package ‘baySeq’ can be installed ... OK
* checking installed package size ... OK
* checking package directory ... OK
* checking ‘build’ directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking R files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the package can be unloaded cleanly ... OK
* checking whether the namespace can be loaded with stated dependencies ... OK
* checking whether the namespace can be unloaded cleanly ... OK
* checking loading without being on the library search path ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... OK
* checking Rd files ... OK
* checking Rd metadata ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking contents of ‘data’ directory ... OK
* checking data for non-ASCII characters ... OK
* checking data for ASCII and uncompressed saves ... OK
* checking files in ‘vignettes’ ... OK
* checking examples ... OK
Examples with CPU (user + system) or elapsed time > 5s
                 user system elapsed
baySeq-package 30.571  0.167  30.740
getPriors      22.738  0.092  22.831
getLikelihoods  8.343  0.035   8.380
* checking for unstated dependencies in vignettes ... OK
* checking package vignettes in ‘inst/doc’ ... OK
* checking running R code from vignettes ...
  ‘baySeq.Rnw’... OK
  ‘baySeq_generic.Rnw’... failed to complete the test
 ERROR
Errors in running code in vignettes:
when running code in ‘baySeq_generic.Rnw’
  ...
}
<bytecode: 0x556fd6554e38>
<environment: namespace:baySeq>
 --- function search by body ---
Function getLikelihoods in namespace baySeq has this body.
 ----------- END OF FAILURE REPORT -------------- 
Fatal error: length > 1 in coercion to logical
sh: 0: getcwd() failed: No such file or directory

... incomplete output.  Crash?
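One way the guard flagged in the failure report could be made length-safe is sketched below. This is illustrative only, not the maintainer's actual fix, and the choice of `any()` to collapse the per-set logical vectors is an assumption about the intended semantics:

```r
# Hypothetical length-safe version of the guard in getLikelihoods();
# any() is an assumed choice for reducing each per-set logical vector
# to length 1 before && sees it. Toy stand-ins replace the real objects.
pET <- "BIC"
modelPriorValues <- list(A = rep(NA, 2), B = rep(NA, 2))
modelPriorSets   <- list(A = 1:100, B = 101:1000)
smallSets <- any(sapply(modelPriorValues, length) < 100)  # length-1 logical
nullSets  <- any(sapply(modelPriorSets, is.null))         # length-1 logical
if (pET %in% c("iteratively", "BIC") && smallSets && nullSets)
  warning("Some subsets contain fewer than one hundred members; ",
          "estimation of model priors may be unstable")
```

With these toy values the condition evaluates cleanly to FALSE instead of crashing, because every `&&` operand is now a single logical.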

* checking re-building of vignette outputs ... NOTE
Error(s) in re-building vignettes:
--- re-building ‘baySeq.Rnw’ using Sweave
Loading required package: GenomicRanges
Loading required package: stats4
Loading required package: BiocGenerics

Attaching package: ‘BiocGenerics’

The following objects are masked from ‘package:stats’:

    IQR, mad, sd, var, xtabs

The following objects are masked from ‘package:base’:

    Filter, Find, Map, Position, Reduce, anyDuplicated, aperm, append,
    as.data.frame, basename, cbind, colnames, dirname, do.call, duplicated,
    eval, evalq, get, grep, grepl, intersect, is.unsorted, lapply, mapply,
    match, mget, order, paste, pmax, pmax.int, pmin, pmin.int, rank, rbind,
    rownames, sapply, setdiff, sort, table, tapply, union, unique, unsplit,
    which.max, which.min

Loading required package: S4Vectors

Attaching package: ‘S4Vectors’

The following objects are masked from ‘package:base’:

    I, expand.grid, unname

Loading required package: IRanges
Loading required package: GenomeInfoDb
Loading required package: abind
Loading required package: parallel
Finding priors...done.
Length of priorReps:1000
Length of priorSubset:1000
Length of subset:1000
Length of postRows:1000
Finding priors...done.
Finding posterior likelihoods...Length of priorReps:0
Length of priorSubset:3000
Length of subset:3000
Length of postRows:3000
Analysing part 1 of 1
Preparing data...........................................................done.
Estimating likelihoods......done!
done.
Warning in summarisePosteriors(cD) :
  No orderings contained in countData object.
Finding priors...done.
Finding posterior likelihoods...Length of priorReps:0
Length of priorSubset:3000
Length of subset:3000
Length of postRows:3000
Analysing part 1 of 1
Preparing data.........................................................done.
Estimating likelihoods......done!
done.
--- finished re-building ‘baySeq.Rnw’

--- re-building ‘baySeq_generic.Rnw’ using Sweave
Loading required package: parallel
Loading required package: GenomicRanges
Loading required package: stats4
Loading required package: BiocGenerics

Attaching package: ‘BiocGenerics’

The following objects are masked from ‘package:stats’:

    IQR, mad, sd, var, xtabs

The following objects are masked from ‘package:base’:

    Filter, Find, Map, Position, Reduce, anyDuplicated, aperm, append,
    as.data.frame, basename, cbind, colnames, dirname, do.call, duplicated,
    eval, evalq, get, grep, grepl, intersect, is.unsorted, lapply, mapply,
    match, mget, order, paste, pmax, pmax.int, pmin, pmin.int, rank, rbind,
    rownames, sapply, setdiff, sort, table, tapply, union, unique, unsplit,
    which.max, which.min

Loading required package: S4Vectors

Attaching package: ‘S4Vectors’

The following objects are masked from ‘package:base’:

    I, expand.grid, unname

Loading required package: IRanges
Loading required package: GenomeInfoDb
Loading required package: abind
Finding priors...done.
Finding posterior likelihoods...Length of priorReps:0
Length of priorSubset:1000
Length of subset:1000
Length of postRows:1000
Analysing part 1 of 1
Preparing data..........................................................done.
Estimating likelihoods......done!
Warning in makeOrderings(listPosts[[cc]]) :
  No valid ordering function available.
done.
Finding priors...done.
Finding posterior likelihoods...Length of priorReps:0
Length of priorSubset:1000
Length of subset:1000
Length of postRows:1000
Analysing part 1 of 1
Preparing data....................................................done.
Estimating likelihoods......done!
Warning in makeOrderings(listPosts[[cc]]) :
  No valid ordering function available.
done.
Finding priors...done.
Finding posterior likelihoods...Length of priorReps:0
Length of priorSubset:1000
Length of subset:1000
Length of postRows:1000
Analysing part 1 of 1
Preparing data......................................................done.
Estimating likelihoods......done!
done.
Finding priors...done.
Finding posterior likelihoods...Length of priorReps:0
Length of priorSubset:1000
Length of subset:1000
Length of postRows:1000
Analysing part 1 of 1
Preparing data............................................................done.
Estimating likelihoods......done!
done.
 ----------- FAILURE REPORT -------------- 
 --- failure: length > 1 in coercion to logical ---
 --- srcref --- 
: 
 --- package (from environment) --- 
baySeq
 --- call from context --- 
getLikelihoods(nbCD, modelPriorSets = list(A = 1:100, B = 101:1000), 
    cl = cl)
 --- call from argument --- 
pET %in% c("iteratively", "BIC") && (any(sapply(modelPriorValues, 
    length) < 100) & sapply(modelPriorSets, is.null))
 --- R stacktrace ---
where 1: getLikelihoods(nbCD, modelPriorSets = list(A = 1:100, B = 101:1000), 
    cl = cl)
where 2: eval(expr, .GlobalEnv)
where 3: eval(expr, .GlobalEnv)
where 4: withVisible(eval(expr, .GlobalEnv))
where 5: doTryCatch(return(expr), name, parentenv, handler)
where 6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
where 7: tryCatchList(expr, classes, parentenv, handlers)
where 8: tryCatch(expr, error = function(e) {
    call <- conditionCall(e)
    if (!is.null(call)) {
        if (identical(call[[1L]], quote(doTryCatch))) 
            call <- sys.call(-4L)
        dcall <- deparse(call, nlines = 1L)
        prefix <- paste("Error in", dcall, ": ")
        LONG <- 75L
        sm <- strsplit(conditionMessage(e), "\n")[[1L]]
        w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w")
        if (is.na(w)) 
            w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L], 
                type = "b")
        if (w > LONG) 
            prefix <- paste0(prefix, "\n  ")
    }
    else prefix <- "Error : "
    msg <- paste0(prefix, conditionMessage(e), "\n")
    .Internal(seterrmessage(msg[1L]))
    if (!silent && isTRUE(getOption("show.error.messages"))) {
        cat(msg, file = outFile)
        .Internal(printDeferredWarnings())
    }
    invisible(structure(msg, class = "try-error", condition = e))
})
where 9: try(withVisible(eval(expr, .GlobalEnv)), silent = TRUE)
where 10: evalFunc(ce, options)
where 11: tryCatchList(expr, classes, parentenv, handlers)
where 12: tryCatch(evalFunc(ce, options), finally = {
    cat("\n")
    sink()
})
where 13: driver$runcode(drobj, chunk, chunkopts)
where 14: utils::Sweave(...)
where 15: engine$weave(file, quiet = quiet, encoding = enc)
where 16: doTryCatch(return(expr), name, parentenv, handler)
where 17: tryCatchOne(expr, names, parentenv, handlers[[1L]])
where 18: tryCatchList(expr, classes, parentenv, handlers)
where 19: tryCatch({
    engine$weave(file, quiet = quiet, encoding = enc)
    setwd(startdir)
    output <- find_vignette_product(name, by = "weave", engine = engine)
    if (!have.makefile && vignette_is_tex(output)) {
        texi2pdf(file = output, clean = FALSE, quiet = quiet)
        output <- find_vignette_product(name, by = "texi2pdf", 
            engine = engine)
    }
}, error = function(e) {
    OK <<- FALSE
    message(gettextf("Error: processing vignette '%s' failed with diagnostics:\n%s", 
        file, conditionMessage(e)))
})
where 20: tools:::.buildOneVignette("baySeq_generic.Rnw", "/home/biocbuild/bbs-3.16-bioc/meat/baySeq.Rcheck/vign_test/baySeq", 
    TRUE, FALSE, "baySeq_generic", "", "/tmp/RtmpBwuygy/file20d597400557eb.rds")

 --- value of length: 2 type: logical ---
    A     B 
FALSE FALSE 
 --- function from context --- 
function (cD, prs, pET = "BIC", marginalise = FALSE, subset = NULL, 
    priorSubset = NULL, bootStraps = 1, bsNullOnly = TRUE, conv = 1e-04, 
    nullData = FALSE, weightByLocLikelihoods = TRUE, modelPriorSets = list(), 
    modelPriorValues = list(), returnAll = FALSE, returnPD = FALSE, 
    verbose = TRUE, discardSampling = FALSE, modelLikes = TRUE, 
    cl = NULL, tempFile = NULL, largeness = 1e+08) 
{
    .likeDataObs <- function(xdata, densityFunction, groups, 
        consensus = FALSE, differentWeights = differentWeights, 
        modelLikes = TRUE) {
        logsum <- function(x) {
            max(x, max(x, na.rm = TRUE) + log(sum(exp(x - max(x, 
                na.rm = TRUE)), na.rm = TRUE)), na.rm = TRUE)
        }
        PDgivenr.Consensus <- function(number, cts, xrobs, xcobs, 
            sobs, priors, groups, priorWeights, numintSamp, differentWeights, 
            densityFunction) {
            prior <- lapply(1:ncol(priors), function(jj) priors[, 
                jj])
            repcts <- apply(cts, (1:length(dim(cts)))[-2], function(x) rep(x, 
                nrow(priors)))
            xobs <- c(xrobs, lapply(xcobs, function(obs) array(obs, 
                dim = c(ncol(cts) * nrow(priors), dim(obs)[-c(1:2)], 
                  1))), lapply(sobs, function(obs) {
                slobs <- obs
                if (is.vector(slobs) || is.factor(slobs) || length(dim(slobs)) == 
                  1) {
                  return(rep(slobs, nrow(priors)))
                } else apply(slobs, 2:length(dim(slobs)), function(x) rep(x, 
                  nrow(priors)))
            }))
            xobs <- c(xobs, list(dim = datdim))
            datalikes <- matrix(densityFunction(repcts, observables = xobs, 
                parameters = lapply(prior, function(priorpar) rep(priorpar, 
                  each = ncol(cts)))), ncol = ncol(cts), byrow = TRUE)
            if (differentWeights) {
                ld <- sapply(1:length(groups), function(grpnum) {
                  group <- groups[[grpnum]]
                  wts <- priorWeights[[grpnum]]
                  sampInfo <- numintSamp[[grpnum]]
                  sum(sapply(1:length(levels(group)), function(gg) {
                    selcts <- group == levels(group)[gg] & !is.na(group)
                    weightings <- wts[[gg]]
                    nzWts <- weightings != 0
                    wsInfo <- which(sampInfo[[gg]][, 1] == number)
                    logsum(rowSums(datalikes[nzWts, selcts, drop = FALSE], 
                      na.rm = TRUE) + log(weightings[nzWts])) - 
                      log(sum(weightings[nzWts]))
                  }))
                })
            }
            else {
                weightings <- priorWeights[[1]][[1]]
                wsInfo <- which(numintSamp[[1]][[1]][, 1] == 
                  number)
                weightings[numintSamp[[1]][[1]][wsInfo, 2]] <- weightings[numintSamp[[1]][[1]][wsInfo, 
                  2]] - numintSamp[[1]][[1]][wsInfo, 3]
                nzWts <- weightings != 0
                datalikes <- datalikes[nzWts, , drop = FALSE]
                lweight <- log(weightings[nzWts])
                lsumweight <- log(sum(weightings[nzWts]))
                matld <- sapply(whmat, function(grp) logsum(rowSums(datalikes[, 
                  grp, drop = FALSE]) + lweight) - lsumweight)
                ld <- sapply(split(matld[matmat], zid), sum)
            }
            ld
        }
        PDgivenr <- function(number, cts, xrobs, xcobs, sobs, 
            priors, group, wts, sampInfo, differentWeights, modelLikes = TRUE, 
            densityFunction) {
            glikes <- sapply(1:length(levels(group)), function(gg) {
                selcts <- group == levels(group)[gg] & !is.na(group)
                weightings <- wts[[c(1, gg)[differentWeights + 
                  1]]]
                nzWts <- weightings != 0
                prior <- lapply(1:ncol(priors[[gg]]), function(jj) priors[[gg]][nzWts, 
                  jj])
                xobs <- c(xrobs, lapply(xcobs, function(obs) array(.sliceArray(list(NULL, 
                  which(selcts)), obs, drop = FALSE), dim = c(sum(nzWts) * 
                  sum(selcts), dim(obs)[-(1:2)], 1))), lapply(sobs, 
                  function(obs) {
                    slobs <- .sliceArray(list(which(selcts)), 
                      obs, drop = FALSE)
                    if (is.vector(slobs) || is.factor(slobs) || 
                      length(dim(slobs)) == 1) {
                      return(rep(slobs, sum(nzWts)))
                    } else apply(slobs, 2:length(dim(slobs)), 
                      function(x) rep(x, sum(nzWts)))
                  }))
                xobs <- c(xobs, list(dim = datdim))
                repcts <- apply(.sliceArray(list(NULL, which(selcts)), 
                  cts, drop = FALSE), (1:length(dim(cts)))[-2], 
                  function(x) rep(x, sum(nzWts)))
                wsInfo <- which(sampInfo[[c(1, gg)[differentWeights + 
                  1]]][, 1] == number)
                likeD <- rowSums(matrix(densityFunction(repcts, 
                  observables = xobs, parameters = lapply(prior, 
                    function(priorpar) rep(priorpar, each = sum(selcts)))), 
                  ncol = sum(selcts), byrow = TRUE), na.rm = TRUE) + 
                  log(weightings[nzWts]) - log(sum(weightings[nzWts]))
                if (modelLikes) {
                  mL <- logsum(likeD)
                  return(mL)
                }
                else {
                  lD <- rep(-Inf, nrow(priors[[gg]]))
                  lD[nzWts] <- likeD
                  return(lD)
                }
            })
            if (modelLikes) {
                return(sum(glikes))
            }
            else return(glikes)
        }
        xid <- xdata$id
        cts <- xdata$data
        xrobs <- xdata$rowObs
        xcobs <- xdata$cellObs
        if (consensus) {
            PDlikes <- PDgivenr.Consensus(number = xid, cts = cts, 
                xrobs = xrobs, xcobs = xcobs, sobs = sampleObservables, 
                priors = CDpriors, groups = groups, priorWeights = priorWeights, 
                numintSamp = numintSamp, differentWeights = differentWeights, 
                densityFunction = densityFunction[[1]])
        }
        else {
            PDlikes <- lapply(1:length(CDpriors), function(gg) PDgivenr(number = xid, 
                cts = cts, xrobs = xrobs, xcobs = xcobs, sobs = sampleObservables, 
                priors = CDpriors[[gg]], group = groups[[gg]], 
                wts = priorWeights[[c(1, gg)[differentWeights + 
                  1]]], sampInfo = numintSamp[[c(1, gg)[differentWeights + 
                  1]]], differentWeights = differentWeights, 
                modelLikes = modelLikes, densityFunction <- densityFunction[[gg]]))
            if (modelLikes) 
                PDlikes <- unlist(PDlikes)
        }
        return(PDlikes)
    }
    if (!inherits(cD, what = "countData")) 
        stop("variable 'cD' must be of or descend from class 'countData'")
    if (length(modelPriorValues) == 0 & !missing(prs)) 
        modelPriorValues <- prs
    listPosts <- list()
    if (!(class(subset) == "integer" | class(subset) == "numeric" | 
        is.null(subset))) 
        stop("'subset' must be integer, numeric, or NULL")
    if (is.null(subset)) 
        subset <- 1:nrow(cD)
    if (is.null(priorSubset)) 
        priorSubset <- subset
    if (length(modelPriorSets) == 0) 
        modelPriorSets <- list(subset)
    if (any(duplicated(unlist(modelPriorSets)))) 
        stop("Some element appears twice in the modelPriorSets list")
    if (!all(subset %in% unlist(modelPriorSets))) 
        stop("The modelPriorSets list does not contain all the data specified in the `subset' parameter (or all data, if this parameter is not specified).")
    modelPriorSets <- lapply(modelPriorSets, function(x) x[x %in% 
        subset])
    if (is.numeric(modelPriorValues)) 
        modelPriorValues <- lapply(modelPriorSets, function(x) modelPriorValues)
    if (is.list(modelPriorValues) && length(modelPriorValues) == 
        0) {
        modelPriorValues <- lapply(modelPriorSets, function(x) rep(NA, 
            length(cD@groups)))
    }
    if (length(modelPriorValues) != length(modelPriorSets)) 
        stop("The length of 'modelPriorValues' (if a list) must be identical to that of 'modelPriorSets' (or zero).")
    if (pET %in% c("none", "iteratively")) 
        lapply(modelPriorValues, function(prs) {
            if (length(prs) != length(cD@groups)) 
                stop("All members of modelPriorValues must be of same length as the number of groups in the 'cD' object")
            if (any(prs < 0)) 
                stop("Negative values in all members of modelPriorValues are not permitted")
            if (!nullData & sum(prs) != 1) 
                stop("If 'nullData = FALSE' then all members of modelPriorValues should sum to 1.")
            if (nullData & sum(prs) >= 1) 
                stop("If 'nullData = TRUE' then all members of modelPriorValues should sum to less than 1.")
        })
    if (pET %in% c("iteratively", "BIC") && (any(sapply(modelPriorValues, 
        length) < 100) & sapply(modelPriorSets, is.null))) 
        warning("Some subsets contain fewer than one hundred members; estimation of model priors may be unstable")
    if (is.null(conv)) 
        conv <- 0
    groups <- cD@groups
    CDpriors <- cD@priors$priors
    cdDF <- cD@densityFunction
    if (length(cdDF) == 1) 
        cdDF <- lapply(1:length(groups), function(ii) cdDF[[1]])
    if (length(cD@densityFunction) == 1) {
        densityFunction <- lapply(1:length(groups), function(ii) cD@densityFunction[[1]]@density)
    }
    else densityFunction <- lapply(1:length(groups), function(ii) cD@densityFunction[[ii]]@density)
    numintSamp <- cD@priors$sampled
    weights <- cD@priors$weights
    if (is.null(weights)) 
        weights <- rep(1, nrow(cD@priors$sampled))
    nullWeights <- cD@priors$nullWeights
    data <- cD@data
    datdim <- dim(cD)
    sampleObservables <- cD@sampleObservables
    rowObservables <- cD@rowObservables
    if (!("seglens" %in% names(cD@cellObservables) || "seglens" %in% 
        names(rowObservables))) 
        rowObservables <- c(rowObservables, list(seglens = rep(1, 
            nrow(cD))))
    if (!("libsizes" %in% names(sampleObservables))) 
        sampleObservables <- c(sampleObservables, list(seglens = rep(1, 
            ncol(cD))))
    if (is.matrix(CDpriors)) 
        consensus <- TRUE
    else consensus <- FALSE
    if (is.numeric(weights) & is.matrix(numintSamp) & bootStraps == 
        1 & !nullData) {
        differentWeights <- FALSE
        numintSamp <- list(list(numintSamp))
        priorWeights <- .constructWeights(numintSamp = numintSamp, 
            weights = weights, CDpriors = CDpriors, consensus = consensus)
    }
    else {
        differentWeights <- TRUE
        if (discardSampling) 
            numintSamp[, 1] <- NA
        if (is.matrix(numintSamp)) 
            numintSamp <- lapply(groups, function(x) lapply(1:length(levels(x)), 
                function(z) numintSamp))
        if (is.null(weights)) 
            weights <- lapply(numintSamp, function(x) lapply(x, 
                function(z) weights = rep(1, nrow(z))))
        if (is.numeric(weights)) 
            weights <- lapply(numintSamp, function(x) lapply(x, 
                function(z) weights = weights))
        priorWeights <- .constructWeights(numintSamp = numintSamp, 
            weights = weights, CDpriors = CDpriors, consensus = consensus)
    }
    if (nullData) {
        ndelocGroup <- which(unlist(lapply(cD@groups, function(x) all(x[!is.na(x)] == 
            x[!is.na(x)][1]))))
        if (length(ndelocGroup) == 0) 
            stop("If 'nullData = TRUE' then there must exist some vector in groups whose members are all identical")
        else ndelocGroup <- ndelocGroup[1]
        nullFunction <- cD@densityFunction[[ndelocGroup]]@nullFunction
        modifyNullPriors <- cD@densityFunction[[ndelocGroup]]@modifyNullPriors
        if (is.null(body(nullFunction)) & nullData) {
            warning("nullData cannot be TRUE if no nullFunction is specified within the supplied densityFunction object.")
            nullData <- FALSE
        }
        else {
            groups <- c(groups, null = groups[ndelocGroup])
            densityFunction <- c(densityFunction, densityFunction[ndelocGroup])
            cdDF <- c(cdDF, cdDF[ndelocGroup])
            ndenulGroup <- length(groups)
            numintSamp[[ndenulGroup]] <- numintSamp[[ndelocGroup]]
            CDpriors[[ndenulGroup]] <- modifyNullPriors(CDpriors[[ndelocGroup]], 
                datdim)
            if (!consensus) 
                ndePriors <- nullFunction(CDpriors[[ndelocGroup]][[1]])
            else ndePriors <- nullFunction(CDpriors)
            if (weightByLocLikelihoods && "locLikelihoods" %in% 
                slotNames(cD) && nrow(cD@locLikelihoods) > 0) {
                newts <- exp(rowSums(log(1 - exp(cD@locLikelihoods)), 
                  na.rm = TRUE))[cD@priors$sampled[, 1]]
                weights <- lapply(groups, function(x) lapply(levels(x), 
                  function(jj) return(1 - newts)))
                weights[[ndenulGroup]][[1]] <- newts
                cD
            }
            else {
                if (is.null(nullWeights)) {
                  nullweights <- priorWeights[[ndelocGroup]][[1]]
                  sep <- bimodalSeparator(ndePriors[ndePriors > 
                    -Inf], nullweights[ndePriors > -Inf])
                  modelPriorValues <- lapply(modelPriorValues, 
                    function(prs) c(prs, 1 - sum(prs)))
                  priorWeights[[ndenulGroup]] <- priorWeights[[ndelocGroup]]
                  priorWeights[[ndenulGroup]][[1]] <- priorWeights[[ndenulGroup]][[1]] * 
                    as.numeric(ndePriors <= sep)
                  priorWeights[[ndelocGroup]][[1]] <- priorWeights[[ndelocGroup]][[1]] * 
                    as.numeric(ndePriors > sep)
                  weights[[ndenulGroup]] <- weights[[ndelocGroup]]
                  weights[[ndelocGroup]][[1]][numintSamp[[ndelocGroup]][[1]][, 
                    2] %in% which(ndePriors <= sep)] <- 0
                  weights[[ndenulGroup]][[1]][numintSamp[[ndenulGroup]][[1]][, 
                    2] %in% which(ndePriors > sep)] <- 0
                }
                else weights[[ndenulGroup]] <- nullWeights
            }
            priorWeights <- .constructWeights(numintSamp = numintSamp, 
                weights = weights, CDpriors = CDpriors, consensus = consensus)
        }
    }
    propest <- NULL
    converged <- FALSE
    if (consensus) {
        z <- lapply(groups, function(grp) split(1:ncol(cD), as.numeric(grp)))
        whmat <- unique(do.call("c", z))
        matmat <- match(do.call("c", z), whmat)
        zid <- rep(1:length(z), sapply(z, length))
    }
    else {
        matmat <- NULL
        whmat <- NULL
        zid <- NULL
    }
    if (!is.null(cl)) 
        clusterExport(cl, c("CDpriors", "datdim", "densityFunction", 
            "sampleObservables", ".sliceArray", "matmat", "whmat", 
            "zid"), envir = environment())
    if (verbose) 
        message("Finding posterior likelihoods...", appendLF = FALSE)
    if (bootStraps > 1) {
        priorReps <- unique(unlist(sapply(numintSamp, function(x) as.integer(unique(sapply(x, 
            function(z) z[, 1]))))))
        priorReps <- priorReps[priorReps > 0 & !is.na(priorReps)]
        if (!all(priorReps %in% 1:nrow(cD@data)) & bootStraps > 
            1) {
            warning("Since the sampled values in the '@priors' slot are not available, bootstrapping is not possible.")
            bootStraps <- 1
        }
    }
    else priorReps <- c()
    message("Length of priorReps:", length(priorReps))
    message("Length of priorSubset:", length(priorSubset))
    message("Length of subset:", length(subset))
    postRows <- unique(c(priorReps, priorSubset, subset))
    message("Length of postRows:", length(postRows))
    .fastUniques <- function(x) {
        if (nrow(x) > 1) {
            return(c(TRUE, rowSums(x[-1L, , drop = FALSE] == 
                x[-nrow(x), , drop = FALSE]) != ncol(x)))
        }
        else return(TRUE)
    }
    whunq <- TRUE
    ncuts <- ceiling(length(groups) * as.double(nrow(cD))/largeness)
    if (ncuts == 1) 
        splitRows <- list(postRows)
    else splitRows <- split(postRows, cut(postRows, ncuts, labels = FALSE))
    posteriors <- NULL
    for (cc in 1:bootStraps) {
        if (cc > 1) {
            if (!bsNullOnly | !nullData) {
                weights <- lapply(1:length(numintSamp), function(ii) lapply(1:length(numintSamp[[ii]]), 
                  function(jj) exp(posteriors[numintSamp[[ii]][[jj]][, 
                    1], ii])))
            }
            else {
                weights <- lapply(1:length(numintSamp), function(ii) lapply(1:length(numintSamp[[ii]]), 
                  function(jj) {
                    if (ii == ndenulGroup) 
                      weights = exp(posteriors[numintSamp[[ii]][[jj]][, 
                        1], ii])
                    else weights = 1 - exp(posteriors[numintSamp[[ii]][[jj]][, 
                      1], ndenulGroup])
                    weights
                  }))
            }
        }
        ps <- NULL
        if (is.null(cl)) {
            if (cc > 1) 
                priorWeights <- .constructWeights(numintSamp = numintSamp, 
                  weights = weights, CDpriors = CDpriors)
        }
        for (ss in 1:length(splitRows)) {
            if (verbose) 
                message("Analysing part ", ss, " of ", length(splitRows))
            if (verbose) 
                message("Preparing data...", appendLF = FALSE)
            sliceData <- list()
            sliceData <- lapply(splitRows[[ss]], function(id) {
                if (verbose) 
                  if (sample(1:round(nrow(data)/50), 1) == 1) 
                    message(".", appendLF = FALSE)
                list(id = id, data = asub(data, id, dims = 1, 
                  drop = FALSE), cellObs = lapply(cD@cellObservables, 
                  function(cob) asub(cob, id, dims = 1, drop = FALSE)), 
                  rowObs = lapply(rowObservables, function(rob) asub(rob, 
                    id, dims = 1, drop = FALSE)))
            })
            if (verbose) 
                message("done.")
            if (verbose) 
                message("Estimating likelihoods...", appendLF = FALSE)
            if (is.null(cl)) {
                tps <- lapply(sliceData, .likeDataObs, densityFunction = densityFunction, 
                  groups = groups, consensus = consensus, differentWeights = differentWeights, 
                  modelLikes = modelLikes)
            }
            else {
                clusterExport(cl, "numintSamp", envir = environment())
                clusterCall(cl, .constructWeights, numintSamp = numintSamp, 
                  weights = weights, CDpriors = CDpriors, withinCluster = TRUE, 
                  consensus = consensus)
                getLikesEnv <- new.env(parent = .GlobalEnv)
                environment(.likeDataObs) <- getLikesEnv
                tps <- parLapplyLB(cl[1:min(length(cl), length(postRows[whunq]))], 
                  sliceData, .likeDataObs, densityFunction = densityFunction, 
                  groups = groups, consensus = consensus, differentWeights = differentWeights, 
                  modelLikes = modelLikes)
            }
            if (!is.null(tempFile)) 
                save(tps, file = paste(tempFile, "_", ss, ".RData", 
                  sep = ""))
            if (verbose) 
                message("...done!")
            ps <- c(ps, tps)
        }
        rps <- matrix(NA, ncol = length(groups), nrow = nrow(cD@data))
        rps[postRows[whunq], ] <- do.call("rbind", ps)
        if (returnPD) 
            return(rps)
        restprs <- lapply(1:length(modelPriorSets), function(pp) {
            pSub <- intersect(priorSubset, modelPriorSets[[pp]])
            prs <- modelPriorValues[[pp]]
            if (pET == "iterative" || (pET == "BIC" & all(is.na(modelPriorValues[[pp]])))) {
                if (length(pSub) == nrow(cD) && all(1:nrow(cD) == 
                  pSub)) 
                  pps <- rps
                else pps <- rps[pSub, , drop = FALSE]
                restprs <- getPosteriors(ps = pps, prs, pET = pET, 
                  marginalise = FALSE, groups = groups, priorSubset = NULL, 
                  eqOverRep = lapply(cdDF, function(x) x@equalOverReplicates(dim(cD))), 
                  cl = cl)$priors
            }
            else restprs <- prs
            restprs
        })
        restprs <- lapply(restprs, function(x) {
            names(x) <- names(groups)
            x
        })
        names(restprs) <- names(modelPriorSets)
        ppsPosts <- lapply(1:length(modelPriorSets), function(pp) {
            pSub <- postRows
            if (length(pSub) == nrow(cD) && all(1:nrow(cD) == 
                pSub)) 
                pps <- rps
            else pps <- rps[pSub, , drop = FALSE]
            pps <- getPosteriors(ps = pps, prs = restprs[[pp]], 
                pET = "none", marginalise = marginalise, groups = groups, 
                priorSubset = NULL, cl = cl)
            list(pps = pps, pSub = pSub)
        })
        if (is.null(posteriors)) 
            posteriors <- matrix(NA, ncol = length(groups), nrow = nrow(cD@data))
        compPosts <- do.call("c", lapply(ppsPosts, function(x) x$pSub))
        newPosts <- posteriors
        for (ii in 1:length(ppsPosts)) {
            newPosts[ppsPosts[[ii]]$pSub, ] <- ppsPosts[[ii]]$pps$posteriors
        }
        if (any(!is.na(posteriors))) 
            if (all(abs(exp(posteriors[compPosts, , drop = FALSE]) - 
                exp(newPosts[compPosts, , drop = FALSE])) < conv)) 
                converged <- TRUE
        posteriors <- newPosts
        cat(".")
        if (returnAll | converged | cc == bootStraps) {
            retPosts <- posteriors
            retPosts[priorReps[!(priorReps %in% subset)], ] <- NA
            nullPosts <- matrix(ncol = 0, nrow = 0)
            if (nullData) {
                nullPosts <- retPosts[, ndenulGroup, drop = FALSE]
                retPosts <- retPosts[, -ndenulGroup, drop = FALSE]
            }
            colnames(retPosts) <- names(cD@groups)
            if (nullData) {
                cD@priors$weights <- weights[-ndenulGroup]
                cD@priors$nullWeights <- weights[[ndenulGroup]]
            }
            else cD@priors$weights <- weights
            listPosts[[cc]] <- (new(class(cD), cD, posteriors = retPosts, 
                nullPosts = nullPosts, priorModels = restprs))
            listPosts[[cc]] <- makeOrderings(listPosts[[cc]])
        }
        if (converged) 
            (break)()
    }
    if (!is.null(cl)) 
        clusterEvalQ(cl, rm(list = ls()))
    if (verbose) 
        message("done.")
    if (!returnAll) 
        return(listPosts[[cc]])
    else {
        if (length(listPosts) == 1) 
            return(listPosts[[1]])
        else return(listPosts)
    }
}
<bytecode: 0x56527d564940>
<environment: namespace:baySeq>
 --- function search by body ---
Function getLikelihoods in namespace baySeq has this body.
 ----------- END OF FAILURE REPORT -------------- 
Fatal error: length > 1 in coercion to logical
SUMMARY: processing the following file failed:
  ‘baySeq_generic.Rnw’

Error: Vignette re-building failed.
Execution halted
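The failure report above traces the error to the body of `getLikelihoods` in the baySeq namespace: somewhere during the vignette build, a logical vector of length greater than one is passed where R expects a single `TRUE`/`FALSE` (the condition of an `if()` or an operand of `&&`/`||`). Since R 4.2.0 this is a fatal error rather than a warning, and the Bioconductor builders additionally run with strict length-1-condition checking, which produces the "FAILURE REPORT" block and the message `length > 1 in coercion to logical`. A minimal sketch of the error class (the variable names below are illustrative only, not taken from baySeq):

```r
# Illustrative only: a length-2 logical where if() requires length 1.
pSub <- c(1, 2)
cond <- (1:2 == pSub)      # logical vector of length 2: TRUE TRUE

# Under R >= 4.2 this stops with
#   "Error in if (cond) ... : the condition has length > 1"
# if (cond) message("ok")

# The usual fix is to collapse to a single logical explicitly:
if (all(cond)) message("ok")
```

Candidate spots in the dumped body are comparisons like `all(1:nrow(cD) == pSub)` feeding `&&`, or the scalar-`&` in `pET == "BIC" & all(is.na(...))`; pinpointing the exact expression would require running the vignette under a debugger, so treat this as a hedged diagnosis rather than a confirmed fix.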

* checking PDF version of manual ... OK
* DONE

Status: 1 ERROR, 1 NOTE
See
  ‘/home/biocbuild/bbs-3.16-bioc/meat/baySeq.Rcheck/00check.log’
for details.


Installation output

baySeq.Rcheck/00install.out

##############################################################################
##############################################################################
###
### Running command:
###
###   /home/biocbuild/bbs-3.16-bioc/R/bin/R CMD INSTALL baySeq
###
##############################################################################
##############################################################################


* installing to library ‘/home/biocbuild/bbs-3.16-bioc/R/site-library’
* installing *source* package ‘baySeq’ ...
** using staged installation
** R
** data
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (baySeq)

Tests output


Example timings

baySeq.Rcheck/baySeq-Ex.timings

name                   user  system  elapsed
allModels             0.252   0.001    0.251
baySeq-classes        0.105   0.004    0.110
baySeq-package       30.571   0.167   30.740
bimodalSep            0.001   0.000    0.001
densityFunction       0.000   0.000    0.000
getLibsizes           0.005   0.000    0.005
getLikelihoods        8.343   0.035    8.380
getPosteriors         0.004   0.000    0.005
getPriors            22.738   0.092   22.831
getTPs                0.007   0.000    0.007
makeOrderings         0.037   0.000    0.036
marginaliseEqual      0.233   0.005    0.236
marginalisePairwise   0.233   0.003    0.236
plotMA.CD             0.023   0.001    0.022
plotPosteriors        0.052   0.004    0.055
plotPriors            0.031   0.000    0.031
selectTop             0.014   0.000    0.014
summarisePosteriors   0.006   0.000    0.005
topCounts             0.018   0.000    0.017