CRAN Package Check Results for Package mllrnrs

Last updated on 2025-12-04 11:50:19 CET.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  0.0.6        4.88  190.87  195.75  OK
r-devel-linux-x86_64-debian-gcc    0.0.6        3.57  152.66  156.23  OK
r-devel-linux-x86_64-fedora-clang  0.0.6       40.00  278.27  318.27  ERROR
r-devel-linux-x86_64-fedora-gcc    0.0.7       34.00  304.78  338.78  OK
r-devel-windows-x86_64             0.0.6        7.00  265.00  272.00  OK
r-patched-linux-x86_64             0.0.6        4.23  212.98  217.21  OK
r-release-linux-x86_64             0.0.6        5.12  199.83  204.95  OK
r-release-macos-arm64              0.0.7        1.00   56.00   57.00  OK
r-release-macos-x86_64             0.0.7        6.00  257.00  263.00  OK
r-release-windows-x86_64           0.0.6        7.00  266.00  273.00  OK
r-oldrel-macos-arm64               0.0.7        1.00   64.00   65.00  OK
r-oldrel-macos-x86_64              0.0.7        7.00  270.00  277.00  OK
r-oldrel-windows-x86_64            0.0.6       11.00  362.00  373.00  OK

Check Details

Version: 0.0.6
Check: tests
Result: ERROR
    Running ‘testthat.R’ [2m/14m]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > # https://github.com/Rdatatable/data.table/issues/5658
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    >
    > library(testthat)
    > library(mllrnrs)
    >
    > test_check("mllrnrs")
    CV fold: Fold1
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  37.042 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  61.245 seconds
      3) Running FUN 2 times in 2 thread(s)...  0.988 seconds
    OMP: Warning #96: Cannot form a team with 24 threads, using 2 instead.
    OMP: Hint Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set).
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  30.162 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  55.36 seconds
      3) Running FUN 2 times in 2 thread(s)...  2.693 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  32.292 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  107.661 seconds
      3) Running FUN 2 times in 2 thread(s)...  2.562 seconds
    CV fold: Fold1
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold2
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold3
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold1
    Saving _problems/test-binary-356.R
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold2
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold3
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold1
    Saving _problems/test-multiclass-294.R
    CV fold: Fold1
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)...  31.15 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  9.486 seconds
      3) Running FUN 2 times in 2 thread(s)...  3.691 seconds
    CV fold: Fold2
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)...  37.293 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  17.279 seconds
      3) Running FUN 2 times in 2 thread(s)...  2.458 seconds
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)...  30.833 seconds
    Starting Epoch 1
      1) Fitting Gaussian Process...
      2) Running local optimum search...  10.54 seconds
      3) Running FUN 2 times in 2 thread(s)...  2.465 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)...  52.604 seconds
        subsample colsample_bytree min_child_weight learning_rate max_depth
            <num>            <num>            <num>         <num>     <num>
     1:       1.0              0.8                1           0.1         5
     2:       0.8              1.0                1           0.2         5
     3:       1.0              1.0                5           0.2         5
     4:       0.6              0.8                1           0.1         5
     5:       0.6              0.8                5           0.2         5
     6:       0.8              0.8                5           0.2         5
     7:       0.8              0.8                1           0.1         1
     8:       0.6              0.6                1           0.2         5
     9:       0.6              1.0                1           0.1         1
    10:       0.6              0.8                1           0.2         5
                                                                errorMessage
                                                                      <char>
     1: FUN returned these elements with length > 1: Score,metric_optim_mean
     2: FUN returned these elements with length > 1: Score,metric_optim_mean
     3: FUN returned these elements with length > 1: Score,metric_optim_mean
     4: FUN returned these elements with length > 1: Score,metric_optim_mean
     5: FUN returned these elements with length > 1: Score,metric_optim_mean
     6: FUN returned these elements with length > 1: Score,metric_optim_mean
     7: FUN returned these elements with length > 1: Score,metric_optim_mean
     8: FUN returned these elements with length > 1: Score,metric_optim_mean
     9: FUN returned these elements with length > 1: Score,metric_optim_mean
    10: FUN returned these elements with length > 1: Score,metric_optim_mean
    Saving _problems/test-regression-309.R
    CV fold: Fold1
    Saving _problems/test-regression-352.R
    [ FAIL 4 | WARN 3 | SKIP 3 | PASS 22 ]

    ══ Skipped tests (3) ═══════════════════════════════════════════════════════
    • On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'

    ══ Failed tests ════════════════════════════════════════════════════════════
    ── Error ('test-binary.R:356:5'): test nested cv, grid, binary:logistic - xgboost ──
    Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean",
        param_names = param_names, higher_better = metric_higher_better)`:
      nrow(best_row) == 1 is not TRUE
    Backtrace:
         ▆
      1. └─xgboost_optimizer$execute() at test-binary.R:356:5
      2.   └─mlexperiments:::.run_cv(self = self, private = private)
      3.     └─mlexperiments:::.fold_looper(self, private)
      4.       ├─base::do.call(private$cv_run_model, run_args)
      5.       └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
      6.         ├─base::do.call(.cv_run_nested_model, args)
      7.         └─mlexperiments (local) `<fn>`(...)
      8.           └─hparam_tuner$execute(k = self$k_tuning)
      9.             └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
     10.               └─mlexperiments:::.run_optimizer(...)
     11.                 └─mlexperiments:::.optimize_postprocessing(...)
     12.                   └─mlexperiments:::.get_best_setting(...)
     13.                     └─base::stopifnot(nrow(best_row) == 1)
    ── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
    Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean",
        param_names = param_names, higher_better = metric_higher_better)`:
      nrow(best_row) == 1 is not TRUE
    Backtrace:
         ▆
      1. └─xgboost_optimizer$execute() at test-multiclass.R:294:5
      2.   └─mlexperiments:::.run_cv(self = self, private = private)
      3.     └─mlexperiments:::.fold_looper(self, private)
      4.       ├─base::do.call(private$cv_run_model, run_args)
      5.       └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
      6.         ├─base::do.call(.cv_run_nested_model, args)
      7.         └─mlexperiments (local) `<fn>`(...)
      8.           └─hparam_tuner$execute(k = self$k_tuning)
      9.             └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
     10.               └─mlexperiments:::.run_optimizer(...)
     11.                 └─mlexperiments:::.optimize_postprocessing(...)
     12.                   └─mlexperiments:::.get_best_setting(...)
     13.                     └─base::stopifnot(nrow(best_row) == 1)
    ── Error ('test-regression.R:309:5'): test nested cv, bayesian, reg:squarederror - xgboost ──
    Error in `(function (FUN, bounds, saveFile = NULL, initGrid, initPoints = 4,
        iters.n = 3, iters.k = 1, otherHalting = list(timeLimit = Inf,
            minUtility = 0), acq = "ucb", kappa = 2.576, eps = 0,
        parallel = FALSE, gsPoints = pmax(100, length(bounds)^3),
        convThresh = 1e+08, acqThresh = 1, errorHandling = "stop",
        plotProgress = FALSE, verbose = 1, ...)
    {
        startT <- Sys.time()
        optObj <- list()
        class(optObj) <- "bayesOpt"
        optObj$FUN <- FUN
        optObj$bounds <- bounds
        optObj$iters <- 0
        optObj$initPars <- list()
        optObj$optPars <- list()
        optObj$GauProList <- list()
        optObj <- changeSaveFile(optObj, saveFile)
        checkParameters(bounds, iters.n, iters.k, otherHalting, acq,
            acqThresh, errorHandling, plotProgress, parallel, verbose)
        boundsDT <- boundsToDT(bounds)
        otherHalting <- formatOtherHalting(otherHalting)
        if (missing(initGrid) + missing(initPoints) != 1)
            stop("Please provide 1 of initGrid or initPoints, but not both.")
        if (!missing(initGrid)) {
            setDT(initGrid)
            inBounds <- checkBounds(initGrid, bounds)
            inBounds <- as.logical(apply(inBounds, 1, prod))
            if (any(!inBounds))
                stop("initGrid not within bounds.")
            optObj$initPars$initialSample <- "User Provided Grid"
            initPoints <- nrow(initGrid)
        }
        else {
            initGrid <- randParams(boundsDT, initPoints)
            optObj$initPars$initialSample <- "Latin Hypercube Sampling"
        }
        optObj$initPars$initGrid <- initGrid
        if (nrow(initGrid) <= 2)
            stop("Cannot initialize with less than 3 samples.")
        optObj$initPars$initPoints <- nrow(initGrid)
        if (initPoints <= length(bounds))
            stop("initPoints must be greater than the number of FUN inputs.")
        sinkFile <- file()
        on.exit({
            while (sink.number() > 0) sink()
            close(sinkFile)
        })
        `%op%` <- ParMethod(parallel)
        if (parallel)
            Workers <- getDoParWorkers()
        else Workers <- 1
        if (verbose > 0)
            cat("\nRunning initial scoring function", nrow(initGrid),
                "times in", Workers, "thread(s)...")
        sink(file = sinkFile)
        tm <- system.time(scoreSummary <- foreach(iter = 1:nrow(initGrid),
            .options.multicore = list(preschedule = FALSE), .combine = list,
            .multicombine = TRUE, .inorder = FALSE, .errorhandling = "pass",
            .verbose = FALSE) %op% {
            Params <- initGrid[get("iter"), ]
            Elapsed <- system.time(Result <- tryCatch({
                do.call(what = FUN, args = as.list(Params))
            }, error = function(e) e))
            if (any(class(Result) %in% c("simpleError", "error",
                "condition")))
                return(Result)
            if (!inherits(x = Result, what = "list"))
                stop("Object returned from FUN was not a list.")
            resLengths <- lengths(Result)
            if (!any(names(Result) == "Score"))
                stop("FUN must return list with element 'Score' at a minimum.")
            if (!is.numeric(Result$Score))
                stop("Score returned from FUN was not numeric.")
            if (any(resLengths != 1)) {
                badReturns <- names(Result)[which(resLengths != 1)]
                stop("FUN returned these elements with length > 1: ",
                    paste(badReturns, collapse = ","))
            }
            data.table(Params, Elapsed = Elapsed[[3]], as.data.table(Result))
        })[[3]]
        while (sink.number() > 0) sink()
        if (verbose > 0)
            cat(" ", tm, "seconds\n")
        se <- which(sapply(scoreSummary, function(cl) any(class(cl) %in%
            c("simpleError", "error", "condition"))))
        if (length(se) > 0) {
            print(data.table(initGrid[se, ], errorMessage = sapply(scoreSummary[se],
                function(x) x$message)))
            stop("Errors encountered in initialization are listed above.")
        }
        else {
            scoreSummary <- rbindlist(scoreSummary)
        }
        scoreSummary[, `:=`(("gpUtility"), rep(as.numeric(NA), nrow(scoreSummary)))]
        scoreSummary[, `:=`(("acqOptimum"), rep(FALSE, nrow(scoreSummary)))]
        scoreSummary[, `:=`(("Epoch"), rep(0, nrow(scoreSummary)))]
        scoreSummary[, `:=`(("Iteration"), 1:nrow(scoreSummary))]
        scoreSummary[, `:=`(("inBounds"), rep(TRUE, nrow(scoreSummary)))]
        scoreSummary[, `:=`(("errorMessage"), rep(NA, nrow(scoreSummary)))]
        extraRet <- setdiff(names(scoreSummary), c("Epoch", "Iteration",
            boundsDT$N, "inBounds", "Elapsed", "Score", "gpUtility",
            "acqOptimum"))
        setcolorder(scoreSummary, c("Epoch", "Iteration", boundsDT$N,
            "gpUtility", "acqOptimum", "inBounds", "Elapsed", "Score",
            extraRet))
        if (any(scoreSummary$Elapsed < 1) & acq == "eips") {
            cat("\n FUN elapsed time is too low to be precise. Switching acq to 'ei'.\n")
            acq <- "ei"
        }
        optObj$optPars$acq <- acq
        optObj$optPars$kappa <- kappa
        optObj$optPars$eps <- eps
        optObj$optPars$parallel <- parallel
        optObj$optPars$gsPoints <- gsPoints
        optObj$optPars$convThresh <- convThresh
        optObj$optPars$acqThresh <- acqThresh
        optObj$scoreSummary <- scoreSummary
        optObj$GauProList$gpUpToDate <- FALSE
        optObj$iters <- nrow(scoreSummary)
        optObj$stopStatus <- "OK"
        optObj$elapsedTime <- as.numeric(difftime(Sys.time(), startT,
            units = "secs"))
        saveSoFar(optObj, 0)
        optObj <- addIterations(optObj, otherHalting = otherHalting,
            iters.n = iters.n, iters.k = iters.k, parallel = parallel,
            plotProgress = plotProgress, errorHandling = errorHandling,
            saveFile = saveFile, verbose = verbose, ...)
        return(optObj)
    })(FUN = function (...)
    {
        kwargs <- list(...)
        args <- .method_params_refactor(kwargs, method_helper)
        set.seed(self$seed)
        res <- do.call(private$fun_bayesian_scoring_function, args)
        if (isFALSE(self$metric_optimization_higher_better)) {
            res$Score <- as.numeric(I(res$Score * -1L))
        }
        return(res)
    }, bounds = list(subsample = c(0.2, 1), colsample_bytree = c(0.2, 1),
        min_child_weight = c(1L, 10L), learning_rate = c(0.1, 0.2),
        max_depth = c(1L, 10L)), initGrid = structure(list(
        subsample = c(1, 0.8, 1, 0.6, 0.6, 0.8, 0.8, 0.6, 0.6, 0.6),
        colsample_bytree = c(0.8, 1, 1, 0.8, 0.8, 0.8, 0.8, 0.6, 1, 0.8),
        min_child_weight = c(1, 1, 5, 1, 5, 5, 1, 1, 1, 1),
        learning_rate = c(0.1, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.2, 0.1, 0.2),
        max_depth = c(5, 5, 5, 5, 5, 5, 1, 5, 1, 5)),
        out.attrs = list(dim = c(subsample = 3L, colsample_bytree = 3L,
            min_child_weight = 2L, learning_rate = 2L, max_depth = 2L),
            dimnames = list(subsample = c("subsample=0.6", "subsample=0.8",
                "subsample=1.0"), colsample_bytree = c("colsample_bytree=0.6",
                "colsample_bytree=0.8", "colsample_bytree=1.0"),
                min_child_weight = c("min_child_weight=1", "min_child_weight=5"),
                learning_rate = c("learning_rate=0.1", "learning_rate=0.2"),
                max_depth = c("max_depth=1", "max_depth=5"))),
        row.names = c(NA, -10L), class = c("data.table", "data.frame"),
        .internal.selfref = <pointer: 0x555d20bccd10>), iters.n = 2L,
        iters.k = 2L, otherHalting = list(timeLimit = Inf, minUtility = 0),
        acq = "ucb", kappa = 3.5, eps = 0, parallel = TRUE, gsPoints = 125,
        convThresh = 1e+08, acqThresh = 1, errorHandling = "stop",
        plotProgress = FALSE, verbose = 1)`:
      Errors encountered in initialization are listed above.
    Backtrace:
         ▆
      1. └─xgboost_optimizer$execute() at test-regression.R:309:5
      2.   └─mlexperiments:::.run_cv(self = self, private = private)
      3.     └─mlexperiments:::.fold_looper(self, private)
      4.       ├─base::do.call(private$cv_run_model, run_args)
      5.       └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
      6.         ├─base::do.call(.cv_run_nested_model, args)
      7.         └─mlexperiments (local) `<fn>`(...)
      8.           └─hparam_tuner$execute(k = self$k_tuning)
      9.             └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
     10.               └─mlexperiments:::.run_optimizer(...)
     11.                 └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
     12.                   ├─base::do.call(...)
     13.                   └─mlexperiments (local) `<fn>`(...)
     14.                     ├─base::do.call(ParBayesianOptimization::bayesOpt, args)
     15.                     └─ParBayesianOptimization (local) `<fn>`(...)
    ── Error ('test-regression.R:352:5'): test nested cv, grid - xgboost ───────────
    Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean",
        param_names = param_names, higher_better = metric_higher_better)`:
      nrow(best_row) == 1 is not TRUE
    Backtrace:
         ▆
      1. └─xgboost_optimizer$execute() at test-regression.R:352:5
      2.   └─mlexperiments:::.run_cv(self = self, private = private)
      3.     └─mlexperiments:::.fold_looper(self, private)
      4.       ├─base::do.call(private$cv_run_model, run_args)
      5.       └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
      6.         ├─base::do.call(.cv_run_nested_model, args)
      7.         └─mlexperiments (local) `<fn>`(...)
      8.           └─hparam_tuner$execute(k = self$k_tuning)
      9.             └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
     10.               └─mlexperiments:::.run_optimizer(...)
     11.                 └─mlexperiments:::.optimize_postprocessing(...)
     12.                   └─mlexperiments:::.get_best_setting(...)
     13.                     └─base::stopifnot(nrow(best_row) == 1)

    [ FAIL 4 | WARN 3 | SKIP 3 | PASS 22 ]
    Error: ! Test failures.
    Execution halted

Flavor: r-devel-linux-x86_64-fedora-clang
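
Notes on the Failures

Three of the four failures ('test-binary.R:356', 'test-multiclass.R:294', 'test-regression.R:352') stop at `stopifnot(nrow(best_row) == 1)` inside mlexperiments' internal `.get_best_setting()`, which indicates that filtering the tuning summary for the optimal 'metric_optim_mean' matched more than one row. The data.table snippet below is a hypothetical illustration of that failure mode and of a deterministic tie-break; it is not the package's actual code, and 'summary_dt' and its values are invented for the example.

    library(data.table)

    # Two hyperparameter settings tie on the optimization metric:
    summary_dt <- data.table(
      max_depth         = c(1L, 5L, 5L),
      learning_rate     = c(0.1, 0.1, 0.2),
      metric_optim_mean = c(0.25, 0.21, 0.21)
    )

    # Filtering on the optimum returns two rows, so a strict
    # stopifnot(nrow(best_row) == 1) check fails:
    best_row <- summary_dt[metric_optim_mean == min(metric_optim_mean)]
    nrow(best_row)  # 2

    # A deterministic tie-break keeps exactly one row:
    best_row <- summary_dt[order(metric_optim_mean)][1L]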
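
The remaining failure ('test-regression.R:309') is raised by ParBayesianOptimization's own input validation, visible in the deparsed source above: every element of the list returned by FUN must have length 1, and here both 'Score' and 'metric_optim_mean' came back longer. Below is a minimal sketch of a scoring-function shape that `bayesOpt()` accepts; the per-fold values in 'fold_mse' are hypothetical placeholders, not taken from mllrnrs.

    # Hypothetical scoring function for ParBayesianOptimization::bayesOpt():
    # it must return a list whose elements all have length 1, with a
    # numeric 'Score' at a minimum.
    scoring_function <- function(max_depth, learning_rate) {
      # ... fit one model per CV fold; placeholder per-fold errors:
      fold_mse <- c(21.4, 19.8, 23.1)

      # Aggregate to scalars before returning; returning the raw vector
      # reproduces "FUN returned these elements with length > 1: ...".
      list(
        Score = -mean(fold_mse),            # bayesOpt() maximizes Score
        metric_optim_mean = mean(fold_mse)  # extra elements: length 1
      )
    }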
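
Separately, the 'OMP: Warning #96' in the log shows OpenMP trying to form a 24-thread team before being capped at the 2 threads CRAN allows, even though tests/testthat.R already sets OMP_THREAD_LIMIT. A common complement, assuming data.table is the OpenMP consumer here (as the data.table issue linked in the test file suggests), is to cap its thread pool explicitly as well:

    # Mirrors the setup in tests/testthat.R; CRAN check machines allow
    # at most 2 threads, so OpenMP-backed libraries must be capped.
    Sys.setenv("OMP_THREAD_LIMIT" = 2)
    Sys.setenv("Ncpu" = 2)

    # setDTthreads() is data.table's documented API for limiting its
    # own OpenMP thread pool, complementing the environment variables.
    data.table::setDTthreads(2L)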