This package provides functionality to directly estimate a density ratio, that is, the ratio of the probability densities underlying a numerator and a denominator data set, without estimating the two densities separately.

- Fast: Computationally intensive code is executed in C++ using Rcpp and RcppArmadillo.
- Automatic: Good default hyperparameters that can be optimized in cross-validation (we do recommend understanding those parameters before using densityratio in practice).
- Complete: Several density ratio estimation methods, such as unconstrained least-squares importance fitting (ulsif()), the Kullback-Leibler importance estimation procedure (kliep()), the ratio of estimated densities (naive()), the ratio of estimated densities after dimension reduction (naivesubspace()), and least-squares heterodistributional subspace search (lhss(); experimental).
- User-friendly: Simple user interface; default predict(), print() and summary() functions for all density ratio estimation methods; built-in data sets for quick testing.
You can install the development version of densityratio from
GitHub with:
# install.packages("devtools")
devtools::install_github("thomvolker/densityratio")

The package contains several functions to estimate the density ratio between the numerator data and the denominator data. To illustrate the functionality, we make use of the built-in simulated data sets numerator_data and denominator_data, which both consist of the same five variables.
library(densityratio)
head(numerator_data)
#> # A tibble: 6 × 5
#> x1 x2 x3 x4 x5
#> <fct> <fct> <dbl> <dbl> <dbl>
#> 1 A G1 -0.0299 0.967 -1.26
#> 2 C G1 2.29 -0.475 2.40
#> 3 A G1 1.37 0.577 -0.172
#> 4 B G2 1.44 -0.193 -0.708
#> 5 A G1 1.01 2.23 2.01
#> 6 C G2 1.83 0.762 3.71
set.seed(1)
fit <- ulsif(
df_numerator = numerator_data$x5,
df_denominator = denominator_data$x5,
nsigma = 5,
nlambda = 5
)
class(fit)
#> [1] "ulsif"We can ask for the summary() of the estimated density ratio object,
that contains the optimal kernel weights (optimized using
cross-validation) and a measure of discrepancy between the numerator and
denominator densities.
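The reported discrepancy measure is the Pearson divergence. As a reference point, the definition commonly used in the density ratio estimation literature (the exact normalization is an assumption here; the package reports an estimate based on the fitted ratio) is

$$\mathrm{PE}\bigl(p_{\mathrm{nu}}, p_{\mathrm{de}}\bigr) \;=\; \frac{1}{2} \int p_{\mathrm{de}}(x)\,\bigl(r(x) - 1\bigr)^2 \, dx, \qquad r(x) = \frac{p_{\mathrm{nu}}(x)}{p_{\mathrm{de}}(x)},$$

which equals zero when the numerator and denominator densities coincide and increases as they diverge.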
summary(fit)
#>
#> Call:
#> ulsif(df_numerator = numerator_data$x5, df_denominator = denominator_data$x5, nsigma = 5, nlambda = 5)
#>
#> Kernel Information:
#> Kernel type: Gaussian with L2 norm distances
#> Number of kernels: 200
#> Optimal sigma: 0.8951539
#> Optimal lambda: 0.03162278
#> Optimal kernel weights (loocv): num [1:200] 0.021815 0.007418 0.018196 0.015729 -0.000559 ...
#>
#> Pearson divergence between P(nu) and P(de): 0.2925
#> For a two-sample homogeneity test, use 'summary(x, test = TRUE)'.

To formally evaluate whether the numerator and denominator densities differ significantly, you can perform a two-sample homogeneity test as follows.
summary(fit, test = TRUE)
#>
#> Call:
#> ulsif(df_numerator = numerator_data$x5, df_denominator = denominator_data$x5, nsigma = 5, nlambda = 5)
#>
#> Kernel Information:
#> Kernel type: Gaussian with L2 norm distances
#> Number of kernels: 200
#> Optimal sigma: 0.8951539
#> Optimal lambda: 0.03162278
#> Optimal kernel weights (loocv): num [1:200] 0.021815 0.007418 0.018196 0.015729 -0.000559 ...
#>
#> Pearson divergence between P(nu) and P(de): 0.2925
#> Pr(P(nu)=P(de)) < .001

The probability that numerator and denominator samples share a common data generating mechanism is very small.
The ulsif object also contains the (hyper)parameters used in estimating the density ratio, such as the centers used in constructing the Gaussian kernels (fit$centers), the candidate bandwidth parameters (fit$sigma), and the regularization parameters (fit$lambda). Using these variables, we can obtain the estimated density ratio with predict().
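For example, these components can be inspected directly. The lines below are a small illustrative check (assuming the components are stored as plain numeric vectors or matrices, as suggested by the summary output above); exact values depend on the seed.

length(fit$sigma)   # number of candidate bandwidths (nsigma = 5 above)
fit$sigma           # candidate bandwidth values
fit$lambda          # candidate regularization values
NROW(fit$centers)   # number of kernel centers (200 in the summary above)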
# obtain predictions of the density ratio on a grid of new values
newx5 <- seq(from = -3, to = 6, by = 0.05)
pred <- predict(fit, newdata = newx5)
library(ggplot2)   # for plotting

ggplot() +
geom_point(aes(x = newx5, y = pred, col = "ulsif estimates")) +
stat_function(mapping = aes(col = "True density ratio"),
fun = dratio,
args = list(p = 0.4, dif = 3, mu = 3, sd = 2),
size = 1) +
theme_classic() +
scale_color_manual(name = NULL, values = c("#de0277", "purple")) +
theme(legend.position = c(0.8, 0.9),
text = element_text(size = 20))

Currently, none of the functions in the densityratio package accept non-numeric variables (e.g., passing a categorical variable returns an error message).
ulsif(
df_numerator = numerator_data$x1,
df_denominator = denominator_data$x2
)
#> Error in check.dataform(nu, de): Currently only numeric data is supported.

However, transforming the variables into numeric variables does work, and can give a reasonable estimate of the ratio of proportions in the two data sets (although some regularization is applied).
fit_cat <- ulsif(
df_numerator = numerator_data$x1 |> as.numeric(),
df_denominator = denominator_data$x1 |> as.numeric()
)
#> Warning in check.sigma(nsigma, sigma_quantile, sigma, dist_nu): There are duplicate values in 'sigma', only the unique values are used.
aggregate(
predict(fit_cat) ~ numerator_data$x1,
FUN = unique
)
#> numerator_data$x1 predict(fit_cat)
#> 1 A 1.3748725
#> 2 B 1.3554649
#> 3 C 0.6299005
table(numerator_data$x1) / table(denominator_data$x1)
#>
#> A B C
#> 1.3928571 1.4612069 0.6007752

After transforming all variables to numeric, it is possible to estimate the density ratio over the entire multivariate space of the data.
fit_all <- ulsif(
df_numerator = numerator_data |> lapply(as.numeric) |> data.frame(),
df_denominator = denominator_data |> lapply(as.numeric) |> data.frame()
)
summary(fit_all, test = TRUE, parallel = TRUE)
#>
#> Call:
#> ulsif(df_numerator = data.frame(lapply(numerator_data, as.numeric)), df_denominator = data.frame(lapply(denominator_data, as.numeric)))
#>
#> Kernel Information:
#> Kernel type: Gaussian with L2 norm distances
#> Number of kernels: 200
#> Optimal sigma: 1.459182
#> Optimal lambda: 0.3359818
#> Optimal kernel weights (loocv): num [1:200] -0.0314 0.1168 0.0739 0.0966 0.0212 ...
#>
#> Pearson divergence between P(nu) and P(de): 0.5015
#> Pr(P(nu)=P(de)) < .001

Besides ulsif(), the package contains several other functions to estimate a density ratio:

- naive() estimates the numerator and denominator densities separately, and subsequently takes their ratio.
- kliep() estimates the density ratio directly through the Kullback-Leibler importance estimation procedure.
fit_naive <- naive(
df_numerator = numerator_data$x5,
df_denominator = denominator_data$x5
)
fit_kliep <- kliep(
df_numerator = numerator_data$x5,
df_denominator = denominator_data$x5
)
pred_naive <- predict(fit_naive, newdata = newx5)
pred_kliep <- predict(fit_kliep, newdata = newx5)
ggplot(data = NULL, aes(x = newx5)) +
geom_point(aes(y = pred, col = "ulsif estimates")) +
geom_point(aes(y = pred_naive, col = "naive estimates")) +
geom_point(aes(y = pred_kliep, col = "kliep estimates")) +
stat_function(aes(x = NULL, col = "True density ratio"),
fun = dratio, args = list(p = 0.4, dif = 3, mu = 3, sd = 2),
size = 1) +
theme_classic() +
scale_color_manual(name = NULL, values = c("#8400b8", "#510070","#de0277", "purple")) +
theme(legend.position = c(0.8, 0.9),
text = element_text(size = 20))

Although all methods perform reasonably and approximate the true density ratio relatively well, ulsif() comes closest to the true density ratio function in this example.
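To put a number on this impression, the estimates can be compared against the true density ratio on the evaluation grid, for instance via the mean squared error. This is an informal check that reuses the dratio() function from the plots above; it is not part of the package output.

# mean squared error of each estimator against the true density ratio on newx5
truth <- dratio(newx5, p = 0.4, dif = 3, mu = 3, sd = 2)
sapply(
  list(ulsif = pred, naive = pred_naive, kliep = pred_kliep),
  function(est) mean((est - truth)^2)
)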
This package is still in development, and I’ll be happy to take feedback and suggestions. Please submit these through GitHub Issues.
Books
- General information about the density ratio estimation framework: Sugiyama, Suzuki and Kanamori (2012). Density Ratio Estimation in Machine Learning
Papers
- Density ratio estimation for the evaluation of the utility of synthetic data: Volker, De Wolf and Van Kesteren (2023). Assessing the utility of synthetic data: A density ratio perspective
Volker, T.B. (2023). densityratio: Distribution comparison through density ratio estimation. https://doi.org/10.5281/zenodo.8307819

