83 changes: 78 additions & 5 deletions src/reporting/render_report/report-template.qmd
@@ -585,15 +585,15 @@ method_details <- purrr::map_dfr(method_info, function(.method) {
 metric_details <- purrr::map_dfr(metric_info, function(.metric) {
   data.frame(
     metric = .metric$name,
-    metric_label = .metric$label
+    metric_label = .metric$label,
+    metric_maximize = .metric$maximize
   )
 }) |>
   dplyr::arrange(metric)

-metric_maximize <- purrr::map_lgl(metric_info, "maximize") |>
-  purrr::set_names(metric_details$metric)
-
-metric_reverse <- names(metric_maximize)[metric_maximize == FALSE]
+metric_reverse <- metric_details |>
+  dplyr::filter(metric_maximize == FALSE) |>
+  dplyr::pull(metric)

 scores <- purrr::map_dfr(task_results$results, function(.result) {
   if (!.result$succeeded) {
@@ -1181,11 +1181,18 @@ funkyheatmap::funky_heatmap(

::: {.callout-note}
This table displays the scaled metric scores.
After scaling, higher scores are always better and any missing values are set to 0.
The "Overall" dataset gives the mean score across all of the individual datasets.

Raw scores are also provided to help diagnose any issues, but they should not be used to compare performance.

Sort and filter the table to inspect the scores you are interested in.
:::
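
The scaling step itself is not part of this diff. As a rough illustration of the behaviour the note describes (flip lower-is-better metrics, rescale so higher is better, set missing values to 0, and average across datasets for "Overall"), a minimal sketch could look like the following. The function names `scale_scores` and `overall_scores` and the min-max rescaling are assumptions for illustration, not the template's actual implementation.

```r
# NOT the template's code: a minimal sketch of the scaling described in the
# note above. The real scaling (which appears to involve control methods,
# given the `has_controls` chunk option) may differ.
library(dplyr)
library(tidyr)

scale_scores <- function(scores, metric_reverse) {
  scores |>
    # Flip metrics where lower raw values are better, so that after scaling
    # higher is always better
    dplyr::mutate(value = ifelse(metric %in% metric_reverse, -value, value)) |>
    # Min-max rescale within each dataset x metric combination
    dplyr::group_by(dataset, metric) |>
    dplyr::mutate(
      value = (value - min(value, na.rm = TRUE)) /
        (max(value, na.rm = TRUE) - min(value, na.rm = TRUE))
    ) |>
    dplyr::ungroup() |>
    # Missing values become 0, as stated in the note
    dplyr::mutate(value = tidyr::replace_na(value, 0))
}

# "Overall" is the mean scaled score per method and metric across datasets
overall_scores <- function(scaled) {
  scaled |>
    dplyr::group_by(method, metric) |>
    dplyr::summarise(value = mean(value), .groups = "drop") |>
    dplyr::mutate(dataset = "Overall")
}
```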

::: {.panel-tabset}

### Scaled scores {.unnumbered .unlisted}

```{r}
#| label: results-table
#| eval: !expr has_controls
@@ -1246,3 +1253,69 @@ reactable::reactable(
  searchable = TRUE
)
```

### Raw scores {.unnumbered .unlisted}

```{r}
#| label: results-table-raw
#| eval: !expr has_controls
dataset_scores_raw <- complete_scores |>
  dplyr::select(dataset, method, metric, value) |>
  tidyr::pivot_wider(
    names_from = metric,
    values_from = value
  )

table_data_raw <- dataset_scores_raw |>
  dplyr::mutate(
    dataset = factor(
      dataset,
      levels = dataset_details$dataset,
      labels = dataset_details$dataset_label
    ),
    method = factor(
      method,
      levels = method_details$method,
      labels = method_details$method_label
    )
  ) |>
  dplyr::relocate(dataset, .after = method) |>
  dplyr::arrange(dataset, method)

reactable::reactable(
  table_data_raw,

  columns = c(
    list(
      method = reactable::colDef(
        name = "Method",
        sticky = "left"
      ),
      dataset = reactable::colDef(
        name = "Dataset",
        sticky = "left",
        style = list(borderRight = "2px solid #999"),
        headerStyle = list(borderRight = "2px solid #999")
      )
    ),
    purrr::map(metric_details$metric_label,
      function(.metric_label) {
        reactable::colDef(
          name = .metric_label,
          format = reactable::colFormat(digits = 3)
        )
      }
    ) |>
      purrr::set_names(metric_details$metric)
  ),

  highlight = TRUE,
  striped = TRUE,
  defaultPageSize = 25,
  showPageSizeOptions = TRUE,
  filterable = TRUE,
  searchable = TRUE
)
```

:::
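
As a closing aside, the refactor in the first hunk (carrying `maximize` as a `metric_maximize` column of `metric_details` and filtering for the reversed metrics) can be exercised on its own. The `metric_info` entries below are made up purely for illustration:

```r
# Toy metric_info, mimicking the structure the template assumes; the entries
# are illustrative only
metric_info <- list(
  list(name = "accuracy", label = "Accuracy", maximize = TRUE),
  list(name = "rmse", label = "RMSE", maximize = FALSE)
)

metric_details <- purrr::map_dfr(metric_info, function(.metric) {
  data.frame(
    metric = .metric$name,
    metric_label = .metric$label,
    metric_maximize = .metric$maximize
  )
}) |>
  dplyr::arrange(metric)

# Metrics where lower is better, used to reverse scores before scaling
metric_reverse <- metric_details |>
  dplyr::filter(metric_maximize == FALSE) |>
  dplyr::pull(metric)

metric_reverse
#> [1] "rmse"
```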