[wip] starting point for folds#153

Merged
BernhardAhrens merged 25 commits into main from ba/kfold_example on Sep 29, 2025

Conversation

@BernhardAhrens (Collaborator)

No description provided.

@BernhardAhrens changed the title from "start with folds" to "[wip] starting point for folds" on Sep 18, 2025
```julia
hybrid_model,
ds,
();
folds = folds,
```
Member:

Not sure; maybe it's better to do an outer loop over the folds? That way we could use pmap, distributing over them.

Collaborator Author:

It should be pmap-able; see the for loop in the last commit.

@lazarusA (Member), Sep 19, 2025:

I mean that we can pass the k_fold split directly into train; there is no need for the new folds argument.


```julia
for val_fold in 1:k
    @info "Split data outside of train function. Training fold $val_fold of $k"
    sdata = split_data(ds, hybrid_model; val_fold = val_fold, folds = folds)
```
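The pmap idea discussed above could look roughly like this (a sketch only; it reuses the split_data names from the snippets in this thread, and the train call is a placeholder, not the final API of this repository):

```julia
using Distributed
addprocs(4)  # assumption: four local worker processes

# Each fold is independent, so the outer loop can be distributed with pmap.
results = pmap(1:k) do val_fold
    @info "Training fold $val_fold of $k"
    sdata = split_data(ds, hybrid_model; val_fold = val_fold, folds = folds)
    train(hybrid_model, sdata)  # placeholder call; the real signature may differ
end
```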
Collaborator Author:

Yes, we can also do the split outside of train. That way we don't blow up train with more keyword arguments, and we can get rid of the additional ones I added for k-fold. @lazarusA

Member:

yes, please.

```julia
# ? split training and validation data
(x_train, y_train), (x_val, y_val) = split_data(data, hybridModel; split_by_id=split_by_id, shuffleobs=shuffleobs, split_data_at=split_data_at)

(x_train, y_train), (x_val, y_val) = split_data(data, hybridModel; kwargs...)
```
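A minimal, self-contained sketch of the splatted-kwargs pattern being proposed here (toy definitions only; the real split_data and train in this repository do much more):

```julia
# Toy split_data with the data-handling keywords from this PR's discussion.
split_data(data, model; shuffleobs = false, split_by_id = nothing, split_data_at = 0.8) =
    (; shuffleobs, split_by_id, split_data_at)

# train no longer declares data-handling keywords; it forwards kwargs... untouched.
train(model, data; kwargs...) = split_data(data, model; kwargs...)

train(nothing, nothing; split_data_at = 0.7)
# → (shuffleobs = false, split_by_id = nothing, split_data_at = 0.7)
```

The upside is that train's signature no longer has to track split_data's; the downside (raised below) is that typos in keyword names are only caught when split_data is actually called.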
Collaborator Author:

pass everything for data handling via kwargs...?

Member:

Maybe; we need to make sure that works (I tried this already and somehow it was failing in some cases), hence some tests will be needed. Let's do these updates to split_data and train in a new PR dealing only with that.

Collaborator Author:

Do you remember which cases were failing?

```julia
shuffleobs=false,
split_by_id=nothing,
split_data_at=0.8,
# Data handling parameters are now passed via kwargs...
```
Collaborator Author:

This would not blow up the length of train's argument list further; it would even decrease it.

@BernhardAhrens (Collaborator Author)

BernhardAhrens commented Sep 29, 2025

@lazarusA I think this is more or less finished; I would like to merge it soon:

  • I added tests for the kwargs... formulation.
  • It seems to work as intended. I added one or two initial tests for DimensionalData, but I still have to implement it for the GenericHybridModel.
  • I added a tutorial for cross-validation by generating an md file from a script via Literate.jl. This is automated (with ChatGPT's help).

Any objections? Sorry that it is a PR covering multiple things.
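For reference, the script-to-markdown step mentioned above is a one-liner with Literate.jl (a sketch; the file name and output directory here are placeholders, not the paths used in this PR):

```julia
using Literate

# Convert a plain Julia script into a Markdown tutorial page;
# the `documenter = true` flag emits Documenter.jl-compatible output.
Literate.markdown("kfold_tutorial.jl", "docs/src"; documenter = true)
```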

@lazarusA (Member):

Great, as usual, if it works, merge! 🙂

@BernhardAhrens BernhardAhrens merged commit 51062bf into main Sep 29, 2025
4 checks passed
@BernhardAhrens BernhardAhrens deleted the ba/kfold_example branch September 29, 2025 07:50