Continuation: use Lux.Training.single_train_step! (#198)
Commits:
* update CI
* pretty
* update k
* be explicit
* more f
* formatting
* fix compat
* rel path
I reworked the loss functions to match the signature Lux expects, at least one of them. There is the other signature that @Qfl3x implemented, but I think the one with these inputs fits best with what we already had.
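For context, the objective function Lux's training API consumes has the shape `lossfn(model, ps, st, data) -> (loss, st, stats)`. Below is a minimal sketch of that contract with no Lux dependency; `ToyModel`, `apply`, and `mse_loss` are illustrative stand-ins, not EasyHybrid code.

```julia
# Sketch of the loss signature Lux's training loop expects:
#   lossfn(model, ps, st, data) -> (loss, st, stats)
# ToyModel / apply / mse_loss are toy stand-ins for illustration only.

struct ToyModel end

# "Apply" the toy model: a linear map ŷ = w .* x .+ b, parameters in a NamedTuple.
apply(::ToyModel, x, ps, st) = (ps.w .* x .+ ps.b, st)

function mse_loss(model, ps, st, (x, y))
    ŷ, st_out = apply(model, x, ps, st)
    loss = sum(abs2, ŷ .- y) / length(y)
    return loss, st_out, (; nobs = length(y))   # (loss, state, stats)
end

ps = (w = 2.0, b = 0.0)
st = NamedTuple()
x = [1.0, 2.0, 3.0]
y = 2.0 .* x                                    # perfect fit for w = 2, b = 0

l, st2, stats = mse_loss(ToyModel(), ps, st, (x, y))
println(l)           # 0.0
println(stats.nobs)  # 3
```

With this shape, the same `mse_loss` can be handed directly to a training step that threads `(loss, st, stats)` through each iteration.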
Let's test more before we merge @lazarusA
    @info "Check the saved output (.png, .mp4, .jld2) from training at: $(tmp_folder)"
    prog = Progress(nepochs, desc = "Training loss", enabled = show_progress)
    loss(hybridModel, ps, st, (x, y)) = lossfn(
    - struct MultiNNModel
    + struct MultiNNModel <: LuxCore.AbstractLuxContainerLayer{
    +     (:NNs, :predictors, :targets, :scale_nn_outputs,
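As a reading aid for the diff above: the tuple in `AbstractLuxContainerLayer{(:NNs, …)}` names the fields Lux should recurse into when collecting parameters and states. The sketch below mimics that mechanism in plain Julia (no LuxCore); `AbstractContainer`, `Dense`, and `initparams` are toy stand-ins.

```julia
# Plain-Julia sketch of what the type parameter in
# AbstractLuxContainerLayer{(:NNs, ...)} conveys: it lists which fields
# hold sub-layers, so parameter collection only recurses into those.

abstract type AbstractContainer{fields} end

struct Dense
    n_in::Int
    n_out::Int
end
initparams(d::Dense) = (W = zeros(d.n_out, d.n_in), b = zeros(d.n_out))

struct Container <: AbstractContainer{(:layers,)}
    layers::NamedTuple          # sub-layers: listed in the type parameter
    targets::Vector{Symbol}     # static metadata: NOT listed, so not traversed
end

# Read the field list back off the type parameter.
layerfields(::AbstractContainer{fields}) where {fields} = fields

# Collect parameters only from the listed fields.
initparams(c::AbstractContainer) =
    NamedTuple(f => map(initparams, getfield(c, f)) for f in layerfields(c))

m = Container((nn1 = Dense(3, 2), nn2 = Dense(3, 1)), [:y])
ps = initparams(m)
println(keys(ps))               # (:layers,)
println(size(ps.layers.nn1.W))  # (2, 3)
```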
We should dispatch on this (the NNs) instead of having 2 different structs: only one `struct HybridModel end` and an internal function `HybridModel` that does the current `constructHybridModel` bit. For now, it is good that this already works, but we should address this in a follow-up PR.
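One way the single-struct-plus-dispatch idea could look, sketched in plain Julia. The field names, the one-element NamedTuple convention, and the use of callables as stand-in "NNs" are all illustrative assumptions, not the actual EasyHybrid design.

```julia
# Sketch: one HybridModel struct; the constructor dispatches on the NN
# argument instead of having two separate structs (MultiNNModel vs. others).

struct HybridModel{N}
    NNs::N
    targets::Vector{Symbol}
end

# A single NN is wrapped into a one-element NamedTuple, so downstream code
# always sees the same shape.
HybridModel(nn, targets::Vector{Symbol}) =
    HybridModel((; default = nn), targets)

# Several NNs already arrive as a NamedTuple and are stored as-is.
HybridModel(nns::NamedTuple, targets::Vector{Symbol}) =
    HybridModel{typeof(nns)}(nns, targets)

single = HybridModel(sin, [:y])              # stand-in "NN": any callable
multi  = HybridModel((a = sin, b = cos), [:y, :z])

println(keys(single.NNs))   # (:default,)
println(keys(multi.NNs))    # (:a, :b)
```

The payoff is that loss and prediction code can iterate `pairs(model.NNs)` uniformly, and the single/multi distinction lives only in the constructor.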
Yes, sounds good! Feel free to add or remove what you deem necessary. I also experimented a bit with making Enzyme work in the aforementioned script, but ran into a problem/dead end with DimensionalData so far: https://github.com/EarthyScience/EasyHybrid.jl/tree/ba/enzyme_tries
yes, we need a new
    - struct MultiNNHybridModel
    + struct MultiNNHybridModel <: LuxCore.AbstractLuxContainerLayer{
    +     (:NNs, #:predictors, :forcing, :targets,
The interface should only deal with layers, not static inputs, see
Zygote doesn't complain about it, but ForwardDiff will fail when trying to apply `preserves_state_type`.
Leaving these as comments for now, to be cleaned up later; once things are more stable they should be removed.
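One way to keep static inputs out of the layer struct, sketched in plain Julia: carry the predictor/target names in a separate config bound by the loss closure, so the model the AD backend traverses holds layers only. `NNOnlyModel`, `DataSpec`, and `make_loss` are hypothetical names for illustration, not EasyHybrid API.

```julia
# Sketch: static metadata lives outside the layer, so AD tooling
# (ForwardDiff, etc.) never walks non-layer fields.

struct NNOnlyModel{N}
    NNs::N                      # layers only: safe for AD to traverse
end

struct DataSpec                 # static, non-differentiable metadata
    predictors::Vector{Symbol}
    targets::Vector{Symbol}
end

# The closure binds the static spec; the model passed to the AD backend
# carries only layers.
make_loss(spec::DataSpec, lossfn) =
    (model, ps, st, data) -> lossfn(model, ps, st, data, spec)

# Toy loss for demonstration: mean squared residual per target.
toy_lossfn(model, ps, st, (x, y), spec) =
    (sum(abs2, x .- y) / length(spec.targets), st, (;))

spec = DataSpec([:tair, :precip], [:gpp])
loss = make_loss(spec, toy_lossfn)
l, st_out, stats = loss(NNOnlyModel((;)), (;), (;), ([1.0, 2.0], [1.0, 1.0]))
println(l)   # 1.0
```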