Caio Davi
Jul 14, 2021

--

Thanks for your question, Jhtariq, but there is no such assumption. We are literally adding the base case into the loss: when t = 0, g(t) = u0, so the initial condition is respected by construction.

When t != 0, training occurs with a "shift" of u0. NN(t) adapts itself to this shift and, in the end, g(t) approximates the real function (at least theoretically).

Please, let me know if you need more details.
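A minimal sketch of the mechanism described above, assuming the usual trial form g(t) = u0 + t * NN(t) (the "shift" of u0 mentioned in the reply); the `nn` function here is a hypothetical stand-in for the trained network:

```python
import math

u0 = 1.0  # initial condition u(0) = u0 (value chosen for illustration)

def nn(t):
    # Stand-in for a trained neural network NN(t);
    # here just an arbitrary smooth function of t.
    return math.tanh(2.0 * t)

def g(t):
    # Trial solution: g(t) = u0 + t * NN(t).
    # At t = 0 the second term vanishes, so g(0) = u0 exactly,
    # no matter what the network outputs. For t != 0, the network
    # learns the remaining shape of the solution on top of u0.
    return u0 + t * nn(t)

print(g(0.0))  # equals u0 by construction
```

Because the initial condition holds identically, the loss only needs to penalize the differential-equation residual, not the boundary term.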

