"Estimating, predicting, estimation, generalization, training, and evaluation are not the same as guessing."
Wow. Just wow. So you're saying that an estimate is not a guess? Here's a definition: "What is an estimate in Math? Definition for estimate in math is an approximate value close enough to the correct value. A lot of guesses are made to make math easier and clearer." The same goes for generalization (aka inductive reasoning): one guesses that a small number of specific examples can be used to predict the behavior of a whole category of phenomena.
FYI, an educated (aka "informed") guess is still a guess.
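Here's a toy sketch of what that kind of generalization amounts to (the numbers are made up purely for illustration): fit a line to three observed examples and use it to predict an unseen input. The prediction is exactly an educated guess; nothing in the data guarantees the underlying phenomenon is actually linear.

# Toy sketch: "generalizing" (inductive reasoning) from three examples.
# The numbers are invented for illustration only.
import numpy as np

x_seen = np.array([1.0, 2.0, 3.0])
y_seen = np.array([2.1, 3.9, 6.2])                    # a few specific examples

slope, intercept = np.polyfit(x_seen, y_seen, deg=1)  # induce a general rule
prediction = slope * 10.0 + intercept                 # apply it well outside the data

print(f"educated guess for x = 10: {prediction:.2f}") # plausible, but still a guess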
"Guessing implies a level of randomness that machine learning and neural networks are not based on."
Then they're doing a poor job. FYI, much of the actual science of mathematics - mathematical concepts, not simple calculations - is based on guesses. Look up "greatest lower bound" and "least upper bound."
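Case in point: a least upper bound is, quite literally, what a sequence of ever-better underestimates (guesses) converges to. The standard example, written out:

% sqrt(2) as the least upper bound (supremum) of rational "guesses":
% every q with q^2 < 2 underestimates it, and the bound itself is
% reached only as the limit of successively refined guesses.
\[
  \sqrt{2} \;=\; \sup\{\, q \in \mathbb{Q} : q^{2} < 2 \,\},
  \qquad
  1 \;<\; 1.4 \;<\; 1.41 \;<\; 1.414 \;<\; \cdots \;\longrightarrow\; \sqrt{2}.
\]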
Also, look up computer simulation of physical systems (e.g. finite element analysis). The computer model makes a series of guesses as to the state of the system at each point; then, using physical principles, it propagates the results of those guesses to the whole system, compares the results at specific points to known values, adjusts the initial guesses in response, redoes the sequence, and so on. It is literally an iterative method: a series of guesses that get refined from one iteration to the next until the estimate matches known values (often boundary conditions) within a sufficiently small tolerance.
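To make the pattern concrete, here is a deliberately tiny sketch in Python - not any real FEA package, just the bare guess/propagate/compare/refine loop, applied to steady-state heat conduction along a 1-D rod with fixed end temperatures:

# Toy "guess and refine" solver: steady-state temperature along a 1-D rod
# with fixed end temperatures (boundary conditions). A sketch of the
# iterative pattern only, not real finite element code.
import numpy as np

n = 11                        # grid points along the rod
u = np.zeros(n)               # initial guess: everything at 0
u[0], u[-1] = 100.0, 0.0      # known values at the boundaries

for sweep in range(10_000):
    u_new = u.copy()
    # propagate: each interior guess becomes the average of its neighbours
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:])
    # compare: how much did the guesses move this sweep?
    change = np.max(np.abs(u_new - u))
    u = u_new
    if change < 1e-6:         # the refined guesses have settled
        break

print(f"settled after {sweep + 1} sweeps")
print(np.round(u, 1))         # approaches the straight 100 -> 0 profile

Each sweep is nothing but a batch of refined guesses, and the loop stops once they stop changing - the same shape as the procedure described above.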
Also FYI, no amount of code can change the simple fact that, with nonlinear systems, a single input-response pair cannot determine a unique transfer function.
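A deliberately simple illustration of that non-uniqueness (static maps standing in for full dynamic systems): two different nonlinear models that agree exactly on one input-response pair, so that single pair cannot tell them apart, let alone pin down a unique system.

# Two different nonlinear maps that both produce the response 4.0
# for the input 2.0. One observed pair cannot distinguish them.
# (Simplification: static functions stand in for dynamic systems.)
def model_a(x):
    return x ** 2             # quadratic

def model_b(x):
    return 2.0 ** x           # exponential

print(model_a(2.0), model_b(2.0))   # 4.0 4.0  - indistinguishable here
print(model_a(3.0), model_b(3.0))   # 9.0 8.0  - they diverge elsewhere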