algorithm - Aggregate/Extrapolate sparse samples in multi-dimensional space
Imagine a function that evaluates the elevation of a surface given floating-point x, y coordinates (and additional components in accordance with the dimensionality):

    double ComputeElevation(double x, double y, ...., double z) { }
This is not an analytic function, so derivatives cannot be computed. I need to find the direction in which the surface is steepest at a given {x, y} pair. A single evaluation is expensive (think seconds or minutes in worst-case scenarios).
My typical approach in the 2D case is to sample the surface at N locations adjacent to {x, y}, fit a curve through the samples, and then search that curve for its highest point; the search does not suffer from the expensive evaluation:

In the above image, P0 is the given coordinate, {S0, S1, S2, S3} are four random-ishly placed samples around P0, and Pm is the highest point on the curve. Thus, the vector Pm-P0 is the direction of steepest ascent.
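The 2D approach described above can be sketched as follows. This is a hedged illustration, not the asker's actual code: the toy surface, the circular sample placement, and the use of a fitted plane (rather than a curve) as the cheap local model are all my own choices, written in Python/NumPy rather than C# for brevity.

```python
import numpy as np

def expensive_elevation(x, y):
    # Stand-in for the expensive black-box surface; any smooth function works.
    return -(x - 1.0) ** 2 - (y - 2.0) ** 2

def steepest_ascent_2d(f, x0, y0, n_samples=8, radius=0.1, seed=0):
    """Sample n_samples points on a small circle around (x0, y0), fit a
    plane through the samples by least squares, and return the unit
    direction of steepest ascent of that fitted plane."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    xs = x0 + radius * np.cos(angles)
    ys = y0 + radius * np.sin(angles)
    zs = np.array([f(x, y) for x, y in zip(xs, ys)])
    # Fit z ≈ a*x + b*y + c; the pair (a, b) approximates the local gradient.
    A = np.column_stack([xs, ys, np.ones(n_samples)])
    coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
    g = coeffs[:2]
    return g / np.linalg.norm(g)

direction = steepest_ascent_2d(expensive_elevation, 0.0, 0.0)
# True gradient of the toy surface at (0, 0) is (2, 4),
# so direction ≈ (0.447, 0.894).
```

The search over the cheap fitted model replaces the search over the expensive surface, which is the whole point: only the n_samples evaluations touch the slow function.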
But I have no idea how to scale this to N dimensions, or whether there are smarter algorithms for doing this.
The number of dimensions is potentially quite large (dozens to hundreds), so whatever method I end up using must work when there are fewer samples than dimensions. I'm not looking for an exact answer; that would be impossible. Any half-way decent approximation would be welcome.
PS. I'm doing this in C#, not that it matters a great deal, but I don't have access to non-C# language features.
It looks like you are trying to estimate the gradient from a set of random samples near a given point.
Unfortunately, if you have n dimensions, you need a minimum of n+1 points to estimate the gradient. With fewer points, some dimensions have to drop out, and you'll be estimating a lower-dimensional projection of the gradient. That said, if you capture k dimensions, the odds are that your projection will be about sqrt(k/n) of the length of the true gradient.
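That sqrt(k/n) figure can be sanity-checked by Monte Carlo: project random unit vectors onto random k-dimensional subspaces and average the retained length. The dimensions, trial count, and helper name below are my own arbitrary choices, and the match is approximate (sqrt(k/n) is the root-mean-square value, which the mean length only approaches for large n).

```python
import numpy as np

def mean_projected_length(n, k, trials=2000, seed=0):
    """Average length of the projection of a random unit vector in R^n
    onto a random k-dimensional subspace."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        g = rng.standard_normal(n)
        g /= np.linalg.norm(g)               # random unit "gradient"
        # Orthonormal basis for a random k-dim subspace via reduced QR.
        Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
        total += np.linalg.norm(Q.T @ g)     # length retained by projection
    return total / trials

n, k = 100, 10
ratio = mean_projected_length(n, k)
# Should land close to sqrt(k/n) = sqrt(0.1) ≈ 0.316.
```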
Here is one approach. Suppose you have sampled k+1 random points around your point, and further assume that they are linearly independent. Pick one of them as the "origin", and you'll have k dimensions. Find n-k more points orthogonal to all of the previous ones, and fill in the origin's value for them. (That will cause those dimensions not to be represented in the gradient.)

Now you have n vectors and an estimate of the dot product of the gradient with each one. Take each standard unit vector and write it as a linear combination of your vectors. The same linear combination of your slopes will give an estimate of that component of the gradient. Do this for all of the unit vectors, add them together, and voilà, you have an estimate of the gradient.
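In matrix terms, each sample contributes one equation (s_i - origin) · g ≈ f(s_i) - f(origin), and I believe padding with orthogonal directions at the origin's value (slope zero) is equivalent to taking the minimum-norm least-squares solution of that system, which the pseudoinverse gives directly. A sketch under that assumption, with a toy linear test function and helper names of my own:

```python
import numpy as np

def estimate_gradient(f, p0, k, step=1e-3, seed=0):
    """Estimate the gradient of black-box f at p0 from k sampled directions.
    When k < len(p0), lstsq returns the minimum-norm solution, so the
    unrepresented directions implicitly get slope zero, matching the
    'fill in the origin's value' trick described above."""
    rng = np.random.default_rng(seed)
    f0 = f(p0)
    D = rng.standard_normal((k, len(p0)))    # k random directions (rows)
    slopes = np.array([(f(p0 + step * d) - f0) / step for d in D])
    # Solve D @ g ≈ slopes; minimum-norm g when the system is underdetermined.
    g, *_ = np.linalg.lstsq(D, slopes, rcond=None)
    return g

# Toy check on a linear function whose true gradient is known.
true_g = np.arange(1.0, 11.0)                # gradient in 10 dimensions
f = lambda p: p @ true_g
g_full = estimate_gradient(f, np.zeros(10), k=10)  # k = n: full recovery
g_part = estimate_gradient(f, np.zeros(10), k=4)   # k < n: a projection,
                                                   # so it comes out shorter
```

Each call to estimate_gradient costs k+1 expensive evaluations, which is the budget the question is trying to economize.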
Please note that if you are trying to use a mix of near and far points, some of them may not be linearly independent, in which case this approach won't work and you'll need something more complicated.