So in a program I'm writing I need to get an estimate for the gradient of a function f at a point, but I don't actually have the function. All I have is values for the function around the point in question, so if I need the gradient of f at (x,y) I can get f(x,y), f(x-h, y), f(x+h, y), etc. Is there a way to incorporate all these values into the gradient estimate other than using a central difference? h is big enough in this case that I'm worried about it masking oscillations in f.
Just make a discrete array of the nearby points and choose the direction of greatest increase.
I can't envision this being too complicated since you can get the nearby values with the data you've got.
Maybe I was too vague. I already have a discrete array of values for the function at regular intervals (to be more precise, an NxNxN array A of values). I wanted to come up with the direction of greatest increase by finding the gradient at a point. Right now I'm doing this with a central difference: my gradient vector for a point (x, y, z) is <A[x+1][y][z] - A[x-1][y][z], A[x][y+1][z] - A[x][y-1][z], A[x][y][z+1] - A[x][y][z-1]>. I was wondering if there's another way to do this.
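In code, that estimate looks something like the minimal NumPy sketch below (the array name A, the spacing h, and the function name gradient_at are assumptions, not from the post; dividing by 2h only rescales the vector, so it doesn't change the direction of greatest increase):

```python
import numpy as np

def gradient_at(A, x, y, z, h=1.0):
    """Central-difference gradient of the sampled field A at the
    interior grid point (x, y, z), with constant grid spacing h."""
    dfdx = (A[x + 1, y, z] - A[x - 1, y, z]) / (2 * h)
    dfdy = (A[x, y + 1, z] - A[x, y - 1, z]) / (2 * h)
    dfdz = (A[x, y, z + 1] - A[x, y, z - 1]) / (2 * h)
    return np.array([dfdx, dfdy, dfdz])
```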
I'm assuming a 2nd order approximation of the first derivative of your function is sufficient. Just do the central difference operation for df/dx (or whichever way you are taking your gradient) and find the maximum value after finding the derivative at each discrete point.
So say you have an NxN array of discrete points (first dimension is x, second dimension is y). Then the derivative of the function in the x-direction at point (i,j) is
(df/dx) at i,j = (f(i+h,j)-f(i-h,j))/(2*h) and
(df/dy) at i,j = (f(i,j+h)-f(i,j-h))/(2*h)
and there you go. h is the spacing between discrete points (assuming constant)
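Applied across the whole grid, those two formulas come out to something like the sketch below (the array name F, the NaN padding at the edges, and the use of NumPy are assumptions, not from the post):

```python
import numpy as np

def gradient_field(F, h):
    """Second-order central differences over a 2-D grid of samples F
    (axis 0 = x, axis 1 = y) with constant spacing h.  Edge rows and
    columns are left as NaN because the centered stencil needs a
    neighbor on both sides."""
    dfdx = np.full_like(F, np.nan, dtype=float)
    dfdy = np.full_like(F, np.nan, dtype=float)
    dfdx[1:-1, :] = (F[2:, :] - F[:-2, :]) / (2 * h)
    dfdy[:, 1:-1] = (F[:, 2:] - F[:, :-2]) / (2 * h)
    return dfdx, dfdy
```

np.gradient(F, h) produces the same second-order result in the interior and falls back to one-sided differences at the boundaries.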
I haven't done any serious math in years, so I'm not sure what a higher order central difference is.
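For reference, a higher-order central difference just uses more neighboring samples to cancel more terms of the truncation error. A standard example is the fourth-order formula f'(x) ≈ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h); a sketch of its x-component on the NxNxN array described above is given here (the function name and the requirement of two valid neighbors on each side are my additions):

```python
def dfdx_fourth_order(A, x, y, z, h=1.0):
    """Fourth-order central difference in x at the interior point
    (x, y, z) of a NumPy array A with constant spacing h.  Needs two
    valid grid neighbors on each side along the x-axis."""
    return (-A[x + 2, y, z] + 8 * A[x + 1, y, z]
            - 8 * A[x - 1, y, z] + A[x - 2, y, z]) / (12 * h)
```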