This is an assignment question I'm asking, so I'm looking for advice to guide my learning if at all possible. I've been using MATLAB for ~6 weeks and I'm having a hard time.
The question itself isn't too important here; I'll just give a little context. I've formed a log-likelihood function l(sig1, sig2; X, Y) and I would like to maximise it in order to find maximum likelihood estimators for the parameters sig1 and sig2. This requires use of Newton's method, which I believe I understand the basic idea of.
So what's my problem?
The solutions for sigma1 and sigma2 will not converge. I've been given the initial values [0,0] to work with, and apparently the program should only take around 56 iterations before convergence occurs. I've noticed that if I place

theta1=oldtheta(1);
theta2=oldtheta(2);

within the for loop then my solution doesn't change at all, but if it's outside of the for loop then my solution just blows up indefinitely.
So what am I doing wrong? I've checked my derivatives very carefully and I'm confident I've entered the math correctly.
You need to put

theta1=oldtheta(1);
theta2=oldtheta(2);

inside the for loop. As you describe above, it's impossible for the solution to improve at all unless you do this: if you don't change theta1 and theta2 within the loop, then H never changes, inc never changes, and you're just adding the same amount to oldtheta over and over.
The problem is probably in your math, but I'm going to avoid giving myself a refresher on Newton-Raphson because you've got an assignment here. : )
(Edit: read something wrong, so making my advice less bad : ) PS are you passing it x and y in the correct order? As a matter of style, I'd avoid the order (y,x).
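To make the loop structure concrete, here is a minimal sketch of the kind of Newton-Raphson loop being described. The names grad_l and hess_l are hypothetical placeholders for your own gradient and Hessian functions; the rest follows the variable names from the thread.

```matlab
% Minimal Newton-Raphson sketch for a two-parameter MLE.
% grad_l and hess_l are hypothetical placeholders for your own
% gradient (2x1) and Hessian (2x2) of the log-likelihood.
oldtheta = [0; 0];                 % given initial values
for k = 1:200
    theta1 = oldtheta(1);          % refresh the parameters INSIDE the loop;
    theta2 = oldtheta(2);          % otherwise H and inc never change
    g = grad_l(theta1, theta2, x, y);
    H = hess_l(theta1, theta2, x, y);
    inc = H \ g;                   % Newton step: solve H*inc = g
    newtheta = oldtheta - inc;     % standard update
    if norm(newtheta - oldtheta) < 1e-8
        oldtheta = newtheta;
        break                      % converged
    end
    oldtheta = newtheta;
end
```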
At a glance, your Hessian seems correct and your Newton's method seems correct, but something seems off about your likelihood function. I'm definitely no expert on this, but presumably you've taken the logarithm of a product of evaluations, in which case you shouldn't be left with exp's unless your original function has some sort of e^(e^theta) thing going on.
The first test is to run the program starting from the known optimal values. The program should end in one iteration; if not, you have a math error in the calculation. If the program does end in one iteration, then test initial values on either side of the known solution, setting the other parameter to its optimized value. If the iterates diverge away when they should converge, there is a sign error.
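That check is easy to script. Here newton_mle is a hypothetical wrapper around your Newton loop that returns the estimate and the iteration count, and sig_star is the known solution; adapt the names to your own code.

```matlab
% Sanity checks against a known optimum sig_star (hypothetical names).
[est, iters] = newton_mle(sig_star, x, y);
% Starting at the optimum, the step should be ~zero immediately:
assert(iters <= 1, 'stopped late: likely a math error in grad/Hessian');

% Perturb one parameter to either side of the optimum, holding the
% other at its optimized value; the iterates should move back toward
% sig_star. If they run away instead, suspect a sign error.
for d = [-0.1, 0.1]
    est = newton_mle(sig_star + [d; 0], x, y);
end
```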
Was the original function given? It looks a bit strange (though I haven't done much with Poisson distributions). You must have some kind of model relating x and y that we aren't seeing, and I'm sure there's one that gives that solution, so it's possible that it's fine. Just difficult to see without knowing. Everything from the PMF down looks fine at a cursory glance.
Hi MrT, no that's all the information we have been given. I'm actually quite new to stat too and I hadn't thought of that.
When speaking with others, I found they had all done the same as me mathematically, but they got convergence, which tells me the problem is in my code.
I'm curious about why it might not be a conditional probability. Could it be because we are given y observations with an associated x observation given by the model?
It just occurred to me that all of the elements in my Hessian are negative, which means I should change the sign of my inc to positive. I'll let you know if this works! Thanks for the advice so far.
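For what it's worth, a negative-definite Hessian near the maximum is expected, and with the update written as a subtraction no extra sign flip is needed. A sketch of the two equivalent conventions (using the thread's variable names):

```matlab
% Near a maximum, H is negative definite, and both forms below give
% the same (correct) Newton step:
inc      = H \ g;                  % solve H*inc = g
newtheta = oldtheta - inc;         % ...and subtract it

newtheta = oldtheta + (-H) \ g;    % or negate H and add instead
% Flipping the sign in only one of the two places sends the
% iterates in the wrong direction.
```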
It also seems unlikely that your likelihood function would involve directly summing your data.
Y being Poisson does not necessarily imply that Y|X is Poisson (unless there's some amazing feature of Poisson distributions that ensures that).
The equation you gave at the top should really be Pr(Y=y|X=x), so knowing the joint distribution or generating function is probably necessary.
Thanks for all of your replies!