Comments on Facts & other stubborn things: "I will be presenting on this today"

Absalon (2013-04-12 01:21): This comment has been removed by the author.

Absalon (2013-04-12 01:20): Quadratic convergence is when you have an iterative method and your error in effect squares with each iteration, so you might have a sequence where the first iteration has an error of .1, the second iteration an error of .01, the third .0001, and the fourth .00000001. If your method and your problem permit quadratic convergence, it's awesome. That is probably what is going on with your other cases. Newton's method for finding a root of a function will display quadratic convergence once the estimate is close to a root.

Daniel Kuehn (2013-04-11 07:30): Yep, added it to the first slide with the picture above. Went over very well :)

Most of the presentations have been on empirical papers, which are more interesting because they have a story and neat identification tricks.
So it was nice to have this at the beginning of mine, since mine was one of only two presentations that was more about textbook-type material.

Daniel Kuehn (2013-04-11 07:28): Haha - I like "37-year-old patriot discovers "weird" trick to end slavery to the Bayesian monopoly."

When I first clicked through I thought you gave me a bad link - then I realized, nope - this is it.

Daniel Kuehn (2013-04-11 07:25): Not sure what you mean by quadratic convergence (unless you just mean literally following a quadratic function). The model is a multinomial logit model with a bunch of crap in it. I tried some others just to get the screenshot, but they actually optimized too quickly! So I had to pick one that I knew would take a long time.

With multinomial logit you're multiplying a bunch of small probabilities together for the likelihood function, so that likelihood will be relatively small for any given solution and the log-likelihood will be large and negative.

Stata has default tolerances for when to stop optimizing.
It may be getting into rounding errors at that point, but I would think that by the time it ran into rounding errors, the tolerances would tell it to stop.

Absalon (2013-04-10 15:04): Looks like you have quadratic convergence for the first two iterations, and then it falls off pretty quickly. If any part of the code is written in single precision, you are just fighting round-off errors by iteration 10.

All of this assumes that the answer is even meaningful. Log likelihood = -2 million does not seem meaningful if you really mean "log likelihood".

PrometheeFeu (2013-04-10 13:50): That's exactly what my computer screen looked like when I tried to write an agent-based simulation of the indulgence market. Also pretty much every single one of my forays into simulations, machine learning, and my day-to-day job: compiling, running tests, running the flaky tests again...

I have a good link for your slides: http://oneweirdkerneltrick.com/

Daniel Kuehn (2013-04-10 11:48): Oh I am TOTALLY adding that to my slides...
...or would that be unprofessional? I like to keep things somewhat lighter and sometimes I don't know.

Ryan (2013-04-10 11:47): That was my computer screen yesterday! I maximized the shit out of that likelihood function.

Also, potentially relevant: http://xkcd.com/303/
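The quadratic convergence Absalon describes can be sketched in a few lines. This is a minimal illustration, not anything from the thread: it runs Newton's method on f(x) = x^2 - 2 (an assumed toy function with root sqrt(2)) and prints the error at each step, which roughly squares each iteration until floating-point round-off takes over, much like the iteration log in a well-behaved maximization.

```python
import math

# Illustrative sketch: Newton's method on f(x) = x^2 - 2, whose root
# is sqrt(2). Near the root, the error roughly squares each iteration
# (quadratic convergence), until double-precision round-off dominates.

def newton_step(x):
    # x_{n+1} = x_n - f(x_n) / f'(x_n), with f(x) = x^2 - 2
    return x - (x * x - 2.0) / (2.0 * x)

x = 1.5  # starting guess, already close to the root
errors = []
for _ in range(5):
    errors.append(abs(x - math.sqrt(2.0)))
    x = newton_step(x)

# Errors shrink like e, e^2, e^4, ... (e.g. roughly 1e-1, 1e-3, 1e-6, 1e-12),
# which is why a solver that converges quadratically stops so abruptly.
for e in errors:
    print(f"{e:.2e}")
```

The same shape shows up in an optimizer's iteration log: once the estimate is close to the optimum, the change in the objective collapses very fast, and the stopping tolerances kick in long before the digits being compared are pure round-off noise.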