If there are any paper reading clubs out there that ask the presenter to replicate the results without looking at the authors' code, I would love to join.
This is something that I would be interested in as well. I’ve been attempting to reproduce MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention from scratch, but I am finding it difficult, partly due to my present lack of experience with reproducing DL papers. The code for MQTransformer is not available, at least to my knowledge. There are also several other papers that use LSTM or Transformer architectures for forecasting which I hope to reproduce and/or apply to Metaculus API data in the coming few months. If reproducing ML papers from scratch and replicating their results (especially DL for forecasting) sounds interesting to anyone, please DM me; I would be willing to collaborate, and perhaps these reproductions could be published with additional tests in ReScience C.
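For context on what reproducing this from scratch involves: as far as I understand, the MQ-family forecasters (MQ-RNN/MQ-CNN, and presumably MQTransformer as well) are trained with a quantile (pinball) loss summed over forecast horizons and quantile levels, so that is the piece I would start from. Here is a minimal sketch in PyTorch; the shapes and names are my own placeholders, not the authors' code:

```python
# Rough sketch of a multi-horizon quantile (pinball) loss, as used by
# MQ-style forecasters. Placeholder shapes/names, not the authors' code.
import torch

def quantile_loss(y_true, y_pred, quantiles):
    """
    y_true:    (batch, horizons)        observed targets
    y_pred:    (batch, horizons, n_q)   predicted quantiles
    quantiles: floats in (0, 1), e.g. (0.1, 0.5, 0.9)
    """
    q = torch.tensor(quantiles, dtype=y_pred.dtype, device=y_pred.device)
    diff = y_true.unsqueeze(-1) - y_pred            # (batch, horizons, n_q)
    # pinball loss: q * diff if diff >= 0, else (q - 1) * diff
    loss = torch.maximum(q * diff, (q - 1.0) * diff)
    return loss.mean()

# toy usage
y_true = torch.randn(32, 12)       # 12 forecast horizons
y_pred = torch.randn(32, 12, 3)    # 3 quantiles per horizon
print(quantile_loss(y_true, y_pred, (0.1, 0.5, 0.9)))
```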
Hi there—one of the authors of MQTransformer here. Feel free to send us an email and we can help you with this! (Our emails should be on the paper—if you can’t find them, let us know here and we’ll add them.)
This is great; thank you! I will send an email in the coming month. Also, a quick clarification: what is the relation between MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention and MQTransformer: Multi-Horizon Forecasts with Context Dependent Attention and Optimal Bregman Volatility?
Looking forward to it!
There’s no difference in the actual model (or its architecture). But we realized that the “trades” (this can be made more precise if you’d like) that MQT would be a martingale against encompass a large class of volatility definitions, so we gave an example of a novel volatility measure (i.e., a trade) that isn’t the classical definition and showed that MQT works well against it (Theorem 8.1 and Eqn. 14).