At first glance that one looks pretty mediocre—not obviously good or bad. I would guess it’s slightly worse than one-of-everything.
This one has a 43% win rate.
It looks like your current code is using (max_distance - d) as the discount factor rather than 1/(d + 1). I tried both of those as well as 2^(-d) and got very different results with each. It appears you’re also using a threshold to pre-filter decks, so d will probably be much less than max_distance anyway. I’m not sure I entirely follow your code, but it looks like you’re just looking at decks that appear in the dataset rather than all possible decks?
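For reference, here is a minimal sketch of the three discount functions being compared, plugged into a distance-weighted win-rate estimate. The function names and the surrounding scoring loop are my own assumptions, not the original code:

```python
def linear_discount(d, max_distance):
    # (max_distance - d): weight falls linearly, reaching 0 at max_distance
    return max_distance - d

def harmonic_discount(d, max_distance=None):
    # 1/(d + 1): weight 1 at distance 0, decaying slowly
    return 1.0 / (d + 1)

def exponential_discount(d, max_distance=None):
    # 2^(-d): weight halves with each unit of distance
    return 2.0 ** (-d)

def weighted_win_rate(matchups, discount, max_distance):
    """matchups: (distance, won) pairs for decks that survived the
    pre-filter threshold. Returns a distance-discounted win rate."""
    total = sum(discount(d, max_distance) for d, _ in matchups)
    if total == 0:
        return 0.5  # no usable matchups: fall back to a coin flip
    wins = sum(discount(d, max_distance) for d, won in matchups if won)
    return wins / total
```

With a small example like `[(0, True), (2, False)]`, the three discounts already give noticeably different estimates (0.625, 0.75, and 0.8 respectively with max_distance = 5), which is consistent with the results diverging by scoring function.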
Yes, I am looking at decks that appear in the dataset, and more specifically at decks that have faced a deck similar to the rival’s.
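In other words, something along these lines: keep only recorded matches whose opponent deck lies within a distance threshold of the rival’s deck. The deck representation (card-count vectors), the L1 distance metric, and all names here are my own assumptions for illustration:

```python
def deck_distance(a, b):
    # L1 distance between two card-count vectors (an assumed metric)
    return sum(abs(x - y) for x, y in zip(a, b))

def relevant_matchups(matches, rival_deck, threshold):
    """matches: (our_deck, opponent_deck, won) records from the dataset.
    Returns (our_deck, distance, won) for each match whose opponent deck
    is within `threshold` of the rival's deck."""
    out = []
    for our_deck, opp_deck, won in matches:
        d = deck_distance(opp_deck, rival_deck)
        if d <= threshold:
            out.append((our_deck, d, won))
    return out
```

The surviving records can then be scored per candidate deck, with d feeding whichever discount function is in use.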
Good to know that the different scoring functions give such different results for that one.
I guess that maybe the approach does not work that well ¯\_(ツ)_/¯
Seeking clarification here: which of these decks are you currently submitting? If you need more time to decide, let me know.
Ah, sorry for the lack of clarity; let’s stick with my original submission for PVE.
That would be:
[0,1,0,1,0,0,9,0,0,1,0,0]