Here is my very bad approach after spending ~one hour playing around with the data
Filter the decks that fought against a deck similar to the rival's, using a simple distance measure (the sum of absolute differences between the deck components)
Compute a ‘score’ for each of those decks, defined as the sum over its games of 1/deck_distance(deck) * (+1 or −1 depending on whether the deck won or lost against the challenger)
Report the deck with the maximum score (a rough sketch of this is below)
So my submission would be: [0,1,0,1,0,0,9,0,0,1,0,0]
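Roughly, the approach looks like this in Python (a minimal sketch, not the original code: the DataFrame layout with `deck`, `opponent`, and `won` columns, the similarity threshold, and the use of 1/(d + 1) rather than a raw 1/d are all assumptions):

```python
import numpy as np
import pandas as pd


def deck_distance(a, b):
    """Sum of absolute differences between two decks' card counts,
    e.g. deck_distance([0, 1, 3], [1, 1, 1]) == 1 + 0 + 2 == 3."""
    return int(np.abs(np.asarray(a) - np.asarray(b)).sum())


def best_counter_deck(games: pd.DataFrame, rival_deck, threshold=3):
    """Return the deck with the highest distance-weighted win/loss score."""
    # 1. Keep only games whose opponent was similar to the rival's deck.
    close = games[games["opponent"].apply(
        lambda opp: deck_distance(opp, rival_deck) <= threshold)]

    # 2. Score each candidate deck: for every game it played, add
    #    (+1 if it won, -1 if it lost) weighted by 1 / (distance + 1),
    #    where distance is between the opponent it faced and the rival's deck.
    scores = {}
    for _, row in close.iterrows():
        d = deck_distance(row["opponent"], rival_deck)
        sign = 1.0 if row["won"] else -1.0
        key = tuple(row["deck"])
        scores[key] = scores.get(key, 0.0) + sign / (d + 1)

    # 3. Report the deck with the maximum score.
    return list(max(scores, key=scores.get))
```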
This is fairly close to my result for worst deck to face a one-of-each deck (I would guess it has less than 10% win rate). Is it possible you flipped a sign somewhere?
EDIT: It in fact has a 31% win rate.
Thank you for bringing this up!
I think you might be right, since the deck is not very diverse, and according to the rest of the thread diversity is important. That being said, I could not find the mistake in the code at a glance :/
Do you have any opinions on [1, 1, 0, 1, 0, 1, 2, 1, 1, 3, 0, 1]? According to my code, this is the worst deck amongst the decks that played against a deck similar to the rival's.
At first glance that one looks pretty mediocre—not obviously good or bad. I would guess it’s slightly worse than one-of-everything.
This one has a 43% win rate.
It looks like your current code is using (max_distance - d) as the discount factor rather than 1/(d + 1). I tried both of those as well as 2^(-d) and got very different results with each. It appears you’re also using a threshold to pre-filter decks, so d will probably be much less than max_distance anyway. I’m not sure I entirely follow your code, but it looks like you’re just looking at decks that appear in the dataset rather than all possible decks?
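For concreteness, the three discount factors under discussion, for a game whose opponent is at distance d from the rival's deck (a sketch; the function names are just illustrative, and `max_distance` would be the largest distance among the games that survive the pre-filter):

```python
def inverse_weight(d):
    return 1 / (d + 1)       # 1, 1/2, 1/3, ...  (slow decay)


def linear_weight(d, max_distance):
    return max_distance - d  # decays linearly, reaching 0 at the cutoff


def exponential_weight(d):
    return 2 ** (-d)         # 1, 1/2, 1/4, ...  (fast decay)
```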
Yes, I am looking at decks that appear in the dataset, and more particularly at decks that have faced a deck similar to the rival’s.
Good to know that one gets similar results using the different scoring functions.
I guess that maybe the approach does not work that well ¯\_(ツ)_/¯
Seeking clarification here: which of these decks are you currently submitting? If you need more time to decide, let me know.
Ah sorry for the lack of clarity—let’s stick to my original submission for PVE
That would be:
[0,1,0,1,0,0,9,0,0,1,0,0]
Could you try reformatting this, please? It looks like your answer hasn’t been successfully spoilered out.
Thank you!
Fixed, thanks!