Here’s a possible construction, building on Dweomite’s suggestion to use error correction codes. We treat the selection of the random variables that Bob sees as a binary erasure channel with erasure probability (n−s)/n. Choose some error-correcting code for the BEC. Now, Alice constructs a function in the following way: for each input x, randomly choose a subset S of size s. Apply the decoder for our error-correcting code to the bits of x indexed by S, treating the positions outside S as erasures; we obtain some string y. Alice then chooses a function f which is constant on the pre-image of each string y under this mapping. She could either choose such an f uniformly at random, or possibly treat the selection of inputs as another binary erasure channel (on sequences of 1s and 0s indexed by the string y) and use another error-correcting code.
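Here is a minimal sketch of what I mean, in Python, using an r-fold repetition code as the BEC code purely for illustration; the names decode_repetition, alice_map, and make_f are mine, and any real instantiation would use a better code:

```python
import random

def decode_repetition(observed, n, r):
    """BEC decoder for an r-fold repetition code (illustrative choice).
    `observed` maps position -> bit for the positions in S; everything
    else is an erasure. Each length-r block is decoded by majority over
    its observed bits (ties broken arbitrarily), or None if the whole
    block was erased."""
    y = []
    for i in range(n // r):
        bits = [observed[j] for j in range(i * r, (i + 1) * r) if j in observed]
        y.append(max(set(bits), key=bits.count) if bits else None)
    return tuple(y)

def alice_map(x, s, r, rng):
    """The x -> y step: draw a random size-s subset S of coordinates,
    treat the rest as erased, and run the decoder."""
    S = rng.sample(range(len(x)), s)
    return decode_repetition({j: x[j] for j in S}, len(x), r)

def make_f(s, r, seed=0):
    """Alice's f, built lazily instead of enumerating all 2^n inputs:
    each x is assigned a y once (S is drawn a single time per x, so f is
    a well-defined function), and each y gets a uniformly random output
    bit the first time it appears, making f constant on pre-images."""
    rng = random.Random(seed)
    value_of_y, assignment = {}, {}
    def f(x):
        if x not in assignment:
            y = alice_map(x, s, r, rng)
            if y not in value_of_y:
                value_of_y[y] = rng.randint(0, 1)
            assignment[x] = value_of_y[y]
        return assignment[x]
    return f

# Tiny demo: n = 12 with r = 3 repetition blocks; Bob sees s = 8 bits.
f = make_f(s=8, r=3)
x = tuple(random.Random(1).randint(0, 1) for _ in range(12))
print(f(x), f(x))  # same value both times: S was drawn once for this x
```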
I don’t follow the construction. Alice doesn’t know x and S when choosing f. If she computes y for all 2^n values of x, each with its own random S, she will end up with many overlapping preimages.
Yes, but I don’t see why that’s a problem. Which preimage a given x gets assigned to is random; the hope is that repeated trials would land on the same preimage often enough to give a meaningful partition of the input space. How well this works depends on the details of the ECC, but I suspect it would work reasonably well in many cases. You could also just apply the decoder directly to the full string x, but that seemed a bit less natural to me, since in reality Bob will never see the full string.
You could also repeat this procedure several times and take a majority vote.
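Continuing the sketch above (reusing the hypothetical alice_map from it), the majority vote might look like this; the number of trials trades off computation against how reliably different draws of S agree:

```python
from collections import Counter

def alice_map_stable(x, s, r, rng, trials=25):
    """Draw several independent subsets S, decode each, and keep the
    most common decoded string y, so that which pre-image x lands in
    is much less sensitive to any single draw of S."""
    votes = Counter(alice_map(x, s, r, rng) for _ in range(trials))
    return votes.most_common(1)[0][0]
```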