Hey Jacob! My comment has a code example with biases:
import torch
# Encoder weights (3 hidden units x 2 input dims), inputs, and biases
W = torch.tensor([[-1, 1], [1, 1], [1, -1]])
x = torch.tensor([[0, 1], [1, 1], [1, 0]])
b = torch.tensor([0, -1, 0])
y = torch.nn.functional.relu(x @ W.T + b)
This is for the encoder, where y comes out as the identity matrix (a sparse code over the hidden dimension).
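For anyone who wants to sanity-check the claim, here is the same snippet re-stated as a self-contained script with an explicit check that y equals the 3x3 identity (each input row activates exactly one hidden unit):

```python
import torch

# Encoder weights, inputs, and biases from the example above
W = torch.tensor([[-1, 1], [1, 1], [1, -1]])
x = torch.tensor([[0, 1], [1, 1], [1, 0]])
b = torch.tensor([0, -1, 0])

# Pre-activations: x @ W.T + b = [[1, 0, -1], [0, 1, 0], [-1, 0, 1]]
y = torch.nn.functional.relu(x @ W.T + b)

# ReLU zeroes the negatives, leaving the identity matrix
print(y)
assert torch.equal(y, torch.eye(3, dtype=y.dtype))
```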
Nice, this is exactly what I was asking for. Thanks!