What you’re calling ablation redundancy is a measure of nonlinearity of the feature being measured, not any form of redundancy, and the view you quote doesn’t make sense as stated, as nonlinearity, rather than redundancy, would be necessary for its conclusion. If you’re trying to recover some feature f : R^n → R, and there’s any vector v ∈ R^n and scalar c ∈ R such that f(x) = v⋅x + c for all data x ∈ R^n (regardless of whether there are multiple such v, c, which would happen if the data is contained in a proper affine subspace), then there is a direction such that projection along it makes it impossible for a linear probe to get any information about the value of f. That direction is Σv, where Σ is the covariance matrix of the data. This works because if w ⊥ Σv, then the random variables x ↦ w⋅x and x ↦ v⋅x are uncorrelated (since Cov(v⋅x, w⋅x) = w^T Σ v = 0), and thus w⋅x is uncorrelated with f(x).
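Here is a minimal numerical sketch of that claim (my own illustration, not from the original exchange; the covariance, the feature vector, and all variable names are made up): ablate the Σv direction and check that an arbitrary linear probe on the ablated data is uncorrelated with the linear feature.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 100_000

# Correlated data with a nontrivial covariance (the mixing matrix A is arbitrary)
A = rng.normal(size=(n, n))
X = rng.normal(size=(N, n)) @ A.T
Sigma = np.cov(X, rowvar=False)

v = rng.normal(size=n)            # the linear feature f(x) = v·x + c
f = X @ v + 1.0

d = Sigma @ v                     # ablation direction Σv
d /= np.linalg.norm(d)
X_abl = X - np.outer(X @ d, d)    # project out the Σv direction

# Any linear probe on the ablated data is (nearly) uncorrelated with f
w = rng.normal(size=n)
print(np.corrcoef(X_abl @ w, f)[0, 1])   # ≈ 0, up to sampling noise
```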
If the data is normally distributed, then we can make this stronger. If there’s a vector v and a function g such that f(x) = g(v⋅x) (for example, if you’re using a linear probe to get a binary classifier, where it classifies things based on whether the value of a linear function is above some threshold), then projecting along Σv removes all information about f. This is because uncorrelated linear features of a multivariate normal distribution are independent, so if w ⊥ Σv, then w⋅x is independent of v⋅x, and thus also of f(x). So the reason what you’re calling high ablation redundancy is rare is that low ablation redundancy follows from just two things: the existence of any linear probe that gets good performance, and the data not being too wildly non-Gaussian.
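To illustrate the Gaussian case (again my own sketch: the data, the threshold feature, and the choice of sklearn’s LogisticRegression as the probe are all assumptions for the sake of the example), ablating the Σv direction drops a linear probe from near-perfect accuracy to chance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, N = 5, 50_000

A = rng.normal(size=(n, n))
X = rng.normal(size=(N, n)) @ A.T        # multivariate normal data
Sigma = np.cov(X, rowvar=False)

v = rng.normal(size=n)
y = (X @ v > 0).astype(int)              # f(x) = g(v·x): a threshold feature

d = Sigma @ v                            # ablate the Σv direction
d /= np.linalg.norm(d)
X_abl = X - np.outer(X @ d, d)

print(LogisticRegression().fit(X, y).score(X, y))            # ≈ 1.0
print(LogisticRegression().fit(X_abl, y).score(X_abl, y))    # ≈ 0.5: chance level
```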
Yep, high ablation redundancy can only exist when features are nonlinear. Linear features are obviously removable with a rank-1 ablation, and you get them by running CCS/Logistic Regression/whatever. But I don’t care about linear features, since that’s not the shape the features actually have (Logistic Regression & CCS can’t remove the linear information).
The point is, the reason CCS fails to remove linearly available information is not that the data “is too hard”. Rather, it’s that the feature is non-linear in a regular way, which makes CCS and Logistic Regression suck at finding the direction which contains all the linearly available information (a direction which exists in the context of “truth”, just as it does in the context of gender and all the other datasets on which RLACE has been tried).
I’m not sure why you don’t like calling this “redundancy”. One meaning of redundant is “able to be omitted without loss of meaning or function” (Lexico). So ablation redundancy is the ordinary kind of redundancy, where you can remove something without losing the meaning. Here it’s not redundant: you can remove a single direction and lose all the (linear) “meaning”.
Suppose your datapoints are (x, y) ∈ R^2 (where the coordinates x and y are independent draws from the standard normal distribution), and the feature you’re trying to measure is x^2 + y^2. A rank-1 linear probe will retain some information about the feature. Say your linear probe finds the x coordinate. This gives you information about x^2 + y^2; your expected value for the feature is now x^2 + 1, an improvement over its a priori expected value of 2. If you ablate along this direction, all you’re left with is the y coordinate, which tells you exactly as much about the feature x^2 + y^2 as the x coordinate did, so this rank-1 ablation causes no loss in performance. But information is still lost when you lose the x coordinate, namely the x^2 contribution to the feature. The thing that you can still find after ablating away the x direction is not redundant with the rank-1 linear probe in the x direction you started with; it just contributes the same amount towards the feature you’re measuring.
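A quick numerical check of this example (my own sketch; the printed values are Monte Carlo estimates): the best guess of x^2 + y^2 given only x is x^2 + 1, with the same residual error as the best guess given only y, and both beat the prior guess of 2.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
x, y = rng.normal(size=N), rng.normal(size=N)
f = x**2 + y**2

print(np.var(f))                      # ≈ 4: squared error of the prior guess E[f] = 2
print(np.mean((f - (x**2 + 1))**2))   # ≈ 2: squared error knowing only x
print(np.mean((f - (y**2 + 1))**2))   # ≈ 2: same error knowing only y (i.e. after ablating x)
```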
The point is, the reason CCS fails to remove linearly available information is not that the data “is too hard”. Rather, it’s that the feature is non-linear in a regular way, which makes CCS and Logistic Regression suck at finding the direction which contains all the linearly available information (a direction which exists in the context of “truth”, just as it does in the context of gender and all the other datasets on which RLACE has been tried).
Disagree. The reason CCS doesn’t remove information is neither of those; it’s just that that’s not what it’s trained to do. It doesn’t fail, but rather never makes any attempt. If you’re trying to train a function such that f(1, 1) = 1 and f(−1, −1) = −1, then f(x, y) = x will achieve optimal loss just like f(x, y) = (x + y)/2 will.
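A toy check of the two-point example (my sketch, using squared error as a stand-in for whatever loss is actually optimized, which is an assumption; CCS’s real objective is different): both candidate probes fit the two labeled points perfectly, so the loss alone cannot prefer one over the other.

```python
import numpy as np

pts = np.array([[1.0, 1.0], [-1.0, -1.0]])
targets = np.array([1.0, -1.0])

for w in (np.array([1.0, 0.0]),      # f(x, y) = x
          np.array([0.5, 0.5])):     # f(x, y) = (x + y)/2
    loss = np.mean((pts @ w - targets) ** 2)
    print(w, loss)                   # both print 0.0: the loss can't tell them apart
```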