It’s unclear to me whether there’s a credible accusation against y’all. So, in the interest of not having to worry about such a thing if and when I apply to things through Nonlinear -
What are your plans for removing yourselves from the equation by providing a mechanically checkable tool that does not permit you to intervene in who can apply to whom? In general, that’s what I expect a good networking tool to do. I wouldn’t want uncertainty about the validity of the Nonlinear group to compromise an application, especially if this is at risk of turning out to be another scam the way FTX did; I imagine it’d be a smaller issue than that, but of course you wouldn’t want to promise it’s not a scam, since that promise is vacuous and adds no information to the external view of whether it is one. The only way to verify such a thing is to design for mechanistic auditability, that is to say, processes with no step at which a person of shaky reputation can exert influence, such as an open-source application tool.
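To be concrete about the kind of thing I mean, here’s a minimal sketch (everything in it is hypothetical, my own illustration, not a description of anything Nonlinear has built): an append-only, hash-chained application log, where submissions are recorded directly with no gatekeeping step, and any third party holding a copy can mechanically verify that nothing was removed, reordered, or quietly edited afterward.

```python
import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class ApplicationLog:
    """Append-only record of applications (illustrative sketch only).

    Each entry is chained to the previous one by hash, so anyone holding a
    copy of the log can verify that no submission was removed, reordered,
    or silently edited after the fact.
    """
    entries: list = field(default_factory=list)

    def _hash_entry(self, prev_hash: str, payload: dict) -> str:
        body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()

    def submit(self, applicant: str, target: str, materials: str) -> str:
        """Record an application; note there is no accept/reject step here."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = {"applicant": applicant, "target": target, "materials": materials}
        entry_hash = self._hash_entry(prev_hash, payload)
        self.entries.append({"prev": prev_hash, "payload": payload, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain and confirm the log is intact."""
        prev_hash = "genesis"
        for entry in self.entries:
            if entry["prev"] != prev_hash:
                return False
            if self._hash_entry(prev_hash, entry["payload"]) != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

The point isn’t this particular data structure; it’s that the verification step is something anyone can run, rather than something that depends on trusting a particular person or org.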
With that in mind, I am in fact interested in applying to some sort of funding process. I just don’t want to accept unreasonable reputational risk by depending on a challenged reputation with no mechanistic safeguard against the hypothesized behaviors that the level of reputational concern represents. I’d ask others to do the same with any org I was at.