Ambience and physical comfort are surprisingly important. In particular:
Lighting: Have lots of it! Ideally incandescent, but at least ≥ 95 CRI (and mostly ≤ 3500K) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade that has some variation in its color so the light that gets emitted has some variation too (sort of like the sun does when filtered through the atmosphere). (The numeric thresholds here are encoded in a small sketch after this list.)
Food/drink: Have lots of it! Both in terms of quantity and variety. The cost to workshop quality of people not having their preferences met here so far outweighs the cost of buying too much food that, in general, it's worth buying too much as a policy. It's particularly important to meet people's (often, for rationalists, amusingly specific) dietary needs, have a variety of caffeine options, and provide a changing supply of healthy, easily accessible snacks.
Furniture: As comfortable as possible, and arranged such that multiple small conversations are more likely to happen than one big one.
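As a minimal sketch of the numeric thresholds in the lighting recommendation above (the Bulb type and example values are hypothetical illustrations, not CFAR data or a CFAR tool):

```python
# Purely illustrative encoding of the lighting thresholds above.
# The Bulb type and example values are hypothetical, not CFAR data.
from dataclasses import dataclass

@dataclass
class Bulb:
    kind: str          # e.g. "incandescent", "LED", "halogen"
    cri: int           # color rendering index (0-100)
    color_temp_k: int  # correlated color temperature, in kelvin

def meets_recommendation(bulb: Bulb) -> bool:
    """Incandescent passes outright; otherwise require >= 95 CRI
    and (mostly) <= 3500 K."""
    if bulb.kind == "incandescent":
        return True
    return bulb.cri >= 95 and bulb.color_temp_k <= 3500

print(meets_recommendation(Bulb("LED", cri=97, color_temp_k=2700)))  # True
print(meets_recommendation(Bulb("LED", cri=80, color_temp_k=5000)))  # False
```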
What are the effects of following, and of not following, these guidelines? What tests have you run to determine these effects, and is the data from those tests available for download?
We have not conducted thorough scientific investigation of our lamps, food, or furniture. Just as one might have reasonable confidence in a proposition like “tired people are sometimes grumpy” without running an RCT, one can, I think, be reasonably confident that e.g. vegetarians will be upset if there’s no vegetarian food, or that people will be more likely to clump in small groups if the chairs are arranged in small groups.
I agree the lighting recommendations are quite specific. I have done lots of testing (relative to e.g. the average American) of different types of lamps, with different types of bulbs in different rooms, and have informally gathered data about people’s preferences. I have not done this formally, since I don’t think that would be worth the time, but in my informal experience, the bulb preferences of the subset of people who report any strong lighting preferences at all tend to correlate strongly with that bulb’s CRI. Currently incandescents have the highest CRI of commonly-available bulbs, so I generally recommend those. My other suggestions were developed via a similar process.
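To make the shape of that informal claim concrete, here is a toy sketch with entirely invented numbers; no real survey data is implied:

```python
# Toy illustration of "preferences correlate strongly with CRI".
# All (CRI, mean preference rating) pairs below are invented.
from statistics import correlation  # Python 3.10+

cri = [80, 85, 90, 95, 98, 100]
mean_rating = [2.1, 2.4, 3.0, 3.8, 4.2, 4.5]  # hypothetical 1-5 ratings

# A Pearson's r close to 1 is what "correlate strongly" would look like.
print(f"Pearson r = {correlation(cri, mean_rating):.2f}")
```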
Pretty sure the effect sizes are obvious—I’ve been to events without enough snacks, and people leave early because they’re tired and out of energy. I think lighting also has obvious effect sizes when you try it, and room layout just obviously changes the affordances of a space (classroom lecture vs. sitting in a circle vs. kitchen, etc.).
Added: I don’t think I disagree much with the things Said and others say below, I just meant to say that I don’t think that careful statistics is required to have robust beliefs about these topics.
My guess is also that CFAR has seen many datapoints in this space, and could answer Said’s question fine. I don’t expect them to have run controlled experiments, but I do expect them to have observed a large variety of different lighting setups, food/drink availability and furniture arrangements, and would be able to give aggregate summaries of their experiences with that.
Surely we’re not taking seriously recommendations based on “it’s just obvious”…? (There’s at least some sort of journal of events that notes these parameters and records apparent effects, that can be perused for patterns, etc.… right?)
Besides which, consider this:
Lighting: Have lots of it! Ideally incandescent, but at least ≥ 95 CRI (and mostly ≤ 3500K) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade that has some variation in its color so the light that gets emitted has some variation too (sort of like the sun does when filtered through the atmosphere).
These are very specific recommendations! I assume this means that the CFAR folks tried a bunch of variations—presumably in some systematic, planned way—and determined that these particular parameters are optimal. So… how was this determination made? What was the experimentation like? Surely it wasn’t just… “we tried some stuff in an ad-hoc manner, and this particular very specific set of parameters ended up being ‘obviously’ good”…?
EDIT: Let me put it another way:
What will happen if, instead of incandescent lighting, I use halogen bulbs? What if the light is 90 CRI instead of 95+? If it’s 4500K instead of 3500K—or, conversely, if it’s 2700K? What if the light is in the center of the ceiling? What if the lampshade is greenish and not yellowish? Etc., etc.—what specifically ought I expect to observe, if I depart from the recommended lighting pattern in each of those ways (and others)?
I assume this means that the CFAR folks tried a bunch of variations—presumably in some systematic, planned way—and determined that these particular parameters are optimal.
Why do you assume this? I would guess it was local hill climbing. (The base rate for local hill climbing is much higher than for systematic search, isn’t it?)
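As a toy illustration of the distinction (the parameter grid and the “quality” function are invented stand-ins, not a model of any actual venue):

```python
# Toy contrast between local hill climbing and systematic (exhaustive)
# search over a small lighting-parameter grid. The grid values and the
# "quality" function are invented stand-ins, not a model of any venue.
import itertools

color_temps = [2700, 3000, 3500, 4500, 6500]  # kelvin
cris = [80, 90, 95, 100]

def quality(temp: int, cri: int) -> float:
    # Hypothetical stand-in for "how good the conversations felt".
    return cri - abs(temp - 3000) / 100

def hill_climb(t: int, c: int) -> tuple[int, int]:
    """Greedily move to the best neighboring setting until stuck.
    On a bumpier quality landscape this can stall at a local optimum."""
    while True:
        candidates = [(t, c)] + [
            (t + dt, c + dc)
            for dt, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
            if 0 <= t + dt < len(color_temps) and 0 <= c + dc < len(cris)
        ]
        best = max(candidates, key=lambda p: quality(color_temps[p[0]], cris[p[1]]))
        if best == (t, c):
            return color_temps[t], cris[c]
        t, c = best

# Systematic search evaluates every combination; hill climbing only ever
# looks at neighbors of wherever it happens to start.
print("systematic:", max(itertools.product(color_temps, cris), key=lambda p: quality(*p)))
print("hill climb from (4500 K, CRI 80):", hill_climb(3, 0))
```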
The base rate for local hill climbing is much higher than for systematic search, isn’t it?
No doubt it is. But then, the base rate for many things is much higher than the base rate for the corresponding more “optimal” / “rational” / “correct” versions of those things. Should I assume in each case that CFAR does everything in the usual way, and not the rarer-but-better way? (Surely a depressing stance to take, if accurate…)
Yes, when the better way takes more resources.

On the meta level, I claim that doing things the usual way most of the time is the optimal / rational / correct way to do things. Resources are not infinite, trade-offs exist, etc.

EDIT: for related thoughts, see Vaniver’s recent post on T-Shaped Organizations.
Strongly second this. Running a formal experiment is often much more costly from a decision-theoretic perspective than other ways of reducing uncertainty.
I think that you, and ESRogs, and possibly also habryka (though probably less so, if at all), have rather misunderstood the thrust of my comments.
I was not, and am not, suggesting that CFAR run experiments in a systematic (not ‘formal’—that is a red herring) way, nor am I saying that they should have done this.
Rather, what I was attempting to point out was that Adam Scholl’s comment, with its specific recommendations (especially the ones about lighting), would make sense if said recommendations were arrived at via a process of systematic experimentation (or, indeed, any even semi-systematic approach). On the other hand, suggestions such as “at least ≥ 95 CRI (and mostly ≤ 3500K) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade” make no sense at all if arrived at via… what, exactly? Just trying different things and seeing which of them seemed like it was good?
If you missed it before, I would like to draw your attention to the part of this comment of mine elsethread that comes after the “EDIT” note. Judging from the specificity of his recommendations, I must assume that Adam can answer the questions I ask there.
On the other hand, suggestions such as “at least ≥ 95 CRI (and mostly ≤ 3500K) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade” make no sense at all if arrived at via… what, exactly? Just trying different things and seeing which of them seemed like it was good?
Why not?
If you’re running many, many events, and one of your main goals is to get good conversations happening, you’ll begin to build up an intuition about which things help and which hurt. For instance, you look at a room and think, “it’s too dark in here.” Then you go get your extra bright lamps, and put them in the middle of the room, and everyone is like “ah, that is much better, I hadn’t even noticed.”
It seems like if you do this enough, you’ll end up with pretty specific recommendations like what Adam outlined.
Actually, I think this touches on something that is useful to understand about CFAR in general.
Most of our “knowledge” (about rationality, about running workshops, about how people can react to x-risk, etc.) is what I might call “trade knowledge”: it comes from having lots of personal experience in the domain, and building up good procedures via mostly trial and error (plus metacognition and theorizing about what noticed problems might be, and how to fix them).
This is distinct from scientific knowledge, which is built up from robustly verified premises and tested by explicit attempts at falsification.
(I’m reminded of an old LW post, that I can’t find, about Eliezer giving some young kid (who wants to be a writer) writing advice, while a bunch of bystanders signal that they don’t regard Eliezer as trustworthy.)
For instance, I might lead someone through an IDC-like (Internal Double Crux-like) process at a CFAR workshop. This isn’t because I’ve done rigorous tests (or know of others who have done rigorous tests) of IDC, or because I’ve concluded from the neuroscience literature that IDC is the optimal process for arriving at true beliefs.
Rather, it’s that I (and other CFAR staff) have interacted, a lot, with people who have a conflict between beliefs / models / urges / “parts”, in addition to spending even more time engaging with those problems in ourselves. And from that exploration, this IDC-process seems to work well, in the sense of getting good results. So, I have a prior that it will be useful for the nth person. (Of course, sometimes this isn’t the case, because people can be really different, and occasionally a tool will be ineffective, or even harmful, despite being extremely useful for most people.)
The same goes for, for instance, whatever conversational facilitation acumen I’ve acquired. I don’t want to be making a claim that, say, “finding a Double Crux is the objectively correct process, or the optimal process, for resolving disagreements.” Only that I’ve spent a lot of time resolving disagreements, and, at least sometimes, at least for me, this strategy seems to help substantially.
I can also give theoretical reasons why I think it works, but those theoretical reasons are not much of a crux: if a person can’t seem to make something useful happen when they try to Double Crux, but something useful does happen when they do this other thing, I think they should do the other thing, theory be damned. It might be that that person is trying to apply the Double Crux pattern in a domain that it’s not suited for (but I don’t know that, because I haven’t tried to work in that domain yet), or it might be that they’re missing a piece or doing it wrong, and we might be able to iron it out if I observed their process, or maybe they have some other skill that I don’t have myself, and they’re so good at that skill that trying to do the Double Crux thing is a step backwards (in the same way that there are different schools of martial arts).
The fact that my knowledge, and CFAR’s knowledge, in these domains is trade knowledge has some important implications:
It means that our content is path dependent. There are probably dozens or hundreds of stable, skilled “ways of engaging with minds.” If you’re trying to build trade knowledge, you will end up gravitating to one cluster and building out skill and content there, even if that cluster is a local optimum and another cluster is more effective overall.
It means that you’re looking for skill more than declarative third-person knowledge, and that you’re not trying to make things that are legible to other fields. A carpenter wants to have good techniques for working with wood, and in most cases doesn’t care very much whether his terminology or ontology lines up with that of botany.
For instance, maybe to the carpenter there are 3 kinds of knots in wood, and they need to be worked with in different ways, but he’s actually conflating 2 kinds of biological structures in the first type, and the second and third types are actually the same biological structure, just flipped vertically (because sometimes the wood is “upside down” relative to the orientation of the tree). The carpenter, qua carpenter, doesn’t care about this. He’s just trying to get the job done. But that doesn’t mean that bystanders should get confused and think that the carpenter thinks that he has discovered some new, superior framework of botany.
It means that a lot of content can only easily be conveyed tacitly and in person; or, at least, making it accessible via writing, etc., is an additional hard task.
Carpentry (I speculate) involves a bunch of subtle, tacit, perceptual maneuvers, like (I’m making this up) learning to tell when the wood is “smooth to the grain” or “soft and flexible”, and looking at a piece of wood and knowing that you should cut it up top near the knot, even though it seems like that would be harder to work around, because of how “flat” it gets down the plank. (I am still totally making this up.) It is much easier to convey these things to a learner who is right there with you, so that you can watch their process, and, for instance, point out exactly what you mean by “soft and flexible” via iterated demonstration.
That’s not to say that you couldn’t figure out how to teach the subtle art of carpentry via blog post or book, but you would have to figure out how to do that (and it would still probably be worse than learning directly from someone skilled). This is related to why CFAR has historically been reluctant to share the handbook: the handbook sketches the techniques, and is a good reminder, but we don’t think it conveys the techniques particularly well, because that’s really hard.
I don’t think this works.

A carpenter might say that his knowledge is trade knowledge and not scientific knowledge, and when challenged to provide some evidence that this supposed “trade knowledge” is real, and is worth something, may point to the chairs, tables, cabinets, etc., which he has made. The quality of these items may be easily examined by someone with no knowledge of carpentry at all. “I am a trained and skilled carpenter, who can make various useful things for you out of wood” is a claim which is very, very easy to verify.
But as I understand it, CFAR has considerable difficulty providing, for examination, any equivalent of a beautifully-made oak cabinet. This makes claims of “trade knowledge” rather more dubious.
(I’m reminded of an old LW post, that I can’t find, about Eliezer giving some young kid (who wants to be a writer) writing advice, while a bunch of bystanders signal that they don’t regard Eliezer as trustworthy.)

You’re thinking of You’re Calling *Who* A Cult Leader?
And from that exploration, this IDC-process seems to work well, in the sense of getting good results.
An important clarification, at least from my experience of the metacognition, is that it’s both getting good results and not triggering alarms (in the form of participant pushback or us feeling skeevy about doing it). Something that gets people to nod along (for the wrong reasons) or has some people really like it and other people really dislike it is often the sort of thing where we go “hmm, can we do better?”

Thank you for this clarification.
I think every debrief document I’ve interacted with (which are all from before CFAR got a permanent venue) included a section on “thoughts on the venue and layout” as well as “thoughts on food and snacks”, which usually discussed the effects of how the food and snacks were handled and how the venue seemed to affect the workshop (and whether CFAR should go back to that venue in the future). I am not sure whether that meets your threshold for systematicness, but it should at least allow a cross-verification of the listed patterns with observations at the time of the workshops in different conditions.
It’s a start, at least! If all the parameters (e.g., CRI / color temperature / etc. of the lighting, and of course furniture layout and so on) were recorded each time, and if notes on effects were taken consistently, then this should allow at least some rough spotting of patterns. Is this data available somewhere, in aggregated form? How comprehensive is it (i.e., how far back does it date, and how complete is the coverage of CFAR events)?
My guess is someone could dig up the debriefs for probably almost all workshops for the past 4 years, though synthesizing that is probably multiple days of work. I don’t expect specific things like CRI to have been recorded, but I do expect the sections to say stuff like “all the rooms were too dark, and this one room had an LED light in it that gave me headaches, and I’ve also heard from one attendee that they didn’t like being in that room”, which would allow you to derive a bunch of those parameters from context.
See this comment elsethread. To summarize a bit: it was not (and is not) my intention to ask or require anyone to do the sort of digging and synthesis that you describe[1]. Rather, I was wondering how the specific recommendations listed in Adam Scholl’s comment were arrived at (if not via a process even as systematic as synthesis from informal debriefs)—and, in consequence, how exactly those recommendations are to be understood (that is: “this is one point in the space of possibilities which we have stumbled upon and which seems good”? or, “this is the optimal point in the possibility space”? what are we to understand about the shape of the surrounding fitness landscape across the dimensions described? etc.).
[1] Though of course it would be interesting to do, regardless! If the debriefs can be made available for public download, en masse, I suspect a number of people would be interested in sifting through them for this sort of data, and much other interesting info as well.