I stumbled into a satisfying visual description:
3D “Renderers” of Random Sampling/Monte Carlo estimation
This came out of thinking about explaining sparseness in higher dimensions*, and I started getting into visuals (as I am known to do). I’ve seen most of the things below described in 2D (sometimes incredibly well!), but 3D is a tad trickier, while not being nearly as painfully unintuitive as 4D. Here’s what I came up with!
Also, someone’s probably already done this, but… shrug?
The Setup
In a simple case of Monte Carlo/random-sampling from a three-dimensional space with a one-dimensional binary or float output...
You’re trying to work out what’s in a sealed box using only a thin needle (an analogy for the function that produces your results). The needle gives you a limited number of draws, but you can aim it really well: each draw tells you what color the object in the box is at the exact point you chose to measure. You also have an open box in which to mark the exact spot in space where you made each draw, using Unobtainium to keep the marker in place.
Each of your samples is a fixed point in space, with a color set by the needle’s (i.e., the function/sampler’s) output. These are the cores you’ll base your render around. From there, you can...
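If you want to poke at this yourself, here’s a minimal sketch of the setup in Python. The hidden object (a sphere), the `needle` function, and every name in it are assumptions I made up for illustration; the real “needle” is whatever expensive function you’re sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sealed box: a hidden function we can only probe pointwise.
# Here the object is a sphere of radius 0.3 centered in the unit cube; the
# "needle" reports 1.0 (colored) at points inside it and 0.0 (clear) outside.
def needle(points):
    return (np.linalg.norm(points - 0.5, axis=1) < 0.3).astype(float)

# A limited number of draws, each aimed at an exact point we chose.
n_draws = 500
sample_points = rng.uniform(0.0, 1.0, size=(n_draws, 3))  # where we poked
sample_colors = needle(sample_points)                      # what we saw there
```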
Nearest Neighbor Render
For Nearest Neighbor (NN), picture situating colored-or-clear balloons at each of the sample points, then simultaneously blowing them all up with their sample point as a loose center. Gradually, this gives you a bad “rendering” of whatever the color-results were describing.
(Technically it’s Voronoi regions; balloons are more evocative)
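Continuing the setup sketch above (it reuses `sample_points` and `sample_colors`; the 32-per-axis grid resolution is my arbitrary choice), the balloon-inflation picture is literally a nearest-neighbor lookup over a voxel grid:

```python
from scipy.spatial import cKDTree

# "Render" onto a coarse 32x32x32 voxel grid covering the unit cube.
axis = np.linspace(0.0, 1.0, 32)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
voxels = grid.reshape(-1, 3)

# Each voxel takes the color of its closest sample point -- i.e., the
# balloon/Voronoi cell it ends up inside once everything finishes inflating.
_, nearest = cKDTree(sample_points).query(voxels)
render_nn = sample_colors[nearest].reshape(32, 32, 32)
```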
Decision Trees
With decision trees, you fill in the space using colored rectangular blocks of varying sizes, split around the sample points.
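As a sketch (using scikit-learn’s stock regression tree on the same samples and voxel grid as above; the depth is an arbitrary choice of mine), the blocks fall out of the fitted tree’s leaves:

```python
from sklearn.tree import DecisionTreeRegressor

# Each leaf of the fitted tree is an axis-aligned rectangular block with a
# single predicted color; deeper trees mean smaller blocks near the samples.
blocks = DecisionTreeRegressor(max_depth=8, random_state=0)
blocks.fit(sample_points, sample_colors)
render_tree = blocks.predict(voxels).reshape(32, 32, 32)
```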
Gradients
Most other methods involve creating a gradient between two points that yielded different colors. It could be a linear gradient, an exponential gradient, a step function, a sine wave, some other complicated gradient… the more complicated you get, the more you’d better hope that your priors were really good!
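The simplest case, a linear gradient between neighboring samples, is just scattered-data interpolation. Here’s a sketch using SciPy’s Delaunay-based linear interpolator (the 0.0 fallback color for voxels outside the samples’ convex hull is my arbitrary choice):

```python
from scipy.interpolate import LinearNDInterpolator

# Blend colors linearly across the tetrahedra spanned by the sample points.
# Fancier gradients (exponential, sine, ...) would swap in stronger priors.
lerp = LinearNDInterpolator(sample_points, sample_colors, fill_value=0.0)
render_gradient = lerp(voxels).reshape(32, 32, 32)
```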
Neural Net
A human with lots of prior experience looking at similar objects tries to infer what the rest looks like. (This is kinda a troll answer, I know. Still thinking about it.)
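If you swap the human for an actual network, the sketch has the same shape as the others; here’s a small MLP stand-in (the architecture and training settings are arbitrary choices of mine, not a recommendation):

```python
from sklearn.neural_network import MLPRegressor

# The "experienced observer": its priors live in its architecture and the
# smoothness of what it can represent, and it guesses everywhere it wasn't shown.
observer = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
observer.fit(sample_points, sample_colors)
render_mlp = observer.predict(voxels).reshape(32, 32, 32)
```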
Sample All The Things!
3D Pointillism even just sounds like a really bad idea. Pointillism on a 2D surface in a 3D space, not as bad of an idea! Sparsity at work?
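Back-of-the-envelope numbers for why: dense dots scale with whatever you’re filling, so a surface stays affordable while a volume doesn’t.

```python
# At 100 dots per axis, pointillism on a 2D surface costs 100**2 draws,
# but filling the 3D box pixel-by-pixel costs 100**3 -- and the needle
# only has a limited number of draws.
per_axis = 100
print("2D surface:", per_axis**2)  # 10,000 draws
print("3D volume: ", per_axis**3)  # 1,000,000 draws
```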
Bonus
The Blind Men and an Elephant fable is an example of why you don’t want to do a render from a single datapoint, or else you might decide that the entire world is “pure rope.” If they’d known from sound where the others were located when each of them touched the elephant, and had listened to what the others were observing, they might have actually Nearest Neighbor’d themselves a pretty decent approximate elephant.
*Specifically, how it’s dramatically more effort to get a high-resolution pixel-by-pixel picture of a high-dimensional space. But if you’re willing to sacrifice accuracy for faster coverage, in high dimensions a single point’s Nearest Neighbor zone can sloppily cover a whole lot of ground (or multidimensional “volume”).
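To put rough numbers on that trade-off: a pixel-by-pixel grid grows exponentially with dimension, while a fixed sample budget’s nearest-neighbor cells always partition the whole space, however sloppily.

```python
# 10 grid points per axis means 10**d cells to fill exactly, but 1,000
# random samples give 1,000 Voronoi cells that jointly cover *everything*,
# in any number of dimensions.
for d in (2, 3, 10, 100):
    print(f"d={d:>3}: grid cells = {10.0**d:.0e}, NN cells = 1000")
```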