1000x1000x2000 is... hm, two billion points, so two gigabytes at one byte per point, and probably at least eight gigabytes once Java's type sizes come into it, possibly more depending on what exactly you're storing.
You probably want some form of sparse array, or something more suited to the problem at hand.
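Not your code, obviously, but just to make the suggestion concrete, here's a minimal sketch of one possible sparse representation in Java: a HashMap keyed by packed coordinates (the class and method names here are invented for illustration):

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch of a sparse voxel store: only occupied cells cost memory.
    // Coordinates are packed into a single long key; 21 bits per axis covers
    // 0..2,097,151, comfortably more than a 1000x1000x2000 grid.
    // Assumes non-negative coordinates.
    class SparseVoxels {
        private final Map<Long, Byte> cells = new HashMap<>();

        private static long key(int x, int y, int z) {
            return ((long) x << 42) | ((long) y << 21) | (long) z;
        }

        void set(int x, int y, int z, byte value) {
            cells.put(key(x, y, z), value);
        }

        byte get(int x, int y, int z) {
            return cells.getOrDefault(key(x, y, z), (byte) 0); // 0 = empty
        }
    }

One caveat: a boxed map like this is not free. Between each HashMap entry object, the boxed Long key, and the hash table itself, every occupied voxel can easily cost 50+ bytes, which matters once you have millions of them.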
It shouldn't be possible to crash your whole computer by running out of memory (the program just gets killed instead), but it can sometimes be hard to tell the difference once swap gets involved.
Oh, I'm using a sparse array all right; I "only" have to deal with, oh, 10,000,000 actual voxels. And to be fair, I was being colloquial, not accurate (in retrospect that was foolish on a site so closely linked to AI research): the JVM just threw an OutOfMemoryError and exited gracefully.
It varies by JVM, Java version, and operating system, but a modern 32-bit Windows environment will default to a maximum heap size of a quarter of physical memory or 256 MB, whichever is smaller, and even a 64-bit Windows or Linux server-mode Java environment usually defaults to only a gigabyte or two. With 'only' 10 million voxels, those defaults give you at best 200 bytes per voxel (2 GB spread over 10 million voxels), and in a plausible scenario just 25.6 bytes per voxel (256 MB over the same 10 million), which is deceptively easy to overfill.
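If you want to see which ceiling your own JVM actually picked, Runtime.maxMemory() reports the heap limit it will try to honor; a quick sanity check might look like this:

    // Quick check of the heap ceiling the JVM actually chose.
    public class HeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
            // Dividing this by your live voxel count gives your real per-voxel budget.
        }
    }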
You can change that with the -Xmx option when launching the JVM, which lets it use more of the available RAM and may at least buy you some time. A profiler will also give you a much better picture of where that space is actually going.
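For example, to allow a four-gigabyte heap (VoxelApp is just a placeholder for whatever your main class is):

    java -Xmx4g VoxelApp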
If you want the system to keep growing past its current size, though, you probably need a better way to compartmentalize the design and work on smaller subsets individually.
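One common shape for that, sketched here under the assumption that your access patterns are spatially clustered (all names invented), is to split the grid into fixed-size chunks of dense storage that can be created lazily and dropped independently:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a chunked voxel grid: the world is split into 32x32x32 blocks
    // of dense storage, created on first write and evictable one at a time.
    // Assumes non-negative coordinates.
    class ChunkedVoxels {
        private static final int SIZE = 32; // chunk edge length

        private final Map<Long, byte[]> chunks = new HashMap<>();

        private static long chunkKey(int cx, int cy, int cz) {
            return ((long) cx << 42) | ((long) cy << 21) | (long) cz;
        }

        void set(int x, int y, int z, byte value) {
            byte[] chunk = chunks.computeIfAbsent(
                    chunkKey(x / SIZE, y / SIZE, z / SIZE),
                    k -> new byte[SIZE * SIZE * SIZE]); // dense within the chunk
            chunk[((x % SIZE) * SIZE + (y % SIZE)) * SIZE + (z % SIZE)] = value;
        }

        void evict(int cx, int cy, int cz) {
            // Persist the chunk to disk first if you still need its contents.
            chunks.remove(chunkKey(cx, cy, cz));
        }
    }

Dense storage inside each chunk brings the per-voxel cost back down to roughly one byte, and the map overhead is paid once per chunk rather than once per voxel.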