Still, performing a flop requires a number of operations at the assembly/machine level, and presumably many of the same operations are used when computing a hash. At the very least, you have to move data around in memory, add values, and so on. There should be some level of commensurability in that respect, right?
Unfortunately, there isn’t; in most architectures, the integer and bitwise operations that SHA256 uses and the floating-point operations that FLOPs measure aren’t even using the same silicon, except for some common parts that set up the operations but don’t limit the rate at which they’re done. A typical CPU will do both types of operations, just not with the same transistors, and not with any predictable ratio between the two performance numbers. A GPU will typically be specialized towards one or the other, and this is why AMD does so much better than nVidia. An FPGA or ASIC won’t do floating point at all.
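To make concrete what those integer and bitwise operations look like, here is a sketch of the core word functions from SHA-256's compression loop (the Ch, Maj, and Σ₀ functions defined in FIPS 180-4). Everything is 32-bit integer rotation, AND, OR, XOR, and NOT; no floating point appears anywhere.

```python
MASK32 = 0xFFFFFFFF  # keep results within a 32-bit word

def rotr(x, n):
    """Rotate a 32-bit word right by n bits."""
    return ((x >> n) | (x << (32 - n))) & MASK32

def ch(x, y, z):
    """'Choose': for each bit, take y where x is 1, else z."""
    return (x & y) ^ (~x & z & MASK32)

def maj(x, y, z):
    """'Majority': each output bit is the majority vote of the inputs."""
    return (x & y) ^ (x & z) ^ (y & z)

def big_sigma0(x):
    """One of SHA-256's mixing functions: XOR of three rotations."""
    return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)
```

A hardware hasher is essentially millions of copies of circuits like these; none of that silicon overlaps with a floating-point multiplier.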
But certainly all of these components can do floating point arithmetic, even if it requires special programming. People used computers to do decimal arithmetic before specialized floating-point units existed. And you wouldn’t say that an abacus can’t handle floating point arithmetic “because it has no mechanism to split the beads”.
In this case, the emulation would be going the other way—using floating point to emulate integer arithmetic. This can probably be done, but it’d be dramatically less efficient than regular integer arithmetic. (Note that “arithmetic” in this case means mainly bitwise rotation, AND, OR, and XOR).
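To illustrate just how lopsided that emulation would be, here is a hypothetical sketch of a 32-bit XOR built from nothing but floating-point primitives (divide, floor, fmod, add, multiply). It burns on the order of five flops per bit, thirty-two bits per XOR, where ordinary hardware does the whole thing in a single integer instruction:

```python
import math

def float_xor32(a, b):
    """Emulate 32-bit integer XOR using only floating-point operations.
    Inputs and output are floats holding integer values below 2**32
    (exactly representable in a double). Deliberately wasteful: it
    extracts and recombines each bit with float arithmetic."""
    result = 0.0
    for i in range(32):
        # peel off bit i of each operand using only float ops
        bit_a = math.fmod(math.floor(a / 2.0 ** i), 2.0)
        bit_b = math.fmod(math.floor(b / 2.0 ** i), 2.0)
        # XOR of two bits is their sum mod 2
        result += math.fmod(bit_a + bit_b, 2.0) * 2.0 ** i
    return result
```

So a FLOPs rating tells you almost nothing about hash throughput: the conversion factor is a large, architecture-dependent penalty, not a fixed ratio.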