Agner wrote:
These kinds of parallelism have no theoretical limits
There are very real theoretical limits to multiprocessing. In particular, within time t you can only access O(t^3) "computational resources" (abuse of notation, I know) - be they cores, units of memory, etc - and that's assuming 3D construction. With 2D construction, you can only access O(t^2) "computational resources".

It's even worse when you consider that heat dissipation, power delivery, information transfer, and structural strength scale with area, not volume. At large enough scales, even a 3D design actually bottlenecks at O(t^2) due to these effects.

Note that this may require reconsidering which algorithms to use for various things. A "constant-time" lookup into a sufficiently large memory is no longer actually constant-time, for instance: with n elements packed in 3D, the speed-of-light round trip alone costs Omega(n^(1/3)).
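To make the lookup point concrete, here is a minimal sketch of the speed-of-light lower bound on random access. The bit density is an illustrative assumption (one bit per cubic nanometre), not a claim about any real hardware; the only structural facts used are that n bits packed in 3D occupy a sphere of radius proportional to n^(1/3), and that a signal must make a round trip at most at c:

```python
import math

C = 3.0e8        # speed of light, m/s
DENSITY = 1e27   # assumed bit density: one bit per cubic nanometre (illustrative)

def min_access_latency(n_bits: float) -> float:
    """Lower bound on round-trip access time to a memory of n_bits,
    assuming bits packed in a sphere at DENSITY and signals limited
    by the speed of light. Scales as n_bits ** (1/3)."""
    radius = (3.0 * n_bits / (4.0 * math.pi * DENSITY)) ** (1.0 / 3.0)
    return 2.0 * radius / C

for exp in (12, 18, 24):
    print(f"10^{exp} bits: latency >= {min_access_latency(10.0 ** exp):.3e} s")
```

Whatever density you plug in, the latency grows with the cube root of the memory size: making the memory 8x larger doubles the best-case access time, so "O(1) lookup" is really Omega(n^(1/3)) at scale (Omega(n^(1/2)) in 2D).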