|       | CPU speed | Bandwidth                       | Round-trip latency           | Latency × speed    |
|-------|-----------|---------------------------------|------------------------------|--------------------|
| Mars  | 20 MIPS   | 11.06 kbps (HGA), 600 bps (LGA) | 20-40 minutes during mission | 24 G instructions  |
| Paris | 1.5 GIPS  | 100 kbps - 50 Gbps              | 188 ms                       | 282 M instructions |
The comparison of the 24 × 10⁹ instructions wasted on the communications round trip to Mars versus the 282 × 10⁶ instructions wasted on the round trip to Paris is interesting. Using a 1.5 year system performance doubling time, this means that in 1.5 log2(24E9 / 282E6) = 9.6 years, talking to Paris will waste as many instructions as talking to Mars. And San Diegans will no longer be able to distinguish Parisians from Martians.
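The arithmetic behind these figures can be checked with a short back-of-the-envelope script; the values come from the table above, and the variable names are mine:

```python
import math

# Figures from the table (lower bound of the 20-40 minute Mars range)
mars_ips = 20e6          # 20 MIPS
mars_rtt_s = 20 * 60     # 20 minutes, in seconds
paris_ips = 1.5e9        # 1.5 GIPS
paris_rtt_s = 0.188      # 188 ms

# Instructions wasted waiting out one round trip = speed * latency
mars_wasted = mars_ips * mars_rtt_s      # ~24e9 instructions
paris_wasted = paris_ips * paris_rtt_s   # ~282e6 instructions

# Years until a Paris round trip wastes as many instructions as a
# Mars round trip today, assuming performance doubles every 1.5 years
years = 1.5 * math.log2(mars_wasted / paris_wasted)

print(f"Mars:  {mars_wasted:.3g} instructions")
print(f"Paris: {paris_wasted:.3g} instructions")
print(f"Crossover in {years:.1f} years")
```

Running this reproduces the 24 G, 282 M, and 9.6-year figures quoted in the text.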
Of course, there are many caveats. Perhaps in 9 years, wasting so many instructions will be no big deal. If the kinds of events we react to continue to have the same time scale (e.g., events in the physical world, such as those driving robotics control), then the fact that latency stays constant while processing speed increases is not a great concern. On the other hand, if the time scale of the events we deal with is shrinking at the same rate that processing speed is increasing (e.g., events that originate in software systems), then the latency-speed product is critical.
The security issue is critical. In traditional client-server, RPC-mediated applications, the client-side computation is not vulnerable to tampering: the server could lie about the results of the RPCs it returns, but that is all. The local computation can involve data that are private to the user (e.g., cryptographic keys, representations of electronic money, credit card numbers), and the results of that computation, modulo servers lying, will be correct. This is not at all the case with remote computation, and determining to what extent we can send code to remote servers and trust the results is critical to the feasibility of a general-purpose remote execution infrastructure.