An Overview Of The Distributed Computing Landscape

People have been trying to build distributed compute networks since the 1990s. In 1996, GIMPS used distributed compute to search for Mersenne primes, and in 1999, SETI@home used volunteers’ compute power to search for extraterrestrial life.

Now, 25 years later, the final pieces seem to be in place. Cryptocurrency makes machine-to-machine payments possible, which allows participants to get compensated for contributing CPU. Fields such as machine learning, 3D simulation and biological computation are driving up demand for compute resources.

We’ve been looking at distributed computing projects and wanted to share how different projects are tackling two problems: growing the number of machines connected to the network, and isolating tasks from the compute nodes they run on.

Below are our early findings. We hope they are useful; let us know if you have any feedback.

Approaches To Growing The Network

Metcalfe’s law applies to compute networks: the more machines there are on the network, the more likely it is that a machine will be available to accept a new task when needed.

Growing a compute network is difficult, especially as the space becomes increasingly crowded. To clarify: the issue isn’t that people have already installed one client and don’t want to install another, but rather that there is a lot of noise for a project to break through.

Here are four interesting approaches we are seeing:

Approach #1: Make it easy for anyone to participate in the network. One example of this is KingsDS (pre-beta). To join, all you need to do is visit a URL in the browser and let the tab run in the background.

Approach #2: Help other applications get compensated for pooling their own users’ resources. An example of this is FREEDcoin (pre-beta). They offer an SDK for game developers. When players launch games running the FREEDcoin SDK, they are given the opportunity to contribute their CPU in return for in-game prizes. It’s a win-win-win: FREEDcoin gets to add high-powered gaming PCs to their network, game developers can monetize their games without showing ads, and players have the opportunity to earn virtual prizes.

Approach #3: Build the client so that each node can both submit and complete tasks. Golem’s (beta) client can be used both to submit tasks and to compute them, which means each of their end users can also easily become a compute node. This helps them grow both sides of their network evenly.

Approach #4: Be the supplier of compute resources for other computing projects. One example is SONM (beta), a project trying to help other compute networks scale up quickly. With SONM’s open marketplace, machines can advertise how much RAM, CPU and GPU they have available in a standardized format. Any project using SONM can then search the entire SONM network for a machine with available resources.
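To make that last approach concrete, here is a rough Python sketch of what a marketplace-style match could look like. The field names, pricing unit and matching rule are illustrative assumptions on our part, not SONM’s actual advertisement format or protocol.

```python
# Hypothetical sketch of a marketplace-style resource match.
# Field names and the matching rule are illustrative, not SONM's real format.
from dataclasses import dataclass
from typing import List


@dataclass
class Advert:
    """A machine advertising its spare resources in a standardized format."""
    node_id: str
    ram_gb: int
    cpu_cores: int
    gpus: int
    price_per_hour: float  # assumed unit: tokens per hour


def find_candidates(adverts: List[Advert], ram_gb: int,
                    cpu_cores: int, gpus: int) -> List[Advert]:
    """Return machines that meet the task's minimum requirements, cheapest first."""
    matches = [a for a in adverts
               if a.ram_gb >= ram_gb and a.cpu_cores >= cpu_cores and a.gpus >= gpus]
    return sorted(matches, key=lambda a: a.price_per_hour)


if __name__ == "__main__":
    market = [
        Advert("node-a", ram_gb=64, cpu_cores=16, gpus=1, price_per_hour=0.8),
        Advert("node-b", ram_gb=16, cpu_cores=8, gpus=0, price_per_hour=0.2),
    ]
    # A rendering task that needs a GPU only matches node-a.
    print(find_candidates(market, ram_gb=32, cpu_cores=8, gpus=1))
```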

Approaches To Isolating Tasks From Host Machines

One challenge is ensuring that tasks cannot read or modify the memory of their host machine, and vice versa. If multiple tasks are running simultaneously on a machine, it’s important that they are isolated from each other as well.

Keeping data private is a tough challenge; even though SONM runs all tasks in Docker containers, they also have the partners that run their nodes sign NDAs. Most projects rely on existing container runtimes like Docker to satisfy this requirement, which makes sense: who wants to reinvent the wheel? However, there are two projects in this space doing something unique that are worth calling out.
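As a rough illustration of the container approach, here is a minimal Python sketch that shells out to the Docker CLI to run a task with no network access and capped resources. The image, command and limits are placeholder choices, and real networks layer scheduling, result verification and (as noted above) legal agreements on top of the runtime.

```python
# Minimal sketch of task isolation via the Docker CLI.
# The image and command below are placeholders, not any project's actual setup.
import subprocess


def run_isolated(image: str, command: list) -> str:
    """Run a task in a throwaway container with no network and capped resources."""
    docker_cmd = [
        "docker", "run",
        "--rm",              # remove the container when the task finishes
        "--network=none",    # no access to the host's network
        "--memory=512m",     # cap memory so a task can't starve the host
        "--cpus=1",          # cap CPU usage
        "--read-only",       # the task can't modify the container's filesystem
        image,
    ] + command
    result = subprocess.run(docker_cmd, capture_output=True, text=True, check=True)
    return result.stdout


if __name__ == "__main__":
    # Placeholder workload: hash a string inside the sandbox.
    print(run_isolated(
        "python:3.11-slim",
        ["python", "-c",
         "import hashlib; print(hashlib.sha256(b'task').hexdigest())"]))
```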

Enigma (pre-beta) is designing what they call “secret contracts.” These work much like smart contracts, but because every piece of data is split across the multiple nodes working on the same compute task, no single node can read the data. They do this using a cryptographic method developed in the 1980s called multi-party computation. Enigma is building out their own chain that will handle both the storage and the compute.

Keep (pre-beta) is another project taking a similar approach. They also use multi-party computation to shard encrypted data, so that computation can be performed without the compute nodes being able to read the input data. With Keep, the storage and compute of private data happen inside clusters, and the output gets published to the blockchain.
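For intuition, here is a toy Python illustration of additive secret sharing, one of the classic building blocks of multi-party computation. It only shows the core idea that individual shares reveal nothing on their own but can be combined; it is not Enigma’s or Keep’s actual protocol.

```python
# Toy illustration of additive secret sharing, a classic building block of
# multi-party computation. Not Enigma's or Keep's actual protocol.
import secrets

PRIME = 2**61 - 1  # all arithmetic happens modulo a large prime


def share(value: int, n_nodes: int) -> list:
    """Split a value into n random shares that sum to the value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_nodes - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list) -> int:
    """Combine all shares to recover the value."""
    return sum(shares) % PRIME


if __name__ == "__main__":
    a_shares = share(1200, n_nodes=3)  # no single node learns 1200
    b_shares = share(34, n_nodes=3)    # or 34
    # Each node adds its own shares locally; only the combined result is revealed.
    sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
    print(reconstruct(sum_shares))     # prints 1234
```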

One Last Thought: Narrow Vs. Broad Use Cases

There are two approaches one could take with a distributed computing project: build a general compute tool that accepts any workload, or accept only a narrow range of tasks.

Most of USV’s portfolio companies (e.g. Cloudflare, Stash, Carta) started by doing one thing, and that focus allowed them to grow and build a network and a platform around it.

I tend to think the same pattern will work well for compute networks: starting with one narrow use case (such as training machine learning models, rendering 3D shapes, or folding proteins) will help a project move quickly and, over time, grow into other compute areas.

Albert likens this to WeChat’s growth: WeChat started with chat, and the success of chat grew their network to the point where they could build other applications like payments, ecommerce and gaming. Now WeChat is a general-purpose tool.

There’s a question of which use case is the right one to start with. There seem to be two paths. One is starting with training machine learning models (machine learning is one of the drivers of increased demand for computing resources). The other is starting with a use case like 3D rendering or academic/scientific computation, where there is no private data to protect.

Wrapping Up

This space is early, but it’s an exciting prospect. Not only will greater competition among compute providers drive down prices and fuel innovation, but a new class of applications (such as VR and autonomous vehicles) may only become possible when distributed compute is hundreds of milliseconds closer to end devices than us-west-2.

If you have ideas or projects you’re working on, we want to hear from you. Reach out, I’m [email protected].
