Monday, August 2, 2021

Cloud and security: this is Cubbit’s swarm

A revolutionary idea: create a network of small boxes that keep your data always available, yet encrypted and readable only by its owner. Even when your Cubbit is offline, the data saved on the cloud remains accessible, because it is redundantly stored across other small storage boxes around the world. But no one can read it except its owner. This, in short, is the technology offered by Cubbit, built around a single mantra: creating an innovative digital “swarm”.

Cubbit’s infrastructure revolves around three players: the user, the swarm, and the coordinator. The user accesses Cubbit directly via a device (computer or phone). The swarm is a distributed, P2P network of Cubbit Cells where data is stored. The coordinator is a suite of machine-learning algorithms that optimizes payload distribution on the network, while also taking care of security and metadata; it is also in charge of triggering the recovery procedure for files on the swarm. These three components interact to enable safe, private cloud storage within a zero-knowledge architecture, ensuring that no one in the system, not even the coordinator, can access the users’ data.

Client-side encryption

The client generates a new AES-256 key and uses it to encrypt the file. To allow users to sign in and retrieve their keys from any device, this “file key” is stored on the coordinator in encrypted form, wrapped with a master key derived from the user’s password. This “zero-knowledge” scheme ensures that no third party, not even the coordinator, can access the user’s data and keys. The encrypted file is split into N chunks, from which K additional redundancy shards are computed using Reed-Solomon error-correcting codes. This allows the payload to be retrieved even if individual Cells go offline, as long as any N of the N+K hosting Cells can be reached. The parameters are dynamically chosen and optimized so that the probability of downtime is lower than 10^-6.
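The client-side pipeline can be sketched in Python. This is an illustrative sketch, not Cubbit’s actual code: the key-derivation function and its parameters are invented for the example, the payload is assumed to be already AES-256-encrypted (Python’s standard library has no AES), and a single XOR parity shard stands in for the real Reed-Solomon codes (i.e. K = 1, so any N of the N+1 shards suffice):

```python
import hashlib
import secrets

def derive_master_key(password: str, salt: bytes) -> bytes:
    """Derive the user's master key from the password (PBKDF2-HMAC here;
    the KDF choice and iteration count are illustrative, not Cubbit's)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def split_with_parity(encrypted_payload: bytes, n: int):
    """Split the (already encrypted) payload into n equal chunks plus one
    XOR parity shard -- a toy stand-in for Reed-Solomon with K = 1."""
    size = -(-len(encrypted_payload) // n)          # ceiling division
    padded = encrypted_payload.ljust(size * n, b"\x00")
    chunks = [padded[i * size:(i + 1) * size] for i in range(n)]
    parity = bytes(size)                            # all-zero accumulator
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks, parity

def recover_missing(shards):
    """Given the n+1 shards (chunks + parity) with exactly one replaced by
    None, rebuild the missing shard by XORing all the others."""
    present = [s for s in shards if s is not None]
    rebuilt = present[0]
    for s in present[1:]:
        rebuilt = bytes(a ^ b for a, b in zip(rebuilt, s))
    return rebuilt
```

With real Reed-Solomon codes the same idea generalizes: any N of the N+K shards are enough to rebuild the payload, which is why availability survives individual Cells going offline.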

Next, the file’s owner asks the coordinator for authorization to upload it to Cubbit. Besides granting this authorization, the coordinator assigns the file a location inside the swarm, determining which hosting peers are most suitable. To do so, it runs a fitness function designed both to minimize the risk of losing files to natural disasters and to guarantee consistent network performance. In other words, the coordinator spreads the chunks as far apart as possible, while also minimizing network latency and optimizing other factors (bandwidth usage, storage).
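As a rough illustration of what such a fitness function might look like (the actual scoring used by Cubbit’s coordinator is not public; the weights, field names, and greedy selection below are all invented for the example), one could reward geographic spread between hosting peers while penalizing latency:

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def fitness(peer, already_chosen, w_latency=10.0):
    """Toy score: reward distance from peers already holding shards
    (resilience to local disasters), penalize latency. Illustrative only."""
    spread = min((haversine_km(peer["loc"], c["loc"]) for c in already_chosen),
                 default=20_000.0)       # ~half Earth's circumference
    return spread - w_latency * peer["latency_ms"]

def pick_hosting_peers(candidates, count):
    """Greedily pick `count` peers, re-scoring after each choice so the
    selected shards end up geographically spread apart."""
    chosen, pool = [], list(candidates)
    for _ in range(count):
        best = max(pool, key=lambda p: fitness(p, chosen))
        pool.remove(best)
        chosen.append(best)
    return chosen
```

Even this toy version exhibits the behavior the article describes: once one peer in a region holds a shard, nearby candidates score poorly, so the next shard lands far away.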

Each of the N+K shards is stored on a different Cubbit Cell, called a ‘hosting peer’. This means that Cells don’t contain the user’s own files, but encrypted shards of other people’s files. To make this possible, the coordinator facilitates peer-to-peer connections when needed, acting as a handshake server. Thanks to Reed-Solomon coding, uptime is guaranteed as long as at least N hosting peers are online at the same time.
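Under the simplifying assumption that Cells go offline independently, the availability this scheme provides is a binomial tail: a file is retrievable whenever at least N of its N+K hosting peers are online. A quick sketch (the function name and the independence assumption are ours, not Cubbit’s):

```python
import math

def availability(n: int, k: int, p: float) -> float:
    """Probability that at least n of the n+k hosting peers are online,
    assuming each peer is independently online with probability p."""
    total = n + k
    return sum(math.comb(total, i) * p ** i * (1 - p) ** (total - i)
               for i in range(n, total + 1))
```

With no redundancy (K = 0) availability is just p^N, while each extra redundancy shard buys a sharp improvement; tuning N and K until the downtime probability 1 − availability drops below 10^-6 gives the kind of figure the article quotes.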

[Image courtesy of Cubbit]

Network self-healing

The coordinator monitors the uptime status of each Cell and triggers a recovery procedure when the number of online shards falls below a certain threshold – namely, N + K/2. In other words, if more than half of the K redundancy shards go offline, the coordinator alerts the remaining hosting peers, which in turn contact other Cells over peer-to-peer, end-to-end encrypted channels to restore the full complement of online shards. It is worth noting that peers can rebuild the missing shards without the intervention of the original owner, since they work on encrypted payloads. While the redundancy parameters alone are tuned to guarantee a statistical uptime of ca. 99.9999%, this recovery procedure pushes uptime virtually to 100% by handling history effects such as permanently disconnected peers and redistributing missing shards to new members of the swarm. This is how “zero-knowledge” cloud storage works.

The environmental element

According to the Cubbit team, internet infrastructure is responsible today for 10% of total worldwide energy demand. Data centers account for one third of that, making “the cloud”, despite its ephemeral name, an ecological monster that consumes as much energy as the entire United Kingdom (66 million inhabitants and the world’s fifth-largest economy). “Cubbit is based on small, optimized single-board computers, whose impact per GB is 10 times smaller than that of data-center racks. Moreover, it can leverage geographical proximity to avoid long data transfers, which, in certain cases, can consume as much energy as storage itself. The result is that with an average 5 TB storage plan, choosing Cubbit over traditional cloud storage saves the equivalent of a fridge’s always-on consumption for a year.”

The story so far

Cubbit was founded in 2016 in Bologna, Italy, by Alessandro Cillario (COO), Marco Moschettini (CTO), Stefano Onofri (CEO) and Lorenzo Posani (CSO). It is now backed by numerous highly renowned international businesses and institutions. Over the last four years, Cubbit has raised €3.3M from multiple investors and grants and has entered the top 1% of Kickstarter campaigns of all time. Cubbit counts Techstars and Barclays among its global partners and has achieved international recognition through multiple awards from organizations such as Mastercard and the European Commission.

Originally designed as a distributed cloud network, Cubbit launched its first consumer product, the Cubbit Cell, in 2019, and in 2020 the team activated a business service, Cubbit for Teams. Its crowdfunding campaign raised over $1,000,000 and gained support from more than 3,000 backers worldwide.

Antonino Caffo
I love technology in all its forms but I am particularly interested in consumer devices and cyber security. Quite curious about the new developments of the hyper-connected society. I'm almost always online, if it's not me it's my avatar.
