There seems to be a consensus nowadays that medical information in the United States doubles every 73 days, and that its volume was estimated to reach a hefty 2.3 zettabytes (ZB) by 2020. But even these optimistic calculations do not take into account what happened this month, when Google announced “Towards Federated Learning at Scale: System Design” and happily stated:
“We have reached a state of maturity sufficient to deploy the system in production and solve applied learning problems over tens of millions of real-world devices; we anticipate uses where the number of devices reaches billions”.
It is now clear that after the renewed interest in ML (Machine Learning, 2013-2016), followed by DL (Deep Learning, 2016-2019), the next deep dive of Neuromorphic Computing will be FL (Federated Learning, 2019-).
Algorithms have been around since the dawn of humanity, but they had to be made by humans so that, together with their input, they produced a desired output. With ML, humans enter the input and the desired output, and out comes the algorithm, made by an algorithm.
ML models no longer need to be programmed, but they still need to be trained by humans, and they need their data presented in a structured way. So a lot of human labor goes into collecting, cleaning and labeling data, and into evaluating models and steering them in the right direction.
DL trains itself by processing data through multi-layered networks; it needs almost no human intervention, since the machine learns from its own mistakes. The remaining problem, however, is the huge amount of front-end human labor in collecting data in the right formats and sending it to a central location in the cloud, where there is sufficient storage and compute power. The move to the cloud also raises new worries about privacy, security and ownership (what happens with my data and the insights derived from it?).
With FL the cloud is partly replaced by the crowd: users of an app collect data, then train, compute and evaluate on that data locally. Everything happens on the phone, and the raw data never leaves it. Users then federate their results globally by sending their “insights” (technically, a model update or gradient) to the cloud, where all these gradients are averaged; each user gets an updated model back that improves their local predictions.
This greatly reduces concerns about privacy (the data never leaves the phone, only the encrypted gradient), ownership (users own their data and their updates) and security (there is no single point of failure, and hackers cannot hack millions of phones one by one).
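The round-trip described above can be sketched in a few lines. This is a toy illustration with a one-parameter linear model and made-up numbers, not Google's production system; the function names are hypothetical, and the size-weighted averaging follows the general federated averaging (FedAvg) idea.

```python
# Toy sketch of federated averaging: each "phone" trains locally and
# sends back only its model update; the server averages the updates.

def local_update(weights, data, lr=0.01):
    """One round of local gradient descent on a phone.
    `data` is a list of (x, y) pairs; the model is y ~ w * x."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error
        w -= lr * grad
    return w - weights               # only the update leaves the phone

def federated_average(weights, updates, sizes):
    """Server side: average the updates, weighted by local dataset size."""
    total = sum(sizes)
    avg = sum(u * n for u, n in zip(updates, sizes)) / total
    return weights + avg             # improved global model, sent back

# Three phones with private data drawn from roughly y = 3x
phones = [[(1, 3.1), (2, 5.9)], [(1, 2.8)], [(3, 9.2), (4, 11.8)]]
w = 0.0
for _ in range(50):                  # 50 federation rounds
    updates = [local_update(w, d) for d in phones]
    w = federated_average(w, updates, [len(d) for d in phones])
print(round(w, 2))                   # converges near the true slope of 3
```

The key property is in `local_update`: the raw (x, y) pairs stay on the phone, and only the scalar update crosses the network.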
But we should think beyond the smartphone and its users. Just as algorithms have started to write themselves, devices will start to collect information from each other without human intervention.
Let’s think of cheap MCUs (microcontroller units: small computers on a single integrated circuit) deployable anywhere, without mains power, docking or battery replacement. MCUs will behave like many insect species, including ants and bees, which work together in colonies and whose cooperative behavior determines the survival of the entire group. The group operates like a single organism, with each individual in the colony acting like a cell in the body; it becomes a “superorganism”.
Federated Deep Learning only needs these small players, like insects, ants, critters and bees, to create big and smart things with immense, complex and adaptive social power and ambitious missions.
The future of Federated Deep Learning seems clearer every day: smartphones (mostly the spectrometer in the smartphone will be used) and MCUs consisting of a coin battery, a microcontroller and a sensor, waking once a second so that they can run self-sufficiently for more than a year. Hundreds of billions of MCUs are already embedded in our environment, and many billions more will be sold as the price falls asymptotically toward zero.
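The coin-cell claim can be sanity-checked with back-of-the-envelope arithmetic. All the numbers below (cell capacity, sleep and burst currents, burst length) are illustrative assumptions, not measurements from any particular MCU:

```python
# Back-of-the-envelope check of the "coin cell for over a year" claim.
# Assumptions: a CR2032 holds roughly 225 mAh; the MCU sleeps at 2 uA
# and wakes once a second for a 5 ms burst drawing 2 mA.

CAPACITY_MAH = 225      # nominal CR2032 coin-cell capacity
SLEEP_UA     = 2        # deep-sleep current, microamps
ACTIVE_MA    = 2        # current during the wake-up burst, milliamps
BURST_S      = 0.005    # one 5 ms burst per second

# Average current in milliamps: sleep floor plus the duty-cycled burst.
avg_ma = SLEEP_UA / 1000 + ACTIVE_MA * BURST_S
hours = CAPACITY_MAH / avg_ma
print(f"{hours / 24 / 365:.1f} years")   # about 2.1 years on these assumptions
```

On these assumptions the budget comfortably exceeds a year; a longer or hungrier burst eats into it quickly, which is why the once-a-second duty cycle matters.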
Google’s Pete Warden says “Deep learning is compute-bound. Most of the time for neural networks is spent multiplying large matrices together, where the same numbers are used repeatedly in different combinations. This means that the CPU spends most of its time doing the arithmetic to multiply two cached numbers together, and much less time fetching new values from memory.”
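Warden’s point can be made concrete with a rough operation count. For square N x N matrices, a naive multiply performs N^3 multiplications over only 2N^2 distinct input values, so each value fetched from memory is reused many times (the layer size below is illustrative):

```python
# Rough illustration of why matrix multiplication is compute-bound:
# far more arithmetic operations than distinct values to fetch.

N = 1024                        # illustrative square matrix size
multiplies = N ** 3             # one multiply per (i, j, k) triple
distinct_values = 2 * N ** 2    # every element of A and of B
reuse = multiplies / distinct_values
print(reuse)                    # each fetched value feeds N/2 = 512 multiplies
```

The ratio grows linearly with N, so once the operands fit in cache, the CPU spends its time multiplying cached numbers rather than waiting on memory, which is Warden’s point.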
But these devices will need a new and more robust medium to propagate.
Enter 5G, ideal for this kind of Federated Learning and for IoT devices based on MCUs.
Massive multiple-input multiple-output (MIMO) antenna systems, millimeter-wave communications, and ultra-dense 5G networks will soon make us always-on in real time. Cheap vertical brains will be embedded in meshed MCUs running autonomously for years on coin cells, continuously extracting meaning from noisy signals and harnessing a person’s “digital exhaust” and “digital emissions” with Federated Deep Learning neural networks. A new near-zero-energy data skin is going to clothe the planet.
Each device (physical or otherwise) will be connected and streaming data to millions of other smart data-capture devices that will be creating live models of their vertical worlds. That enriched information from millions of graphics processing units will be sold back to other objects or to their carbon, silicon or neuron users.
Passive collection will be monetized and become the service industry of virtual reality (VR), which will create parallel existential dimensions as a service.
That changes the data picture dramatically. Perhaps the future of medical data does not grow by a factor of 32 (2^5, five doublings) every year. With the coming data storm of 5G-MCU-FL, both the base and the exponent may grow, and instead of an exponential future for medical data we may be heading into a factorial one (n!).
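The two regimes can be compared directly. The 32x-per-year factor comes from the text (doubling every 73 days, about five doublings a year); the crossover year below is just arithmetic, not a forecast:

```python
import math

# Compare exponential growth at 32x per year (2^5) with factorial
# growth (n!). The exponential dominates at first, but the factorial
# eventually overtakes it, because its "base" grows with n.

for n in (1, 5, 10):
    print(n, 32 ** n, math.factorial(n))

# First year in which n! exceeds 32^n:
n = 1
while math.factorial(n) <= 32 ** n:
    n += 1
print(n)   # the crossover is decades out, but after it n! runs away
```

So a factorial regime starts out looking tamer than the exponential one; the dramatic divergence only shows up over long horizons, which is the force of the argument above.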
In such a future we will have no choice but to outsource the journey of exploring our bodies and minds to machines. The fabric of our reality will no longer be based on matter or energy, but on data and code.
“We shape our tools and then our tools shape us.” (Marshall McLuhan)