World's smallest computer and unhackable data top IBM's annual future-tech list
IBM Research has released its annual "5 in 5" list, outlining five technologies that the company believes will be instrumental in reshaping society and business over the next five years. This year's list has a strong focus on security and AI, predicting that by 2023 we'll have unhackable encryption methods, unbiased AI and mainstream quantum computers.
While the technologies on show are rooted in research coming out of IBM, the unpredictable nature of progress means they're not always on the money. Previous lists have had mixed success, from the spot-on prediction that we'd be unlocking our phones with iris scans by now, to the more ambitious idea that by 2017 we'd have real-time baby translators like that episode of The Simpsons.
Blockchains, "crypto-anchors" and the world's smallest computer could counteract counterfeiting
With the explosion of Bitcoin and other cryptocurrencies, blockchain seems like a bit of a buzzword lately, but the technology could have far-reaching applications. Basically, a blockchain is a distributed ledger that doesn't allow existing "blocks" of data to be edited, creating a secure database that clearly shows the entire history of whatever information is entered. The first blockchain was invented to secure Bitcoin transactions, but they could be applied to anything that needs this kind of security.
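That append-only property can be illustrated with a minimal hash chain in Python — a toy model of the idea, not how Bitcoin's blockchain is actually implemented:

```python
import hashlib

def block_hash(index, data, prev_hash):
    # Each block's hash covers its contents plus the previous block's hash,
    # so editing any earlier block invalidates everything after it.
    return hashlib.sha256(f"{index}|{data}|{prev_hash}".encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # the genesis block points at a dummy hash
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    # Recompute every hash and check the links between consecutive blocks.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["index"], block["data"], block["prev_hash"]):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = build_chain(["shipped from factory", "customs cleared", "delivered"])
print(is_valid(chain))       # True
chain[1]["data"] = "forged"  # tamper with an existing block
print(is_valid(chain))       # False
```

Because every block's hash depends on its predecessor, a forger would have to rewrite the entire tail of the chain — which, in a distributed ledger, the other participants would reject.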
In this case, it's fighting counterfeiting. Tiny, tamper-proof digital fingerprints that IBM calls crypto-anchors would be embedded into products or packaging, recording a product's entire journey from manufacturing to the end user. By scanning these tags, users can see the product's history in detail and be sure it's not a knockoff. These crypto-anchors could take the form of optical codes, edible magnetic ink splotches on pills or food products, or even the world's smallest computer: IBM says it has crammed up to a million transistors onto a chip the size of a grain of salt.
Quantum computers will become ubiquitous – for better or worse
Quantum computers are advancing fast, but for now they're still restricted to labs. Where traditional computers process information as bits, a series of ones and zeroes, quantum computers use qubits, which can represent a one, a zero, or a superposition of both at the same time. That allows them to perform massive numbers of calculations simultaneously, opening up the possibility of solving currently intractable problems.
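That "one, zero, or both" behavior can be sketched with a tiny state-vector simulation — a toy of the textbook model, not of real quantum hardware:

```python
import math

# A qubit is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measuring it yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
zero = (1.0, 0.0)  # the classical-like state |0>

def hadamard(state):
    # The Hadamard gate puts |0> into an equal superposition of 0 and 1.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

superposed = hadamard(zero)
p0, p1 = probabilities(superposed)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5 — both outcomes equally likely
```

A real quantum computer exploits superposition and entanglement across many such qubits at once, which is what classical simulations like this cannot scale to.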
But for quantum computers to become commercially useful, scientists predict they'll need to be built with at least 1 million qubits. Our current most advanced prototype is Google's Bristlecone, boasting 72 qubits, so there's still a long way to go, but IBM is confident that they'll be out of the lab and into the hands of the public within five years.
Lattice cryptography will be unhackable
As useful as they may be for medicine and data centers, if quantum computers were to fall into the wrong hands they could be used to brute-force their way through our current best encryption methods. So, IBM is building some stronger defenses.
The new security method is known as lattice cryptography, which hides sensitive data inside elaborate, multi-dimensional lattices. These structures are so dense and complex that the researchers believe no algorithm will ever be able to crack them, allowing traditional computers to stand up to quantum cyberattacks.
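A flavor of how lattice-based schemes hide data can be given with a toy learning-with-errors (LWE) bit encryption in Python. The parameters below are far too small for real security and this is not IBM's scheme — it only demonstrates the mechanics of hiding a message inside noisy lattice equations:

```python
import random

# Toy LWE: security rests on the hardness of recovering the secret s
# from noisy inner products (a, a·s + e mod q). These parameters are
# purely illustrative, not secure.
q, n = 97, 8
secret = [random.randrange(q) for _ in range(n)]

def encrypt_bit(bit, s):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)  # small noise term that masks the equation
    b = (sum(x * y for x, y in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt_bit(ct, s):
    a, b = ct
    d = (b - sum(x * y for x, y in zip(a, s))) % q  # = e + bit*(q//2) mod q
    return 1 if q // 4 < d < 3 * q // 4 else 0      # closer to q/2 means bit was 1

print([decrypt_bit(encrypt_bit(b, secret), secret) for b in (0, 1, 1, 0)])  # [0, 1, 1, 0]
```

Anyone holding the secret can subtract off the inner product and read the bit; without it, an attacker faces a noisy lattice problem that is believed to be hard even for quantum computers.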
This technology is also being used to build what's known as Fully Homomorphic Encryption (FHE). Normally, files are encrypted in transit and at rest, but must be decrypted before any computation can be performed on them. FHE allows calculations to run directly on encrypted data, meaning the information is never exposed. The other benefit is that the data remains anonymized, so, for example, credit agencies could calculate a client's credit score without specifically looking at their details, or medical professionals could share patient data without revealing identities.
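Full FHE is far more involved, but the underlying idea of computing on ciphertexts can be shown with textbook RSA, which happens to be multiplicatively homomorphic. This is a toy with tiny primes and no padding — not FHE, and not secure — purely to illustrate the principle:

```python
# Textbook RSA is multiplicatively homomorphic: the product of two
# ciphertexts decrypts to the product of the plaintexts. FHE schemes
# extend this idea to arbitrary computation.
p, q = 61, 53
n = p * q                 # modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse of e)

m1, m2 = 7, 3
c1, c2 = pow(m1, e, n), pow(m2, e, n)

# Multiply the ciphertexts without ever decrypting them...
c_product = (c1 * c2) % n
# ...and the result decrypts to m1 * m2.
print(pow(c_product, d, n))  # 21
```

A server could perform that multiplication on data it cannot read, which is exactly the property FHE generalizes to arbitrary programs.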
AI systems will be free of human biases
Like a child picking up their parents' bad habits, AI systems can inherit the unconscious biases of their creators. And with AI becoming ever more common and over 180 defined biases to try to avoid, that problem could keep getting worse.
Whether we realize it or not, our decisions can be influenced by racial, gender or ideological biases, and these can all be transferred to AI systems which we depend on to be impartial. The MIT-IBM Watson AI Lab is working to identify the kinds of principles that people use to make decisions, and teach AI to spot inconsistencies that might indicate a bias. Future AI systems could be trained to apply human values and principles to their decision-making.
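One simple check such a system might run is a fairness metric like demographic parity — comparing a model's approval rates across groups. This is a hypothetical sketch (IBM hasn't published which metrics its systems will use, and the data and threshold below are made up):

```python
# Demographic parity: a decision rule is suspect if its approval rate
# differs sharply between groups. The loan decisions and the 0.2
# threshold below are illustrative, not a standard.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rates(records):
    totals, approved = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + outcome
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, round(gap, 2))  # {'group_a': 0.75, 'group_b': 0.25} 0.5
flagged = gap > 0.2          # a large disparity triggers human review
```

A disparity like this doesn't prove bias on its own, but it is the kind of inconsistency an auditing system could surface for a human to investigate.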
Autonomous microscopes use plankton as living water sensors
Water is arguably the most important resource on Earth, yet the quality of water supplies can be hard to track in real time. Specialty sensors are often deployed, but they usually watch for specific markers and won't notice when other contaminants enter the ecosystem.
Conveniently, plankton, the microscopic organisms that inhabit many natural bodies of water, can act as natural sensors. Keeping a close eye on them can tell us a lot about the health of the water, and IBM says it's currently working on small, autonomous microscopes that could analyze and track plankton in the wild.
These microscopes could be fitted with AI systems to analyze the behavior and health of microorganisms in real time, alerting observers to changes in temperature or chemical composition to give early warnings for events like oil spills and algae blooms.
These five technologies sound like reasonable predictions based on the direction of today's technologies, but we'll reserve judgement for the versions of ourselves looking back from 2023.
For example, if gender is one of the parameters in a huge training set and, after training, the weight of the gender parameter turns out to be very low, that implies gender is not significant for prediction. However, manually adding weight to that parameter would nullify the training. If you think the result is wrong, get better or more training data. Don't twist the data or the algorithm to suit your bias.