Exponential Technology Report – 6 November 2019

This is the weekly report covering this week's news stories on exponential technologies.

We are holding an Exponential Organisation event on 27 November 2019. If you are in Johannesburg, it would be great to have you. Apart from the talks, WITS will be showcasing their impressive robotic hand development and Eden Labs will bring VR headsets to show us their amazing work. The cost is R495 and you can book your ticket on Quicket. I will have a link in my show notes as well.

A reminder that the exponential technologies fall into different categories, which are 3D printing and digital fabrication, Artificial intelligence (AI), Augmented and virtual reality (AR, VR), Autonomous vehicles, Blockchain, Data science, Digital biology and biotechnology, Digital medicine, Drone technology, Internet of things, Nanotechnology, Networks and computing systems, Quantum computing and Robotics.

The format I use for this report is to go through the exponential technologies in alphabetical order. All the links to the articles can be found by clicking on the image. I hope you enjoy it!

3D Printing and Digital Fabrication

Article: “Liquid-in-liquid printing method could put 3D-printed organs in reach” - www.sciencemag.org

3D-printed tissues and organs could revolutionize transplants, drug screens, and lab models—but replicating complicated body parts such as gastric tracts, windpipes, and blood vessels is a major challenge. That’s because these vascularized tissues are hard to build up in traditional solid layer-by-layer 3D printing without constructing supporting scaffolding that can later prove impossible to remove.

One potential solution is replacing these support structures with liquid—a specially designed fluid matrix into which liquid designs could be injected before the “ink” is set and the matrix is drained away. But past attempts to make such aqueous structures have literally collapsed, as their surfaces shrink and their structures crumple into useless blobs.

So, researchers from China turned to water-loving, or hydrophilic, liquid polymers that create a stable membrane where they meet, thanks to the attraction of their hydrogen bonds. The researchers say various polymer combinations could work; they used a polyethylene oxide matrix and an ink made of a long carbohydrate molecule called dextran.

They pumped their ink into the matrix with an injection nozzle that can move through the liquid and even suck up and rewrite lines that have already been drawn. The resulting liquid structures can hold their shape for as long as 10 days before they begin to merge, the team reported last month in Advanced Materials.

Using their new method, the researchers printed an assortment of complex shapes—including tornadoesque whirls, single and double helices (above), branched treelike shapes, and even one that resembles a goldfish. Once printing is finished, the shapes are set by adding polyvinyl alcohol to the inky portion of the structure. That means, the scientists say, that complex 3D-printed tissues made by including living cells in the ink could soon be within our grasp.

Artificial Intelligence (AI)

Article: “AI is making literary leaps – now we need the rules to catch up” – www.theguardian.com

Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it had been training an AI language model called GPT-2, and that it now “generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation – all without task-specific training”.

If true, this would be a big deal. But, said OpenAI, “due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”

Given that OpenAI describes itself as a research institute dedicated to “discovering and enacting the path to safe artificial general intelligence”, this cautious approach to releasing a potentially powerful and disruptive tool into the wild seemed appropriate. But it appears to have enraged many researchers in the AI field for whom “release early and release often” is a kind of mantra. After all, without full disclosure – of program code, training dataset, neural network weights, etc – how could independent researchers decide whether the claims made by OpenAI about its system were valid? The replicability of experiments is a cornerstone of scientific method, so the fact that some academic fields may be experiencing a “replication crisis” (a large number of studies that prove difficult or impossible to reproduce) is worrying. We don’t want the same to happen to AI.

If the row over GPT-2 has had one useful outcome, it is a growing realisation that the AI research community needs to come up with an agreed set of norms about what constitutes responsible publication (and therefore release). At the moment, as Prof Rebecca Crootof points out in an illuminating analysis on the Lawfare blog, there is no agreement about AI researchers’ publication obligations. And of all the proliferating “ethical” AI guidelines, only a few entities explicitly acknowledge that there may be times when limited release is appropriate. At the moment, the law has little to say about any of this – so we’re currently at the same stage as we were when governments first started thinking about regulating medicinal drugs.

Artificial Intelligence (AI)

Article: “'Terminator' at 35: How AI and the militarization of tech has evolved” – www.nbcnews.com

In the 35-year span since the launch of the first Terminator movie, a variety of technological advancements in AI and robotics have brought elements of "Terminator" closer to reality. Artificial intelligence experts are confident, however, that the kind of independent AI and humanoid robots of the movie franchise are still far off.

But they also offer a warning: the developments that people have made in AI and military technology could create their own kind of "Judgment Day."

"AI is a powerful technology, but it’s a tool, not unlike a pencil," Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, told NBC News. "How it’s used is in the hands of people."

AI may not yet boast self-awareness, but it already rivals and in some cases surpasses human intelligence across a range of applications, including reading CT scans and spotting shoplifters, as well as helping self-driving cars navigate crowded cities. Developers have not yet made artificially intelligent machines that look like Arnold Schwarzenegger, but they can at least make one sound exactly like podcast host Joe Rogan, to the point where it can fool human listeners.

Panicking at this stage of the technology's development, Etzioni said, "is like being worried about overpopulation on Mars before we even have gotten a person on Mars."

When might we feel the need to push the panic button? Estimates vary wildly. Some experts interviewed by NBC News predict the singularity — roughly defined as the time when an artificial intelligence will surpass human intelligence and be able to evolve autonomously — will arrive as soon as 15 years from now. Others say it will be closer to a century.

One point that everyone agrees on, however, is a computer will eventually surpass its creators. And when it does, it's not clear it will be possible to program enough safeguards for humans to remain the apex programmers.

Augmented and Virtual Reality (AR, VR)

Article: “Google update makes Chrome ready for web-based VR” - www.engadget.com

VR systems are getting more advanced, but they're still primarily available on niche hardware and software. That could be about to change, with the latest beta version of Google's Chrome browser supporting web-based VR.

The Chrome 79 beta adds support for web-based virtual reality (VR) experiences. Developers can create websites with content including games, 360-degree videos and immersive art using the WebXR Device API, with controllers supported by the Gamepad API. Sites can be displayed on a smartphone or on a head-mounted display such as an Oculus Quest.
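As a rough illustration (not code from the article), a site built on the WebXR Device API would typically feature-detect VR support before offering an "Enter VR" button. The standard entry points are `navigator.xr` and its `isSessionSupported` method; the helper below takes the XR object as a parameter so the logic can be exercised outside a browser:

```typescript
// Minimal sketch of WebXR feature detection, assuming the WebXR Device API
// as shipped behind the Chrome 79 beta. Browsers without WebXR simply have
// no `navigator.xr` object at all.

interface XRLike {
  isSessionSupported(mode: string): Promise<boolean>;
}

// Returns true only if the browser exposes WebXR AND reports that a fully
// immersive VR session (headset or phone-based) is available.
async function canEnterVR(xr: XRLike | undefined): Promise<boolean> {
  if (!xr) return false; // no WebXR support in this browser
  try {
    return await xr.isSessionSupported("immersive-vr");
  } catch {
    return false; // e.g. access blocked by permissions policy
  }
}
```

In a real page you would call `canEnterVR(navigator.xr)` and only reveal the VR entry point when the promise resolves to true, falling back to a flat 2D view otherwise.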

Support for web-based VR content will soon be coming to other browsers in addition to Chrome, including Firefox Reality, Oculus Browser, Edge and Magic Leap's Helio. In the future, Chrome and other browsers will support augmented reality features as well.

You can download the beta of Chrome 79 now, or wait for the features to make their way to the main Chrome browser on December 10th.

Autonomous Vehicles

Article: “MIT uses shadows to help autonomous vehicles see around corners” - www.techcrunch.com

We’re still not at the point where autonomous vehicle systems can best human drivers in all scenarios, but the hope is that eventually, technology being incorporated into self-driving cars will be capable of things humans can’t even fathom — like seeing around corners. There’s been a lot of work and research put into this concept over the years, but MIT’s newest system uses relatively affordable and readily available technology to pull off this seemingly magic trick.

MIT researchers (in a research project backed by Toyota Research Institute) created a system that uses minute changes in shadows to predict whether or not a vehicle can expect a moving object to come around a corner, which could be an effective system for use not only in self-driving cars, but also in robots that navigate shared spaces with humans — like autonomous hospital attendants, for instance.

This system employs standard optical cameras, and monitors changes in the strength and intensity of light using a series of computer vision techniques to arrive at a final determination of whether shadows are being projected by moving or stationary objects, and what the path of said object might be.
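The researchers' actual pipeline is far more sophisticated, but the core idea of amplifying small inter-frame changes in light intensity on the ground ahead can be sketched in a few lines. The function and gain value below are illustrative assumptions, not the MIT implementation:

```typescript
// Illustrative sketch only: detecting an approaching shadow by frame
// differencing. `prev` and `curr` hold grayscale intensities (0–255) for
// the same patch of floor in two consecutive camera frames; `gain`
// amplifies per-pixel changes too subtle for the human eye.

function shadowMotionScore(prev: number[], curr: number[], gain = 50): number {
  if (prev.length !== curr.length) throw new Error("frames must match in size");
  let total = 0;
  for (let i = 0; i < prev.length; i++) {
    // Amplify the tiny intensity change at each pixel.
    total += Math.abs(curr[i] - prev[i]) * gain;
  }
  return total / prev.length; // mean amplified change for the patch
}
```

A score that keeps rising across consecutive frames suggests a shadow, and hence a moving object, is approaching the corner before the object itself enters the camera's line of sight; a static scene scores near zero.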

In testing so far, this method has actually been able to best similar systems already in use that employ lidar imaging in place of photographic cameras and that don’t work around corners. In fact, it beats the lidar method by over half a second, which is a long time in the world of self-driving vehicles, and could mean the difference between avoiding an accident and, well, not.


Blockchain

Article: “Bitcoin Spikes As China Embraces Blockchain” – www.forbes.com

Bitcoin saw a major jolt last week after Chinese president Xi Jinping expressed his support for blockchain, saying the country needs to take advantage of the opportunities the technology provides. Bitcoin soared from $7,500 to $10,500 in just a few hours following Xi’s remarks.

Digital Biology and Biotechnology

Article: “Meet the pigs that could solve the human organ transplant crisis” - www.technologyreview.com

The facility lies midway between Munich’s city center and its international airport, roughly 23 miles to the north. From the outside, it still looks like the state-run farm it once was, but peer through the windows of the old farmhouse and you’ll see rooms stuffed with cutting-edge laboratory equipment.

When Kessler unlocks one pen to show off its resident, a young sow wanders out and starts exploring. Like other pigs here, the sow is left nameless, so her caregivers won’t get too attached. She has to be coaxed back behind a metal gate. To the untrained eye, she acts and looks like pretty much any other pig, but smaller.

It’s what’s inside this animal that matters. Her body has been made a little less pig-like, with four genetic modifications that make her organs more likely to be accepted when transplanted into a human. If all goes according to plan, the heart busily pumping inside a pig like this might one day beat instead inside a person.

Different types of tissues from genetically engineered pigs are already being tested in humans. In China, researchers have transplanted insulin-producing pancreatic islet cells from gene-edited pigs into people with diabetes. A team in South Korea says it’s ready to try transplanting pig corneas into people, once it gets government approval. And at Massachusetts General Hospital, researchers announced in October that they had used gene-edited pig skin as a temporary wound covering for a person with severe burns. The skin patch, they say, worked as effectively as human skin, which is much harder to obtain.

But when it comes to life-or-death organs, like hearts and livers, transplant surgeons still must rely on human parts. One day, the dream goes, genetically modified pigs like this sow will be sliced open, their hearts, kidneys, lungs and livers sped to transplant centers to save desperately sick patients from death.

Drone Technology

Article: “The Drone Wars Are Already Here” – www.bloomberg.com