Why IoT should become the “internet of transparency”

Algorithms are essential to IoT.

Connected devices drive our cars; control the light, heat and security of our homes; and shop for us. Wearable monitors track our heart rates and oxygen levels, tell us when to get up and how to move, and keep detailed logs of our location. Smart cities, powered by a host of IoT devices and applications, shape the lives of millions of people around the world by directing traffic, sanitation, public administration and security. The reach and influence of IoT in our daily lives would be unimaginable without algorithms, but how much do we know about how those algorithms work, what logic they follow and how secure they are?

Most algorithms operate at computational speeds and complexities that preclude effective human review; in other words, they work in a black box. On top of that, most IoT algorithms are proprietary, so they effectively work in a double black box. This status quo might be acceptable if the results were positive and the algorithms were sound. Unfortunately, that is not always the case.

When black box algorithms fail and cause material, physical, social or economic damage, they also hurt the IoT movement as a whole. Such failures erode the social and political trust the industry needs to drive wider adoption of smart devices, which is key to advancing the field.

Opaque algorithms can be costly, even deadly

Black box algorithms can cause significant real-world problems. For example, there is a nondescript stretch of road in Yosemite Valley, California that consistently confuses self-driving cars, and to this day we have no answer as to why. The open road is naturally full of risks and hazards, but what about your own home? Smart assistants are there to listen to your voice and carry out your wishes on shopping, heating, security and just about any other home function that lends itself to automation. But what happens when the smart assistant starts acting dumb and listens not to you but to the TV?

An anecdote circulating the internet tells of numerous smart home assistants initiating unwanted online purchases because Jim Patton, host of San Diego's CW6 News, uttered the phrase, "Alexa ordered me a dollhouse." Whether this really happened on such a grand scale is questionable. The real problem is that the dollhouse incident sounds entirely plausible and, once again, raises doubts about the inner workings of the IoT devices to which we have entrusted so much of our daily lives, comfort and safety.

From the IoT perspective, the intangible damage of such cases is considerable. When one autonomous vehicle fails, all autonomous vehicles take a reputational hit. When one smart home assistant does something stupid, the intelligence of every smart home assistant is called into question.

The data elephant in the room

Whenever an algorithm makes a wrong decision, its vendors promise a thorough investigation and a quick fix. However, because these algorithms are proprietary and built for profit, authorities and the general public have no way of verifying what improvements have actually been made. In the end, we have to take companies at their word. Repeated offenses make that a hard ask.

One of the main reasons companies do not disclose the inner workings of their algorithms – to the extent that they understand them themselves – is that they do not want to reveal everything they do with our data. Self-driving cars keep detailed logs of every trip. Home assistants track activity around the house; record temperature, light and volume settings; and keep your shopping list up to date. All this personally identifiable information is collected centrally, ostensibly so that algorithms can learn, and then flows into targeted advertising, detailed consumer profiles, behavioral nudges and outright manipulation.

Think back to when Cambridge Analytica effectively weaponized the profile information of 87 million unsuspecting social media users to misinform voters and may well have helped swing an entire U.S. presidential election. If your list of friends and a few online discussion groups are enough for an algorithm to identify the best ways to influence your beliefs and behaviors, what deeper and stronger manipulation could detailed logs of your heart rate, movement and sleep patterns enable?

Companies have a vested interest in keeping algorithms opaque because it allows them to tune those algorithms to their profit targets and to accumulate huge centralized databases of sensitive user data along the way. As more and more users wake up to this painful but necessary realization, IoT adoption and development slowly grinds to a halt, and skepticism piles up in front of the algorithmic progress that could have been. What should we do?

The transition to the “internet of transparency”

The most urgent task is to make algorithms more comprehensible and transparent. To maximize trust and eliminate the adverse effects of algorithmic opacity, IoT must become the “internet of transparency”. The industry can get there by decoupling AI from centralized data collection and by making as many algorithms as possible open source. Technologies such as masked federated learning and edge AI make these steps possible; what we need is the will to pursue them. It won’t be easy, and some big tech companies won’t yield without a fight, but we’ll all be better off on the other side.
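To make the decoupling concrete, here is a minimal sketch of plain federated averaging in Python – the simplest form of federated learning. Each device trains on its own data and shares only model weights with the server; the raw data never leaves the device. The masking layer that masked federated learning adds on top, and all production details, are omitted, and every name and number below is illustrative rather than any vendor's actual implementation.

```python
# Minimal federated averaging sketch (illustration only, not production code).
# Devices train locally and share ONLY weights; raw user data stays on-device.
# The masking/secure-aggregation layer of masked federated learning is omitted.

from typing import List
import numpy as np

def local_update(weights: np.ndarray, local_data: np.ndarray,
                 local_labels: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """One gradient step of linear regression on device-local data."""
    preds = local_data @ weights
    grad = local_data.T @ (preds - local_labels) / len(local_labels)
    return weights - lr * grad

def federated_average(client_weights: List[np.ndarray]) -> np.ndarray:
    """The server averages the clients' weights; it never sees raw data."""
    return np.mean(client_weights, axis=0)

# Simulate one small deployment: three devices, ten federation rounds.
rng = np.random.default_rng(0)
global_weights = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

for _ in range(10):
    updates = [local_update(global_weights, x, y) for x, y in clients]
    global_weights = federated_average(updates)
```

The design point is the one the paragraph above makes: the central party only ever handles aggregated model parameters, so there is no central trove of personally identifiable data to exploit or leak.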

About the author

Leif-Nissen Lundbæk, PhD, is co-founder and CEO of Xayn. His work focuses mainly on algorithms and applications for privacy-preserving AI. In 2017, he founded the privacy tech company together with professor and chief research officer Michael Huth and COO Felix Hahmann. The Xayn mobile app is a private search and discovery browser for the internet – combining a search engine, a discovery feed and a mobile browser with a focus on privacy, personalization and intuitive design. Winner of the first Porsche Innovation Contest, the Berlin-based AI company has worked with Porsche, Daimler, Deutsche Bahn and Siemens.
