Technology has been both good and bad from its beginning. The bad has come from us weaponizing it, whether the targets are images, ideas, or even people. Technology simply provides tools that are much better at doing the bad things we keep doing anyway.
We’re living in a period of technological wonderment, but many of the shiniest new technologies come with their own built-in potential for harm. Here are a few of the threats we think will be most prevalent in the future.
CYBERCRIMINALS AND RANSOMWARE
Hackers and cybercrime have been around for quite some time, but as more people come online, the threat grows. CISA (the Cybersecurity and Infrastructure Security Agency) has called ransomware “the most visible cybersecurity risk playing out across our nation’s networks.” CISA says that many attacks—in which a cybercriminal seizes and encrypts a person’s or organization’s data and then extorts the victim for cash—are never reported, because the victim organization pays off the cybercriminals and doesn’t want to publicize its insecure systems.
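The core of that seize-and-encrypt mechanism is ordinary symmetric encryption: once the attacker's key encrypts the data, the owner cannot read it without that key. Here is a minimal toy sketch of the principle using a repeating-key XOR cipher (real ransomware uses strong ciphers such as AES, and the sample "record" below is purely hypothetical):

```python
import os

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key. Applying the same
    # operation twice with the same key restores the original,
    # so one function both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical clinical record (illustration only)
record = b"Patient 4821: potassium 5.9 mmol/L (critical)"
key = os.urandom(32)  # the attacker keeps this secret

ciphertext = xor_crypt(record, key)
assert ciphertext != record                   # data is now unreadable
assert xor_crypt(ciphertext, key) == record   # only the key holder can recover it
```

The asymmetry of the situation is the whole business model: decryption is trivial for whoever holds the key, and effectively impossible for everyone else, which is why victims with life-critical data so often simply pay.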
Some data, like health information, is far more valuable to the owner and can yield a bigger payoff if held for ransom. Thieves can capture or quarantine large blocks of clinical information that’s critical for patient care, like test results or medication data. When lives are at stake, a hospital is in a poor position to negotiate. One hospital actually shut down permanently in November after a ransomware attack in August.
DEEPFAKES

Deepfakes are media in which a person in an existing image or video is replaced with someone else’s likeness using artificial neural networks. They are typically created by combining and superimposing existing media onto source media with machine-learning techniques known as autoencoders and generative adversarial networks.
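An autoencoder's core trick is learning to compress input into a small code and then reconstruct it. Below is a minimal numpy sketch of a linear autoencoder trained by gradient descent on random toy data — a deliberately simplified illustration of the idea, nothing close to an actual deepfake pipeline (the dimensions, learning rate, and data here are all arbitrary choices for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-dimensional input, compressed to a 3-number code.
X = rng.normal(size=(200, 8))
W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder weights
lr = 0.01

def loss(X, W_enc, W_dec):
    # Mean squared reconstruction error: how badly decode(encode(x)) misses x.
    recon = X @ W_enc @ W_dec
    return float(np.mean((recon - X) ** 2))

initial = loss(X, W_enc, W_dec)
for _ in range(500):
    code = X @ W_enc        # encode: squeeze 8 numbers down to 3
    recon = code @ W_dec    # decode: expand the code back to 8 numbers
    err = recon - X
    # Gradient descent on the reconstruction error
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
assert final < initial  # the network learned to reconstruct its input
```

In a face-swap, one decoder is trained to reconstruct person A and another to reconstruct person B from a shared encoder; feeding A's encoded face through B's decoder produces the swap. A generative adversarial network then refines realism by pitting a generator against a detector.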
Technology has turned impersonation into a far darker art. Algorithms that identify and analyze images have developed to the point where it’s possible to create convincing video or audio of a person doing or saying something they never did. Such “deepfake” content, skillfully created and deployed with the right subject matter at the right time, could cause serious harm to individuals, or even calamitous damage to whole nations. Imagine a deepfaked President Trump taking to Facebook to declare war on North Korea. Or a deepfake of Trump’s 2020 opponent saying something disparaging about black voters.
ARTIFICIALLY INTELLIGENT COMPUTERS
When we talk about artificial intelligence, there’s almost always someone there to offer calming words about how AI will work with humans and not against them. That may be perfectly true now, but the scale and complexity of neural networks is growing quickly. Elon Musk has said that AI is the biggest danger facing humankind.
Combined with advances in quantum computing, AI could evolve further than we can comprehend. The creation and training of deep neural networks is already a bit of a dark art, with secrets hidden inside a black box too complex for most people to understand. Neural networks are shaped through a long and convoluted process toward a desired result.
The bigger fear is that neural networks, given enough compute power, can learn from data far faster than humans can. Not only can they make inferences faster than the human brain, but they’re far more scalable: hundreds of machines can work together on the same complex problem. By comparison, the way humans share information with each other is woefully slow and bandwidth-constrained.