AIMultiple Research

Dark side of neural networks explained [2024]


Neural networks are complex but just as exciting for many reasons. They also motivate us to better understand our own cognitive mechanisms and then reflect them in machines. We already have striking examples of deep neural networks, such as Google DeepMind’s AlphaGo, which beat Lee Sedol, winner of 18 world titles and widely considered the greatest player of the past decade.

Image classification, natural language processing, and computerized axial tomography classification are some of the areas where neural networks are used. Neural networks are smart within their specific domains but lack generalization capabilities; their intelligence needs adjustment.

Understand how neural networks work in 1 minute

Talking about neural nets without explaining how they work would be a bit pointless. So here’s the summary:

Neural nets are composed of neurons that each take an input parameter and manipulate it. You can think of a parameter as a pixel in an image that we want to classify. Synapses then connect neurons to other neurons. During training, each synapse gains a weight that amplifies or dampens the output of the neuron it carries. The network of neurons and the synapses connecting them forms the neural net, which uses training data to adjust the weights of its synapses. AI researchers then adjust the shape and size of the neural net to fine-tune it.
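To make the summary concrete, here is a minimal sketch of a single artificial neuron. All the numbers and names here are made up for illustration; they are not from any real trained network:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """A single artificial neuron: a weighted sum of inputs squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid keeps the output in (0, 1)

# Hypothetical example: two pixel brightnesses feeding one neuron.
pixels = [0.8, -0.5]    # input values
weights = [0.9, -0.4]   # learned synapse weights
print(round(neuron(pixels, weights), 3))
```

Training amounts to nudging those weights so that the neuron's output matches the training data more closely; stacking many such neurons gives the layers described below.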

This is the ultra-high-level view, but you can already see why there’s a dark side to neural nets.

“The Dark Side” of neural networks

An image recognition neural network can include millions of connections across many hidden layers before it reaches the conclusion that there’s a sunset in an image. There’s a widely misunderstood dark side to this process: most of the time, even the creators of the AI don’t know in detail which connections the network makes and what kinds of features its neurons learn. It is easy to compute the error at the output and adjust for it, but that doesn’t mean we know what goes on inside the numerous layers of neural connections. So this “dark side” is sometimes misread as something sinister, like the dark side in the Star Wars movies. It just means there’s too much data, and that it would take significant effort to see what’s happening. Still, it can give some people goosebumps. If AI developers had a better idea of what’s going on inside, it would be easier to develop more complex neural networks.

One of the best-known efforts to tackle this puzzle is Google’s DeepDream. DeepDream builds on a network developed for the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2014. The software is designed to detect faces and other patterns in images in order to classify them automatically. Once trained, however, the network can also be run in reverse: it is asked to adjust the original image slightly so that a given output neuron yields a higher activation. This can be used to visualize and better understand the emergent structure of the neural network, and it is the basis of the DeepDream concept. The optimization resembles backpropagation, the process of adjusting the weights of neural connections; however, instead of adjusting the network’s weights, the weights are held fixed and the input is adjusted.
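The "run the network in reverse" idea can be sketched on a toy scale. The following is not DeepDream itself (which works on a large convolutional network), just a minimal illustration of the same trick: hold the weights fixed and nudge the input uphill along the gradient of an output neuron's activation. All weights and values are invented for the example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def activate(image, weights):
    """Output neuron's activation for a flattened 'image' under fixed weights."""
    return sigmoid(sum(x * w for x, w in zip(image, weights)))

def dream_step(image, weights, lr=0.1):
    """One DeepDream-style step: adjust the *input*, not the weights,
    in the direction that increases the output activation."""
    a = activate(image, weights)
    grad = [a * (1 - a) * w for w in weights]   # d(activation)/d(pixel)
    return [x + lr * g for x, g in zip(image, grad)]

weights = [0.5, -1.0, 2.0]   # held fixed, as in DeepDream
image = [0.0, 0.0, 0.0]      # starting input (activation starts at 0.5)
for _ in range(50):
    image = dream_step(image, weights)
```

After the loop, the input has drifted toward whatever pattern excites the output neuron most, which is exactly what produces DeepDream's hallucinatory images at full scale.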

Figure: DeepDream output from an image recognition system that learned to identify animals (Source: Google)

Even though neural networks still have room to grow, they are already part of our daily lives. Thanks to services like Google Translate, we benefit from neural networks even when we are not aware of it. Just as in our daily lives, AI is also set to transform work. For applications of AI in the enterprise, you can check out AI applications in marketing, sales, customer service, IT, data, or analytics.

If you are still curious, want to learn the math behind neural nets, and want to explore a bit more of this dark area in an extremely simple example, here’s a more detailed explanation.

Deep neural nets in excruciating detail

Let’s assume we have a neural network fed by a four-pixel black-and-white camera, and we want it to identify whether the images captured are solid, vertical, diagonal, or horizontal. This can’t be done with simple rules about the brightness of individual pixels; their relation to other pixels also matters. To tackle this complication, we create an input neuron for every pixel and give each a value between -1 (black) and 1 (white) representing its brightness. New neurons are then created from weighted sums of these input neurons. Neurons are connected by synapses, like their counterparts in the human brain.

When connecting to a new neuron, the neural network weighs the value of each input neuron with a number between -1 and 1. In this case, the output neuron’s value can’t be larger than 1 because we have a fixed spectrum. If the weighted sum of the inputs falls outside that range, the network squashes it with a sigmoid-shaped function so the output stays within the fixed interval. In the diagrams, white connections carry positive values, black ones carry negative values, and the thickness of a line shows the magnitude of its weight.
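A rough sketch of one such hidden neuron in the four-pixel setup. The pixel values and weights are invented for illustration; tanh is used here as the sigmoid-shaped squashing function because it maps back into the [-1, 1] spectrum:

```python
import math

# Four pixel brightnesses, each in [-1, 1] (black to white).
# This hypothetical image is "horizontal": top row dark, bottom row light.
pixels = [-1.0, -1.0, 1.0, 1.0]

# Illustrative (not learned) weights in [-1, 1] for one hidden neuron.
weights = [-0.5, -0.5, 0.5, 0.5]

z = sum(p * w for p, w in zip(pixels, weights))  # weighted sum = 2.0 here
hidden = math.tanh(z)                            # squash back into [-1, 1]
print(round(hidden, 3))
```

This neuron responds strongly to dark-top/light-bottom images, so it acts as a small "horizontalness" detector; a real network would learn such weights rather than have them set by hand.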

Every new neuron is computed from 4 weighted values (there could just as well be 400 or 4 million), so we get 4 new neurons, which are in turn connected and weighted. This creates a layer.

This process continues until the neural network arrives at a clear reading of what’s in the image, expressed through our set of output values: solid, vertical, diagonal, and horizontal. There could be 4 or 4,000 layers, depending on what we want as an output. In the image below, we can see a hypothetical case of how our neural network concluded that the image is horizontal.

Black dots and connections show negative values, white ones show positive values, and gray ones are zeroes. Gray lines are left out of the diagram because they do not affect the output. Note that neural networks never give outputs that are exactly true, only as close to the truth as possible. For example, an output neuron with a positive value will never reach the maximum value we set for our interval in the first place. In other words, there is never certainty about the answers neural networks give, but they are very good approximations.

Clarifai.com’s image classification AI is a far more complex version of what is described above. As seen in the image below, instead of setting one of the outputs to “horizontal”, it is set to “sunset”. The output value for “sunset” is around 0.9, which tells us there is about a 90% probability that the picture shows a sunset.
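Reading such an output layer is just a matter of picking the label with the highest score. The labels and scores below are invented to mirror the sunset example, not taken from Clarifai's actual API:

```python
# Hypothetical output layer of an image classifier: each label gets a
# score in (0, 1); the highest is the network's best guess.
outputs = {"sunset": 0.9, "horizontal": 0.06, "vertical": 0.03, "diagonal": 0.01}

best = max(outputs, key=outputs.get)
print(best, outputs[best])   # the network is ~90% confident it sees a sunset
```

Note that even the winning score stays below 1.0, which is the "good approximation, never certainty" point from above in miniature.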

I hope I didn’t lose you in the detail. If I did, thankfully there are solution providers to take care of the science part of AI. Other articles about AI may also interest you if you want to learn more.

Cem Dilmegani
Principal Analyst


