The student news site of Whitney Young Magnet High School in Chicago, Illinois.

BEACON



Is Artificial Intelligence Even Intelligent?

Throughout current-day media, stories about AI abound. Whether praising it or warning of its dangers, nearly every source seems confident that it can be used as a tool and that it can simulate human behavior. But can it really be relied on as a tool? Can it be intelligent like a human? I believe that the current wave of AI reporting is not only mostly hype, but also fails to understand the fundamental mathematics behind the technology and why it cannot fully replicate human intelligence, only approximate it.

Imagine you want to build an AI that can tell the difference between dogs and cats. The input to the function, or model, would be all of the pixels in the image. Each pixel would be assigned a ‘neuron’ in the neural network. Each neuron would pass its output to each of the inputs of the next ‘layer’ of neurons, and this would repeat until the output layer is reached, with one neuron for cats and one for dogs.
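The layered structure described above can be sketched in a few lines of plain Python. This is only an illustration: the tiny 4-pixel “image,” the layer sizes, and all of the weights and biases here are made up, whereas a real classifier would learn them from thousands of labeled images.

```python
# Illustrative sketch of a tiny feedforward "cat vs. dog" classifier.
# All weights, biases, and the 4-pixel "image" are invented for demonstration.

def relu(x):
    """Activation function: outputs the maximum of 0 and the input."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """Each neuron sums its weighted inputs, adds its bias, then activates."""
    return [relu(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A 4-pixel grayscale "image", brightness values in [0, 1].
pixels = [0.2, 0.9, 0.4, 0.7]

# One hidden layer (3 neurons) feeding an output layer (2 neurons: cat, dog).
hidden = layer(pixels,
               weights=[[0.5, -0.2, 0.8, 0.1],
                        [-0.3, 0.6, 0.2, 0.4],
                        [0.7, 0.1, -0.5, 0.3]],
               biases=[0.0, 0.1, -0.2])
cat_score, dog_score = layer(hidden,
                             weights=[[0.6, -0.4, 0.9],
                                      [-0.1, 0.8, 0.2]],
                             biases=[0.05, -0.05])

label = "cat" if cat_score > dog_score else "dog"
print(label)
```

Each call to `layer` is one step of the repeated pass-everything-forward process the paragraph describes; the final two scores play the role of the cat and dog output neurons.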

The current ‘version’ of AI that has everyone on edge is generative AI, which is essentially the above method in reverse. It takes a prompt through an LLM (Large Language Model) and passes it into the neural network’s output layer (in the above example, the cat and dog neurons). Essentially, it uses the pre-trained weights and biases to construct an ‘input’ from the user’s prompt, recreating something resembling the ‘cat and dog’ images it was trained on. For example, a prompt of “dog” would be passed from the LLM to the output neuron corresponding to dogs, and the network would then be activated in reverse, generating the initial layer of pixels to be converted into an image.

The key here is how a neuron decides what to pass on to the next one. For this, the neuron takes the weights and biases of all of its inputs and passes the result through an activation function. By far the most common activation function is the ReLU (Rectified Linear Unit). Don’t be intimidated by the name: a ReLU simply outputs the maximum of 0 and its input. This is loosely modeled on the human brain’s action potential, in which a neuron only fires if its input is above a certain voltage threshold.
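Because the definition is so short, a ReLU and a full neuron fit in a few lines. The example inputs and weights below are arbitrary, chosen only to show the threshold behavior: a negative weighted sum produces no output at all.

```python
# A ReLU "fires" only when its input is positive,
# loosely analogous to a biological neuron's voltage threshold.

def relu(x):
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the activation.
    return relu(sum(w * i for w, i in zip(weights, inputs)) + bias)

print(relu(-2.0))  # below threshold: outputs 0.0
print(relu(1.5))   # above threshold: passes through unchanged, 1.5

# 0.5*1.0 + 0.8*(-1.0) + 0.1 is negative, so the neuron stays silent:
print(neuron([0.5, 0.8], [1.0, -1.0], 0.1))  # 0.0
```

The whole “intelligence” of a network comes from stacking millions of these one-line threshold functions, which is exactly what the argument below takes issue with.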

Data scientists and self-proclaimed ‘AI bros’ seem satisfied that a ReLU adequately simulates a human neuron, and they conclude that a sufficiently complex neural network therefore simulates intelligence, hence the name ‘artificial intelligence.’ However, there is no way to be certain that a ReLU is a sufficiently accurate representation of the human neuron. Its biology involves many complex factors (including hormones and blockers), so can it really be reduced to a single function? On top of that, the method of organizing neurons into ‘layers’ does not exist in the biological world. For example, these neural networks contain, by definition, no loops in any chain of neurons, while the human brain contains many such loops.

Even if it could successfully replicate these traits, the method of obtaining the output is flawed as well. How can it replicate human ingenuity when all the network creates is a statistical aggregation of its training data? There will never be a creative plot twist in any story it writes, nor a new perspective on an issue that has yet to be written about.

On top of that, it will never be able to form a line of reasoning about what it has done. To give an explanation in words, it would have to approximate what an explanation to such a question would look like, essentially without even knowing what it did itself. I don’t believe that this kind of neural network can, by any standard, be considered intelligent.
