The University of Utah's Independent Student Voice

The Daily Utah Chronicle


Head to Head: Rise Of Artificial Intelligence Could Limit Human Thinking


Should we be worried about the future of artificial intelligence?

Obviously, computer intelligence has many beneficial uses, and I’m grateful for those. But our reliance on these machines is troubling. I’m not just talking about how everyone is on their phones all the time. What I mean is that we sometimes sacrifice our autonomy and moral judgment in the name of cold, statistical calculations.

Even brilliant people fall into serious pitfalls when it comes to fear of machine learning. We see something like AlphaGo and start to worry, because here’s a machine that can beat the world champion at arguably the hardest game known to man, one with a seemingly uncountable number of variations. But even AlphaGo is really just an advanced sorting machine.

The idea of AI on the scale of human intelligence is still unfeasible for our generation. At best, computer intelligence means looking at patterns and applying solutions. That is a kind of thinking, but it is grounded in mathematics. For a long time, the only thing AI will get better at is looking at a larger history of specific quantities, interpreting that information in a specifically patterned way and surfacing the results. A program understands language similarly to how we do, inferring the meaning of a word from the words around it, but it does not think like a human. The words are merely mathematical inputs, and a new or uncommon word is impossible to interpret when the sample data is too small.
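To see what "meaning from surrounding words" looks like in practice, here is a toy sketch of the idea. The corpus, window size and word choices are all invented for illustration; real systems use far larger corpora and learned embeddings, but the principle is the same: a word is just a vector of co-occurrence counts, and a word absent from the data gets an empty vector.

```python
from collections import Counter
import math

# Invented toy corpus -- real systems train on millions of documents.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks rose on strong earnings",
]

def context_vector(word, sentences, window=2):
    """Count the words that co-occur with `word` within a small window."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

cat, dog, stocks = (context_vector(w, corpus) for w in ("cat", "dog", "stocks"))
# "cat" and "dog" share contexts ("the", "sat", "on"), so the program treats
# them as similar; "stocks" shares almost nothing with them. A word the corpus
# has never seen gets an empty vector -- the small-sample failure described above.
```

Nothing here resembles human understanding: the program never learns what a cat is, only which words tend to sit next to "cat."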

We like to personify artificial intelligence because that’s just what we do as humans. We look at our competitors and think, ‘What if they’re like me?’ We’ve been doing it with animals and mythical figures for centuries. Now we’re competing with machines in games and in the workplace, so it’s only natural that we would personify them.

Still, there are reasonable fears we should be aware of as computer intelligence pervades every aspect of our lives. Who knows how many jobs have been made obsolete thanks to machine assembly? I’m also concerned when it comes to writing. While the human mind is incredibly inventive, popular news articles typically aren’t. Considering the vast amount of written material available on the internet, it’s not infeasible that a program could essentially “write” web articles by sampling millions of existing articles and sorting through the data to assemble a cohesive new one. It may sound a little crazy, but consider this: a few weeks ago, a program co-wrote a short novel that came close to winning a national literary contest in Japan. Its human co-author fed the computer many samples of existing fiction, and it successfully produced a coherent story from the data.

Also, consider how Google always seems to know what recommendations to show you. This isn’t based on human intervention; it’s a program that “reads” the content you’re viewing and uses word frequency and sentence similarity to compare it to other works across the web. It’s not infeasible that a computer could write a perfectly cohesive summary of the Zika virus simply by sampling two or three news sources and a few books, organizing the information in a new and interesting way.
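A frequency-based recommender of the kind described above can be sketched in a few lines. The article titles and text below are invented stand-ins, and real recommendation systems layer in click data and learned weights, but the bare mechanism is just counting words and comparing the counts:

```python
from collections import Counter
import math

# Hypothetical mini-catalog of articles (invented text for illustration).
articles = {
    "zika_news": "zika virus spreads mosquito outbreak health officials warn",
    "zika_guide": "health guide zika virus symptoms mosquito bite prevention",
    "sports": "utah jazz win overtime thriller against the spurs",
}

def word_freq(text):
    """Raw word-frequency vector: the only 'reading' the program does."""
    return Counter(text.split())

def similarity(a, b):
    """Cosine similarity of two word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(current, catalog):
    """Rank every other article by how closely its word counts match."""
    cur = word_freq(catalog[current])
    scores = {k: similarity(cur, word_freq(v)) for k, v in catalog.items() if k != current}
    return max(scores, key=scores.get)
```

Reading the Zika news article, the program recommends the Zika guide purely because they share words like "zika," "virus" and "mosquito"; no one at Google decided the two belonged together.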

The real fear is how much we’ll come to rely on machine learning to solve problems. Computer systems are incredibly rigid: if an option is not accounted for, it doesn’t exist. Many of our problems can be addressed with quantitative research, but a conclusion can’t be based on numbers alone. Yet as computers become more and more ingrained in the functioning of our society, we rely more on their reasoning than on our own. When a computer says “no,” does that really mean no? The rigidity of code cannot account for the unique case, and the simplest answer to give is always “no.” While the human mind is flexible and can adjust to circumstances, a program is ruled by its systems and cannot account for an option its developer hadn’t already prepared for.
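That rigidity is easy to demonstrate. The claim screener below is entirely hypothetical, with rules and thresholds invented for the example, but it shows the structural problem: every case the developer did not enumerate falls through to the same default answer.

```python
def screen_claim(claim_type, amount):
    """Approve only the exact cases the developer anticipated."""
    if claim_type == "auto" and amount <= 5000:
        return "approved"
    if claim_type == "home" and amount <= 10000:
        return "approved"
    # A flood claim, a joint claim, a typo in the form -- anything
    # the developer didn't foresee -- all collapse into one answer.
    return "denied"
```

A human reviewer could notice that a $100 flood claim is trivially legitimate; the program cannot, because "flood" was never written into its branches. The unique case and the unanticipated case get the same "no."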

The problem could be anything: filling out an insurance claim, or deciding how many battles the Allies needed to lose so Germany wouldn’t suspect the Enigma code had been broken. Quantitative reasoning simply interprets all qualitative values as numbers, looks for the pattern or equation and produces an output. Its ease of use, without the need for qualitative judgment, makes machine thinking very appealing, and it has caught on nearly everywhere. But we still have everyday problems that require contextual understanding a machine cannot supply. We see it when Siri responds to mentions of rape or depression with “I don’t know what that is,” or when facial recognition software can’t identify darker skin tones.

So yes, I am worried about the future of AI — not because I’m afraid of its ultimate domination or subjugation of humankind but because I’m worried about its effect on the future of human thinking. Computer programs are designed to work for the majority of cases, but they are also inherently limited, and potentially, limiting. We’ve all been in the minority at some point, where the computer has said, “I’m afraid I can’t do that.”

[email protected]
