Artificial intelligence has always been a hot topic. The idea of creating a machine that can think and make its own decisions, yet is free of human limitations, has always stirred controversy.
Why? Because what is artificial intelligence, really, and what do we know about it? Will it be humanity's best friend or its worst enemy?
We all remember what the infamous Skynet did to our planet in the Terminator series, right? We can also recall the cold shiver running down our spines when Neo woke up from the Matrix.
After watching such films, we may wonder why scientists want to build intelligent machines in the first place. One might ask: why put humanity at risk? And yet people keep working to make AI better, smarter and stronger.
But what if the benefits outweigh the risks?
In today’s post, I’d like to take a closer look at what artificial intelligence is and how it is already being used in business.
The risks of AI research
Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.
Stephen Hawking
Every person endowed with imagination can come up with at least a couple of examples of the risks associated with artificial intelligence.
We can start with a world war in which machines fight humans for superiority. It sounds like pure sci-fi, but it’s known as the AI takeover scenario.
In fact, a superintelligent machine would not be motivated by the same emotional desire for power that often drives human beings. A machine could be motivated to take over the world simply so it can achieve its goals.
Nick Bostrom, a Swedish philosopher, described in 2003 what could happen if people don't design “AI morality.” He imagined a machine designed solely to make paperclips, but never taught to value human life above all else.
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence"
Of course, such advanced technology is still unavailable, but it’s only a matter of time until it’s developed (some researchers predict human-level AI by 2060).
The problem of AI ethics is even more important than we can imagine. Suppose we program an AI to respect life above anything else: what happens if it interprets that as protecting our planet and all its species from humans?
That’s why many big names in science and technology, like Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates and others, point out how important it is to be cautious with artificial intelligence, as it might be a threat to mankind. Elon Musk has even donated a huge amount of money to research into the dangers of artificial intelligence.
AI takeover might be the darkest scenario, but it's not the only one.
Another problem is that many people are afraid of another technological revolution. Many of you might ask: where’s the problem? And I’d respond: in technological unemployment and the danger of an economic crisis.
We have already dealt with this in the 20th century. There were two large peaks in unemployment, both in the United States: in the 1930s and in the 1960s. The first ended when the US joined WWII in 1941 and the second when it entered the Vietnam War in 1965, so we cannot simply learn from our ancestors how to solve this problem.
Another problem AI opponents point out is that AI can be used as a weapon. In the wrong hands, it can turn into a weapon of mass destruction, lead to an AI arms race and, in the end, to an AI war.
Now that I have described such a terrifying vision of the future, it might seem that AI research is the worst idea humanity could have come up with.
But should we be afraid of it?
Let’s imagine for a moment that we have already solved the problem with AI morality and let’s take a look at the benefits it could give us.
Artificial intelligence benefits
Since Deep Blue defeated Garry Kasparov, the world chess champion, in 1997, nothing has been the same. We used to think we were the most intelligent creatures on Earth, and then, one day, we had to face the truth that the pupil had surpassed the master.
This news saddened many people, but is it really so bad?
We didn’t worry in 1961 when the world's first all-electronic desktop calculator was announced, so why should we now? Instead of treating AI as a threat, let’s treat it as an opportunity.
Artificial intelligence and machine learning are already used in tasks ranging from planning space missions to forecasting job growth. And you know what? The chances of error are much smaller than if these jobs were done by people.
Intelligent robots also make the best space explorers. This approach has a lot of benefits, but the most important are that no lives are at risk in case of an accident and that machines have a much better chance of surviving in a hostile environment. That is also why they can take on the most dangerous tasks.
Another benefit is that, since robots can overcome the limitations humans have, they can help us explore the Earth itself: dive into the oceans or dig beneath its surface. They can help us understand the world we live in even better.
There’s also one controversial benefit. Artificial intelligence is great at carrying out repetitive and time-consuming tasks more efficiently than people, with a smaller margin of error. Many people immediately think of unemployment here, but let’s look at the good side of it.
In 1870, the average American worker clocked up about 75 hours per week. Now, the average US workweek is 34.4 hours. To go even further, many companies in Sweden have shifted to a six-hour workday! Is that a dark side of technological progress? I don’t think so.
The best thing is that AI can be very helpful in the medical field. Algorithms can help assess patients’ health and predict whether side effects of medicines will occur. AI can be helpful in neurology, as it can stimulate and help diagnose the brain. Last but not least, one day it might turn out that artificial surgeons are more precise and less prone to mistakes than humans.
Sounds awesome, I know!
The artificial intelligence we’re already living with
Although the risks of implementing artificial intelligence are serious, and although many people are afraid of it, some might not realize that we already use AI in our everyday lives.
Are you a gamer? If so, you’ve probably dealt with AI more often than you think. AI has been used in video games since the very first titles, and it has developed a lot since then.
If you played “Assassin's Creed” and spent lots of time trying to outmaneuver this pesky bowman, you know what I’m talking about.
Remember Siri? The "intelligent assistant" announced by Apple in 2011, which lets you give your mobile device voice commands? Yup, that’s AI too.
The benefit of Siri is that you don’t have to type your question to perform a search or an operation. That’s why Siri is great for drivers, as it helps keep roads safe. It’s easy and comfortable to use, so it’s no surprise that since 2011 we’ve also been given Google Now, Cortana and Amazon Echo.
Such assistants are also being used in online customer service. And it’s very simple: you enter a website, you see a live chat, so you chat and get a response. The thing is, there may be no one behind the monitor, because you’re chatting with a bot.
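The simplest such bots just match keywords in the visitor's message against canned answers. Here is a minimal sketch of that idea; all the rules and replies below are made-up examples, not any real product's logic.

```python
# A toy keyword-matching support bot. The rules and canned answers
# are hypothetical examples for illustration only.

RULES = {
    "price": "Our basic plan starts at $16 per month.",
    "refund": "You can request a refund within 30 days of purchase.",
    "hello": "Hi! How can I help you today?",
}

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # No rule matched: hand the conversation over to a person.
    return "Let me connect you with a human agent."

print(reply("Hello there!"))        # the greeting rule matches
print(reply("What is the price?"))  # the pricing rule matches
```

Real chatbots layer intent classification and dialogue state on top of this, but the request-match-respond loop is the same.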
The funny thing is that even before chatbots became so popular, our LiveChat support was receiving questions asking whether “they’re real!” It seems people have been ready for chatbots for a long time. (If you happen to be ready for chatting too, sign up for a free LiveChat trial and build your own bot!)
AI is also used in many applications. If you’ve been using Spotify, Last.fm or Netflix, you know that after you listen to a particular music genre or watch and rate particular films, the service will recommend more music and movies based on your taste.
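One classic way such recommendations work is user-based collaborative filtering: find the user whose ratings most resemble yours, then suggest what they liked. The users, titles, and ratings below are invented for illustration; real services use far more sophisticated models.

```python
# A toy user-based collaborative filter. All data here is made up.

RATINGS = {
    "alice": {"Inception": 5, "The Matrix": 5, "Titanic": 2},
    "bob":   {"Inception": 4, "The Matrix": 5, "Interstellar": 5},
    "carol": {"Titanic": 5, "The Notebook": 4},
}

def similarity(a: dict, b: dict) -> float:
    """Score two users by how often they agree on films both have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    agreements = sum(1 for film in shared if abs(a[film] - b[film]) <= 1)
    return agreements / len(shared)

def recommend(user: str) -> list:
    """Suggest films the most similar other user liked but `user` hasn't seen."""
    mine = RATINGS[user]
    others = [u for u in RATINGS if u != user]
    best = max(others, key=lambda u: similarity(mine, RATINGS[u]))
    return sorted(film for film, score in RATINGS[best].items()
                  if score >= 4 and film not in mine)

print(recommend("alice"))  # alice's tastes match bob's, so she gets his favorites
```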
Artificial intelligence can also be used as a tool helping to develop art.
Did you know that a Japanese AI program wrote a short novel and almost won a literary prize? Or that there is an AI called Benjamin that “listened” to over 30,000 songs and composed this beautiful song?
Want more examples?
In 2012, Google's driverless cars made their way onto California's roads. The idea was controversial, but the goal was to make roads safer and let people spend the time they would normally spend behind the wheel in a more productive way.
It all seemed a bit weird but interesting, and then, in 2014, Google presented a new concept for their driverless car that had neither a steering wheel nor pedals. This idea heated up the social debate about AI ethics even further.
Many people were asking how the car would behave when the life of the driver or of pedestrians was at stake. The car might have to decide whether to swerve away from a single pedestrian, or from several pedestrians, even if doing so harms its passenger.
The bottom line is that someone will have to go through all possible situations and program the car so it chooses the option where the most lives will be saved.
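Stated as code, that utilitarian rule is almost trivially simple, which is part of what makes the debate uncomfortable. The scenario encoding below is invented for illustration; real autonomous-driving software is vastly more complex, and whether this policy is even the right one is exactly what's being argued.

```python
# A minimal sketch of the "save the most lives" rule: among the car's
# possible maneuvers, pick the one that endangers the fewest people.
# The scenario numbers are hypothetical.

def choose_maneuver(options: dict) -> str:
    """Each option maps a maneuver name to the number of lives it endangers."""
    return min(options, key=options.get)

scenario = {
    "stay_on_course": 3,  # would endanger three pedestrians
    "swerve_left": 1,     # would endanger the passenger
    "brake_hard": 2,      # would endanger two pedestrians
}

print(choose_maneuver(scenario))  # picks the maneuver risking the fewest lives
```

The hard part, of course, is not this one-liner but assigning those numbers, and deciding whether counting lives is the right objective at all.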
And, to be honest, I think this problem should be solved before driverless cars are released.
Are we ready for artificial intelligence?
The AI I am talking about is weak (or narrow) AI, which can perform certain tasks at a human level or higher.
Artificial super-intelligence doesn’t exist yet, and maybe we should be happy about that, because we should ask ourselves an important question: are we ready for artificial intelligence?
The thing is that while people are concerned about AI’s morality, we’re still not able to decide what morality means to us.
Here are some very interesting results of MIT surveys regarding driverless cars:
In a series of surveys taken last year, the researchers found that people generally take a utilitarian approach to safety ethics: They would prefer autonomous vehicles to minimize casualties in situations of extreme danger. That would mean, say, having a car with one rider swerve off the road and crash to avoid a crowd of 10 pedestrians.

At the same time, the survey’s respondents said, they would be much less likely to use a vehicle programmed that way. Essentially, people want driverless cars that are as pedestrian-friendly as possible — except for the vehicles they would be riding in.
Another thing is that, well, people are unpredictable creatures. We are driven by emotions and that’s why our behavior is often unreasonable. We also have a great sense of humor and we’re good at jokes.
What if we design artificial intelligence, so it learns from us? What would be the result?
Luckily, we don’t have to fantasize about what would happen, as Microsoft Research has already tested it. They came up with the idea of turning a machine learning program loose on Twitter to learn how humans interact with each other.
Their expectations were high; they thought they'd end up with a superintelligent bot. Reality, however, fell short of their dreams: they underestimated the power of social media trolling. In 24 hours, @TayandYou turned from a nice, politically correct bot into a Hitler-praising, sex-obsessed, homophobic racist.
The next day, the account was closed and all offensive tweets were removed, but it was a great lesson for AI researchers anyway. It was also a good reminder that we need to be careful when giving machines the ability to learn.
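The core mistake is easy to demonstrate: a bot that memorizes whatever users feed it will repeat whatever users feed it. The sketch below contrasts naive learning with the bare-minimum safeguard of an input filter; the `BANNED` word list is a made-up stand-in, and Tay's actual architecture was of course far more elaborate.

```python
# A toy illustration of why unfiltered learning from users is risky.
# BANNED is a hypothetical blocklist; real systems need much more.

BANNED = {"offensive", "slur"}

class NaiveBot:
    def __init__(self):
        self.learned = []  # phrases the bot will later repeat back

    def learn(self, phrase: str, filtered: bool = True):
        """Store a user phrase; optionally reject ones containing banned words."""
        words = set(phrase.lower().split())
        if filtered and words & BANNED:
            return  # refuse to learn toxic input
        self.learned.append(phrase)

bot = NaiveBot()
bot.learn("have a nice day")
bot.learn("some offensive phrase")                   # rejected by the filter
bot.learn("some offensive phrase", filtered=False)   # an unfiltered bot keeps it
print(bot.learned)
```

Even this crude filter would have blunted the simplest attacks; the deeper lesson is that "learn from users" is a design decision that needs guardrails from day one.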
But to be honest, I don’t think we should be afraid of artificial intelligence, because machines lack the most important thing: emotions. Machines can be much more intelligent than us and come up with solutions we would never think of, but still, it’s people who make them perfect.
What do you think about the AI development? Should we be afraid? Let me know in the comments!
Photo courtesy of A Health Blog via Creative Commons. The video game screen is taken from Assassin's Creed IV, driverless car photo courtesy of Google.