Potential Risks of Artificial Intelligence
What Are Some Potential Risks of Artificial Intelligence?
Artificial intelligence, or AI as it is often called, is a subject that has been discussed a lot in the last decade. It is developing rapidly, making the workflows of many businesses easier and more efficient. AI has also shown great potential in the everyday life of many people and is already built into many different apps, making life easier and less complicated. AI has brought us many advantages, and science is paving the way for much more to come, so it is safe to say that AI will be indispensable in the future, if it isn’t already.
But just as every coin has two sides, so does AI. This technology also comes with many potential risks and disadvantages. Many experts and technical minds of our time are expressing their concerns over the problems AI might cause in the future, so we need to address these issues while they can still be corrected. What do we mean by that?
There are many things to consider with regard to these issues. In this article we will describe some of the risks that the dazzlingly fast development of AI might bring to our world, and what measures need to be taken in order to monitor and guide that progress in the right direction.
1. Jobs
We are sure that everybody has already heard or read about the potential threat that machines and automation pose to traditional, human-staffed workplaces. Some people suffer from various degrees of anxiety about machines stealing their jobs. That fear may be well-founded: job automation is a real risk for many people. According to some estimates, about 25% of American jobs are at high risk of automation, because at some point machines will be able to perform them. Especially at risk are low-wage positions in which a person does repetitive tasks, such as jobs in administration or food service. However, even some university graduates are at risk, because advanced machine learning algorithms, especially those built on neural networks and deep learning, are becoming refined enough to take over some complex work positions.
But we can’t really say that robots will completely push humans out of the job market. Employees will simply have to adjust, educate themselves, and find a way to work in cooperation with AI, making the best possible use of its efficiency and mechanical logic. AI still isn’t perfect; for example, it isn’t able to make judgment calls, so the human factor will remain decisive when working alongside machines.
A lot of AI-based technology relies on automated solutions that need to be trained, and this training depends on human input. A good example is machine translation, which learns from a large number of human-generated translations. Another good example is transcription software, which gets its training data from accurate transcriptions done by professional human transcribers. In this way the software improves little by little, refining its algorithms through real-life examples. Human transcribers benefit from the software in turn, because it helps them produce transcripts faster: the software generates a rough draft of the transcript, which the transcriber then edits and corrects. This saves a lot of time and means that, in the end, the final product is delivered faster and is more accurate.
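To make this feedback loop concrete, here is a minimal sketch in Python of how such a human-in-the-loop transcription workflow could be organized. The function names and the in-memory list of training pairs are illustrative placeholders, not the actual pipeline of any real product:

```python
# A minimal sketch of a human-in-the-loop transcription workflow.
# generate_draft and collect_correction are illustrative stand-ins,
# not a real ASR API.

def generate_draft(audio_path: str) -> str:
    """Stand-in for an ASR model producing a rough first-pass transcript."""
    return "this is a ruff draft transcript"

def collect_correction(draft: str) -> str:
    """Stand-in for a professional transcriber editing the draft."""
    return draft.replace("ruff", "rough")

training_pairs = []  # accumulated (draft, corrected) examples

def process(audio_path: str) -> str:
    draft = generate_draft(audio_path)     # machine does the heavy lifting
    final = collect_correction(draft)      # human fixes the errors
    training_pairs.append((draft, final))  # corrections feed future training
    return final

print(process("interview.wav"))
```

The key design point is the last step: every human correction is stored as a new training example, so the draft quality improves over time and the transcriber's editing workload shrinks.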
2. The problem of bias
A great thing about algorithms is that they always make fair, unbiased decisions, in sharp contrast to subjective, emotional humans. Or do they? The truth is that the decision-making of any automated software depends on the data it has been trained on. So there is a risk of discrimination when, for example, a certain segment of the population is underrepresented in that data. Facial recognition software is already being investigated for some of these problems, and cases of bias have already occurred.
One striking example of how biased artificial intelligence can be is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk-and-needs assessment tool for predicting the risk of recidivism among offenders. This algorithm-based tool was investigated, and the results showed that its output was seriously racially biased. For example, according to the data, African-American defendants were more likely than defendants of other races to be incorrectly judged as having a higher risk of recidivism, while the algorithm tended to make the opposite mistake with white defendants.
So, what happened here? The algorithm is data-dependent, so if the data are biased, the software will likely produce biased results as well. Sometimes the problem also lies in how the data was collected in the first place.
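To see how this happens mechanically, consider the following toy sketch. It is not a model of COMPAS itself, just a deliberately simple demonstration of how a "neutral" algorithm trained on data where one group is heavily underrepresented ends up concentrating its errors on that group:

```python
# Toy illustration: skewed training data produces skewed error rates.
# We "train" a trivial majority-vote classifier and then compare its
# error rate on two groups, one of which is underrepresented.

from collections import Counter

# Synthetic training set of (feature, group, label) rows. Group "B"
# supplies far fewer examples, so its pattern is barely learned.
train = [("x", "A", 0)] * 90 + [("x", "B", 1)] * 10

# The majority label overall is 0 (90 zeros vs. 10 ones), so the
# classifier learns to always predict 0.
majority = Counter(label for _, _, label in train).most_common(1)[0][0]

def predict(feature):
    return majority

# Held-out data drawn from the same pattern:
test = [("x", "A", 0)] * 9 + [("x", "B", 1)] * 9
for group in ("A", "B"):
    rows = [r for r in test if r[1] == group]
    errors = sum(predict(f) != label for f, _, label in rows)
    print(group, "error rate:", errors / len(rows))

# Output: group A gets a 0.0 error rate, group B gets 1.0. The
# minority group absorbs nearly all of the mistakes, even though the
# algorithm itself never looks at group membership.
```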
Automated speech recognition (ASR) technology can also be biased with respect to gender or race, because the training data isn’t necessarily selected in a manner that ensures enough inclusiveness.
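One straightforward way to check an ASR system for this kind of bias is to measure its word error rate (WER) separately for each speaker group and compare the results. Below is a hedged sketch of that idea; the transcripts are made-up placeholders, not real evaluation data:

```python
# Sketch: compare ASR word error rate (WER) across speaker groups.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein (edit) distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# (reference transcript, ASR output) pairs per speaker group:
samples = {
    "group_1": [("please call me back", "please call me back")],
    "group_2": [("please call me back", "please tall me pack")],
}
for group, pairs in samples.items():
    rates = [wer(ref, hyp) for ref, hyp in pairs]
    print(group, "WER:", sum(rates) / len(rates))
```

A large gap between the per-group WER figures (here 0.0 vs. 0.5) is exactly the kind of signal that suggests one group is underrepresented in the training data.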
3. Safety concerns
Some problems with artificial intelligence are dangerous enough to lead to accidents. One of the most prominent examples of applied AI technology is the self-driving car, and many experts believe this is the future of transportation. But the main thing hindering the immediate introduction of self-driving cars into traffic is the possibility of malfunctions that could endanger the lives of passengers and pedestrians. The debate over the threat autonomous vehicles could pose on the roads is still ongoing. Some people think there would be fewer accidents if self-driving cars were allowed on the road. On the other hand, studies have suggested that they might cause many crashes, because many of their actions are based on preferences set by the driver. It is now up to the designers to strike a balance between safety, that is, people’s lives, and rider preferences (such as average speed and other driving habits).
In any case, the main goal of self-driving cars should be the reduction of automobile accidents, through efficient AI algorithms and advanced sensors that can detect and even predict possible traffic scenarios. However, real life is always more complicated than any program, so the limitations of this technology remain one of the factors holding back its widespread adoption. Another problem is trust. For many people with years and years of driving experience, putting all their trust into digital hands may feel like an act of symbolic capitulation to digital trends. Until all this is resolved, some advanced technological solutions have already made it into newer cars, where human drivers benefit from various sensors, assisted braking, and cruise control.
4. Malicious purposes
Technology should serve people’s needs: it should make our lives easier and more enjoyable, and it should save everyone’s precious time. But AI technology has sometimes been used for malicious purposes, in ways that pose a significant risk to our physical, digital, and political security.
- Physical security: One potential risk of AI, which sounds quite dramatic at first and might chill you to the bone, is a war between technologically advanced countries fought by autonomous weapon systems programmed to kill in the most efficient and ruthless manner. This is why it is extremely important to regulate the development of such military technology through treaties, regulations, and sanctions, in order to safeguard humanity from the ominous risk of AI-based warfare.
- Digital security: Hackers are already a threat to our digital safety, and AI software is already being used for advanced hacking. As such software develops, hackers will become more efficient in their misdeeds, and our online identities will be more vulnerable to theft. Your personal data could be compromised even further by subtle malware, powered by AI and made even more dangerous through deep learning. Imagine a digital thief lurking in the background of your favorite programs, becoming more cunning day by day, learning from millions of real-life examples of software use, and crafting complex identity thefts based on that data.
- Political security: In the turbulent times we live in, the fear of fake news and fraudulent recordings is quite justified. AI could do a lot of damage through automated disinformation campaigns, which can be especially dangerous during elections.
So, to conclude, we might ask ourselves how much damage artificial intelligence could do to us, and whether it could end up doing more harm than good to mankind.
Experts say that ethical development and regulatory bodies will play a major part in mitigating the disadvantages that artificial intelligence might bring into our lives. Whatever happens, we are sure it will have a huge impact on our world in the future.
Speech recognition software based on advanced AI is already in use, and it brings many advantages to the business world: workflows become faster and simpler. Gglot is a big player in this field, and we are investing heavily in developing our technology further.