Potential Risks of Artificial Intelligence

Artificial intelligence, or AI as it is commonly called, has been a much-discussed subject over the last decade. It is developing rapidly, making the workflows of many businesses easier and more efficient. AI has also shown great potential in everyday life and is already built into many apps, making daily tasks simpler and less complicated. It has brought us many advantages, and research is paving the way for much more to come, so it is safe to say that AI will be indispensable in the future, if it isn't already.

But just as every coin has two sides, so does AI. The technology also comes with many potential risks and disadvantages. Many experts and leading technologists of our time have expressed concern over the problems AI might cause in the future, so we need to address these issues while they can still be corrected. What do we mean by that?


1. Jobs


A common fear is that automation will eliminate jobs, but we can't really say that robots will completely push humans out of the job market. Employees will simply have to adjust, educate themselves, and find ways to cooperate with AI, making the best possible use of its efficiency and mechanical logic. AI still isn't perfect; it cannot make judgement calls, for example, so the human factor will remain decisive when working alongside machines.


2. The problem of bias

One striking example of how biased artificial intelligence can be is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk-and-needs assessment tool for predicting recidivism risk among offenders. When this algorithm-based tool was investigated, the results showed that its predictions were seriously racially biased. According to the data, African-American defendants were more likely than others to be incorrectly judged as having a higher risk of recidivism, while the algorithm tended to make the opposite mistake with white defendants, incorrectly labeling them as low risk.

Automated Speech Recognition technology can also be biased with respect to gender or race, because training data are not necessarily selected in a way that ensures sufficient inclusiveness.
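The kind of disparity described above can be made concrete by comparing error rates across groups. The following Python sketch illustrates one such check, computing the false-positive rate (non-reoffenders incorrectly flagged as high risk) per group. The records below are made-up toy data for illustration, not the real COMPAS dataset.

```python
# Illustrative bias-audit sketch: compare false-positive rates across groups.
# The data here are invented toy records, not real COMPAS data.

def false_positive_rate(predicted_high_risk, reoffended):
    """Share of non-reoffenders who were incorrectly flagged as high risk."""
    false_positives = sum(
        1 for p, y in zip(predicted_high_risk, reoffended) if p and not y
    )
    negatives = sum(1 for y in reoffended if not y)
    return false_positives / negatives if negatives else 0.0

# Toy records: (group, flagged_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

for group in ("A", "B"):
    flags = [r[1] for r in records if r[0] == group]
    outcomes = [r[2] for r in records if r[0] == group]
    print(group, round(false_positive_rate(flags, outcomes), 2))
    # prints: A 0.67, then B 0.0
```

In this toy data, group A's non-reoffenders are flagged far more often than group B's, which is exactly the pattern of disparity the COMPAS investigation reported.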

3. Safety concerns


4. Malicious purposes

  • Physical security: One potential risk of AI that sounds quite dramatic at first, and might chill you to the bone, is a war between technologically advanced countries carried out by autonomous weapon systems programmed to kill in the most efficient and ruthless manner. This is why it is extremely important to regulate the development of such military technology through treaties, regulations, and sanctions, in order to safeguard humanity from the ominous risk of AI-based warfare.
  • Digital security: Hackers are already a threat to our digital safety, and AI software is already being used for advanced hacking. As such software develops, hackers will become more efficient in their misdeeds, and our online identities will become more vulnerable to theft. The privacy of your personal data might be compromised even further by subtle malware powered by AI and made more dangerous through deep learning. Imagine a digital thief lurking in the background of your favorite programs, growing more cunning day by day, learning from millions of real-life examples of software use, and crafting complex identity thefts based on that data.