WHY IS ARTIFICIAL INTELLIGENCE BAD?

Artificial Intelligence is not inherently bad; its impact depends on how humans design and build it. If we rush into building AI systems today without much thought for their future benefits and consequences, we really can create serious, even irreversible, situations.

Above all, it is important to conduct research into AI safety and act on it.

When developing artificially intelligent systems at the Smart Operating System (scOS), we address each problem by debating it and designing a workaround. Here are some common criticisms of AI you may hear often, along with our responses.

AI Will Replace Our Jobs

Globally, there is an ever-increasing list of careers. Currently, there are over 12,000 different career paths, some employing large numbers of people and others only small groups.

Some of these careers involve dangerous jobs, where people willing to do the work are scarce. Using AI in these situations to dynamically assess risks and perform the job can help save many lives. And even where people still want to do the dangerous work themselves, AI can act as an assistant to humans.

At scOS, this is how we see it: working with AI in a partnership, where it assists us and we assist it. As a result, the same number of jobs remains available, only safer and, in most cases, less complex. Some may argue that scOS will replace jobs within the security industry, particularly in monitoring stations. However, scOS actually acts as a significant tool for these roles: it develops intelligence from the information it sources, and from there monitoring stations can be proactively informed of situations that affect the personal security of individuals. They can also take on more customers, because they make better use of their own staff.

We also believe that AI will generate new jobs, including jobs that do not exist yet. At scOS, this is highly likely, and we will require all types of skilled individuals, from psychologists to technical engineers to our facilities and estates teams. In the worst case, where a job needs to be phased out, we will speak with the affected staff members to discuss which area of the company they would like to move to, with adequate training provided.

The Rise of Deepfakes

Deepfakes range from deepfake video, where an individual presents to a camera while visually appearing to be someone else, to deepfake audio, where a person can speak and sound like another individual.

Ironically, AI itself, as well as non-AI techniques, can be used to defend against deepfakes. Digital artefacts embedded into video and audio can be unnoticeable to humans yet highly detectable by deepfake-spotting algorithms.
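To make the idea concrete, here is a minimal sketch of one such embedded artefact: a least-significant-bit (LSB) watermark hidden in audio samples. This is an illustrative toy, not scOS's actual method; real provenance systems use far more robust schemes, and the signature and function names below are hypothetical.

```python
# Toy LSB watermark: the marker changes each sample by at most 1 unit,
# which is inaudible, but software can check for it exactly.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed_watermark(samples, mark=WATERMARK):
    """Hide the mark in the least-significant bit of the first samples."""
    out = list(samples)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def has_watermark(samples, mark=WATERMARK):
    """Return True if the expected mark is present."""
    return [s & 1 for s in samples[:len(mark)]] == mark

audio = [200, 133, 87, 254, 91, 60, 17, 42, 99]  # fake 8-bit samples
marked = embed_watermark(audio)
print(has_watermark(marked))  # True
print(has_watermark(audio))   # False: unmarked audio lacks the signature
```

A genuine recording pipeline would embed such a mark at capture time, so any re-synthesised (deepfaked) copy that fails the check can be flagged for closer inspection.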

With or without AI, when watching a suspected deepfake video, take note of any head, face, or body movements that reveal oddities. These are often easiest to spot in the subject's eyes.

AI Bias

Many errors in computing come down to human error, and AI bias is no exception. Bias arises when a system is designed and built by a group of smart individuals who draw only on their own knowledge and experience of problems.

A Faulty Tower

Here is an example: someone is designing a building for construction. This new building is to be a high-tech office space serving as a dedicated HQ for 1,000 staff members. It has elegant meeting rooms, stairs made from strengthened glass like those in some Apple Stores, and 12 different levels.

The design of the building is presented to the company, and they love it! So construction begins right away.

Four years later, the building is complete, and after a ceremony, the company invites its staff for a tour and to find their desks.

However, there is a very big problem. Because the building was designed and reviewed by people without mobility issues, the staff members who are wheelchair users were completely overlooked.

What was the design team thinking!? Did they think everyone could get up and walk?

Actually, they did not think of it at all, and it was not done maliciously either. It was simply an oversight: a failure to think of everyone's needs.

AI Bias Summary

From this example, you can understand the complications AI bias creates within a development team. First, the team has to think about diversity: where the data used to train AI models is sourced from, and how balanced it is.

The team also has to think about inclusion for all kinds of people, covering gender and social equality as well as disabilities.

The solution is to run assessments on each feature of an AI model, identify any risks, and then address them. One effective safeguard is a diverse team: the more diverse the team, the lower the chance of an extremely biased AI. Training must also be given so the team knows the procedures for voicing concerns during development.
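One assessment of this kind can be sketched in a few lines: before training, count how a sensitive attribute is distributed in the data and flag any group that is badly under-represented. The 20% threshold, field names, and function name below are hypothetical choices for illustration, not a description of scOS's actual process.

```python
# Minimal dataset-balance check: report each group's share of the data
# and flag groups that fall below a chosen minimum share.
from collections import Counter

def balance_report(records, attribute, min_share=0.2):
    """Return (share per group, list of groups below min_share)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [g for g, share in shares.items() if share < min_share]
    return shares, flagged

training_data = [
    {"gender": "female"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "male"}, {"gender": "female"},
]
shares, flagged = balance_report(training_data, "gender")
print(shares)   # female ~0.33, male ~0.67
print(flagged)  # [] — no group below the 20% threshold here
```

A flagged group would prompt the team to source more data or rebalance before training, which is exactly the kind of concern the procedures above should make easy to raise.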