Some individuals within the field of AI believe that providing artificial intelligence with a set of rules, or guiding principles, will help to safeguard humanity from the technology's downsides. The proposed way to achieve this is to supply it with credible news articles, opinion pieces, editorials, and literature.

However, this would be a hopeless solution, as we would only be providing the technological “species” with a snapshot in time, forgetting the artefacts that truly represent the whole of humanity's history. If you have ever put a time capsule together, you will understand that you only fill it with things relevant to that date. Those items are probably not relevant today, nor would you make the same decisions about what to include, because your context on the world, your family and friends, and yourself is now different.

The rules and laws that nations and individuals follow today were always created by a small number of people, and even in the best cases they are not future-proof. They have, however, been refined throughout history to guide our decisions, and together they build in flexibility for different situations and circumstances. Agreeing on a basic set of guidelines is what permits us to optimise ourselves.

In the case of AI, there would be no way to create such a set of rules. We could not possibly write out every rule needed to correctly optimise for humanity, because while thinking machines may be fast and powerful, they lack flexibility. A rule created today may be abided by, but as soon as there is a subtle difference in what conflicts with the rule, it may mean something completely different to the AI.

Therefore, knowing that maintaining and enforcing a strict set of rules would be time-consuming and a managerial nightmare, we should turn our attention to the humans who are making the system. These people should be asking themselves searching questions:

  • What is our motivation for AI?
  • Is the motivation for AI aligned with the best interests of humanity, and how is this the case?
  • What are our own biases?
    • What ideas, experiences, and values have we failed to include in our team?
    • Who have we overlooked?
  • Have we included people unlike ourselves for the purpose of making the future of AI better – or have we simply included diversity on our team to meet certain quotas?
  • How can we ensure that our behaviour is inclusive?
  • How are the technological, economic, and social implications of AI understood by those involved in its creation?
  • What rights should we have to interrogate the data sets, algorithms, and processes being used to make decisions on our behalf?
    • When is the best time to begin such an interrogation, and what would prompt it?
  • Who gets to define the value of a human life and what is the value weighed against?
  • Does the leadership of our organisation and our AI teams reflect many different kinds of people?
    • If so, what are the plans to make sure information is shared equitably?
    • If not, what are the plans to mitigate the risk of a lack of diversity?
  • What role do those commercialising AI play in addressing the social implications of AI?
  • Should we continue to compare AI to human thinking, or is it better for us to see it as something different?
  • Is it OK to make AI that recognises and responds to human emotion?
  • Is it OK to build AI systems capable of mimicking human emotions, especially if it is learning from us in real time?
  • At what point are we comfortable with AI learning and evolving without humans directly in the loop?
  • Under what circumstances could an AI simulate and experience common human emotions? What about pain, loss, and loneliness? And are we OK with causing that suffering to an AI?
  • Are we developing AI to seek a deeper understanding of ourselves?

Our current development track for AI prioritises efficiency and automation. This potentially means we, as humans, have less control and choice over our thousands of daily activities, even those that seem insignificant. Take your Discover Weekly playlist on Spotify: each week, the whole playlist is refreshed based on your listening history. You have no active choice over which songs stay in that playlist and which go. Yes, you can move those songs into other playlists, giving you back some choice and inevitably helping you find more music. But what if you wanted more control over how often the playlist is refreshed, or wanted it to simply create a new playlist each week, leaving all of the previous songs just as they are? To do that, you would have to be a key developer at Spotify armed with evidence that the feature is needed, which you are unlikely to do, because why should you have to?

Our future begins with a loss of control over little things. What seems insignificant at first, because we naturally make do and move on, will not be so insignificant in 50 years. Each of these tiny paper cuts adds up to 50 years' worth of paper cuts.

Stakeholders in AI mostly talk about, or envision, a single catastrophic event that either ends humanity or disrupts our way of life forever. However, the real danger is the steady erosion of the humanity we take for granted today.

At Smart City Operating System (scOS), we are pro-AI and believe we can turn it into a fantastic partner. But as a maker of AI, it is our duty to explore all views and perspectives so that we build it right, in a way that both humanity and AI itself can be thankful for.

You Mention Spotify, but What About scOS as a Better Example?

Within the framing above, scOS can be seen as a direct contributor to these “paper cuts”: we decide when your lights turn on, how they turn on, and whether your emotions should affect their colour temperature or brightness. And when there is suspicious activity, a threat, or an intruder, scOS makes automated decisions on your behalf to prevent anything bad from happening; it may give you instructions on what to do, inform you, or simply offer advice.
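To make the lighting example concrete, here is a minimal sketch of how an automated decision could always defer to an explicit user choice. This is not scOS's actual implementation; the function name, the mood labels, and the presets are all hypothetical, and a real system would be far richer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LightingDecision:
    on: bool
    brightness: int       # 0-100
    colour_temp_k: int    # colour temperature in Kelvin

def decide_lighting(mood: str,
                    user_override: Optional[LightingDecision] = None) -> LightingDecision:
    """Pick a lighting state from a detected mood, unless the user has overridden it."""
    # An explicit user choice always wins over the automated decision,
    # so control ultimately stays with the person, not the system.
    if user_override is not None:
        return user_override
    # Hypothetical mood-to-lighting presets.
    presets = {
        "calm":    LightingDecision(on=True, brightness=40, colour_temp_k=2700),
        "focused": LightingDecision(on=True, brightness=85, colour_temp_k=5000),
    }
    # Fall back to a neutral state for moods we don't recognise.
    return presets.get(mood, LightingDecision(on=True, brightness=60, colour_temp_k=3500))
```

The design choice worth noting is the precedence order: the override is checked before any automation runs, which is one way of keeping the human's decision above the system's.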

We could justify all of this as goodwill: providing you with better safety and security, and making your life easier. But we have to look into it deeply, and seriously.

In all of these circumstances, and understanding that our system is designed to make life efficient and more autonomous, we design AI to act like an assistant that can be everywhere and do everything simultaneously: it is there to serve you while you give your focus to the things you need to do. And if there is ever something you want to change or give input on, our AI is being built and designed to give you the flexibility to do that; you ultimately always have control. This goes beyond our AI: the flexibility to keep making executive decisions is demonstrated within all of our user interfaces (UIs) and control points. Although this taps into the UI side of our software, it is still important, because it is about how easily you can interact and communicate with our AI.