
Rules for Robots: Considering the Legal Ramifications of AI

Steve Sammartino considers the legal and regulatory aspects of AI and the impact on corporations and investors as more and more decisions are thrust into the hands of autonomous decision-making intelligence systems.
By Steve Sammartino · 8 Jun 2021 · 5 min read

Seems a day doesn’t go by without an announcement being made about an AI outperforming humans at a particular task or intellectual pursuit. However, it is a rare event when AI is reported to have acted with autonomous intent to end human life.

This week, a number of reputable publications, including New Scientist, reported that drones may have attacked humans fully autonomously for the first time. A United Nations Security Council report, published in March this year, referred to a January 2021 incident during the Libyan Civil War. The report stated:

“…lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.”

Buried in the jargon is the implication that human beings were killed by autonomous robots acting without human supervision. It raises massive issues outside our current regulatory parameters that won’t just impact warfare, but also social convention and even our investment portfolios.

Accelerating AI Systems

Currently, most of the AI systems implemented around the world are narrow by nature. Narrow AI systems perform a singular task within a defined parameter set. They can drive a car, use pixel patterns to recognise a face, sort through data using algorithms to determine what should appear in a feed, recognise and interact with natural human language, or move actuators through a physical environment.

All of these are becoming exponentially better. We tend to consider AI systems that may pose a future danger in vague, general terms – the kind that can think and perform like a human. This is a significant shortcoming in our perceptions. It gives us a false sense of security that AI is always just over the horizon and not here yet. On the contrary, it is already here. It is largely unregulated and making independent decisions.

Artificial Intelligence As A Legal Entity

The potential negative externalities of AI now pose a risk not only to our society, but also to our investments. This shouldn’t be considered a threat only to the technology firms creating these systems, but also to the businesses implementing them inside their corporate structures.

Here is a question. If AIs are making decisions, when do they become a form of legal entity, like a corporation? And if they do, who has the duty of care for their decisions? The creator of the AI? The corporation using the AI? Or the people implementing the AI, who decide what data is used to feed and train it?

These are difficult questions to answer, and they need a new legal construct as urgently as stock trading once did.

Accountability for AI systems is a tricky area of the law, because there is no clear line of responsibility. As decisions are increasingly thrust into the hands of autonomous decision-making intelligence systems, we need a frame of reference to guide the impact these systems have on both consumers and corporations.

Rules for Robots

The renowned science fiction writer and philosopher Isaac Asimov developed a set of principles for the development of what were then called advanced robotic systems.

Still enormously influential in science and technology circles, Asimov’s Three Laws of Robotics are:

  1. A robot (AI) may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot (AI) must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot (AI) must protect its own existence as long as such protection does not conflict with the First or Second Law.

We can already see that all three laws are regularly ignored by the unfettered development of AI. Given that AI’s impact runs much deeper than the privacy infringements and frequent large-scale hacks we see today, it is time to start thinking about a world where literally everything is integrated with AI – even humans.

Robots with Rights

Accelerating technological change is hard to cope with. Social and digital media have rapidly turned the once invisible views of marginalised groups into mainstream issues. Corporations must respond just as quickly to the speed of AI development.

Prompted by the rapid onset of augmented human intelligence and AI, a few Techno-Utopians believe that robots themselves should have rights. Yes, you read that correctly.

Until very recently, I thought it was one of the most ridiculous ideas I’d ever heard, especially considering how many humans already lack basic rights. However, I’ve changed my mind. So, let’s run a thought experiment and consider a few consequences of robots not having rights:

  • What if robots evolve to a point where they can actually feel pain?
  • What happens if we can’t tell the difference between a human and a robot? What or who are we hurting?
  • What happens if people merge with certain technologies or robots? Do only certain parts of these entities have rights?
  • What if others own or control software in our bodies? Does the software have rights? Who has the rights over the technology – the host or the licensor?
  • What if someone were manipulated into destroying a robot or an AI, but it turned out to be a human, or to be inside a human?

How will the way we treat an AI impact how we behave as humans? We are the sum of our actions, which bleed into all aspects of our lives and influence how humanity behaves. If ever there were a time to consider the seemingly ridiculous, this is it.

During a technology upheaval where new possibilities astound us, being able to change our minds is something we all need to get better at. If we need an incentive to consider the outlandish, the potential legal risk in our investment portfolios might motivate lawmakers. This isn’t the first time we’ve invented new legal constructs to deal with technological advancements – and the best time to do it is before it ends up in the courts.
