Last night, the AI Forum New Zealand launched its report Artificial Intelligence — Shaping a Future New Zealand. According to the report, AI has the potential to increase New Zealand's GDP by up to $54 billion.
The report also acknowledges that AI raises new ethical concerns and will have long-term implications for legal principles. Our laws are based on a core assumption: that the doer of the regulated act or the perpetrator of a crime will be human. While we have chosen to create new legal persons, like companies, we still hold humans ultimately responsible for their actions (which is why directors can be held personally liable in certain circumstances).
We are not quite at the stage where computers act entirely of their own volition (i.e. the singularity). Some believe this may never occur; others (like Ray Kurzweil, Google's Director of Engineering) believe it may happen as soon as 2045. Once the technology is capable of making decisions with little or no human input, our laws, which assume a human actor at some point in the decision-making process, will no longer be fit for purpose.
Given the phenomenal uptake of the technology and the opportunities and challenges it presents, a proactive approach to its regulation is critical. Because the arrival of AI will shake the principles on which many laws are based, we should be examining those basic principles now to determine how to accommodate a different form of actor.
As the technology becomes increasingly widespread, society will also need to address related ethical questions, such as who will be responsible when things go wrong: for example, when a driverless car is involved in a collision.