Tech
May 10, 2018

AI - the debate continues

Last night I spoke on the panel at the Great AI Debate, hosted by The AI Show. The debate focused on the implications of AI for news and media.

After an hour's discussion and some excellent questions from the crowd, we concluded that we have to think carefully now about how we want the technology to interact with us in the future. Nigel Parker from Microsoft noted that the technology is currently predominantly machine learning - it's not yet deep learning or true AI. But even though we're not yet at the stage where there is no human input at all, we should prepare for a future where that may be possible. Key considerations include:

  • Should we always be told that we are interacting with the technology?
  • What fundamental principles should apply to the use of the technology?
  • Do we need to change the law now to prepare for the consequences of acts done to, and by, true AI?

On the first issue, the panel generally agreed that, to build trust, we should know when we are dealing with AI rather than a real person. But just because a business has told us something doesn't mean we really know it: in practice, that information will be buried in the terms and conditions. Only a few hands went up in the crowd when I asked who actually reads those!

On the second, we can look to big vendors like Microsoft and IBM, who are already working from carefully considered core principles to make sure they are providing AI "for good". The World Economic Forum has also published five principles for keeping AI ethical. Of course, to be effective, those principles will need the buy-in of AI's creators, who must incorporate them into their creations.

So is one possible way forward to enshrine those principles in law? Maybe, but to do that we first need to agree on the core principles we want to apply to the technology. The bigger issue for now is that AI is not a "person" and so, in almost all cases, isn't a legal actor. At the moment, we need to look behind the technology to find the legal person (human or company) involved. As the technology evolves, our ability to do that will diminish. That will upset the entire premise of the law: that there is a legal person to hold responsible. It's a big issue that requires considerable thought and discussion. And, given the rate at which the technology is developing and being adopted, that thought and discussion must happen now.

So where to from here? The work done by the AI Forum to produce the Shaping a Future New Zealand report will help us focus on these issues and the next steps we need to take. Most importantly, we need to continue the debate - not just within the tech industry but across New Zealand because AI is something that will touch all of our lives in the very near future.

After all, we live in a media landscape where it's already hard to know who to trust and which stories are real and which are fake.
