AI and Automation

With the development of AI and the automation of more kinds of work, many current jobs will inevitably be lost. I don’t aim to stifle innovation. Instead, I want to focus on what is truly important: protecting workers. I want to make policy that helps people transition to other jobs and ensures that workers will always have a livable income. Canadian politicians have not yet developed adequate policies to manage the changes that AI and automation will bring, and that is one of the main reasons I decided to run in this election. I have a background in considering the ethics and potential regulation of technologies like AI, and I have come to know experts in emerging technologies personally because I have been working on these issues alongside them.

Ongoing technological advancement will affect every area of our lives, whether or not we see that right now. Technology changes quickly, and we can’t afford to wait to see how new advances unfold and then legislate in response to the problems that appear. We need to consult experts and proactively create laws to manage the automation of jobs, to address the privacy and safety concerns of new technologies, and to ensure that Lethal Autonomous Weapon Systems are universally banned. These are complex challenges, but I am equipped to handle them.

Job automation can be understandably frightening for anyone at risk of losing their job to software, especially for people who are barely making ends meet and can’t afford to be jobless, even for a short time. By managing job automation, we can make sure that we still have access to meaningful work and the means to care for ourselves and our families. That means properly tracking automation in different sectors so that we know where the work will be, providing access to education and training programs, and implementing a guaranteed livable income. I have helped the Green Party of Canada create a specific approach to ease the transition for workers who are laid off when AI advancements eliminate their positions.

My goal in handling the automation of jobs well is to make sure that the benefits of automation go to the people, not solely to the privileged few who profit directly from technological advancements. We need to ensure that new technologies do not leave marginalized people out, and we must not let technological developments worsen economic and social inequalities. Part of the approach is to institute a tax on large corporations at least equivalent to the income tax that was paid by the employees whose positions were eliminated due to AI, so that the tax base is not suddenly lost when many positions disappear at once (small businesses would be exempt). I acknowledge that defining and enforcing a tax like this is an incredibly complex process. We will research it seriously and consult with experts to find the best possible way to implement such a measure. The overarching goal of measures and tools like this is to make the transition to a more automated workforce as seamless as possible and to ensure that people still receive the benefits.

AI and automation can actually make some sectors safer and less stressful. We can use automation to make our lives more sustainable and to give us more space for our families and hobbies. People are so much more than the jobs they do, and we need to recognize that in any steps we take to prepare for automation.

Legislation on artificial intelligence is woefully under-discussed and misunderstood. Most of my work in this area has been done with an organization called the Future of Life Institute. To get a sense of what they do with regard to AI development, you can look to the Asilomar AI Principles, “a set of 23 principles intended to promote the safe and beneficial development of artificial intelligence.”[i] Developing the AI sector can be a positive thing; we want to move towards innovation, not legislate it away. But to do that effectively, Canada needs to set up a legislative and regulatory framework for AI research. This would be done by striking a parliamentary committee to examine the range of issues related to the development of AI. Members of the committee would be chosen by experts in the field, so that it is capable of creating the most informed framework possible for drafting AI legislation. It should also be noted that such a committee would NOT be a military body.

It is critical that this committee be formed based on the advice of experts in AI and other emerging technologies. Having done work analyzing policy related to ethics and safety in AI, as well as in other areas, I have seen first-hand how legislation has to be extremely simplified before it can be handled by lawmakers who do not have experience in technology research. You cannot simply drop complex legislation concerning AI or technology in just anyone’s lap and expect them to handle it with the requisite nuance and insight. We need individuals who can properly assess the potential impacts of any AI-related legislation, to prevent privacy or security issues that won’t be obvious to non-experts.

Speaking of potential issues in AI legislation, I want to make something perfectly clear: fully autonomous weapons have no place in our world. That may seem like an obvious statement, but it needs to be said loudly and clearly, because governments like ours have been asked to commit to banning this type of weapon system and have been unwilling to do so. These weapons are unlike any we have known, and they could have devastating consequences in ways we have not yet considered. As such, Lethal Autonomous Weapon Systems should be banned in all forms, everywhere in the world. The Green Party of Canada platform clearly states that we will ban autonomous weapons in Canada and push for a global pact to make them illegal. These weapons should not exist. It is as simple as that.

To me, these issues are non-partisan. Every party should be able to support legislation that protects the people of Canada from being left behind by ever-accelerating technological advancement. It doesn’t matter to me whether my signature is on the bill or bills that help Canada manage its technological future. I only care that those bills get passed.

----------

[i] As noted in the Institute’s 2018 report, the state of California unanimously adopted new legislation in support of the Future of Life Institute’s Asilomar AI Principles (p. 8).

https://futureoflife.org/wp-content/uploads/2019/02/2018-Annual-Report.pdf?x36312