References
Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7(3), 149-155.
Balakrishnan, A., & Lovelace Jr., B. (2017). IBM CEO: Jobs of the future won't be blue or white collar, they'll be 'new collar'. CNBC.com. Retrieved 2 April 2017, from http://www.cnbc.com/2017/01/17/ibm-ceo-says-ai-will-be-a-partnership-between-man-and-machine.html
Bostrom, N. (2002). Existential risks. Journal of Evolution and Technology, 9(1), 1-31.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Bostrom, N. (2015). What happens when our computers get smarter than we are? Retrieved 3 April 2017, from https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are
Bostrom, N., Dafoe, A., & Flynn, C. (2016). Policy Desiderata in the Development of Machine Superintelligence. Retrieved 17 April 2017, from https://www.fhi.ox.ac.uk/wp-content/uploads/Policy-Desiderata-in-the-Development-of-Machine-Superintelligence.pdf
Breazeal, C. (2002). Regulation and entrainment in human-robot interaction. The International Journal of Robotics Research, 21(10-11), 883-902. Retrieved 28 March 2017, from http://groups.csail.mit.edu/lbr/hrg/2001/ijrr.pdf
Cassell, J. (2016). Davos 2016 - Issue Briefing: Infusing Emotional Intelligence into AI. YouTube. Retrieved 28 March 2017, from https://www.youtube.com/watch?v=YV4fTOSpUxs
Cornell University. (2011). Approach to the Problem (IRL). Cornell University – Blog – Machine Learning for ICS (applying machine learning algorithms to the effort of computational sustainability). Retrieved 17 April 2017, from http://blogs.cornell.edu/ml4ics/2011/05/09/approach-to-the-problem-irl/
Evans, O., & Goodman, N. D. (2015). Learning the preferences of bounded agents. In NIPS 2015 Workshop on Bounded Optimality.
Evans, O., Stuhlmüller, A., & Goodman, N. D. (2015). Learning the preferences of ignorant, inconsistent agents. arXiv preprint arXiv:1512.05832.
European Union. (2016). European Parliament - Committee on Legal Affairs - Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics. Retrieved 28 March 2017, from http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN
Goertzel, B. (2007). Human-level artificial general intelligence and the possibility of a technological singularity: A reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil. Artificial Intelligence, 171(18), 1161-1173.
Goodfellow, I., Papernot, N., Huang, S., Duan, Y., Abbeel, P., & Clark, J. (2017). Attacking Machine Learning with Adversarial Examples. OpenAI Blog. Retrieved 16 April 2017, from https://blog.openai.com/adversarial-example-research/
Kurzweil, R. (2005). The Singularity is Near: When Humans Transcend Biology. Penguin.
Lewis, R. L., Howes, A., & Singh, S. (2014). Computational rationality: Linking mechanism and behavior through bounded utility maximization. Topics in Cognitive Science, 6(2), 279-311.
Lindh, C. (2016). Robots: Lifesavers or Terminators? CNN.com International. Retrieved 30 March 2017, from http://edition.cnn.com/2016/09/24/world/robot-morality-machine-ethics/
MIT Technology Review. (2014). Do we need Asimov's laws? Retrieved 19 November 2016, from https://www.technologyreview.com/s/527336/do-we-need-asimovs-laws/
Morris, D. Z. (2016). U.N. Moves Towards Possible Ban on Autonomous Weapons. Fortune.com – Artificial Intelligence. Retrieved 26 January 2017, from http://fortune.com/2016/12/24/un-ban-autonomous-weapons/
Nadella, S. (2017). Davos 2017 - Artificial Intelligence. YouTube. Retrieved 28 March 2017, from https://www.youtube.com/watch?v=iqzdD_n-bOs
Naughton, K. (2015). Should a Driverless Car Decide Who Lives or Dies? Bloomberg.com. Retrieved 2 April 2017, from https://www.bloomberg.com/news/articles/2015-06-25/should-a-driverless-car-decide-who-lives-or-dies-in-an-accident-
Ng, A. Y., & Russell, S. J. (2000). Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000) (pp. 663-670).
Orseau, L., & Armstrong, S. (2016). Safely interruptible agents. In A. Ihler & D. Janzing (Eds.), Uncertainty in Artificial Intelligence: Proceedings of the 32nd Conference (UAI 2016) (pp. 557-566).
Pavaloiu, A. (2016). The Impact of Artificial Intelligence on Global Trends. Journal of Multidisciplinary Developments, 1(1), 21-37.
Russell, S. (2016). Rationality and intelligence: A brief update. In Fundamental Issues of Artificial Intelligence (pp. 7-28). Springer International Publishing.
Russell, S., & Norvig, P. (2010). Artificial intelligence: A Modern Approach. New Jersey: Pearson.
Schoettle, B., & Sivak, M. (2014). A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the U.S., the U.K., and Australia. Report No. UMTRI-2014-21. The University of Michigan Transportation Research Institute.
Soares, N., Fallenstein, B., Armstrong, S., & Yudkowsky, E. (2015). Corrigibility. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence.
Tufekci, Z. (2016). Machine intelligence makes human morals more important. Ted.com. Retrieved 28 March 2017, from https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important#t-1049953
Wallach, W. (2014). A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. Basic Books.
Wallach, W., & Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.
World Economic Forum. (2015). Davos 2015 - A Brave New World. YouTube. Retrieved 28 March 2017, from https://www.youtube.com/watch?v=wGLJXO08IYo
World Economic Forum. (2016). The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution. Retrieved 6 April 2017, from http://www3.weforum.org/docs/WEF_Future_of_Jobs.pdf
World Economic Forum. (2017). Top 9 Ethical Issues in Artificial Intelligence. Retrieved 28 March 2017, from https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
Yampolskiy, R. V. (2013). Artificial intelligence safety engineering: Why machine ethics is a wrong approach. In Philosophy and Theory of Artificial Intelligence (pp. 389-396). Springer Berlin Heidelberg.
Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. M. Ćirković (Eds.), Global Catastrophic Risks (pp. 308-345). Oxford University Press.
Additional Reading – Extended Bibliography
Armstrong, S. (2015). Motivated Value Selection for Artificial Agents. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 28 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10183/10126
Beavers, A. F. (2011). Moral machines and the threat of ethical nihilism. In Robot Ethics: The Ethical and Social Implications of Robotics (p. 333). MIT Press.
Belloni, A., Berger, A., Boissier, O., Bonnet, G., Bourgne, G., Chardel, P. A., ... & Mermet, B. (2015). Dealing with Ethical Conflicts in Autonomous Agents and Multi-Agent Systems. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 28 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10109/10127
Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.
Bostrom, N. (2017). Interactions between the AI Control Problem and the Governance Problem. Future of Life Institute. Retrieved 28 March 2017, from https://futureoflife.org/wp-content/uploads/2017/01/Nick_Bostrom.pdf?x33688
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 316-334. Retrieved 29 March 2017, from http://intelligence.org/files/EthicsofAI.pdf
Collart, J., Gateau, T., Fabre, E., & Tessier, C. (2015). Human-Robot Systems Facing Ethical Conflicts: A Preliminary Experimental Protocol. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 29 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10120/10130
Guerini, M., Pianesi, F., & Stock, O. (2015). Is It Morally Acceptable for A System to Lie to Persuade Me? In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 29 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10108/10132
Hunter, A. (2015). On Keeping Secrets: Intelligent Agents and the Ethics of Information Hiding. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 29 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10076/10134
Kuipers, B. (2016). Human-Like Morality and Ethics for Robots. In AAAI-16 Workshop on Artificial Intelligence, Ethics and Society. Retrieved 30 March 2017, from http://www.aaai.org/ocs/index.php/WS/AAAIW16/paper/view/12581/12351
Scherer, M. U. (2016). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29(2), 354-398. doi:10.2139/ssrn.2609777