Ethical Artificial Intelligence - An Open Question


Alice Pavaloiu, Utku Kose

Abstract

Artificial Intelligence (AI) is an effective science that employs powerful approaches, methods, and techniques to solve real-world problems that would otherwise remain intractable. Because of its unstoppable rise, there are also ongoing discussions about its ethics and safety. Shaping an AI-friendly environment for people and a people-friendly environment for AI may be one way to establish a shared context of values for both humans and robots. In this context, the objective of this paper is to address the ethical issues of AI and to explore the moral dilemmas that arise from ethical algorithms, whether their values are pre-set or acquired. In addition, the paper focuses on the subject of AI safety. Overall, the paper briefly analyzes these concerns, discusses potential solutions to the ethical issues presented, and aims to increase readers’ awareness of AI safety as a related research interest.

Article Details

How to Cite
PAVALOIU, Alice; KOSE, Utku. Ethical Artificial Intelligence - An Open Question. Journal of Multidisciplinary Developments, [S.l.], v. 2, n. 2, p. 15-27, apr. 2017. ISSN 2564-6095. Available at: <http://jomude.com/index.php/jomude/article/view/43>. Date accessed: 25 nov. 2017.
Section
Natural Sciences - Regular Research Paper

References

Abbeel, P., & Ng, A. Y. (2011). Inverse reinforcement learning. In Encyclopedia of machine learning (pp. 554-558). Springer US.

Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7(3), 149-155.

Balakrishnan, A., & Lovelace Jr., B. (2017). IBM CEO: Jobs of the future won't be blue or white collar, they'll be 'new collar'. CNBC.com. Retrieved 2 April 2017, from http://www.cnbc.com/2017/01/17/ibm-ceo-says-ai-will-be-a-partnership-between-man-and-machine.html

Bostrom, N. (2002). Existential risks. Journal of Evolution and Technology, 9(1), 1-31.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. OUP Oxford.

Bostrom, N. (2015). What happens when our computers get smarter than we are? Retrieved 3 April 2017, from https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are

Bostrom, N., Dafoe, A., & Flynn, C. (2016). Policy Desiderata in the Development of Machine Superintelligence. Retrieved 17 April 2017, from https://www.fhi.ox.ac.uk/wp-content/uploads/Policy-Desiderata-in-the-Development-of-Machine-Superintelligence.pdf

Breazeal, C. (2002). Regulation and entrainment in human-robot interaction. The International Journal of Robotics Research, 21(10-11), 883-902. Retrieved 28 March 2017, from http://groups.csail.mit.edu/lbr/hrg/2001/ijrr.pdf

Cassell, J. (2016). Davos 2016 - Issue Briefing: Infusing Emotional Intelligence into AI. YouTube. Retrieved 28 March 2017, from https://www.youtube.com/watch?v=YV4fTOSpUxs

Cornell University. (2011). Approach to the Problem (IRL). Cornell University – Blog – Machine Learning for ICS (applying machine learning algorithms to the effort of computational sustainability). Retrieved 17 April 2017, from http://blogs.cornell.edu/ml4ics/2011/05/09/approach-to-the-problem-irl/

Evans, O., & Goodman, N. D. (2015). Learning the preferences of bounded agents. In NIPS 2015 Workshop on Bounded Optimality.

Evans, O., Stuhlmüller, A., & Goodman, N. D. (2015). Learning the preferences of ignorant, inconsistent agents. arXiv preprint arXiv:1512.05832.

European Union. (2016). European Parliament - Committee on Legal Affairs - Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics. Retrieved 28 March 2017, from http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN

Goertzel, B. (2007). Human-level artificial general intelligence and the possibility of a technological singularity: A reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil. Artificial Intelligence, 171(18), 1161-1173.

Goodfellow, I., Papernot, N., Huang, S., Duan, Y., Abbeel, P., & Clark, J. (2017). Attacking Machine Learning with Adversarial Examples. Open AI – Blog Web Site. Retrieved 16 April 2017, from https://blog.openai.com/adversarial-example-research/

Kurzweil, R. (2005). The Singularity is Near: When Humans Transcend Biology. Penguin.

Lewis, R. L., Howes, A., & Singh, S. (2014). Computational rationality: Linking mechanism and behavior through bounded utility maximization. Topics in Cognitive Science, 6(2), 279-311.

Lindh, C. (2016). Robots: Lifesavers or Terminators? CNN.com International. Retrieved 30 March 2017, from http://edition.cnn.com/2016/09/24/world/robot-morality-machine-ethics/

MIT Technology Review. (2014). Do we need Asimov’s laws? Retrieved November 19, 2016, from https://www.technologyreview.com/s/527336/do-we-need-asimovs-laws/

Morris, D. Z. (2016). U.N. Moves Towards Possible Ban on Autonomous Weapons. Fortune.com – Artificial Intelligence. Retrieved January 26, 2017 from http://fortune.com/2016/12/24/un-ban-autonomous-weapons/

Nadella, S. (2017). Davos 2017 - Artificial Intelligence. YouTube. Retrieved 28 March 2017, from https://www.youtube.com/watch?v=iqzdD_n-bOs

Naughton, K. (2015). Should a Driverless Car Decide Who Lives or Dies? Bloomberg.com. Retrieved 2 April 2017, from https://www.bloomberg.com/news/articles/2015-06-25/should-a-driverless-car-decide-who-lives-or-dies-in-an-accident-

Ng, A. Y., & Russell, S. J. (2000). Algorithms for inverse reinforcement learning. In Proceedings of ICML (pp. 663-670).

Orseau, L., & Armstrong, S. (2016). Safely interruptible agents. In Uncertainty in Artificial Intelligence: 32nd Conference (UAI 2016), edited by Alexander Ihler and Dominik Janzing (pp. 557-566).

Pavaloiu, A. (2016). The Impact of Artificial Intelligence on Global Trends. Journal of Multidisciplinary Developments, 1(1), 21-37.

Russell, S. (2016). Rationality and intelligence: A brief update. In Fundamental Issues of Artificial Intelligence (pp. 7-28). Springer International Publishing.

Russell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach. New Jersey: Pearson.

Schoettle, B., & Sivak, M. (2014). A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the U.S., the U.K., and Australia. Report No. UMTRI-2014-21. The University of Michigan Transportation Research Institute.

Soares, N., Fallenstein, B., Armstrong, S., & Yudkowsky, E. (2015). Corrigibility. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence.

Tufekci, Z. (2016). Machine intelligence makes human morals more important. Ted.com. Retrieved 28 March 2017, from https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important#t-1049953

Wallach, W. (2014). A dangerous master: How to keep technology from slipping beyond our control. Basic Books.

Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.

World Economic Forum. (2015). Davos 2015 - A Brave New World. YouTube. Retrieved 28 March 2017, from https://www.youtube.com/watch?v=wGLJXO08IYo

World Economic Forum. (2016). The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution. Retrieved 6 April 2017, from http://www3.weforum.org/docs/WEF_Future_of_Jobs.pdf

World Economic Forum. (2017). Top 9 Ethical Issues in Artificial Intelligence. Retrieved 28 March 2017, from https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

Yampolskiy, R. V. (2013). Artificial intelligence safety engineering: Why machine ethics is a wrong approach. In Philosophy and Theory of Artificial Intelligence (pp. 389-396). Springer Berlin Heidelberg.

Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. Global Catastrophic Risks, 1(303), 184.


Additional Reading – Extended Bibliography

Armstrong, S. (2015). Motivated Value Selection for Artificial Agents. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 28 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10183/10126

Beavers, A. F. (2011). Moral machines and the threat of ethical nihilism. In Robot Ethics: The Ethical and Social Implications of Robotics, 333.

Belloni, A., Berger, A., Boissier, O., Bonnet, G., Bourgne, G., Chardel, P. A., ... & Mermet, B. (2015). Dealing with Ethical Conflicts in Autonomous Agents and Multi-Agent Systems. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 28 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10109/10127

Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.

Bostrom, N. (2017). Interactions between the AI Control Problem and the Governance Problem | Nick Bostrom. Future of Life Institute. Retrieved 28 March 2017, from https://futureoflife.org/wp-content/uploads/2017/01/Nick_Bostrom.pdf?x33688

Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 316-334. Retrieved 29 March 2017, from http://intelligence.org/files/EthicsofAI.pdf

Collart, J., Gateau, T., Fabre, E., & Tessier, C. (2015). Human-Robot Systems Facing Ethical Conflicts: A Preliminary Experimental Protocol. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 29 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10120/10130

Guerini, M., Pianesi, F., & Stock, O. (2015). Is It Morally Acceptable for a System to Lie to Persuade Me? In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 29 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10108/10132

Hunter, A. (2015). On Keeping Secrets: Intelligent Agents and the Ethics of Information Hiding. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. Retrieved 29 March 2017, from http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10076/10134

Kuipers, B. (2016). Human-Like Morality and Ethics for Robots. In AAAI-16 Workshop on Artificial Intelligence, Ethics and Society. Retrieved 30 March 2017, from http://www.aaai.org/ocs/index.php/WS/AAAIW16/paper/view/12581/12351

Scherer, M. U. (2016). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29(2), 354–398. doi:10.2139/ssrn.2609777