Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).
Kose, U., & Koc, D. (Eds.). (2014). Artificial intelligence applications in distance education. IGI Global.
Pavaloiu, A., & Kose, U. (2017). Ethical artificial intelligence: An open question. arXiv preprint arXiv:1706.03021.
Kose, U. (2017). Development of artificial intelligence based optimization algorithms (PhD thesis). Selcuk University, Institute of Natural Sciences, Department of Computer Engineering.
 Deperlioglu, O., Kose, U., Gupta, D., Khanna, A., & Sangaiah, A. K. (2020). Diagnosis of heart diseases by a secure Internet of Health Things system based on Autoencoder Deep Neural Network. Computer Communications, 162, 31-50.
Neill, D. B. (2013). Using artificial intelligence to improve hospital inpatient care. IEEE Computer Society.
Krittanawong, C., Zhang, H., Wang, Z., Aydar, M., & Kitai, T. (2017). Artificial intelligence in precision cardiovascular medicine. Journal of the American College of Cardiology, 69, 2657-2664.
Walton-Rivers, J., Williams, P. R., Bartle, R., Perez-Liebana, D., & Lucas, S. M. (2017). Evaluating and modelling Hanabi-playing agents. In 2017 IEEE Congress on Evolutionary Computation (CEC). IEEE.
Rajpurkar, P., Irvin, J., Zhu, K., et al. (2017). CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225.
 Pan, Y.H. (2016). Heading toward artificial intelligence 2.0. Engineering, 2(4), 409–413. http://dx.doi.org/10.1016/J.ENG.2016.04.018.
McFarland, M. (2017). Google uses AI to help diagnose breast cancer. Retrieved April 1, 2021, from http://money.cnn.com/2017/03/03/technology/google-breast-cancer-ai/.
Cheung, C. W., Tsang, I. T., & Wong, K. H. (2017). Robot avatar: A virtual tourism robot for people with disabilities. International Journal of Computer Theory and Engineering, 9(3), 229-234.
Johnson, K. W., Soto, J. T., Glicksberg, B. S., Shameer, K., Miotto, R., Ali, M., Ashley, E., & Dudley, J. T. (2018). Artificial intelligence in cardiology. Journal of the American College of Cardiology, 71(23), 2668-2679.
Park, S. H., & Han, K. (2018). Methodologic guide for evaluating clinical performance and effect of artificial intelligence technology for medical diagnosis and prediction. Radiology, 286, 800-809.
 Vassev, E. (2016). Safe artificial intelligence and formal methods. In International Symposium on Leveraging Applications of Formal Methods (pp. 704-713). Springer, Cham.
 Yampolskiy, R. V. (2016). Taxonomy of pathways to dangerous artificial intelligence. In Workshops at the Thirtieth AAAI Conference on Artificial Intelligence.
 Köse, U. (2018). Are we safe enough in the future of artificial intelligence? A discussion on machine ethics and artificial intelligence safety. BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 9(2), 184-197.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2014). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Hu, W., & Tan, Y. (2017). Generating adversarial malware examples for black-box attacks based on GAN. arXiv preprint arXiv:1702.05983.
 Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M. K. (2016). Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 24-28 October, Vienna, 1528-1540.
Liu, Y., Chen, X., Liu, C., & Song, D. (2016). Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770.
Liang, B., Li, H., Su, M., Bian, P., Li, X., & Shi, W. (2017). Deep text classification can be fooled. arXiv preprint arXiv:1704.08006.
Gümüş, F. (2019). Artificial intelligence applications, effects and future in museums (MS thesis). Istanbul University, Institute of Social Sciences, Istanbul.
Hosseini, H., Kannan, S., Zhang, B., & Poovendran, R. (2017). Deceiving Google's Perspective API built for detecting toxic comments. arXiv preprint arXiv:1702.08138.
 Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., and Swami, A. (2017). Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security (pp. 506-519). ACM.
Samanta, S., & Mehta, S. (2018). Generating adversarial text samples. In Advances in Information Retrieval: Proceedings of the 40th European Conference on Information Retrieval Research, 26–29 March, Grenoble, 744-749.
Ebrahimi, J., Lowd, D., & Dou, D. (2018). On adversarial examples for character-level neural machine translation. arXiv preprint arXiv:1806.09030.
Carlini, N., & Wagner, D. (2018). Audio adversarial examples: Targeted attacks on speech-to-text. In 2018 IEEE Security and Privacy Workshops (SPW) (pp. 1-7). IEEE.
 Akhtar, N., Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6, 14410-14430.
 Ilyas, A., Engstrom, L., Athalye, A., Lin, J. (2018). Black-box adversarial attacks with limited queries and information. arXiv preprint arXiv:1804.08598.
Su, J., Vargas, D. V., & Sakurai, K. (2017). One pixel attack for fooling deep neural networks. arXiv preprint arXiv:1710.08864.
 Vasiljevic, I., Chakrabarti, A., & Shakhnarovich, G. (2016). Examining the impact of blur on recognition by convolutional networks. arXiv preprint arXiv:1611.05760.
Zheng, S., Song, Y., Leung, T., & Goodfellow, I. (2016). Improving the robustness of deep neural networks via stability training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4480-4488).