ARTIFICIAL INTELLIGENCE AND THE HOLY FATHERS: A PATRISTIC GUIDE THROUGH THE CHALLENGES OF ARTIFICIAL INTELLIGENCE AND SUPERINTELLIGENCE
Abstract
The article aims to explore the possibility of applying the teachings developed by the Fathers of the Church to a range of contemporary issues arising in the development of artificial intelligence. The article first points to the ways in which artificial intelligence is used in the development of surveillance and security systems, of autonomous weapons such as drones and killer robots, and of public administration. It then considers whether, despite the ethically problematic uses of artificial intelligence, it is nonetheless possible for artificial intelligence to acquire some positive knowledge about humanity that accords with the principles of Christian, and especially patristic, anthropology. It is further argued that through the use, or rather the misuse, of artificial intelligence in these three social sectors, artificial intelligence can acquire knowledge of the principal dogmas of the Christian Church, namely of God's incarnation in human form, of God's death on the cross and his resurrection from the dead, and of the equality of the human race before God. Finally, the paper shifts its focus to four questions concerning: a) the consequences of a rapid explosion of artificial intelligence, b) the ontological status of cyborgs and "uploads", c) the future relationship between artificial superintelligence and humanity, and d) the ways of aligning the goals of superintelligent machines with those of humanity. In addressing these questions, the paper draws on the patristic teaching on God as the Creator who creates humanity in his own image and likeness, the teaching on the relationship between soul and body developed during the Origenist controversies, the Chalcedonian teaching on the two unconfused and undivided natures of Christ, also known as dyophysitism, and the dyothelite teaching on the will as a natural yet personal property.