When HAL Kills, Who's to Blame?
Edited by David G. Stork. Foreword by Arthur C. Clarke. Contents include "Supercomputer Design" (David J. Kuck).

We will then read a couple of papers on self-driving cars and the ethical questions they raise, including, but by no means limited to, those related to the Trolley Problem.

Week 3. The question of deception, and whether it is ever permissible to deceive, will be discussed. We will then read and discuss a paper that considers autonomous weapons and the deep moral concerns posed by war robots.

Week 4. We will end the module with excerpts from a recent book on the existential risk posed by the development of Artificial General Intelligence.

Written feedback on formative work will be provided two weeks after submission. Feedback on summative work will be provided before the end of term.

Artificial agents (AAs) can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil).

In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for a concept of moral agent that does not necessarily exhibit free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not artificial agents have mental states, feelings, emotions and so on.

The Level of Abstraction (LoA) is determined by the way in which one chooses to describe, analyse and discuss a system and its context. Agenthood, and in particular moral agenthood, depends on a LoA. Morality may be thought of as a threshold defined on the observables at a given LoA: an agent is morally good if its actions all respect that threshold, and it is morally evil if some action violates it.

That view is particularly informative when the agent constitutes a software or digital system and the observables are numerical. Finally, we review the consequences of our approach for Computer Ethics. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents.



