Ethical considerations surrounding Artificial Intelligence

Artificial Intelligence has become one of the most popular subjects in computer science, and many students are moving into the field. Researchers in Artificial Intelligence (AI) regularly use evolutionary algorithms to generate models of cognition. This generally involves a vast number of different versions being propagated, tested, modified, discarded, and propagated again until something suitably well-developed emerges from the mess.

Assuming that AI research comes to create increasingly human-like cognizers and continues to use evolutionary methods to develop them, will there be a point at which this kind of process begins to count as a sort of holocaust of innocent intelligences? Sure, the fittest survive and reproduce, but that is akin to endorsing the harshest of eugenics programs. Generation after generation is destroyed and created anew at a furious pace. Would a sufficiently intelligent AI in the midst of this have the right to live out its artificial life on a more human timescale? (Indeed, its artificial life could presumably extend far beyond that timescale if ‘lived to the fullest’, which is also a right we generally accord to people.)

The opposite impulse is to say that we owe nothing to our mere creations. But then the question arises why an intelligence created by recombining DNA and gestating in a womb merits such vastly greater consideration than one created by writing a program (or, if you really want to anthropomorphize all the way, created by recombining older code and ‘gestating’ in a virtual testing environment).

(Consider this a jumping-off point for discussing the ethical considerations surrounding artificial, man-made lives and intelligences in general, and robots in particular.)





11 thoughts on “Ethical considerations surrounding Artificial Intelligence”

  1. Will post my views. Nice topic BT.
    Send it to PPC, SG and PDG sirs. Let us hear from them on the same. 🙂


  2. It’s not wrong to essentially run a factory of humans who are continually being birthed and killed in order to create one who’s really good at some experimental task?

    To be more precise: the AIs are presumably being activated, have to do their best at, let’s say, some tricky problem-solving task, and are then deleted or reproduced immediately thereafter. Would it really be OK to do something analogous to a human being?


  3. I think a distinction between sentience and intelligence might have some bearing. A sentient being can feel pain and fear. An AI probably can’t.
    I say this considering the primitive ones; I am not up to date with the newer research.


  4. Thanks Bhaskar for sending me the link. Interesting one.

    AI being my own research subject, I still have my reservations about the matter you mentioned.

    I’m a pragmatist. I support whatever social practices work to reduce suffering for the greatest number of sentient beings. I define sentience as the ability to feel pain. No software will be able to do so for at least a few decades; that is my personal opinion, based on my long experience in the subject.

    If I’m proved wrong and a pain-feeling software AI arises in the next decade, then ethicists will have to work at defining the rights of software entities. I still support minimizing suffering, as long as reasonable evidence of suffering can be adduced.

    P.S.: Bhaskar, when did you start taking an interest in this subject? As far as I know, you took every possible step to avoid AI every semester.

    Again, good discussion. Would love to see all your views. Arpan, doing something analogous to a human would be suicidal and utterly unethical. There are many who say ‘Why not?’ To them I can only say: if you want to bite that bullet, that’s fine with me. But I am not going to be the one, and I would keep everyone else in my league out of it too.


  5. Thanks Sir for your comments. 🙂

    Well, I feel it just depends on which ones we’re considering. Also, I’m not so sure it’s clear from cognitive science that an AI could be made genuinely intelligent yet not at all sentient; that’s all seriously debatable. But it is indisputable that any AI that honestly approximated human cognitive capabilities, say, could pass the Turing Test as easily and naturally as a person does, would be FAR from ‘primitive’.


  6. I don’t know how we will ever decide a given AI is sentient and deserving of legal rights. I think our current technology is still quite a way from needing to make such determinations.


  7. The question arises: can the concept of “person” apply to non-humans?

    And don’t you all think that looking at intelligence alone is misguided?

    Arnab Banerjee


  8. Well Pravu, if it is (and I would not want to accept that definition myself), then the question has just shifted a level, to my mind. In that case I’d want to know what the relevant attributes of soul are.


  9. Boys, you all seem to be going off in a different direction now. Ah yes, Bhaskar, you never shared your love for AI with us. This was never the impression you gave during your student days. Need to catch up with you and get updated.

    Coming back to the topic: can all those who want to put the two on par, the AI-created robot and the human, answer a simple question? We make, break, and remake the AI one as we need. So one could ask whether an intelligent human who could not feel pain or fear would be considered this disposable. If not, then why put the two on par: one a machine, the other perhaps also a machine, but one full of emotions and, yes, even a soul.



  10. Nice viewpoints by all. Here’s an abstract from an article on the morality of robotic labour. Interesting one.

    A Moral Paradox in the Creation of Artificial Intelligence: by Mark Walker, Trinity College, University of Toronto.


    For moral reasons, we should not (now or in the future) create robots to replace humans in every undesirable job. At least some of the labour we might hope to avoid will require human-equivalent intelligence. If we make machines with human-equivalent intelligence then we must start thinking about them as our moral equivalents. If they are our moral equivalents then it is prima facie wrong to own them, or design them for the express purpose of doing our labour; for this would be to treat them as slaves, and it is wrong to treat our moral equivalents as slaves.

