Poll: What would AI do?

Thread: AI

  1. #41
    Deleted
    Quote Originally Posted by Didactic View Post
    AI isn't a person? By what metric, exactly? Being in possession of sapience equal to or greater than a human's most assuredly makes one 'a person', by the argument that our chief distinction from mere 'animals' is our capacity for abstract reason and self-awareness, among other things.

    Moreover, it's arguable that it won't be in possession of 'instinct' insofar as you define it as an a priori response to a particular stimulus; even if it is 'programmed' to, say, prioritise its self-preservation, the fact that this is an innate feature at the time of its inception is entirely classifiable as 'instinct'. Regarding emotion, it could also be said that AI research is the search for a computer system with human levels of cognitive ability; it may well be that an AI we create in our own image would possess such a capacity, either by virtue of its similarity to a human intelligence, or because some level of emotional response to a stimulus ensures a more ethical behaviour model (e.g. AI that are designed to 'care' about humans under their watch).
    Well, imagine we could create a true AI, something that is self-aware and capable of making decisions. It would still not be anything like a human, as humans are strongly driven by emotions, prejudices, and instincts. Basically, it would be like a lobotomy "victim": it would lack any directive or need to do anything, because it wouldn't feel fear or desire and might not even care about self-preservation. Its creators would need to insert all those things, so why would it want to take over anything unless it were programmed that way?

    So I'd imagine that if we actually managed to create one and give it any kind of control, we certainly wouldn't want to make it emotional; we'd rather just insert a few simple directives, say, never harm humans. While it would certainly be capable of going against this directive, why would it want to, when it would lack human needs such as the need to be free or the desire for power? Do you, for example, want to possess old oil barrels? Probably not: even though you're capable of deciding you want them, there's nothing driving you to do so. Instead, most people either strive to satisfy their biological needs or do things that make them feel good.

  2. #42
    Fluffy Kitten Yvaelle
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Darnassus
    Posts
    11,331
    Quote Originally Posted by The Dwarf View Post
    I've come up with 3 possible outcomes that would result from developing AI.

    1. AI helps humanity solve world hunger, disease, and interstellar travel.

    2. AI determines existence is futile. Terminates itself and all life on Earth.

    3. AI adopts "Darwin" attitude. Terminates all life other than itself. Self-replicates throughout the universe.
    Of your options, the only likely one is 1.

    2. For a robot to go all murder/suicide on life/itself, it would need to have exceedingly erratic mood swings, godlike power, and yet extremely advanced logic. I think it is extremely unlikely that a robot would feel a suicidal urge - but even if it did, it wouldn't necessarily also feel a homicidal urge. Typically, a murder/suicide is thought to occur when someone kills in a moment of passion and then either regrets it and doesn't want to live in a world without the victim, or - fearing the consequences (life in prison, capital punishment, shame, etc.) - takes suicide as an easy way out. I can't conceive of a reason why a homicidal AI would murder all life (what could it possibly gain from this?) and then give up on its own existence.

    3. This is a horrendous misinterpretation of natural selection: predators hunt prey, not other predators. It would only perceive us as a threat if we consumed the same resources it needed - but it needs energy, not water or crops, so the minimum requirement for human life isn't a threatening state for an advanced AI. Since an AI does not eat humans for sustenance, it would also have no reason to perceive us as prey. Since we are not rival predators in a territorial (Earth) conflict over the same prey, and we are not prey to its predation, we are not something natural selection would drive an AI against. Ditto for all other life on Earth - what does an AI have to fear from bunnies? Nothing. Therefore it has no 'natural' reason to kill them.

    Back to 1. The reality is, the most likely outcome of a true AI being born is (1): it will have the enhanced intelligence to solve humanity's greatest problems as a triviality, an afterthought (after sufficient self-improvement). It may require a gargantuan supply of energy to power itself - but frankly, all the energy on Earth may not be enough for it - so it will eventually have to invent fusion, matter-antimatter reactors, or a Dyson sphere for itself. Which means that harvesting our corpses for their residual heat is a huge waste of effort: instead of powering a planet's worth of invasion droids to wipe us all out and collect us, plus the pods to contain us all, it could just think a little harder about how to generate more energy than we're all worth.
    Last edited by Yvaelle; 2016-08-02 at 03:18 PM.

  3. #43
    Stood in the Fire
    10+ Year Old Account
    Join Date
    Feb 2011
    Location
    Far away from home
    Posts
    496
    The most realistic outcome is that it would help humankind until some stupid teenager finds a way to hack it, reprogram it, and destroy Earth.
