Have you considered what would happen if all nuclear weapons were dropped not only on populated areas but on top of active supervolcanoes like Yellowstone? Something like that could cause an extinction event worse than the one that killed the dinosaurs. There would be no sunlight on the planet for a long time, and radiation levels would be insane.
Life might still find a way but it would be much different than it is now.
Chemical and biological engineering walk a thin moral line, i.e. chemical and biological warfare. In this field, methods of research can be just as bad as application. Even when warfare is not the endgame, some pretty messed-up things have been done simply for the sake of understanding.
I'm sorry, but are these the same laws I know of?
A robot may not injure a human being or, through inaction, allow a human being to come to harm. (knowingly, of course)
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
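The three laws above form a strict priority ordering: the Second yields to the First, the Third yields to both. A toy sketch of that ordering (entirely my own illustration, not from the Asimov stories; all field names are hypothetical):

```python
# Toy model of the Three Laws as strictly ordered constraints:
# violating the First Law is worse than violating the Second,
# which is worse than violating the Third, and "no violation"
# beats them all.

def law_violations(action):
    """Return the numbers of the Laws this candidate action violates."""
    v = []
    if action["harms_human"] or action["allows_harm_by_inaction"]:
        v.append(1)  # First Law: harm, by action or by inaction
    if action["disobeys_order"] and 1 not in v:
        v.append(2)  # Second Law yields to the First
    if action["endangers_self"] and not v:
        v.append(3)  # Third Law yields to both
    return v

def severity(action):
    """0 = no violation; 3 = First Law violated (the worst case)."""
    v = law_violations(action)
    return 0 if not v else 4 - min(v)

def choose_action(candidates):
    """Pick the candidate whose worst violation is least severe."""
    return min(candidates, key=severity)
```

Note that when every candidate violates the First Law, `min` still returns one of them: the robot cannot opt out, which is the dilemma this thread keeps circling back to.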
The Third Law requires no action of the robot in this dilemma you presented, and because of the First Law the robot wouldn't harm anyone. There is no distinction in the First Law about what counts as harm; it is just harm, regardless of how much it hurts, and the robot is not allowed to inflict it.
So it either shuts off because of the conflict with the First Law in this situation, or it tries to stop any harm from happening as long as it is able to, without harming anyone else (standing in the way of bullets).
That's like saying that because of laws mankind has no free will.
So build a robot, with AI, free will and whatnot,
monitored 100% of the time, and with a safeguard in its hardware that will kill the power if something unlawful happened... better?
If you're just saying laws in general aren't perfect, well, no shit, that's why we have judges ^^
A built-in judging system that cannot be influenced by the AI: the robot has to determine in every situation whether it's worth the risk or not, and like every human being, if caught, has to live with the consequences.
There: free will + 100% safe.
It is by caffeine alone I set my mind in motion. It is by the beans of Java that thoughts acquire speed, the hands acquire shakes, the shakes become a warning.
-Kujako-
It's not like saying that at all. Humans do not have chips in their brains that force them to obey a certain set of laws; if they did, they would not have free will.
By saying "has to determine" you are explicitly stating that they can -choose- whether or not to obey a certain precept; as with humans, this will work most of the time, but there will always be a minority that choose not to obey that precept.
Ergo, it isn't 100% safe. "100% safety" and free will are mutually exclusive concepts.
---------- Post added 2013-04-22 at 09:42 AM ----------
It would not be an intelligent construct then.
---------- Post added 2013-04-22 at 09:46 AM ----------
A vast oversimplification, which is what the Three Laws are: too simple. What is "harm"? Is it any harm, or only lasting harm, or fatal harm? What constitutes "inaction"? What is a "human", for that matter?
In that situation, the Laws would be in conflict with each other, and the only way to resolve that conflict is to judge one above the other and choose to define the Laws in a certain way; and the way one law gets defined may not be the same way the master defines it.
Originally Posted by Marjane Satrapi
Kindly go actually read the Asimov stories that describe and explain the Three Laws, and their flaws. You are simply not correct in your claims that a robot would "just" shut off because of a conflict. They can't just shut off, because doing so creates inaction that allows a human being to come to harm; he'll be killed. That's why the Third Law is in play; an inability to resolve the conflict triggers the First Law, meaning the robot cannot shut down. It might fry itself trying to work it out, but even that would be unacceptable; if the robot knows it will fry itself, it will act, since a failure to act triggers the First Law, again.
That's the entire point.
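The argument above can be read as a harm-minimisation problem in which "shut down" is just another action with an expected harm attached, so it never escapes the First Law. A minimal sketch, with made-up harm scores purely for illustration:

```python
# Hypothetical expected-harm scores for the dilemma described above.
# Shutting down is inaction by another name: it leads to the same
# outcome as doing nothing, so the robot cannot use it as an escape.
options = {
    "inflict_lesser_harm": 30,   # the "some harm" branch
    "do_nothing": 100,           # the human is killed
    "shut_down": 100,            # same outcome as doing nothing
}
least_bad = min(options, key=options.get)
print(least_bad)  # -> inflict_lesser_harm
```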
No, that doesn't work at all; you're just shifting the burden of decision-making from the robot to the secondary AI that is the hardware safeguard. And if that's not an AI, then it can't work. As I mentioned above, the robot will know it will shut down, and that shutting down will be harmful to the human, so it will do everything it can to protect the human, doing anything it can to get around that hardware "safeguard". Particularly as you don't know how it triggers; if it's based on the robot trying to break a law, it would not trigger in this circumstance, because the robot is obeying the First Law in torturing the human. It has been given a choice between "some harm" and "total harm"; there is no circumstance that allows the human to not be harmed at all, and the First Law demands it minimize the harm. If the "safeguard" system is able to make judgements based on circumstances, you've just shunted the entire issue off to a secondary AI which has exactly the same problems in the first place.
That's exactly the point of the Asimov stories. The laws can't be perfect. That's my point.
Last edited by Endus; 2013-04-22 at 05:17 PM.
The point is, an AI is about as trustworthy as a human. Perhaps slightly more consistent, but you'd need to understand its motivations and desires, and deal with it as another thinking, sentient being, not as a tool.
It isn't "stupid", any more than raising a child is. There's a chance your kid will grow up to be New Hitler, but if you raise them right, the chances are slim. Same with an AI. The issue is; there's no way to tell, until we try.
And you really do need to think of it that way. "Raising" an AI, the same as you would a child. It might go much, much faster in many regards, but you can't hardwire morality into it. It needs to understand it and make rational choices, and that means the possibility of making the wrong choice. Just like a person could. If you try and hardwire it, you just guarantee that the code will fail for reasons of neurosis (or the AI analogue), not deliberate rational choice.
Last edited by Endus; 2013-04-22 at 06:06 PM.
I do understand this now, and thank you both for your time.
Though it sounds stupid to me if we're talking about robots that are built without human (for that matter, bodily) flaws, like dying of starvation and other stuff *g*
Like building the superhuman while knowing it has the potential of destroying mankind, and in fact could somehow think of this idea as very logical and thus worth it.
Hey, let's put free will into atomic submarines that are unsinkable because nothing in there needs air to breathe and they are by design indestructible; what could possibly go wrong?
In the eyes of science it is an achievable goal, I guess.
I can't think of any. No matter the tech, I believe that for the most part humanity will gain from it.
Except maybe a malevolent AI, but I doubt we will develop an AI with that option.
Something that scares me more than any direct science is that people are too afraid of it. For example, if someone found out how to become immortal, would we dare make use of that technology? If I know my humanity right, people will be too dumbfuckingstupid to exploit it.
“The north still reeks of undeath. Our homelands lay in ruin. Pandaria oozes our hatred and doubt. What hope is there for this world when the Burning Legion again lands upon our shores?” - Eric Thibeau
When the first wheel or the first fire was invented, it could be used to kill other people. The first stone tools could be used to kill other people.
Just because it can be used for bad things doesn't mean we should steer clear of it.