Thread: scary science

  1. #81
    Deleted
    Quote Originally Posted by Endus View Post
    Ignoring the latter example, there really isn't.

    The first man-made object that entered space was a Nazi V-2 rocket. V-2s were ballistic missiles, intended to attack a target. That V-2 technology was the basis for pretty much the entire space race that came after WWII.
    Which was my point. It's the opposite of:

    Rockets were not created to shoot at each other; they were created for human progress toward space, in the hope of finding habitable places.

  2. #82
    Quote Originally Posted by squeeze View Post
    Not necessarily.

    You could not guarantee erasing all humanity, let alone all life or the planet, even with all existing bombs fired simultaneously at the same exact spot or spread across every city. This also includes fallout.

    You can destroy all of human civilization rather easily, though. It would take (significantly) less than a fifth of known global nuclear stockpiles (conservative global stocks: ~18,000 nukes at 300 kT average). This is especially the case if you target cities and maximize biological damage, e.g. use the MT+ weapons with large fallout and target nuclear power stations because of their waste.

    Life is surprisingly difficult to kill off once established. The Universe has been trying for 4+ billion years (Earth's age)! The only known way (based on today's science, which knows of no life not dependent on water) would be to entirely strip the Earth of its very, very thin atmospheric layer or boil/leak all the water off into space. These last two are more or less equivalent, since this is how scientists believe Mars lost its water.
    Have you considered what would happen if all nuclear weapons were dropped not only on populated areas but on top of active supervolcanoes like Yellowstone? Something like that could cause an extinction event worse than the one that killed the dinosaurs. There would be no sunlight on the planet for a long time, and radiation levels would be insane.

    Life might still find a way but it would be much different than it is now.
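    As a rough sanity check on the figures quoted above (a minimal sketch in Python; the ~18,000 warheads, 300 kT average, and the "a fifth of the stockpile" fraction are simply the numbers from the quoted post, not independently verified):

        # Back-of-the-envelope yield arithmetic using only the figures quoted above
        warheads = 18_000        # conservative global stockpile (quoted figure)
        avg_yield_mt = 0.3       # 300 kT average yield, expressed in megatons
        fraction_used = 1 / 5    # "less than a fifth" of the stockpile

        total_yield_mt = warheads * avg_yield_mt                    # ~5,400 MT if everything were fired
        civilization_breaking_mt = total_yield_mt * fraction_used   # ~1,080 MT

        print(f"Full stockpile: ~{total_yield_mt:,.0f} MT")
        print(f"A fifth of it:  ~{civilization_breaking_mt:,.0f} MT over ~{warheads * fraction_used:,.0f} warheads")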

  3. #83
    Merely a Setback PACOX's Avatar
    10+ Year Old Account
    Join Date
    Jul 2010
    Location
    ██████
    Posts
    26,360
    Chemical and biological engineering walk a thin moral line, i.e. chemical and biological warfare. In this field, the methods of research can be just as bad as the application. Even when warfare is not the end game, some pretty messed-up things have been done simply for the sake of understanding.

  4. #84
    The Unstoppable Force Mayhem's Avatar
    15+ Year Old Account
    Join Date
    Feb 2008
    Location
    pending...
    Posts
    23,952
    Quote Originally Posted by Endus View Post
    That's entirely the point. The First Law is incredibly insufficient. For instance, what if a robot is told to torture a human being? A first glance says that the robot will refuse, because First Law. Now, what if the robot is told that, if they do not torture the human for information, the human will simply be killed? And what if that information will save the lives of several other humans?

    Either the robot shorts out due to a conflict, or it makes a decision and takes the path of least harm, and starts torturing the guy, deliberately and with great zeal. It's an AI, not an automaton. Heck, if it suspects that it will get shorted out by a conflict, the Third Law requires that it make a decision to avoid that.
    I'm sorry, but are these the same laws I know of?

    A robot may not injure a human being or, through inaction, allow a human being to come to harm. (knowingly, of course)
    A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    The Third Law requires no action from the robot in the dilemma you presented, and because of the First Law the robot wouldn't harm anyone. The First Law doesn't define degrees of harm; harm is just harm, regardless of how much it hurts, and the robot is not allowed to cause it.
    So it either shuts off because of the First Law conflict in this situation, or it tries to stop any harm from happening for as long as it is able to, without harming anyone else (standing in the way of bullets).

    Quote Originally Posted by Didactic View Post
    The whole point is that you cannot build a robot to be 100% safe and give it free will at the same time.
    That's like saying that because of laws, mankind has no free will.

    So build a robot with AI, free will and whatnot,
    monitored 100% of the time, with a safeguard in its hardware that will kill the power if something unlawful happens... better?

    If you're just saying laws in general aren't perfect, well, no shit, that's why we have judges ^^

    A built-in judging system that cannot be influenced by the AI: the robot has to determine in every situation whether it's worth the risk or not and, like every human being, has to live with the consequences if caught.

    There: free will + 100% safe.
    Last edited by Mayhem; 2013-04-22 at 04:37 PM.
    Quote Originally Posted by ash
    So, look um, I'm not a grief counselor, but if it's any consolation, I have had to kill and bury loved ones before. A bunch of times actually.
    Quote Originally Posted by PC2 View Post
    I never said I was knowledge-able and I wouldn't even care if I was the least knowledge-able person and the biggest dumb-ass out of all 7.8 billion people on the planet.

  5. #85
    The Insane Kujako's Avatar
    10+ Year Old Account
    Join Date
    Oct 2009
    Location
    In the woods, doing what bears do.
    Posts
    17,987
    Quote Originally Posted by Didactic View Post
    The whole point is that you cannot build a robot to be 100% safe and give it free will at the same time.
    Sure I could. I would just make it an inanimate cube, so all it could do with its free will was pray for a death that would never come.
    It is by caffeine alone I set my mind in motion. It is by the beans of Java that thoughts acquire speed, the hands acquire shakes, the shakes become a warning.

    -Kujako-

  6. #86
    Void Lord Elegiac's Avatar
    10+ Year Old Account
    Join Date
    Oct 2012
    Location
    Aelia Capitolina
    Posts
    59,345
    Quote Originally Posted by Mayhem View Post
    That's like saying that because of laws, mankind has no free will.

    So build a robot with AI, free will and whatnot,
    monitored 100% of the time, with a safeguard in its hardware that will kill the power if something unlawful happens... better?

    If you're just saying laws in general aren't perfect, well, no shit, that's why we have judges ^^
    It's not like saying that at all. Humans do not have chips in their brains that force them to obey a certain set of laws; if they did, they would not have free will.

    A built-in judging system that cannot be influenced by the AI: the robot has to determine in every situation whether it's worth the risk or not and, like every human being, has to live with the consequences if caught.

    There: free will + 100% safe.
    By saying "has to determine" you are explicitly stating that they can -choose- whether or not to obey a certain precept; as with humans this will work most of the time, but there will always be a minority that choose not to obey that precept.

    Ergo, it isn't 100% safe. "100% safety" and free will are mutually exclusive concepts.

    ---------- Post added 2013-04-22 at 09:42 AM ----------

    Quote Originally Posted by Kujako View Post
    Sure I could. I would just make it an inanimate cube, so all it could do with its free will was pray for a death that would never come.
    It would not be an intelligent construct then.

    ---------- Post added 2013-04-22 at 09:46 AM ----------

    Quote Originally Posted by Mayhem View Post
    I'm sorry, but are these the same laws I know of?

    A robot may not injure a human being or, through inaction, allow a human being to come to harm. (knowingly, of course)
    A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    The Third Law requires no action from the robot in the dilemma you presented, and because of the First Law the robot wouldn't harm anyone. The First Law doesn't define degrees of harm; harm is just harm, regardless of how much it hurts, and the robot is not allowed to cause it.
    So it either shuts off because of the First Law conflict in this situation, or it tries to stop any harm from happening for as long as it is able to, without harming anyone else (standing in the way of bullets).
    A vast oversimplification, which is what the Three Laws are: too simple. What is "harm"? Is it any harm, only lasting harm, or fatal harm? What constitutes "inaction"? What is a "human", for that matter?

    In that situation, the Laws would be in conflict with each other, and the only way to resolve that conflict is to rank one above the other and choose to define the Laws in a certain way; and the way the robot defines a law may not be the way its master defines it.
    Quote Originally Posted by Marjane Satrapi
    The world is not divided between East and West. You are American, I am Iranian, we don't know each other, but we talk and understand each other perfectly. The difference between you and your government is much bigger than the difference between you and me. And the difference between me and my government is much bigger than the difference between me and you. And our governments are very much the same.

  7. #87
    I Don't Work Here Endus's Avatar
    10+ Year Old Account
    Join Date
    Feb 2010
    Location
    Ottawa, ON
    Posts
    79,187
    Quote Originally Posted by Mayhem View Post
    I'm sorry, but are these the same laws I know of?

    A robot may not injure a human being or, through inaction, allow a human being to come to harm. (knowingly, of course)
    A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    The Third Law requires no action from the robot in the dilemma you presented, and because of the First Law the robot wouldn't harm anyone. The First Law doesn't define degrees of harm; harm is just harm, regardless of how much it hurts, and the robot is not allowed to cause it.
    So it either shuts off because of the First Law conflict in this situation, or it tries to stop any harm from happening for as long as it is able to, without harming anyone else (standing in the way of bullets).
    Kindly go actually read the Asimov stories that describe and explain the Three Laws, and their flaws. You are simply not correct in your claims that a robot would "just" shut off because of a conflict. They can't just shut off, because doing so creates inaction that allows a human being to come to harm; he'll be killed. That's why the Third Law is in play; an inability to resolve the conflict triggers the First Law, meaning the robot cannot shut down. It might fry itself trying to work it out, but even that would be unacceptable; if the robot knows it will fry itself, it will act, since a failure to act triggers the First Law, again.

    That's the entire point.

    so build a robot, with ai, free will and whatnot
    100% of the time monitored and in its hardware a safeguard that will kill the power if something unlawfull happened... better?
    No, that doesn't work at all; you're just shifting the burden of decision-making from the robot to the secondary AI that is the hardware safeguard. And if that's not an AI, then it can't work. As I mentioned above, the robot will know it will shut down, and that shutting down will be harmful to the human, so it will do everything it can to protect the human, including anything it can to get around that hardware "safeguard". Particularly as you don't know how it triggers; if it's based on the robot trying to break a law, it would not trigger in this circumstance, because the robot is obeying the First Law in torturing the human. It has been given a choice between "some harm" and "total harm"; there is no circumstance that allows the human to not be harmed at all. The First Law demands it minimize the harm. If the "safeguard" system is able to make judgements based on circumstances, you've just shunted the entire issue off to a secondary AI with exactly the same problems as the first.


    if you´re just saying laws in generel aren´t perfect, well no shit, that´s why we have judges ^^
    That's exactly the point of the Asimov stories. The laws can't be perfect. That's my point.
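    To make that concrete, here is a toy sketch of the choice being described (purely illustrative; the actions and harm scores are invented for this example and come from neither Asimov nor anyone in this thread):

        # Toy model of the dilemma: every available option, including shutting down,
        # carries some harm to a human, so a harm-minimizing robot cannot escape by
        # doing nothing. The numbers are made up solely to illustrate the ranking.
        actions = {
            "refuse and shut down": 10,        # inaction: the prisoner is simply killed
            "stall and do nothing": 10,        # same outcome as shutting down
            "torture for the information": 4,  # some harm, but the prisoner and several others survive
        }

        def first_law_choice(options):
            """Pick the action with the least harm to humans."""
            return min(options, key=options.get)

        print(first_law_choice(actions))  # -> "torture for the information"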
    Last edited by Endus; 2013-04-22 at 05:17 PM.


  8. #88
    The Unstoppable Force Mayhem's Avatar
    15+ Year Old Account
    Join Date
    Feb 2008
    Location
    pending...
    Posts
    23,952
    But then building an AI would be stupid, wouldn't it?
    Quote Originally Posted by ash
    So, look um, I'm not a grief counselor, but if it's any consolation, I have had to kill and bury loved ones before. A bunch of times actually.
    Quote Originally Posted by PC2 View Post
    I never said I was knowledge-able and I wouldn't even care if I was the least knowledge-able person and the biggest dumb-ass out of all 7.8 billion people on the planet.

  9. #89
    Void Lord Elegiac's Avatar
    10+ Year Old Account
    Join Date
    Oct 2012
    Location
    Aelia Capitolina
    Posts
    59,345
    Quote Originally Posted by Mayhem View Post
    But then building an AI would be stupid, wouldn't it?
    It might be, it might not be. It depends.
    Quote Originally Posted by Marjane Satrapi
    The world is not divided between East and West. You are American, I am Iranian, we don't know each other, but we talk and understand each other perfectly. The difference between you and your government is much bigger than the difference between you and me. And the difference between me and my government is much bigger than the difference between me and you. And our governments are very much the same.

  10. #90
    I Don't Work Here Endus's Avatar
    10+ Year Old Account
    Join Date
    Feb 2010
    Location
    Ottawa, ON
    Posts
    79,187
    Quote Originally Posted by Mayhem View Post
    But then building an AI would be stupid, wouldn't it?
    The point is, an AI is about as trustworthy as a human. Perhaps slightly more consistent, but you'd need to understand its motivations and desires, and deal with it as another thinking, sentient being, not as a tool.

    It isn't "stupid", any more than raising a child is. There's a chance your kid will grow up to be New Hitler, but if you raise them right, the chances are slim. Same with an AI. The issue is; there's no way to tell, until we try.

    And you really do need to think of it that way: "raising" an AI, the same as you would a child. It might go much, much faster in many regards, but you can't hardwire morality into it. It needs to understand morality and make rational choices, and that means the possibility of making the wrong choice, just like a person could. If you try to hardwire it, you just guarantee that the code will fail for reasons of neurosis (or the AI analogue), not deliberate rational choice.
    Last edited by Endus; 2013-04-22 at 06:06 PM.


  11. #91
    Deleted
    Quote Originally Posted by pacox View Post
    Chemical and biological engineering walk a thin moral line, i.e. chemical and biological warfare. In this field, the methods of research can be just as bad as the application. Even when warfare is not the end game, some pretty messed-up things have been done simply for the sake of understanding.
    In science, there is no morality, only progress. Science doesn't care what humans think about it; science is badass.

  12. #92
    The Unstoppable Force Mayhem's Avatar
    15+ Year Old Account
    Join Date
    Feb 2008
    Location
    pending...
    Posts
    23,952
    I do understand this now, and thank you both for your time.

    Though it sounds stupid to me if we're talking about robots that are built without human (for that matter, bodily) flaws, like dying of starvation and other stuff *g*
    Like building a superhuman while knowing it has the potential to destroy mankind, and in fact could somehow find that very idea logical and thus worth it.

    Hey, let's put free will into atomic submarines that are unsinkable because nothing in there needs air to breathe and that are by design indestructible; what could possibly go wrong?

    In the eyes of science it is an achievable goal, I guess.
    Quote Originally Posted by ash
    So, look um, I'm not a grief counselor, but if it's any consolation, I have had to kill and bury loved ones before. A bunch of times actually.
    Quote Originally Posted by PC2 View Post
    I never said I was knowledge-able and I wouldn't even care if I was the least knowledge-able person and the biggest dumb-ass out of all 7.8 billion people on the planet.

  13. #93
    The Patient Nario64's Avatar
    10+ Year Old Account
    Join Date
    Jul 2012
    Location
    Vancouver, BC, CANADA Eh?
    Posts
    229
    Quote Originally Posted by c2dholla619 View Post
    There are many good things that science can offer, but then there is the science that can destroy humanity if the wrong guys learn how to do it.

    Nanotechnology is just pure craziness that should never be done unless we have global peace (good luck with that). These tiny machines, a billionth of a meter in size, could be sent into the brain and practically turn you into whatever the designer programmed the nanomachines to do. From the very little I know of this tech, it just seems like a crazy science.

    What are some other scary forms of science?
    Fear is bred through ignorance.

  14. #94
    I can't think of any. No matter the tech, I believe that for the most part humanity will gain from it.
    Except maybe a malevolent AI, but I doubt we will develop an AI with that option.

    Something that scares me more than any particular science is that people are too afraid of it. For example, if someone found out how to become immortal, would we dare make use of that technology? If I know my humanity right, people will be too dumbfuckingstupid to exploit it.
    “The north still reeks of undeath. Our homelands lay in ruin. Pandaria oozes our hatred and doubt. What hope is there for this world when the Burning Legion again lands upon our shores?” - Eric Thibeau

  15. #95
    Deleted
    When the first wheel or the first fire was invented, it could be used to kill other people. The first stone tools could be used to kill other people.

    Just because it can be used for bad things doesn't mean we should steer clear of it.
