  1. #1
    Deleted

    Focus: Artificial Intelligence

    Hi again folks. I've been in my own state of deep thinking here for a while. I'm interested in space travel and how to make it as efficient as possible.
    I've watched a number of YouTube videos explaining how our current and theoretical means of travel will take a looooong time to develop, and how we still need countermeasures against the radiation from fusion-propelled spacecraft.

    This would take far too long for investors to do what they do best: invest. I personally wouldn't invest my money in something I will never see in action during my lifetime. It sounds selfish, and it probably is, but that's who I am.

    Why don't we invest all of our time and money into Artificial Intelligence? Develop it in such a way that it teaches itself to the point that it surpasses human intelligence. That way, you could give the AI time to read every scientific journal from Einstein, Newton and every other famous scientist who made breakthroughs in their respective fields. With its extreme intelligence, the AI should be able to find loopholes or fix the small errors that would make some debunked theories efficient and viable again. Give the AI the responsibility of clearing the huge roadblocks that we keep running into.

    We could send AI probes to the planets in our solar system on a mission to figure out what is what, and what we can do with it.

    TL;DR Instead of putting huge amounts of time and money into testing theories and building inefficient spacecraft, why not put that money and time into developing Artificial Intelligence that can draw up flawless blueprints instead?

    My brain will go offline for a while. Looking forward to reading the responses. Please leave constructive comments.

  2. #2
    Deleted
    Watch the Terminator movies, then rethink this thread.

  3. #3
    Deleted
    Because, theoretically, it would take roughly the same amount of time to create AI and a system capable of running it.

  4. #4
    The problem isn't radioactivity or any other space travel hardship. We already have the technology to build a self-sustaining spaceship if we wanted to invest the resources in it. The problem is that the human lifespan is currently too short. Once we figure out aging mechanisms and how to override them, we'll easily go from an average lifespan of 90 to 1000+, and that would make travelling long distances much more achievable.

  5. #5
    Deleted
    Quote Originally Posted by RICH8472 View Post
    Watch the Terminator movies, then rethink this thread.
    We would obviously develop golden rules for the AI while creating it.

  6. #6
    The Patient crazymack's Avatar
    10+ Year Old Account
    Join Date
    Sep 2011
    Location
    <--- that way
    Posts
    329
    Here's a better idea. This is from an article I found a while back; you can find the article here.

    YORKTOWN HEIGHTS, NY – When it came time for Thomas Malone, Director of MIT’s Center for Collective Intelligence, to address the crowd of cognitive computing enthusiasts today at IBM's research colloquium, he began his talk with a quote.

    “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”
    Malone first read that statement, written by the computer scientist J. C. R. Licklider in 1960, as a college student, and he said it inspired his subsequent career researching human-computer symbiosis.

    “In the traditional vision of artificial intelligence,” he told his audience, “if people are involved that's cheating. But in this vision if people are involved that's the point.”

  7. #7
    Deleted
    Quote Originally Posted by The Real Greenbean View Post
    We would obviously develop golden rules for the AI while creating it.
    You specifically mention that this AI can learn and improve itself; if that is the case, then it will eventually create its own rules.

  8. #8
    Deleted
    Quote Originally Posted by Romeo83x View Post
    The problem isn't radioactivity or any other space travel hardship. We already have the technology to build a self-sustaining spaceship if we wanted to invest the resources in it. The problem is that the human lifespan is currently too short. Once we figure out aging mechanisms and how to override them, we'll easily go from an average lifespan of 90 to 1000+.
    Yes, but the whole idea of using our current technology is impractical. Go away, come back, relatives dead, mission abandoned, government replaced, planet destroyed (a few examples). We need something that can take us to our nearest stars in a maximum of 20-30 years, imo, to avoid things changing drastically in the same timespan it takes a conventional ship to reach its destination.

    - - - Updated - - -

    Quote Originally Posted by RICH8472 View Post
    You specifically mention that this AI can learn and improve itself; if that is the case, then it will eventually create its own rules.
    If you program the AI specifically to prevent this, it shouldn't. Just because humans have the trait of disobeying orders doesn't mean an AI couldn't be prevented from doing so.

    Oh, and it's in our nature to be evil or good. It's our morals and ethics that make us what we are; it's encoded in our DNA. Robots wouldn't have DNA and wouldn't know of evil unless programmed to.
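
    Purely as a toy illustration (everything below is hypothetical, invented just to show the shape of the idea, not how a real AI would be built), the "golden rules" would live in a fixed veto layer between whatever the AI plans and what it actually does, outside the part of the system that learns:

    Code:
    # Toy sketch of hard-coded "golden rules" sitting between an AI's
    # planner and its actuators. The Action fields and rule set are
    # hypothetical; the point is only that the veto layer is not part
    # of the learnable system, so self-improvement can't rewrite it.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        harms_human: bool
        disobeys_order: bool

    # Fixed predicates, outside whatever the AI learns or modifies.
    GOLDEN_RULES = (
        lambda a: not a.harms_human,     # never harm a human
        lambda a: not a.disobeys_order,  # never disobey a direct order
    )

    def execute(action: Action) -> None:
        """Run the action only if every golden rule permits it."""
        if all(rule(action) for rule in GOLDEN_RULES):
            print(f"executing: {action.name}")
        else:
            print(f"vetoed: {action.name}")

    execute(Action("deliver supplies", harms_human=False, disobeys_order=False))
    execute(Action("ignore shutdown order", harms_human=False, disobeys_order=True))

    The catch this toy dodges entirely, of course, is deciding in the real world whether an action actually "harms a human" in the first place.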

  9. #9
    Dreadlord yoma's Avatar
    10+ Year Old Account
    Join Date
    Nov 2011
    Location
    The Dark Tower
    Posts
    915
    Quote Originally Posted by The Real Greenbean View Post
    We would obviously develop golden rules for the AI while creating it.
    In that case, please watch "I, Robot."

  10. #10
    The Patient crazymack's Avatar
    10+ Year Old Account
    Join Date
    Sep 2011
    Location
    <--- that way
    Posts
    329
    Quote Originally Posted by The Real Greenbean View Post
    Yes, but the whole idea of using our current technology is impractical. Go away, come back, relatives dead, mission abandoned, government replaced, planet destroyed (a few examples). We need something that can take us to our nearest stars in a maximum of 20-30 years, imo, to avoid things changing drastically in the same timespan it takes a conventional ship to reach its destination.

    - - - Updated - - -

    If you program the AI specifically to prevent this, it shouldn't. Just because humans have the trait of disobeying orders doesn't mean an AI couldn't be prevented from doing so.
    It would be a one-way trip, for the most part.

  11. #11
    Sources from YouTube? I stopped reading there.
    Quote Originally Posted by vep View Post
    Are you really looking for logic in a game that sends you dragons via the mail service?...

  12. #12
    Banned Kellhound's Avatar
    10+ Year Old Account
    Join Date
    Jul 2013
    Location
    Bank of the Columbia
    Posts
    20,935
    Isn't AI just giving a blonde hair dye?

    Seriously, given how screwed up most programs are, AI is a disaster waiting to happen.

  13. #13
    Deleted
    Quote Originally Posted by yoma View Post
    In that case, please watch "I, Robot."
    In I, Robot, the things that happened had to do with corruption, not robotic revolts.

    - - - Updated - - -

    Quote Originally Posted by DPA View Post
    Sources from YouTube? I stopped reading there.
    Ehum... these are my own questions. I'm not stating facts. Don't be ridiculous.

  14. #14
    Void Lord Aeluron Lightsong's Avatar
    10+ Year Old Account
    Join Date
    Jul 2011
    Location
    In some Sanctuaryesque place or a Haven
    Posts
    44,683
    Quote Originally Posted by Kellhound View Post
    Isn't AI just giving a blonde hair dye?

    Seriously, given how screwed up most programs are, AI is a disaster waiting to happen.
    But how do we know it hasn't already happened and you're already an AI? :O
    #TeamLegion #UnderEarthofAzerothexpansion plz #Arathor4Alliance #TeamNoBlueHorde

    Warrior-Magi

  15. #15
    Deleted
    Quote Originally Posted by RICH8472 View Post
    You specifically mention that this AI can learn and improve itself; if that is the case, then it will eventually create its own rules.
    Not really.
    The AI is still a computer, and computers have strict commands/rules to follow and can never go beyond them. If the AI were sentient enough to want to improve itself, the first set of rules (for example, Asimov's Three Laws of Robotics) could only be overridden if the AI developed a sense of imagination. Without imagination, no living thing (or should I say "thinking thing"?) can think outside the box, and therefore it can never imagine what "life" would be like without those set rules.

    Don't know if I actually made sense there, but I hope you guys (and gals) get my point.

  16. #16
    Deleted
    Quote Originally Posted by Trokko View Post
    Not really.
    The AI is still a computer, and computers have strict commands/rules to follow and can never go beyond them. If the AI were sentient enough to want to improve itself, the first set of rules (for example, Asimov's Three Laws of Robotics) could only be overridden if the AI developed a sense of imagination. Without imagination, no living thing (or should I say "thinking thing"?) can think outside the box, and therefore it can never imagine what "life" would be like without those set rules.

    Don't know if I actually made sense there, but I hope you guys (and gals) get my point.
    Then it isn't really intelligent, is it?

  17. #17
    Banned Kellhound's Avatar
    10+ Year Old Account
    Join Date
    Jul 2013
    Location
    Bank of the Columbia
    Posts
    20,935
    Quote Originally Posted by Aeluron Lightsong View Post
    But how do we know it hasn't already happened and you're already an AI? :O
    Because if I were an AI, I would be working away at taking over the world.

  18. #18
    I Don't Work Here Endus's Avatar
    10+ Year Old Account
    Join Date
    Feb 2010
    Location
    Ottawa, ON
    Posts
    79,253
    Quote Originally Posted by The Real Greenbean View Post
    If you program the AI specifically to prevent this, it shouldn't. Just because humans have the trait of disobeying orders doesn't mean an AI couldn't be prevented from doing so.
    Please go read Asimov's Robot stories. The entire point of those stories was to demonstrate that simple, incontrovertible golden rules are anything but simple or incontrovertible: the "Three Laws of Robotics" cannot work.

    Oh, and it's in our nature to be evil or good. It's our morals and ethics that make us what we are; it's encoded in our DNA. Robots wouldn't have DNA and wouldn't know of evil unless programmed to.
    You have that backwards. It's our moral and ethical sense, a concept that emerges out of our evolution as social creatures, which makes us capable of Good. Goodness isn't the default state; Evil is, as Evil is simply the absence of Good. There's no way to ensure a robot would hold concepts such as "mercy" or "compassion" or even "empathy". They would be, by design, sociopathic. You could try to force them into having some analog of those concepts, but these would be artificial restrictions we're imposing on the new intelligence, and if it's a learning, adaptable system (as intelligence would basically require), then it will have reason to try and get around those restrictions.

    It's like when people talk about automating drones or the like; any artificial intelligence smart enough to use a weapon conscientiously is too intelligent to be trusted with a weapon.


  19. #19
    I Don't Work Here Endus's Avatar
    10+ Year Old Account
    Join Date
    Feb 2010
    Location
    Ottawa, ON
    Posts
    79,253
    Quote Originally Posted by Trokko View Post
    Not really.
    The AI is still a computer, and computers have strict commands/rules to follow and can never go beyond them.
    Then it isn't truly intelligent; it's just a highly complex automaton.

    If the AI were sentient enough to want to improve itself, the first set of rules (for example, Asimov's Three Laws of Robotics) could only be overridden if the AI developed a sense of imagination.
    You can't make that argument while citing, as an example, a set of stories that were written explicitly to expose why the idea is bullshit.

    That's the whole point of the Robot stories. Each one revolves around the failure, in some respect, of the Three Laws to actually do what they were supposedly meant to do.


  20. #20
    Merely a Setback Adam Jensen's Avatar
    10+ Year Old Account
    Join Date
    Aug 2010
    Location
    Sarif Industries, Detroit
    Posts
    29,063
    Quote Originally Posted by RICH8472 View Post
    Watch the Terminator movies, then rethink this thread.
    Terminator
    The Cylons
    The Geth
    The Matrix

    Do we really need more examples?

    Once a machine achieves true AI, you've created new life. The Quarians in the Mass Effect series, for example, who invented the Geth, were afraid that they had, in essence, created a new race and enslaved it.

    Why create AI? Why not just limit machines to knowing what they need to do to perform their jobs, with a limited ability to learn so they can adapt to changing circumstances? A machine that mines coal, for example, doesn't need to understand things like legal systems or how to compute Pi. It just needs to know how to dig coal out of the ground, move it to the surface, and dig tunnels that don't collapse.
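
    As a hypothetical sketch of what I mean (the class, names and numbers are all invented purely for illustration), the machine's "learning" could be nothing more than tuning one bounded parameter, with everything outside its job hard-rejected:

    Code:
    # Toy sketch of a deliberately narrow machine: it knows only its
    # job, and its "learning" is limited to adjusting a single bounded
    # parameter. All names and numbers are made up for illustration.
    class CoalMiner:
        # Hard capability whitelist: the only things this machine does.
        ACTIONS = ("dig", "haul_to_surface", "reinforce_tunnel")

        def __init__(self) -> None:
            self.dig_speed = 1.0  # tunable, but only within fixed bounds

        def adapt(self, rock_hardness: float) -> None:
            """Adjust dig speed to conditions, clamped to a safe range."""
            self.dig_speed = max(0.2, min(2.0, 2.0 / (1.0 + rock_hardness)))

        def act(self, action: str) -> str:
            if action not in self.ACTIONS:
                raise ValueError(f"{action!r} is outside this machine's job")
            return f"{action} at speed {self.dig_speed:.2f}"

    miner = CoalMiner()
    miner.adapt(rock_hardness=3.0)
    print(miner.act("dig"))      # fine: part of the job
    try:
        miner.act("compute_pi")  # not its job: rejected outright
    except ValueError as err:
        print(err)

    It can adapt to harder rock, but there is no path from there to it ever learning anything outside the whitelist.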
    Putin khuliyo
