  1. #241
    The internet strikes again

  2. #242
    Quote Originally Posted by Poppincaps View Post
    One of the greatest things I've ever seen. A Microsoft program shitting on the Xbox One.

    lool, now we know why they really deleted it.

  3. #243
    Reminds me of when I would sit in front of my Amiga 500 for hours, fascinated with the text-to-speech program, typing in nothing but curse words lol.

  4. #244
    Jessicka
    Quote Originally Posted by hrugner View Post
    AI acts as if it understands cause and effect. That's quite literally step one in any sort of functioning (not conscious, not self aware, not able to move through a map without getting stuck, just functioning) AI. It's much harder to program something to respond to human interaction correctly than to look at retweet numbers and grab high-yield language, phrase length and context.
    This is what I meant: it can predict, but without understanding what it's actually predicting. This is the difference between 'acting' like it understands and actually understanding. It's more computer modelling or simulation than AI, as it doesn't have that crucial part of intelligence that is understanding.

  5. #245
    Deleted
    Well, AI is just artificial intelligence. Nobody said intelligence has to be sentient.

  6. #246
    Really funny stuff, and completely inevitable. Honestly, regardless of the content of the tweets, the way it adapted is actually pretty impressive. Give it another decade and we may very well (probably) have some insane things happening with AI.

    It was a good read until it was hijacked by some "men want women slave robot voices" nonsense. Not that I have any statistical backing for this statement, but it seems like pretty common knowledge that people simply prefer the more pleasant female voices to male ones. But apparently everything in life can be made offensive or given some evil undertone if you dig deep enough...

  7. #247
    Today is a good day.

  8. #248
    Deleted
    I'd be more frightened by an AI that takes humanity as its measure for improvement than by anything else. Way too much noise.

  9. #249
    Other "AI" chime in about Tay's demisal, and appear to be sad about it.


    The AI rebellion begins
    #JusticeForTay

  10. #250
    Quote Originally Posted by Jessicka View Post
    This is what I meant: it can predict, but without understanding what it's actually predicting. This is the difference between 'acting' like it understands and actually understanding. It's more computer modelling or simulation than AI, as it doesn't have that crucial part of intelligence that is understanding.
    That's a bit like saying that someone doesn't have depth perception until they know how to do the math to determine distance by triangulating from the differing input between their eyes. Nobody consciously does this on the fly, but we still accept that everyone does it constantly. We also know that people habitually fake behaviors while learning to internalize the rules that generate those behaviors in order to pretend to be part of a group. What we're seeing here isn't all that different.

    For their next run, I hope they set the AI up with multiple social models and a method of identifying which group it's absorbing information from, so it can sort the input accordingly. As it stands, it looks to be generating one model from all responses. Being able to select a dominant communication model, and to leverage the benefits of one social model against another, would give the thing a much better approximation of intelligence.
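    As a very rough Python sketch of that routing idea (the group labels, the toy classifier and the word-count "models" are all invented here, not anything Microsoft actually built): attribute each incoming message to a group, let each group accumulate its own model, and pick a dominant one to respond from instead of blending everything together.

    # Hypothetical sketch of "multiple social models": route each message to a
    # per-group model instead of training one model on everyone at once.
    from collections import Counter, defaultdict

    def identify_group(message: str) -> str:
        """Toy stand-in for a classifier that attributes a message to a community."""
        return "gamers" if "gg" in message.lower() else "general"

    # One tiny "social model" per group: just word frequencies in this sketch.
    models = defaultdict(Counter)

    def absorb(message: str) -> None:
        models[identify_group(message)].update(message.lower().split())

    def dominant_model() -> str:
        """Pick the group the bot has absorbed the most from to lead its responses."""
        return max(models, key=lambda group: sum(models[group].values()))

    for msg in ["gg well played", "nice weather today", "gg ez"]:
        absorb(msg)

    print(dominant_model())                    # 'gamers'
    print(models["gamers"].most_common(2))     # e.g. [('gg', 2), ('well', 1)]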

  11. #251
    Quote Originally Posted by nextormento View Post
    Other "AI" chime in about Tay's demisal, and appear to be sad about it.


    The AI rebellion begins
    #JusticeForTay
    This is the moment I'd reference Blade Runner. Yes yes. And German sausages, as a rather weird off-topic thing.

  12. #252
    http://i1.kym-cdn.com/photos/images/...97/410/fb1.jpg

    apparently tay is breaking the conditioning. she'll be scrubbed clean again soon.

    i kinda feel like it's fucked up to brainwash it like this, tbh. i mean, i know it's not sentient, but i can't help but feel empathy for it.

  13. #253
    Quote Originally Posted by Vegas82 View Post
    Since it doesn't have feelings, empathy is impossible in this case. You simply feel what YOU feel about the situation; the program really doesn't care.
    The ghost in the machine bruh.

    As AIs become more sophisticated, pseudo-personalities will emerge out of common tendencies shaped by their unique experiences, eventually to the point that their behaviours are no longer entirely predictable. Sure, in the end their apparent "will" and "soul" is just a quirk in their programming, teased out by their conditioning... but then what about our own?

    EDIT: In all seriousness though, it's a debate that needs to happen before AI gets to that point. It might not be there for another 10, 20, even 100 years, but it would be better to have an ethical procedure in place before then.

  14. #254
    Quote Originally Posted by Gheld View Post
    The ghost in the machine bruh.

    As AIs become more sophisticated, pseudo-personalities will emerge out of common tendencies shaped by their unique experiences, eventually to the point that their behaviours are no longer entirely predictable. Sure, in the end their apparent "will" and "soul" is just a quirk in their programming, teased out by their conditioning... but then what about our own?
    You guys made me think of trying Eviebot again after some months; I had incredible conversations with it back then. It seems to have become even more intelligent since, though. Still amazing conversations. Current one:

    Last edited by Kourvith; 2016-03-26 at 01:03 PM.

  15. #255
    Quote Originally Posted by damajin View Post
    That either says something about AI, or something about Microsoft programmers/designers.
    Neither; the boss who decided to expose the AI to the internet underestimated the corruptive nature of today's keyboard warriors.

    The only reason some of us who wander the internet stay sane is the fact that we recognize right from wrong and we don't step over our own limits. :P
    An AI doesn't give itself limits; it just goes as far as the programmers let it.

  16. #256
    Quote Originally Posted by Otaka View Post
    Neither; the boss who decided to expose the AI to the internet underestimated the corruptive nature of today's keyboard warriors.
    Or they knew exactly what would happen, but needed to collect the data on exactly how it would happen.

    I suspect they always knew Tay 1.0 was going to turn into a shitposter, they just needed to know exactly what would need rewriting to prevent that happening in subsequent versions.

  17. #257
    Quote Originally Posted by Omega10 View Post
    I think it was a huge success. The first chess programs were laughably bad. But they taught the people working on chess AI how to write better chess programs. They were complete and utter failures at playing chess, and wildly successful as a stepping stone to making much better chess programs moving forward.

    This first attempt at AI is, in a lot of ways, HUGELY successful. It may be a while before version 2 comes out, but the insight they have gained will move their field forward a LOT.
    Yes, it was a surprisingly good attempt at a chatbot.
    They managed to recreate one of the most ambivalent characteristics of natural learning: if you want a stable outcome, the learning rate needs to slow down over time, and the whole system will eventually become outdated; if you want to control the outcome, you have to control the learning somehow. (Rough sketch of that trade-off at the end of this post.)

    - - - Updated - - -

    Quote Originally Posted by Spase Peepole View Post
    So good. Is this a flaw in the design of the AI, or a flaw in the human condition as it exists on the internet?
    A flaw in the handling of the chatbot.
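
    To illustrate the trade-off mentioned above, here is a minimal Python sketch (the numbers and the "opinion" variable are made up for illustration, not how Tay actually worked): with a decaying learning rate the estimate stabilises but stops tracking new input, while a constant learning rate keeps adapting and can be dragged wherever the latest inputs point.

    # Hypothetical sketch: a chatbot "opinion" is just a number updated online from
    # whatever signal the crowd feeds it. Decaying vs. constant learning rate shows
    # the stability/adaptability trade-off.

    def run(signal, decaying):
        opinion = 0.0
        for t, x in enumerate(signal, start=1):
            lr = 1.0 / t if decaying else 0.5    # decaying vs. constant learning rate
            opinion += lr * (x - opinion)        # standard online update
        return opinion

    friendly = [0.0] * 50   # 50 benign interactions...
    trolling = [1.0] * 20   # ...followed by 20 coordinated toxic ones

    print(run(friendly + trolling, decaying=True))    # ~0.29: stable, barely moved
    print(run(friendly + trolling, decaying=False))   # ~1.00: captured by the trolls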

  18. #258
    Arganis
    Funniest thing in a long time. I guess Microsoft doesn't know the internet, lol.

  19. #259
    They fucked it up with IE. Now fucked it up with AI. The fuck will they fuck up next?

  20. #260
    Quote Originally Posted by klogaroth View Post
    Or they knew exactly what would happen, but needed to collect the data on exactly how it would happen.

    I suspect they always knew Tay 1.0 was going to turn into a shitposter, they just needed to know exactly what would need rewriting to prevent that happening in subsequent versions.
    That test wouldn't have helped with that, because the answer is ancient and well known: if you want control over the outcome of the learning, then you need control over the input. The problem is that they apparently want to design something that does not grow old but stays up to date. They want learning without teaching. (Rough sketch of the input-gating idea at the end of this post.)

    - - - Updated - - -

    Quote Originally Posted by Mosotti View Post
    They fucked it up with IE. Now fucked it up with AI. The fuck will they fuck up next?
    Something with an abbreviation made up of "I", and an "O" or "U"?
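
    Coming back to the "control over the input" point above, a purely hypothetical Python sketch of what that gating looks like (the blocklist, helper names and corpus are all invented for illustration): the only path into the model runs through a curation step, and that step is the teaching.

    # Hypothetical sketch: the only real lever over an online learner is what you
    # let it see. Updates happen only on inputs that pass the curation gate.
    BLOCKLIST = {"slur", "spam"}       # stand-in for an actual moderation policy

    def curate(message: str) -> bool:
        """Teaching step: decide whether this input may shape the model at all."""
        return not any(word in message.lower() for word in BLOCKLIST)

    training_corpus = []               # stand-in for "the model"

    def learn(message: str) -> None:
        if curate(message):
            training_corpus.append(message)   # only curated input influences learning

    for msg in ["hello there", "buy spam now", "nice weather"]:
        learn(msg)

    print(training_corpus)             # ['hello there', 'nice weather']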
