1. #1

    ‘Artificial Intelligence’ was the Fake News of 2016

    Author claims that 1. AI doesn't exist yet, 2. it might never exist, and 3. nobody would want it even if it did exist.

    I don't think he's right; if there's a tech out there, people will use it. A lot of people said similar things about the internet, smartphones, etc. - e.g. that nobody would want them.

    http://www.theregister.co.uk/2017/01...f_2016/?page=2

    “Fake News” vexed the media classes greatly in 2016, but the tech world perfected the art long ago. With “the internet” no longer a credible vehicle for Silicon Valley’s wild fantasies and intellectual bullying of other industries – the internet clearly isn’t working for people – “AI” has taken its place. But almost everything you read about AI is Fake News. The AI coverage comes from a media willing itself into the mind of a three year old child, in order to be impressed.
    For example, how many human jobs did AI replace in 2016? If you gave professional pundits a multiple choice question listing these three answers: 3 million, 300,000 and none, I suspect very few would choose the correct answer, which is of course “none”.
    Similarly, if you asked tech experts which recent theoretical or technical breakthrough could account for the rise in coverage of AI, even fewer would be able to answer correctly that “there hasn’t been one”.
    As with the most cynical (or deranged) internet hypesters, the current “AI” hype has a grain of truth underpinning it. Today neural nets can process more data, faster. Researchers no longer habitually tweak their models. Speech recognition is a good example: it has been quietly improving for three decades. But the gains nowhere match the hype: they’re specialised and very limited in use. So not entirely useless, just vastly overhyped. As such, it more closely resembles “IoT”, where boring things happen quietly for years, rather than “Digital Transformation”, which means nothing at all.
    The more honest researchers acknowledge as much to me, at least off the record.
    "What we have seen lately, is that while systems can learn things they are not explicitly told, this is mostly in virtue of having more data, not more subtlety about the data. So, what seems to be AI, is really vast knowledge, combined with a sophisticated UX," one veteran told me.
    But who can blame them for keeping quiet when money is suddenly pouring into their backwater, which has been unfashionable for over two decades, ever since the last AI hype collapsed like a souffle? What’s happened this time is that the definition of “AI” has been stretched so that it generously encompasses pretty much anything with an algorithm. Algorithms don’t sound as sexy, do they? They’re not artificial or intelligent.
    The bubble hasn’t burst yet because the novelty examples of AI haven’t really been examined closely (we find they are hilariously inept when we do), and they’re not functioning services yet. For example, have a look at the amazing “neural karaoke” that researchers at the University of Toronto developed. Please do: it made the worst Christmas record ever.
    It's very versatile: it can write the worst non-Christmas songs you've ever heard, too.

    Neural karaoke. The worst song ever, guaranteed
    Here I’ll offer three reasons why 2016’s AI hype will begin to unravel in 2017. That’s a conservative guess – much of what is touted as a breakthrough today will soon be the subject of viral derision, or the cause of big litigation. There are everyday reasons that show how, once an AI application is out of the lab/PR environment, where it's been nurtured and pampered like a spoiled infant, it finds the real world is a lot more unforgiving. People don’t actually want it.
    3. Liability: So you're Too Smart To Fail?
    Nine years ago, the biggest financial catastrophe since the 1930s hit the world, and precisely zero bankers went to jail for it. Many kept their perks and pensions. People aren’t so happy about this.
    So how do you think an all purpose “cat ate my homework” excuse is going to go down with the public, or shareholders? A successfully functioning AI – one that did what it said on the tin - would pose serious challenges to criminal liability frameworks. When something goes wrong, such as a car crash or a bank failure, who do you put in jail? The Board, the CEO or the programmer, or all three? "None of the above" is not going to be an option this time.
    I believe that this factor alone will keep “AI” out of critical decision making where lives and large amounts of other people’s money are at stake. For sure, some people will try to deploy algorithms in important cases. But ultimately there are victims: the public, and shareholders, and the appetite of the public to hear another excuse is wearing very thin. Let's check in on how the Minority Report-style precog detection is going. Actually, let's not.
    After “Too Big To Fail”, nobody is going to buy “Too Smart to Fail”.

    2. The Consumer Doesn’t Want It
    2016 saw “AI” being deployed on consumers experimentally, tentatively, and the signs are already there for anyone who cares to see. It hasn’t been a great success.
    The most hyped manifestation of better language processing is chatbots. Chatbots are the new UX, many, including Microsoft and Facebook, hope. Oren Etzioni at Paul Allen’s Institute predicts it will become a “trillion dollar industry”. But he also admits “my 4 YO is far smarter than any AI program I ever met”.
    Hmmm, thanks Oren. So what you're saying is that we must now get used to chatting with someone dumber than a four year old, just because they can make software act dumber than a four year old. Bzzt. Next...
    Put it this way. How many times have you rung a call center recently and wished that you’d spoken to someone even more thick, or rendered by processes even more incapable of resolving the dispute, than the minimum-wage offshore staffer who you actually spoke with? When the chatbots come, as you close the [X] on another fantastically unproductive hour wasted, will you cheerfully console yourself with the thought: “That was terrible, but at least MegaCorp will make higher margins this year! They're at the cutting edge of AI!”?

    In a healthy and competitive services marketplace, bad service means lost business. The early adopters of AI chatbots will discover this the hard way. There may be no later adopters once the early adopters have become internet memes for terrible service.
    The other area where apparently impressive feats of “AI” were unleashed upon the public was subtler. Unbidden, unwanted AI “help” is starting to pop out at us. Google scans your personal photos and, if you have an Android phone, will later pop up “helpful” reminders of where you have been. People almost universally find this creepy. We could call this the “Clippy the Paperclip” problem, after the intrusive Office Assistant that only wanted to help. Clippy is going to haunt AI in 2017. This is actually going to be worse than anybody inside the AI cult quite realises.
    The successful web services today are based on an economic exchange. The internet giants slurp your data, and give you free stuff. We haven’t thought very hard about what this data is worth. For the consumer, however, these unsought AI intrusions merely draw our attention to how intrusive the data slurp really is. It could wreck everything. Has nobody thought of that?
    1. AI is a make believe world populated by mad people, and nobody wants to be part of it
    The AI hype so far has relied on a collusion between two groups of people: a supply side and a demand side. The technology industry, the forecasting industry and researchers provide a limitless supply of post-human hype. The demand comes from the media and political classes, now unable or unwilling to engage in politics with the masses, who indulge in wild fantasies about humans being replaced by robots. For me, the latter reflects a displacement activity: the professions have already surrendered autonomy in their work to technocratic managerialism. They've made robots out of themselves - and now fear being replaced by robots. (Pass the hankie, I'm distraught).
    There’s a cultural gulf between AI’s promoters and the public that Asperger’s alone can’t explain. There’s no polite way to express this, but AI belongs to California’s inglorious tradition of generating cults, and incubating cult-like thinking. Most people can name a few from the hippy or post-hippy years - EST, or the Family, or the Symbionese Liberation Army – but actually, Californians have been at it longer than anyone realises.

    There's nothing at all weird about Mark. Move along and please tip the Chatbot.
    Today, the tradition lives on in Silicon Valley, where creepy billionaire nerds like Mark Zuckerberg and Elon Musk can fulfil their desires to “play God and be amazed by magic”, the two big things they miss from childhood. Look at Zuckerberg’s house, for example. What these people want is not what you or I want. I'd be wary of them running an after school club.
    Out in the real world, people want better service, not worse service; more human and less robotic exchanges with services, not more robotic "post-human" exchanges. But nobody inside the AI cult seems to worry about this. They think we’re amazed as they are. We’re not.
    The "technology leaders" driving the AI hype are doing everything they can to alert us to the fact that no sane person would task them with leading anything. For that, I suppose, we should be grateful.

    "This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."

    -- Capt. Copeland

  2. #2
    The article writer is using "artificial intelligence" to mean high-level learning capability which can pass a Turing Test. The media uses "AI" more liberally and often interchanges it with "automation".

    It comes down to what you consider "intelligence". Is IBM's Watson smart, or just a massive search engine? Is a self-driving car or truck "intelligent"?

  3. #3
    I've always thought that AI is limited by storage and processing power, if we're talking about an AI that can pass a Turing test and hold full-length conversations without derailing the subject. In that case, what you have is very little more than a database web of responses to any and all questions; the AI isn't 'thinking' about the answer, just pulling said answer from a database of pre-programmed responses. I think the complex part comes when you imagine the sheer scale of possible word combinations and how many answers one question could have, combined with the number of ways you can word each response. Building that database could take a while, the same as creating all the links between those words and sentences.

    In the end, even if you do have said database and a super-fast processor that allows the AI to work in near real time, it could be very difficult to see the difference between something that is sentient and something that just mimics sentience. I don't think we are that close to a Turing-proof AI, let alone a sentient AI.
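    The "database of pre-programmed responses" picture described above can be sketched in a few lines of Python. This is a toy illustration, not any real chatbot's code; the table entries and function names are made up, and a real system would need vastly more entries plus some way to track conversational context:

    ```python
    # Canned-response chatbot: all "intelligence" lives in a lookup table.
    RESPONSES = {
        "hello": "Hi there!",
        "how are you": "I'm fine, thanks for asking.",
        "what is ai": "That's a surprisingly hard question.",
    }

    def normalize(text: str) -> str:
        """Lowercase and strip punctuation so near-identical wordings collide."""
        return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

    def reply(message: str) -> str:
        """Pull the answer from the table; no 'thinking' happens anywhere."""
        return RESPONSES.get(normalize(message), "Sorry, I don't understand.")

    print(reply("Hello!"))                # looks smart for inputs it has seen...
    print(reply("Why is the sky blue?"))  # ...and immediately derails on anything else
    ```

    The fallback line is the whole story: anything outside the table breaks the illusion, which is why the size of the table (storage) and the speed of the lookup (processing) are the limits, exactly as the post argues.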

  4. #4
    Deleted
    I think you're as clueless as ever. Even more so on this subject.

  5. #5
    Quote Originally Posted by PvPHeroLulz View Post
    I think you're as clueless as ever. Even more so in this subject.
    Hey baby, wanna go out?

  6. #6
    I'm inclined to agree; I don't think there's any mythical software code out there that is suddenly going to become conscious. We really don't know what consciousness even derives from, let alone how to create it.

    Additionally, I've come around to the belief that circuitry and metal aren't really more durable or practical than organic alternatives. I'm not sure "nanotechnology" is as practical as breeding/training bees to do the same task, for example. Or how about just creating bacteria that serve a specific medical function? These things are just as cool, and we're actually already doing some of it.

    AI is going to require some unknown hardware and software components that we can't even comprehend yet.

  7. #7
    I still wonder if even a single company has turned a profit because of Watson yet. The way they try to market it to the general public despite it barely having a use for large corporations, I can't help it, but it seems a little desperate.

  8. #8
    I want self-driving cars, but yeah, the Turing test crap, not really. I don't see the point.

  9. #9
    Okay, some guy said something.

    Perhaps this article should be titled: "How this man demonstrated his opinion will blow your mind!"

    We don't have AI. Anything might never exist, or everything may exist; we don't know yet. I'm certain this guy doesn't speak for everyone when he says "no one wants it".
    Human progress isn't measured by industry. It's measured by the value you place on a life.

    Just, be kind.
