Erik I


So this got discussed on HN. The title is “AI made from a sheet of glass can recognise numbers just by looking”.

The results are interesting enough, but the write-up and the discussion are what prompted me to write about it.

Maybe I’ve just been somewhat daft when it comes to Artificial Intelligence; I studied Computer Systems Administration, not Computer Science, so AI wasn’t part of my curriculum – and while I can enjoy good Sci Fi – I haven’t been very interested in the technical implementations of AI.

However – as presented – I think the article is more or less at the level where you could follow up by calling a specially chiseled rock AI. To continue on one commenter’s example, say you created a coin sorter from a rock:

Blimey, this is really stretching the definition of AI, surely? It’s a piece of glass that has been designed by a lot of trial and error to perform one specific task. It sounds like humans were analysing the fitness and making the modifications. I wouldn’t call it “unpowered AI” any more than I’d call a coin sorter that.

silveroriole, 2019-07-13

to which atoav replies

Because it uses glas as a substrate and light as a information carrier? Most of what we call “AI” today also uses prelearned weights for their neural networks and in many use cases these weights are not touched after deployment. I don’t see why a neural network encoded in glas should not be an AI while the same neural network on a computer is one — either you have to call both AI or neither.

atoav, 2019-07-13

At this point I realized I agree with atoav: we either have to call this an AI, or stop calling other systems AI. Of those alternatives, at least for now, I’ll choose the one that means I won’t end up calling a block of rock or a slab of glass with no moving parts an AI – meaning I think a standalone feed forward neural network cannot be an AI.

Recurrent neural networks, however, might cut it for me – again, for now – and I guess so might just about anything containing enough memory and feedback loops, if implemented correctly.
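To make the distinction concrete, here is a minimal sketch in plain Python (single neurons with made-up weights, not anything from the article): a feed-forward network is a fixed, stateless function of its input – like the slab of glass – while a recurrent network carries internal state between calls, so its output depends on history.

```python
import math

def feedforward(x, w=0.8, b=-0.2):
    """A one-neuron feed-forward 'network': a fixed, stateless function.
    The same input always gives the same output."""
    return math.tanh(w * x + b)

class Recurrent:
    """A one-neuron recurrent 'network': the hidden state h feeds back
    into the next step, so output depends on the whole input history."""
    def __init__(self, w_in=0.8, w_rec=0.5, b=-0.2):
        self.w_in, self.w_rec, self.b = w_in, w_rec, b
        self.h = 0.0  # internal memory

    def step(self, x):
        self.h = math.tanh(self.w_in * x + self.w_rec * self.h + self.b)
        return self.h

# The feed-forward net is oblivious to history:
assert feedforward(1.0) == feedforward(1.0)

# The recurrent net is not: feeding the same input twice gives two
# different outputs, because the first call changed its state.
rnn = Recurrent()
first, second = rnn.step(1.0), rnn.step(1.0)
assert first != second
```

The feedback loop is the only structural difference between the two, and it is exactly the part a static piece of glass cannot have.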

If you read the rest of the discussion here and the linked article you might find some interesting thoughts, but you might also, like me, come to the conclusion that there’s more buzzword abuse going on than you’d thought.

Warning: the rest of this post is more or less a collection of small rants.

  1. I should point out that while I think the word AI is abused, we might not be far away from some sort of seriously useful and/or dangerous implementation.

  2. For now, however, I see mostly artificial stupidity. Even the devil learns from his mistakes, as the saying goes; Google, however, didn’t learn in 12 years, despite the many times I clicked that I wasn’t interested.

  3. If someone still thinks the “AI” mentioned in the article is an AI, I have a self driving car concept to sell: carved from pure rock, adapted to any modern car by simply strapping it on top of the existing accelerator. The same goes for my first foray into finance AI: a physical neural network that sorts coins by value, inferring value from size.

  4. If you just want to throw around fancy words, xkcd has some good ideas:

It also works for anything you teach someone else to do. "Oh yeah, I trained a pair of neural nets, Emily and Kevin, to respond to support tickets."

source: https://m.xkcd.com/2173/

Comments are welcome on my Mastodon account – or write your own post, prove me wrong and let me know. Edit: I also posted it to HN, so you can leave a comment there as well: https://news.ycombinator.com/item?id=20432302

Filed under #ai #artificialintelligence #discussion and #hn