You should be skeptical when it comes to hyped-up AI. Here’s why.

One of the reasons that we wrote this book is to teach people to be skeptical. There's an old book called How to Lie with Statistics, which teaches people to catch all of the tricks like, you know, "four out of five doctors recommend." Well, what exactly was the sample of doctors that made this recommendation? It was literally just five, and four of them were cousins of the people whose toothbrush was being recommended. We wanted to write something similar for AI. And the reason for that is that AI is hyped all the time. A lot of journalists write by press release. Essentially what happens is Microsoft, or some other company (I don't mean to pick on Microsoft, but I'm thinking of a specific case), puts out a press release that says, hey, we've got an AI system that's reading as well as human beings. And then the press often just accepts that, and they report it, and in fact they embellish it. So now it's not just that machines read as well as human beings, but machines are superhuman in their ability to read. And then they'll throw in, oh, and millions of jobs are at stake. This actually happened. And then you go and read the study, and it just means that on one little narrow aspect of reading, the machine is maybe equal to people, or better than people, or it's a little bit better than last week's system. It's not really that exciting when you actually go and read it. It's nowhere near actually reading. So, you know, it can underline parts of a text, but that's not the same thing as being able to read between the lines, which is really most of what we do. You know, reading would be really boring if the author spelled out absolutely everything. Instead, they kind of let you play at home and come to some conclusions yourself. Well, we don't have AI that can do that at all. But you see this thing in the newspaper or on some website saying, you know, machines are now superhuman readers.
So what we offer our readers are six questions that they can ask every time they read anything about AI in the news. The first thing you should ask is: is there a demo? Can I try it out for myself? If there's no demo and you can't try it out for yourself, there's a reason for that. It's because it's not really stable yet. They've got some version of this that works in some limited context, but that doesn't mean it really works for real. So that's one of the questions you ask: is there a demo of this? Another question you ask is: how general is it? So, OK, DeepMind's AlphaGo has won at Go on a 19-by-19 board. Is there any evidence that if you trained it on a 19-by-19 board, it would then be able to transfer what it learned to a board that was 9 by 9, or 31 by 31, or wasn't a square at all? You know, a human Go player can play on a board of any size, if they're expert enough. But that particular system would have to start all over. It was trained on, I don't know, 10 million games. It would take another 10 million games to be able to play on a board of a different size. So it doesn't generalize what it learned. And this is often the case. Another thing to do is to strip away the rhetoric. So somebody says this system is reading. Well, is it really reading, or is it doing some narrower task than that? Maybe you claim that it's reading, but if you think about all the things that go into reading, that includes inferring things that you aren't told directly, building some kind of internal mental representation of what's going on, who did what to whom, and so forth. Does the system that quote-unquote does reading really do that, or is it just underlining text that's relevant to a question about a passage it saw? So if you read one of Aesop's fables, you can explain what the moral is at the end of it. Can this system do that, or can it just underline the place where, you know, the animal packed away food for the winter?
I mean, it has no idea why the animal did that. Going back to How to Lie with Statistics: suppose somebody tells you this machine works better than humans. You really ought to ask: which humans? Most of the time the humans are people on Amazon Mechanical Turk who get paid like $0.02 per item and do like 300 items in order to make $6, and they are bored out of their minds. They're probably not human experts, you know, unless we're talking about a game like Go, where machines get pitted against the world champions. You're probably talking about average humans who are bored out of their minds as the control group. That doesn't really tell you what best human performance is, and you really ought to check when you read one of these studies. Another thing to watch out for is: OK, we've made progress on this particular task, but is this really a step towards general AI? So if you have a system that can tag photos pretty well, that's exciting and it's useful. You know, who wants to sit there labeling all of their photos? But does it mean, just because you can tag photos, that you have a system that's giving you general intelligence? Or is it just, well, you know, when you tagged the Eiffel Tower, a lot of other people did too, and so the statistics of the Eiffel Tower in the images make it pretty easy, if you have a big data set like Google does, to tag that photo? Tagging the photo might just rely on something fairly superficial, like comparing similarity to a large database. That doesn't mean that your system actually understands what a tower is, what Paris is, why somebody would climb it or think it's beautiful. And so it's possible to make little narrow slices of intelligence that seem more intelligent than they really are. Go all the way back in the history of AI and there was a system called ELIZA, which just did, basically, keyword search, and had canned psychiatric replies. So you say, "I'm upset with my mother," and it says, "Tell me more about your family."
It's just looking for the word "mother" and saying, "Tell me more about your family." And people were fooled into thinking that it was deep. Some people thought it was real, and other people thought it was a major advance in artificial intelligence. Instead, it was almost built as a prank, and the truth is, it wasn't that smart. The chatbots we have today aren't a whole lot smarter. So just because you can have a five-minute conversation with a machine doesn't mean that it's really an advance towards intelligence. It might be just kind of a bag of tricks. The last thing is, you want to know how robust something is. So maybe, for example, you have a system for blind people that will label the images that it sees, and it'll tell you there's a camera over here, there's a bookshelf there. But you'd really like to know: does it work in the dark? Does it work in strong sunlight? You want to know how robust this is. I actually built my new company around this idea. It's called Robust.AI, and the goal of the company is to try to take a new approach to AI where things work in general, rather than just in some laboratory demonstration.
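The ELIZA trick described above is easy to make concrete. Here is a minimal sketch of that keyword-matching pattern; the specific rules and replies are illustrative stand-ins, not Weizenbaum's original script:

```python
# Minimal sketch of an ELIZA-style responder: scan the input for a
# keyword and return a canned reply. No understanding is involved.
RULES = [
    ("mother", "Tell me more about your family."),   # illustrative rules,
    ("father", "Tell me more about your family."),   # not the original script
    ("upset", "Why do you feel upset?"),
]
DEFAULT_REPLY = "Please go on."

def respond(utterance: str) -> str:
    """Return the canned reply for the first keyword found in the input."""
    lowered = utterance.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return DEFAULT_REPLY

print(respond("I'm upset with my mother"))  # Tell me more about your family.
print(respond("Nice weather today"))        # Please go on.
```

A few dozen rules like these were enough to fool some of ELIZA's users, which is exactly the point: a convincing five-minute conversation can come from string matching, not intelligence.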
  • https://www.msn.com/en-xl/news/other/you-should-be-skeptical-when-it-comes-to-hyped-up-ai-here-s-why/vi-BB1nFKT8?ocid=00000000
