Machine Learning and Neural Networks.
Why Skynet won’t get you, yet.
What are neural networks and what is deep learning? What is this all about, how does it affect each of us, and how does it affect marketing?
Full, non-edited transcript by descript.com
Machine learning, neural networks, and why Skynet won’t get you, yet.
Hi, it’s Clint Griffin, and welcome to Reiterate. This week we’re looking at neural networks and how they work. There’s been a lot of talk about machine learning and big data, and companies are using these algorithms to do pretty amazing stuff.
Um, maybe let’s try and figure out what they’re actually talking about. Siri, Alexa, Cortana, et cetera, are probably our most famous examples of neural networks, and they are really super complicated. They require vast amounts of data and processing, and this is why they have taken so long to work.
So I guess the first question is: what is a neural network, and how does it work? They can get pretty complicated, but in essence, neural networks work by layering outputs of some kind of assessment on top of each other. I suppose a simple way to explain them would be to look at a sandwich with some fillings.
At the bottom you’ve got a slice of bread; that’s the input. Then you might have some lettuce, then tomato, then cheese, and finally another layer of bread, and that would be the output. The fillings are the hidden layers, and that’s what is referred to as deep learning: it’s simply the things that happen between the input and the output that you don’t necessarily see.
So now we know what deep learning refers to. What does it actually do? As a first step, let’s discuss what an algorithm is. An algorithm is really a simple “if this, then do something, else do something else” statement. It assesses the thing it is looking at, decides whether it meets a certain criterion, and then does one thing if it does, or something else if it doesn’t. Look at the object: if cat, meow; if not, bark. It’s that kind of thing.
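Here’s a minimal sketch of that kind of if/then rule in Python. The sounds and labels are just made-up examples for illustration, not anything from a real system.

```python
def classify(sound: str) -> str:
    # One hard-coded criterion: does the input meet it or not?
    if sound == "meow":
        return "cat"        # if it does, do something
    else:
        return "not a cat"  # if it doesn't, do something else

print(classify("meow"))  # cat
print(classify("bark"))  # not a cat
```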
Of course, as with everything, it always gets way more complicated. So let’s explore how these work at a neural network level. Say on the first layer we have three points of data, or neurons. Each looks at some kind of identifier and hands that information on to the next layer.
Now, when we think of switches, we should understand a little bit about binary code: a switch is simply on or off, ones and zeros. Neural networks don’t work like that. They use percentages to work out what it is they are being asked. So we present a triangle to the machine. One neuron looks for curves and finds next to none.
The next is looking for straight lines and finds some; call it 0.8 for straight lines. The next one looks for joins and finds 0.6 for joins. This information gets passed over to layer two, where it is shared with three more neurons. That creates nine connections: each of the first three connects to each of the next three.
And that’s nine connections between the first and second layer. These might look for things like angles: are they 45 degrees, or are they not? This carries on until, at the end, an answer pops out that says the thing it is looking at is a square. That’s obviously wrong, and then the process is run again.
We refine it, and this is where it gets really complicated. Depending on the rightness of each answer, all the neurons are given a weighting with regard to the answer as well. So neuron one might be pretty close, and it gets a weighting of three, which means it’s quite a heavy weighting.
The information it passes across is relevant. Neuron two might be way off, and it gets a weighting of 0.5. So between neuron one with a weighting of three and neuron two with a weighting of 0.5, neuron one is six times more influential than neuron two. You can see that adding these weightings to every single data point, or neuron, on every single layer can quickly get out of hand. It’s exponential growth: trying to figure out what percentage has gone into an answer, what weighting sits on that percentage, how the next one in line uses its weighting to work out what it’s seeing, et cetera, et cetera.
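To make that concrete, here’s a minimal sketch of one layer feeding the next, assuming made-up scores for the triangle (0.1 for curves, 0.8 for straight lines, 0.6 for joins) and invented weights; a real network would learn all of these numbers itself.

```python
# Toy forward pass: 3 neurons in layer one feed 3 neurons in layer two,
# giving the 9 connections described above. All numbers are invented.

layer_one = [0.1, 0.8, 0.6]  # curves, straight lines, joins (percentages)

# One weight per connection: weights[j][i] links layer-one neuron i
# to layer-two neuron j. A weighting of 3 counts six times more
# heavily than a weighting of 0.5.
weights = [
    [3.0, 0.5, 1.0],
    [0.2, 2.0, 0.7],
    [1.5, 1.5, 0.1],
]

def forward(inputs, weights):
    # Each layer-two neuron takes a weighted sum of every layer-one output.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

layer_two = forward(layer_one, weights)
print([round(v, 2) for v in layer_two])  # [1.3, 2.04, 1.41]
```

Even this toy version already has nine weights between two layers of three neurons; stack a few more layers on and you can see how the bookkeeping explodes.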
So the way the engineers and scientists made this work is that they tell the machine what the answer is, with thousands or millions of images, so the machine can see what the final image should look like, or variations of it, and it adjusts itself automatically to refine its answers and its own weightings. This is called back propagation, and it was the key to getting this very, very complicated system to work.
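Here’s a minimal sketch of that adjust-yourself loop, boiled down to a single weight on made-up numbers. Real back propagation does this across millions of weights at once, but the nudge-the-weighting idea is the same.

```python
# Toy back propagation: show the machine labelled examples, measure how
# wrong it is, and nudge the weighting to reduce the error.
# All data here is invented for illustration.

examples = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # (input, correct answer)

weight = 0.1          # the network's initial guess
learning_rate = 0.05

for epoch in range(200):
    for x, target in examples:
        prediction = weight * x             # forward pass
        error = prediction - target        # how wrong were we?
        gradient = 2 * error * x           # slope of the squared error
        weight -= learning_rate * gradient  # adjust the weighting

print(round(weight, 3))  # ~2.0: the weighting the labelled data implied
```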
The thing about neural networks is that they’ve been worked on since the 1950s, and most scientists and mathematicians thought that those working on them were pretty insane. Well, they were probably only a little bit insane.
They knew what they wanted, but they only had limited computing power. Right around 2012, suddenly the processors were there that allowed these networks to propagate forward and back propagate and work out the answers to the questions. In 1952, pointing a million images at a network just wasn’t going to happen.
It wasn’t feasible, but today it is. We’ve got the Google image bank to thank for that. So now we’ve got the processing power and we know how neural networks work, so we’re all set, right? No. Well, we don’t have enough time or money or people to feed the networks with the information they need for them to work.
There’s simply too much information to feed into a network to expect it to understand everything. Think about how Siri would work out whether I’m saying “two” as in the number, or “to”, T-O, simply using rules and grammar. There are millions of contexts, and humans simply cannot explain to machines which word goes with the next.
What is the context of the sentence before? What is the context of the entire document? It becomes a nightmare. It’s not something that we can program. Scientists and mathematicians have tried to program language, and it doesn’t work. So now Amazon, Google, Facebook, and the other big guys are working on machines that build their own understanding of these models.
They set them to scan the web, and after a while they can recognize your face on Facebook and work out not only what kind of bird is in a photo, but that it’s a cockatoo, a particular kind from a small island off the south coast of Australia. Google’s machine had one billion neurons when it did a test in 2012; humans have a hundred billion neurons, with connections between them all.
It’s simply a vast number of connections between them, so we’re not quite at Skynet yet. So, neural networks are what people refer to as deep learning because there are hidden layers between input and output. They are simply access points that assess a criterion, a pixel of an image or something else, and assign a value with a weighting to it.
This information is then passed on to all the neurons on the next layer, where the process is run again to determine the next variable, and done again and done again, almost like an iteration. This continues for however many layers you might have, until the machine pops out an answer, which is correct or not.
Now, the process is automated by the algorithm, and it’s way too complicated for humans to understand. In fact, we don’t even know what language the machine is speaking. It’s all hidden in this automated process. All we get is the output: Clint was tagged in a photograph. So think about that.
We’ve got an input layer, we’ve got an output layer, and we’ve got layers in between. You’ve got computers and neurons talking to each other, making up their own code, their own way of doing things, their own way of explaining one layer to the next. And that we don’t understand. It’s deep. We will never understand it.
That’s kind of scary in a way, but don’t stress, because we don’t know how our own brains work either. But then, I suppose, we don’t have the ability to shut down the entire planet if one of us goes a bit nuts. So that’s how they work. We aren’t even building them anymore; they are building themselves, and we don’t even know what they’re saying to each other.
That’s rather comforting. So what does this mean in the real world? It means that we can use things like Siri. Siri understands what we’re saying, it understands the queries we’re using, and we get answers back that don’t sound robotic, because they’re not: they’re a learned understanding of what meanings are, however deep things get.
Will they take over our jobs? Well, there’s voice mimicry happening now. Before, with the original Siri, I think the voice artist had to spend three weeks feeding all kinds of sounds into the machine to get the parts of speech that were then used to make up the Siri voice. Now there are programs online where you can spend two minutes saying a few random words.
It’ll pop your voice back to you with a pretty close approximation. That’s pretty scary. Um, where that takes fake news, you can figure out for yourself. And then Saatchi did an experiment in copywriting where they got a machine, and I think they partnered with IBM’s Watson, to spit out copywriting slogans for an advertising campaign.
There were thousands of them, obviously far more than a creative team could come up with. These were put into a system, and a lot of them were used on thousands of adverts, so that each advert had its own unique style. So think about thousands of adverts, where each ad is designed and tailored for you specifically because it’s gone through this neural process.
The network will know your name, it’ll know where you live, what your issues are, what excites you, what things make you laugh. And it tailors an ad to the right need state in your life, at the right location, in a voice that understands what you need right now. That’s where the voices will get you.
That’s where the neural networks will get to: where on-demand advertising is the thing, driven not by us putting together a whole bunch of lines, putting them into a sales funnel and making them work, but by the sales funnel actually creating itself through neural networks. So if you look at, um, Gmail: when you type into Gmail, you’ll end up with a lot of sentences that are finished for you, with suggestions about what you want to say.
So you take that a step further: you write your email, I get it on the other side, I type a couple of letters, and it gives me an appropriate response. I hit return. You can’t be bothered to reply, so you hit return. The machine does it for you, the machine on my end does it for me, and so what happens is two machines start speaking to each other as though they are you and me.
That’s a heck of a waste of data. It’s an interesting way of looking at how machines are going to work for us in the future, or work against us, or in spite of us, or whatever it might be. That’s the potential of neural networks: they just become the thing that drives its own self-fulfilling prophecy.
That’s about it for this week. Neural networks are interesting. They offer an amazing array of things for us to be really happy about, like our self-driving cars. But there should be a level of concern as to how we use them in marketing, how we use them in general, and whether we keep the ability to shut them off with a manual process, because I think these things could get out of hand.
If there are any questions, please drop me an email at podcast@reiterate.org and I will respond. Have an amazing week, and we’ll speak again soon.