
Machine Learning: Explain it Like I’m Five [Shipit.io Podcast]

If you've heard a lot about AI and machine learning, but don't understand it, this month's episode is for you.

Hi, I'm Keegan Sands, and welcome back to ShipIt, the podcast from DEPT® that's made by engineers, for engineers. If you've heard a lot about AI and machine learning but don't understand it, this month's episode is for you.

Machine learning: explain it like I'm five.

Special guest Tyler Renelle, host of DEPT®'s Machine Learning Guide podcast, chats with DEPT® engineer Matt Merrill about the historical background and technical landscape of machine learning. So without further ado, let's get started.

Matt Merrill: It's the new year, and there are probably a lot of people looking for some new stuff to learn as the year goes along. I've been mostly a backend developer, Java and Node.js, and have dabbled in DevOps, for 15+ years.

I know little to nothing about AI/ML topics, so I thought it would be really interesting to bring Tyler on here.

Tyler Renelle: It’s named Machine Learning Guide. And it's part of our mutual podcast network now.

Matt Merrill: I thought we'd have a conversation about machine learning and explain it to me, the dummy here like I'm five. I briefly introduced myself. So Tyler, why don't you tell us more about yourself and why we should trust you and listen to you on this topic?

Tyler Renelle: My name is Tyler Renelle, and I come from web dev and mobile dev myself as well. About four or five years ago, I switched to machine learning. So I'm definitely not the ultra-expert on the topic, but I'd like to think of myself as an expert on teaching the topic, as somebody who struggled with the transition from one field (probably the field most of this listening audience is in) to machine learning. At the time I created the podcast, there were no newbie resources, in the same way that once upon a time computer science was the only way into app development.

And then there were boot camps and it became very easy. Back then, machine learning was still in its computer-science-only days, and I was like, this is actually a lot easier than they're portraying it in the textbooks. Let me take a stab at distilling it down for dummies like myself.

Now we've had the data science boot camps and the easy learning resources and all that stuff, but it was a nascent market at the time.

Matt Merrill: Yeah, but you still have a pretty good listenership, as I understand it. So obviously you've struck a chord there, which is cool.

Tyler Renelle: Timing. Like I said, there was nothing at the time; I was the first machine learning podcast. Well, there was one out there, I forget what it was called, from a professor who was teaching AI, but it was really mathy, really technical.

Matt Merrill: Yeah, that was going to be one of my questions, because that scares me as a person who's not great at math.

So yeah, let's just get into it: AI, ML, NLP. Give us the overview. How do we get through this? What do I Google? What do I ignore? What do I start with? Wherever you think we should start.

Tyler Renelle: Okay, cool. So the master umbrella is data science. That's the umbrella term for everything we're going to be talking about in this episode, but we're going to hone in on machine learning, a subfield of data science. Data science is anything that deals with data. You can extend it as far as people you might not even consider data scientists: a financial advisor, anybody doing spreadsheets and databases.

Even a database administrator might be considered someone under the data science umbrella; anything that deals with data is data science. Machine learning obviously deals with data, therefore it's data science. The traditional data science roles at an organization are going to include things like feature engineering and data engineering: pulling your data out of somewhere, the cloud, Twitter, whatever.

Data engineering also includes transforming that data into something that's usable downstream. Then there are data analysts, people who look at the data and make human-based decisions using charts and graphs in programs like Tableau and Power BI. And then there are the machine learning people, who actually write algorithms that make predictions based on the data.

So a data analyst takes data from somewhere and looks at it to make a human decision. Machine learning engineers do the same thing, but for robo decisions: they write algorithms that make predictions automatically, with no human in the loop per se. Then there's AI, artificial intelligence, which is bigger. So we've got data science; underneath data science is AI; and underneath AI is machine learning.

AI is the field of automated intelligence. If you look it up on Wikipedia, it's automating things that would typically take human-level intelligence. That includes natural language processing, so anything language-oriented, like Siri and chatbots; computer vision; knowledge representation;

and decision-making and action planning: game-playing bots like Google's AlphaGo Zero, anything that plays chess or predicts the stock market or takes action on the stock market. And then machine learning. Once upon a time, machine learning was just a subfield within AI.

Nowadays that line is getting blurred; it's difficult to disambiguate the terms AI and ML. The reason is that machine learning, learning on data, proved to be so valuable and so important that it is subsuming the other spaces of AI. Once upon a time, computer vision was a bunch of dedicated, handcrafted algorithms.

They used these little square patches that look for patterns in pixel values. It's funny, we've got a Where's Waldo in the background here; I always use Waldo as an example for my computer vision stuff. You might have red and white stripes in a square pixel patch that slides over an image, looking for that pattern.

And when it finds it, it says ding ding ding and produces some score, but it's all handcrafted. Along came machine learning and automated that process. Same concept, still using the patch (it's called a convolution), but these networks, called convolutional neural networks, actually learn the patterns they're looking for.

So it's like one layer removed. Machine learning just swept through all the subdomains of AI, and that's why the line between AI and ML is blurrier than ever before, and why some people have difficulty knowing the difference between the two terms.
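To make the "learned patches" idea concrete, here's a minimal sketch (our addition, not from the episode) of a convolutional network in Keras, assuming TensorFlow is installed; the layer sizes and the Waldo-style two-class output are purely illustrative:

```python
# A tiny convolutional network. The Conv2D layers are the learned "patches":
# instead of hand-crafting a red-and-white-stripes detector, the network
# learns its own filters from labeled examples.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),                # 64x64 RGB images
    layers.Conv2D(16, (3, 3), activation="relu"),   # 16 learned 3x3 patches
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # deeper, more abstract patterns
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),          # e.g. "Waldo" vs. "not Waldo"
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # sketch only; training would need labeled images
```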

Matt Merrill: That's really cool. I like the way you explained that. At least in that image recognition example, it's almost like abstracting up a level, so that image recognition becomes just something you can do when you tell it what to look for, as opposed to bespoke-crafting an algorithm to look for some specific thing.

Tyler Renelle: I like to think of it as there being layers of machine learning. Let's say the first level is completely handcrafted scripts, like a Python file that you wrote end to end yourself. The next layer above that would be having some part of that script learn something on your behalf. You don't quite code that part into the script; it's doing a little bit of knowledge-gaining on its own, learning patterns in data, and that's machine learning.

Then there's a layer above that. In machine learning we have these algorithms, or models, where we tune what's called hyperparameters. Don't worry about that for now; the point is there are still things we oversee in the machine learning process. The layer above that is called meta-learning, which does its own dial-and-knob tweaking: it optimizes the hyperparameters itself. That's one step closer to AI, the master goal of AI being a thing that completely manages itself.

We don't have to teach it how to learn.
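As an editorial aside, here's what overseeing those "dials and knobs" can look like in practice: a hedged scikit-learn sketch (our example, not Tyler's code) where a grid search automates the hyperparameter twiddling one layer up from plain training:

```python
# Hyperparameters are the knobs we set from outside the model (tree depth,
# number of trees, etc.). GridSearchCV tries combinations of them for us.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)  # toy data

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    cv=3,
)
search.fit(X, y)
print(search.best_params_)  # the knob settings the search chose for us
```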

Matt Merrill: It's like autopilot, as opposed to like tweaking the knobs yourself as you go.

Tyler Renelle: That's it, that's it. Cool. So basically, what is machine learning? I say machine learning is fuzzy logic, and I have to be careful with that phrase because it's actually the name of a subdomain of machine learning, but we're going to roll with it.

If you write a script that says if-this-then-that, and it's very clear to you (there are hard cuts that make for a very obvious A-to-Z solution), then you write it yourself. But if there are no hard cuts, if it's fuzzy and the in-betweens are too difficult for you as a programmer to code yourself, then it's fuzzy logic, and that's a good use of machine learning. I'll get to an example from a project we were actually working on, slash considering, a machine learning implementation for.

We've talked about image recognition: that's sliding patches looking for patterns in the pixel values. Once upon a time we did handcraft that; like you said, bespoke algorithms, handwritten convolutions. But they were very scenario-based. An expert in image theory would come in and write a handful of patches for cat recognition, a handful of patches for car recognition.

It was fuzzy logic, because the delineators in pixel values that designate something as cat, car, tree, or dog are fuzzy. But they were able to achieve it nonetheless, because the show must go on. It wasn't really machine learning at the time.

But that's a classic scenario. Once convolutional neural networks and learning algorithms for this type of scenario became available, they wiped out those old guard algorithms. That's a perfect situation in which there are no hard-cut if-then statements. So I say machine learning is fuzzy logic.

I also say it's learning patterns in data, and I'm going to pause in case you have any questions.
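One quick illustration of the hard-cuts-versus-fuzzy distinction (a made-up spam example, not from the episode's project):

```python
# Hard cuts: the rule is obvious, so you write it yourself.
def is_spam_rule(subject: str) -> bool:
    return "FREE $$$" in subject  # a clear if-this-then-that

# Fuzzy: no clean rule exists, so let a model learn the pattern from examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

subjects = ["FREE $$$ click now", "meeting notes", "you won a prize", "lunch?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(subjects, labels)
print(clf.predict(["claim your free prize"]))  # learned, not hand-coded
```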

Matt Merrill: One thing I think we could talk about, though, is how processing power shaped all this.

Tyler Renelle: Let's talk about that a little bit. Machine learning is quite old; some people call it applied statistics.

What you do in machine learning is nothing but old textbook statistics, calculus, and linear algebra, written as algorithms. An algorithm is a math puzzle written in code. Once what it has learned from the data gets fused into the algorithm, the variables now filled in with actual numbers, it's called a model. So a model is like an algorithm plus what it learned, but the raw algorithms are nothing more than old textbook stats, linear algebra, and calculus: math problems. So in a way, machine learning has been around forever. It's just that, like you said, we didn't have the processing power.

We didn't have the know-how to turn it into tech. We really hunkered down on it in the '60s, '70s, and '80s. So machine learning as code is quite old, but it wasn't popular, because the algorithms in use at the time were what are called shallow learning algorithms (there's a model called linear regression, a very simplistic model), and they just weren't that powerful. But the idea of neural networks, of deep learning, existed. Shallow learning is basically a single math formula: hydrate it with parameters learned from data, and now you have a machine learning model.

Deep learning is stacked shallow learning algorithms. A neural network is actually nothing more than a network of linear-regression nodes called neurons.

So it's stacked shallow learning. Likewise a Bayesian network, for example: naive Bayes is the shallow learning algorithm, and a Bayesian network is like that transformed into a graph. Any machine learning expert listening to this episode is probably up in arms, but that's the general gist. It was conceptualized a long time ago, but impossible to achieve until the compute power of our generation.
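Here's a rough numpy gloss on that "stacked shallow learning" gist, simplified to exactly the degree that would have those experts up in arms:

```python
# Each neuron computes a weighted sum plus bias, like linear regression,
# followed by a nonlinearity. Stacking layers of them is "deep" learning.
import numpy as np

def layer(x, W, b):
    # each column of W is one neuron's weights; ReLU nonlinearity after
    return np.maximum(0, x @ W + b)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                          # one input with 4 features
h = layer(x, rng.normal(size=(4, 8)), np.zeros(8))   # a layer of 8 shallow units...
out = h @ rng.normal(size=(8, 1))                    # ...stacked into an output layer
print(out)
```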

Matt Merrill: That's a lot of data, and you have to keep a lot of it in memory. It also requires a lot of CPU power to move it around and adjust it. And on top of that, up until the 2010s, we weren't even capturing a lot of that data.

It just wasn't able to be accessed, even if we did have the computing power. It's a really cool way of looking at it.

Tyler Renelle: It was around 2015 that it really all started popping off. And nowadays we use GPUs instead of CPUs for most machine learning. Why is that?

A core of a CPU can handle a lot of computation; that's why there are only, say, eight cores. The cores of a GPU handle mainly tiny computations, but there are many more of them. And like I said, in the case of a neural network, where each neuron is basically just a linear regression unit, those are small computations.

They're very easy to distribute a crap-ton of, and that's why GPUs are valuable in this case.

Matt Merrill: Got it. So I'm going to take a crack at this: a general-purpose CPU core can handle one very complicated operation very quickly, whereas a CUDA core, I think you said, can handle a simple operation very quickly. It couldn't do a complicated one, but it's able to scale out, which is exactly what these algorithms need.
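For the curious, a hedged PyTorch sketch of that trade-off (PyTorch assumed installed; on a machine without an NVIDIA GPU the CUDA branch simply won't run):

```python
# The same matrix multiply on CPU vs. GPU. A GPU's thousands of small cores
# shine on exactly this kind of massively parallel arithmetic.
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

c_cpu = a @ b  # a few beefy cores grinding through the multiply

if torch.cuda.is_available():  # requires an NVIDIA GPU with CUDA set up
    c_gpu = (a.cuda() @ b.cuda()).cpu()  # many tiny cores working in parallel
```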

Tyler Renelle: Exactly. I think of a CPU as like an EC2 instance, one huge instance, and a GPU as like 50 Lambda functions. So the computation became available, especially by way of GPUs.

That made neural networks feasible. And then, like you said, data became available at scale. The way Google operates, right, is scraping the web, so now they have all the language data of the world. They went on to create all these pre-trained natural language processing models that are available through the Hugging Face library.

(Sorry, what was it called? Hugging Face, yeah.) Let's go into the different domains of machine learning; Hugging Face ties into the natural language domain. It's my favorite library, actually, and natural language is my favorite domain of machine learning.

I say machine learning is broken down into three parts: tables, space, and time. Any time you're dealing with tables, whether they're spreadsheets or databases, you're going to be using tabular machine learning models. Let's say we have a spreadsheet of housing costs in Newburyport. You have a bunch of columns, like the distance to downtown, the square feet, the number of bedrooms, the number of bathrooms.

And what you're trying to predict is cost. So you take one of those columns in your CSV or spreadsheet and pull it aside; that's the thing you're trying to predict. A machine learning algorithm will learn the patterns in the remaining columns (we call these features, the X) in order to predict the Y column, the cost of the house. A very popular algorithm here is called XGBoost.

That's probably the most popular algorithm in machine learning, period, so that one's worth name-dropping. XGBoost is a decision tree: it learns if-then branches. Well, it's not actually a single decision tree; it's what's called a gradient-boosted ensemble of decision trees, so it's multiple decision trees all voting against each other, but effectively it's a decision tree. And what's cool is you can say something like xgboost.fit, and it can output the decision tree it generated. You can even look at it and go: oh, okay, that makes sense, if the number of bedrooms is greater than two...
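Here's roughly what that housing example looks like in code, a minimal sketch with hypothetical file and column names (xgboost and pandas assumed installed):

```python
# Pull the y column aside, let XGBoost learn patterns in the X columns.
import pandas as pd
from xgboost import XGBRegressor

df = pd.read_csv("newburyport_houses.csv")  # hypothetical spreadsheet
X = df[["distance_downtown", "sqft", "bedrooms", "bathrooms"]]  # features
y = df["cost"]                              # the column we pulled aside

model = XGBRegressor(n_estimators=200)
model.fit(X, y)                             # the ".fit" call mentioned above
print(model.predict(X.head(1)))             # predicted cost for one house
```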

Matt Merrill: So you're saying, like, xgboost-dot-fit. What is that, Python? Okay, cool. So I'm thinking in my head right now: this sounds like a straightforward enough use case that I, a machine learning moron, might be able to use it in something I'm working on. Is it really that simple?

Tyler Renelle: Yes, but remember, this is new. The idea of machine learning being available to people like us is very new; this revolution has only been going on for a short while. Most people listening to this episode hadn't heard the term machine learning until recently. I hadn't heard the term machine learning.

Matt Merrill: If you're like me, you'd heard AI at times, but not machine learning. Exactly. And I don't think I learned the difference between the two until around the timeframe you were talking about, 2015 or 2016.

Tyler Renelle: Right. The deep learning revolution, which became possible by way of lots of data and lots of compute power, even though the deep learning models themselves had been around for a very long time, sparked this huge wave. And now there are just Python libraries for everything.

You can just write some lines of code and get the job done very easily. It's crazy. So, there are tables, space, and time. Let's start with time. Time is step by step: you're learning the steps that lead to the next step. Stock market prediction is a very obvious use case there: given the last week of fluctuations in the price of a stock, what is the likely next price of that stock?

What's it going to be tomorrow, and therefore should I buy or sell today? Language is time too: what's the next word? There are a lot of really cool use cases for language. So language is time, the stock market is time, weather (predicting the weather), budget forecasting for companies, all these things.

So there are dedicated algorithms for time, and for the table stuff there are the shallow learning algorithms, like linear regression and XGBoost. And then there are also just neural networks; that was the deep learning revolution that introduced neural nets. A vanilla "neural network," the thing somebody pictures when they hear the phrase, is actually almost like an abstract class, a concept.

It's not really an algorithm. The neural network people are thinking of when they say that word is actually called a multilayer perceptron, and that's used for table data. So subclass "neural network": one is for tables, one is for space, and one is for time.

The time one, used for time-series stuff like language, is called a recurrent neural network. The recurrent part is that it loops back on itself in the graph. You have to do that because you have to be able to take an undefined number of steps in time.

And then space is pixels, images: what we were talking about earlier, a convolutional neural network. The convolution is that patch I described when looking for Waldo; the network learns how to detect things in an image based on the pixel values.

So that's space. Interestingly, the difference between time and space has been blurring. In the last few years, a series of models came out called transformers. There was a white paper called "Attention Is All You Need" that introduced this attention module into neural networks, so the network pays attention to certain points in either time or space.

And what that model did, strangely, is blur the lines between space and time. Now, a lot of the time for language stuff, people are using convolutional neural networks, traditionally used for space. So the line between space and time is being blurred, and there's a bit more universality to the neural networks being used in these cases, by way of the transformer.
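To ground the "time" case before moving on, here's a tiny hedged Keras sketch of a recurrent network on made-up price data; the seven-day window and layer size are arbitrary:

```python
# Predict the next value from the previous 7 steps. The SimpleRNN layer
# loops back on itself across the time steps, as described above.
import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(100, 7, 1)  # 100 samples, 7 time steps, 1 feature (price)
y = np.random.rand(100, 1)     # the next-day value for each sample

model = models.Sequential([
    layers.Input(shape=(7, 1)),
    layers.SimpleRNN(16),      # recurrence across the 7 steps
    layers.Dense(1),           # the predicted next price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)  # toy training on random data
```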

Matt Merrill: If I'm about to start out and look at this stuff, is that a very advanced topic, or is that actually a good place to start these days?

Tyler Renelle: Good question, and it ties into how easy it is to get started as a developer: you don't really need to know much of what I just said about attention modules and transformers and stuff.

These days the Python libraries just handle the legwork for you, and you write a handful of lines of code to get the job done. So back to Hugging Face, the library. Hugging Face is a suite of tools for natural language processing: chatbots, question answering, summarization, and sentiment analysis.

Anytime you deal with text or spoken language, you'll use this Hugging Face library. The goal of the library was to implement the transformers I just mentioned in their various forms, the different algorithms being pushed out in white-paper land, so that you as a user don't need to care about what's happening back there.

Let the scientists do their science; I just want to build. And it spiraled out of control to where the library does everything under the sun. It's incredible. You write a one-liner: take a block of text, say a Wikipedia entry, download it with Beautiful Soup or something in Python, and you just say Hugging-Face-dot-summarize, and out comes a paragraph. It's incredible.
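That workflow looks roughly like this; a hedged sketch using the transformers pipeline API (the literal call isn't Hugging-Face-dot-summarize, but the spirit really is a one-liner):

```python
# Scrape a page with Beautiful Soup, then summarize it with Hugging Face.
import requests
from bs4 import BeautifulSoup
from transformers import pipeline

html = requests.get("https://en.wikipedia.org/wiki/Machine_learning").text
text = BeautifulSoup(html, "html.parser").get_text()[:3000]  # stay under model limits

summarize = pipeline("summarization")  # Hugging Face picks a default model
print(summarize(text, max_length=80)[0]["summary_text"])
```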

Matt Merrill: So, a couple of follow-ups there. You mentioned Beautiful Soup. What's that? I've never heard of it.

Tyler Renelle: Beautiful Soup is just a web scraping library for Python. Let's talk about languages a bit. Python is the language of data science and machine learning. Another popular language for machine learning is Scala, because of Spark, and I guess Hadoop. A lot of data science is data at scale, parallelizing data processing over multiple servers, and Spark was a popular framework for that, so Scala became a popular machine learning language. But then they ported it to Python, because more people were using Python for machine learning.

It was such an uphill battle that they said, okay, we give up, let's create PySpark. So now Spark works with Python. R is a popular language for machine learning too, but it's a bit more in the data analytics crowd and is fast being phased out in favor of Python. MATLAB is popular in the academic crowd for research purposes.

Again, starting to phase out. It's becoming very clear that there's a winner language here. If you're going to learn machine learning, learn Python; honestly, don't bother with any of the other languages. It's a bold statement, but I think most of my colleagues would agree.

Matt Merrill: I think a lot of the folks listening to this podcast, just because of what Rocket and DEPT® do, are coming from a web background. And the nice thing about Python is it can do that too. You can't go wrong learning Python these days; it's not a waste of knowledge.

You can script DevOps stuff with it.

You can script random stuff. It's the language of stats. It's pretty great. So, about what you were saying, taking the Hugging Face library and just feeding some text into it: in your podcast, Machine Learning Guide, do you walk through some of this stuff?

Tyler Renelle: Yeah, absolutely.

I talk about a lot of these libraries and the technologies that are used, even specific function calls you might make. I don't go too much into code, because it's obviously an audio format, but the podcast has a companion GitHub repository called Gnothi, this AI journal that I'm building open source, and I use Hugging Face through and through.

But we were talking about how easy it is to do machine learning these days. I mentioned XGBoost for tables and Hugging Face for language stuff. More surprisingly, if you look at the Hugging Face models on their website (wait, I thought this was a language library?), it works with table data and images as well. But the first step to getting started with machine learning is to not actually use any Python libraries yourself; it's to see how far you can get in the cloud, first and foremost. Cloud services on AWS, Google's GCP, and Microsoft Azure offer these machine learning tasks as hosted services whose endpoints you can call.

That's beneficial to you as a developer, because many of these machine learning models want a GPU; about the only one that doesn't is XGBoost. With Hugging Face, or anything dealing with time and space, you're going to want a GPU. If you're developing on a Mac, you're out of luck, because Macs don't have NVIDIA graphics, and NVIDIA is pretty much required. You can get machine learning working with OpenCL and third-party libraries (something called PlaidML), but it's just a pain in the neck.

What you really want is a PC, either Linux or Windows, and if you're using Windows, use WSL 2 to get Linux. In other words, your only option is pretty much Linux. That's not completely true, but you really want to be developing on Linux because you want to tap into your NVIDIA graphics. Even then it's a pain in the neck to set up, CUDA and all that, though there are ways around it by way of Docker. And if you're developing a machine learning model, you're going to deploy it to the cloud eventually.

So you might as well get one step ahead of the curve by working with the cloud first. These three main cloud providers have machine learning models as hosted REST endpoints for almost everything you're going to be doing: tables, space, and time. Sentiment analysis, text classification, images. AWS has its image classification services: you pipe it an image, and it'll tell you everything that's in the image. Or you give it a table of data, spreadsheets: there's a service called Autopilot that's worth knowing, as well as GCP's AutoML. These are cloud services where you just hand over a spreadsheet and they do the number crunching.

You just tell it which column you're trying to predict, and it will look at the spreadsheet or the database and say: okay, this is the one we're trying to predict; it looks to be numeric; maybe it has some out-of-bounds values, outliers and such; I'm going to try to dial that in using some magic. I'm going to take these columns; this one's text, but I want numbers, so I'm going to turn that text into categories. It does all the magic for you behind the scenes. It selects which model to use (usually XGBoost, but you don't have to care), then tunes and trains the machine learning model for you. And then it will optionally host that as a REST endpoint for you to call with future data, which is the point of training a machine learning model.
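Once such an endpoint is deployed, calling it really is just a request. A minimal boto3 sketch, with a hypothetical endpoint name and feature order:

```python
# Send one row of new data to a deployed SageMaker endpoint and read back
# the model's prediction.
import boto3

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="housing-autopilot-endpoint",  # hypothetical name
    ContentType="text/csv",
    Body="1.2,1500,3,2",  # e.g. distance_downtown, sqft, bedrooms, bathrooms
)
print(response["Body"].read())  # the predicted cost
```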

Matt Merrill: So the REST endpoint is a facade for whatever algorithm this thing, I don't want to say determines for you, but helps you pick, and it just takes new data and runs it against the model that implements the algorithm. That's right.

Tyler Renelle: That's amazing. That's Autopilot, AWS Autopilot. They also have a service called Canvas. Autopilot and Canvas, by the way, are under the umbrella of AWS SageMaker, their suite of machine learning-oriented tools.

What's cool about SageMaker? Obviously the tools inside it are incredibly powerful; like I said, Autopilot will automatically generate a machine learning model for you. But what's also cool about SageMaker is that it pipelines those tools together as needed.

You can either use these tools à la carte within SageMaker, or use them as part of a pipeline that you build and then deploy. Then, as new data comes in, it will send it through the pipeline: do some feature engineering and feature transformations, pipe it through Autopilot, and then redeploy a REST endpoint, all that stuff, for me.

Matt Merrill: So it almost manages the adjustments to the model for you as it gets more data. Because even from the very little I've looked into this stuff, I know that as of even two years ago that was a big deal: okay, I've got to retrain the model and redeploy it, and so on. And this stuff will do it for you.

Tyler Renelle: So within SageMaker there are tools like Clarify that keep an eye on drift and bias in your data and can kick off processes that retrain and redeploy the model. If I were to tell this audience how to get started technologically, I would say: start with SageMaker, actually. SageMaker uses, under the hood, much of what I've mentioned in this episode, like XGBoost and Hugging Face. They're using those open source technologies in their stack, but they're a black box to you; you just send it data and it sends back a response. Becoming familiar with SageMaker will get you the furthest, fastest.

You'll also already be in the cloud, ready to deploy when the time comes, because the time will come.

Matt Merrill: So, one of the questions I was going to ask, and at this point I don't even think I need to ask it, but I am going to: can I still do this?

And based on everything that I've just heard you say the answer is a resounding yes.

Tyler Renelle: Resounding yes. I suck at math as well.

Matt Merrill: The best analogy coming to mind for me: I'm a web developer, and I do a lot of API work. I don't need to know the details of the HTTP spec to make an API.

I can use a library like Express, or Spring MVC in Java, and it handles all that stuff for me. But I do need to know some of what's happening under the covers, right? A simple example: if I receive a bad parameter, I want to return a 400-level error code, because that's the standard.

So one of the things coming to mind is that obviously you can't go into this blind, right? You have to know which knobs to tweak and what to dial up and down. Or: ooh, SageMaker tried this for me, but it doesn't feel right, I'm not getting it right. How do I learn just enough to be able to do that?

Maybe you can speak to that.

Tyler Renelle: So, I'm a huge fan of top-down learning. Start by building an app. Now you have an app, and: oh, this tiny module here doesn't get the job done with the auto-tooling I've been using; I want to unpack it and write it myself. So start writing it yourself. Oh, this module requires math? Learn only the math needed, and it will become obvious in the process.

When I start a machine learning project, the process starts with: is it a table? Autopilot. Is it text? I use their Comprehend service. Images? I use their image Rekognition service. That'll get you 80% of the way. But there are clear gaps in the SageMaker tools; very quickly you will discover things you need to write yourself.

Okay, so let's crack open Python and download Hugging Face or XGBoost or something. For example, the AI journal gives book recommendations. You journal today: "my boss yelled at me." Over time it gets to know you, and it will start recommending books: okay, maybe you should read these emotion-management books. And it acts like Pandora.

So you can thumbs-up and thumbs-down its recommendations. The book recommendation part is simple, off the shelf: you run your journal entries through Hugging Face and turn them into vectors, and now I can compare those vectors using cosine similarity. A lot of that stuff is off the shelf, but the thumbs-up-thumbs-down process gets a little hairy.
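The vectors-plus-cosine-similarity step might look like this, a hedged sketch using the sentence-transformers library (the model name and texts are illustrative, not Gnothi's actual code):

```python
# Turn journal entries and book blurbs into vectors, then compare them.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")
entry = model.encode(["My boss yelled at me today."])
books = model.encode([
    "A book on managing emotions at work.",
    "A field guide to the birds of New England.",
])

print(cosine_similarity(entry, books))  # higher score = better recommendation
```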

That part is handwritten in Python using a framework called Keras. So: Python's scikit-learn is the shallow learning library, and the deep learning libraries are TensorFlow and PyTorch; those are the two popular ones. Then, to add extra complexity, there's this thing called Keras that sits on top of TensorFlow and makes it easier to use; it's a syntax overlay on top of TensorFlow specifically. TensorFlow is by Google. It's like the Angular of deep learning, and PyTorch is the React of deep learning, you know what I mean? I would so much rather use PyTorch, like React, most of the time.

But there are times to use one or the other. Usually, if you're using another library and they've already selected a deep learning preference, then you've just got to roll with it. So it pays to know both, later, after you've tried your hand at SageMaker and it didn't fully do the job.

Matt Merrill: And then you plug that custom code into things SageMaker has already done for you?

Tyler Renelle: Yes. SageMaker has this concept of bring-your-own-model, bring-your-own-container, and bring-your-own-something-else, I can't remember; there are three prongs. You can use their services, like sentiment analysis: you send it a string, a review, and it'll come back with sad, angry, happy. Easy peasy.
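The episode doesn't name the exact service, but AWS's hosted sentiment analysis lives in Comprehend; a minimal boto3 sketch:

```python
# Send a string, get back a sentiment label plus confidence scores.
import boto3

comprehend = boto3.client("comprehend")
result = comprehend.detect_sentiment(
    Text="The product broke after two days and support never answered.",
    LanguageCode="en",
)
print(result["Sentiment"], result["SentimentScore"])  # e.g. NEGATIVE + scores
```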

Say we want to get a little more complex: I want to classify this text, whether it's about baseball or about football. You're going to do a little training of your own, but because it's such a common type of scenario, SageMaker offers tools around it. You do have to write your own code, though, so they call that bring-your-own-code. You can still deploy it to SageMaker without worrying about GPUs and scalability, and the libraries are pre-installed, but you do have to handcraft a little bit of code, and it's going to subclass something. And then finally they have bring-your-own-container.

That's for when you're like: no, I'm doing real customization here. I don't want to worry about the GPU and scalability, but that's it; I want to handle all the libraries and the code myself.

Matt Merrill: So that's "bring your own container." Cause maybe you need some kind of custom library or something like that you don't offer out of the box.

Tyler Renelle: Right. You subclass their Docker container, and then you do the rest yourself.

Matt Merrill: I'm drawing some parallels here to Lambda, where Lambda will give you the Node runtime and you can run a JavaScript Lambda, but you have to stay within the bounds of whatever Lambda gives you.

Tyler Renelle: Perfect analogy. SageMaker is the Lambda of machine learning, plus a bunch of extras: the pipelining and stuff like that, and monitoring. It monitors and then sends you email alerts: look, your data is starting to drift; look, your users aren't who you thought they were. It's incredible. It's super powerful.

Matt Merrill: People are going to start thinking we're selling SageMaker. We're legitimately not; we're just legitimately excited about it.

Tyler Renelle: We talked earlier, and, like, I'm not a DevOps guy, though I've been taking an interest lately. But AWS just seems to be the most popular; it's the easiest place to start. It just so happens that I don't know crap about GCP or Azure, and that's why I'm harping on SageMaker. It's a knowledge thing; I'm basically taking the easy way out here.

Matt Merrill: As a counterpoint, even though I know nothing about this, I did work with a client at DEPT® that was using Google Cloud.

They were doing all kinds of image training, training image recognition algorithms. I worked with their AI engineer, or ML engineer, and he raved about GCP. In fact, they were using GCP for that reason; he drove the rest of their organization to use it. So there's something to it. And that's a really good caveat to give people: this is just what we know, through the lens of one person at one client, but what he was doing with image recognition was pretty neat.

Tyler Renelle: That's really interesting to hear, actually. Our parent organization, DEPT®, is all GCP (and they're mega data scientists), and Azure too, actually.

Matt Merrill: Yeah. Rocket is mostly AWS; like, all of our clients use AWS. A lot of DEPT clients, which right now anyway are primarily in Europe, are all using Azure and GCP.

Tyler Renelle: Isn't that interesting?

I'll have to hit the drawing board one of these days and check out those offerings, 'cause yeah, DEPT swears by them.

Matt Merrill: I was talking with our head of DevOps, Gerawan, the other day, and I agree with him that GCP is much easier to use than AWS. AWS's interface can be very difficult to use.

Tyler Renelle: Very difficult to use. AWS is so old; everything is just band-aids on band-aids from like 2007. As for the math to know, it's calculus, statistics, and linear algebra. But don't go learn it up front.

Matt Merrill: I'm tapping out if that's the case.

Tyler Renelle: Look, you'll know it when you need it. So, what I was getting at with that app: I wrote a Keras neural network that learns the thumbing process, thumbs up a book, thumbs down a book. It's hard to explain why I use what I use there, but the last layer of the neural network is something like layers.Softmax.

But what is a softmax? That part's not a super black box: you have to know what a softmax is to tack on that layer and understand what's coming out of it. So you look up softmax (oh, it's something in statistics), and then you deep-dive into that concept. You'll know when you need the math; the time will come if you really get into machine learning, but punt for as long as you can.
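For reference, here's that softmax idea in a few lines of Keras; the raw scores are made up:

```python
# Softmax squashes raw scores ("logits") into probabilities that sum to 1,
# which is what comes out of that final layer.
import numpy as np
from tensorflow.keras import layers

scores = np.array([[2.0, 1.0, 0.1]])      # raw outputs from the layer below
probs = layers.Softmax()(scores).numpy()  # roughly [[0.66, 0.24, 0.10]]
print(probs, probs.sum())                 # probabilities summing to 1
```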

Matt Merrill: There's this series of YouTube videos from a guy from the New York Times who explains math concepts for adults in very simple ways. I should find it and link it in the description. Is it StatQuest?

Tyler Renelle: I don't know, that doesn't sound familiar. 3Blue1Brown?

Matt Merrill: That could be it though. Those are good.

Tyler Renelle: Those are really good. StatQuest will get you mostly there as far as the math knowledge needed for machine learning. Nice.

Matt Merrill: Nice. I think we touched on a lot of really good stuff. I'm super interested, I'm charged up, I want to start now. So, a shameless plug here:

How do we find your podcast? Because that's probably a good place to start, right?

Tyler Renelle: So just search for Machine Learning Guide in Spotify or iTunes or wherever you listen. The website is ocdevel.com/mlg.

Matt Merrill: Cool. And it sounds like, if you are interested, you should get yourself an AWS account and start playing around with SageMaker.

I know a couple of your latest episodes are on SageMaker; I'm going to go check those out. Anything else you'd recommend, or final thoughts on ML?

Tyler Renelle: So the podcast is a bit of a course, almost an audio course in a more conversational format; take a Udemy class and turn it into audio.

And I have a /resources section. Even if you want to skip the podcast, go to the resources section, because that's a step-by-step, beginning-to-end guide for how to learn machine learning, and I constantly keep it up to date. Like, the first course everybody on the internet recommends is the Andrew Ng Coursera course; that's an example of what's in the curriculum.

And by the way, a little teaser for folks: we're actually going to put up an episode on Machine Learning Guide about DevOps topics in machine learning. If you already know all this stuff (I don't know why you would have listened this far, but hopefully it was cool, thanks), you can keep an eye on the feed for that.

Matt Merrill: So that should be fun. Nice. Thanks for taking some time to explain this to me like I was five. I appreciate it.

Tyler Renelle: Thanks man. Thanks for having me on.