Ken Jee

Everything You Need to Know About ML Ops (Miki Bazeley) - KNN Ep. 110

Updated: Aug 8, 2022


Today, I had the pleasure of interviewing the incredible Miki Bazeley - The MLOps Engineer for the second time! Watch the first interview here: https://www.youtube.com/watch?v=Ii2Qo5pwWho&ab_channel=Ken%27sNearestNeighborsPodcast. We dive into everything ML Ops in this episode. What is it, what is the future of it, and how can you break into it? We also touch on cultural differences and their impacts on the workplace.


Transcription:

[00:00:00] Miki: What is the data scientist uniquely sort of positioned to do and contribute to, and same with ML Ops engineers? And what I found is that a lot of the ML engineers I know, it's possible that many of them, you know, used to be data scientists and they decided to develop more of an engineering skill set and move into that role. Whereas with a lot of the ML Ops engineers that I know out there, most of them tend to have more of a software engineering background.

So they're still, honestly, learning the ML life cycle: the feature engineering, the training, the experimenting with it. They're not gonna have a deep understanding of, Okay, the metrics we would use for a classifier versus a regression model, or things like that.

[00:00:54] Ken: This episode of Ken's Nearest Neighbors is powered by Z by HP, HP's high compute, workstation-grade line of products and solutions. So Miki, thank you so much for coming back on the Ken's Nearest Neighbors Podcast. You're part of a very select group who has been invited back. So far, I think you're either the second or third person who's had two episodes, which is very exciting, very special.

Today, we're gonna talk all about ML Ops. We're also gonna talk about workplace culture and a couple other fun areas. For those who are interested in hearing more about Miki's story, you can tune into part one, which I've linked in the description and in the relevant text for the podcast files. So Miki, why don't you introduce yourself this time. Just, you know, what's your story? Or maybe, who are you, for those who haven't had the initial pleasure of hearing the first podcast?

[00:01:45] Miki: Yeah, totally. So, hey listeners, my name is Mikiko Bazeley. I currently work as a Senior ML Ops engineer for this little company called MailChimp, which was acquired by Intuit in November. It's based in Atlanta, Georgia, so I work remote from San Francisco, which is where I was born, raised, grew up, and have spent the majority of my adult working life. You know, I went away to college at UC San Diego. My initial focus was to study, you know, go to medical school, become a doctor and all that, you know, make the ancestors proud.

You know, and that ultimately ended up being a five-year kind of failure that landed me in anthropology. We talk about it a lot in the other episode, but essentially I had no background in computer science or engineering. So, you know, it was six or seven years of basically trying to break into tech, and eventually breaking into tech and moving my way from analytics to data science to, now, ML Ops engineer, all without a master's or PhD or any kind of additional accreditation or graduate schooling.

I did do a bootcamp with Springboard, and I really enjoyed my time there. I felt it was very instrumental to my pivot into data science. You know, and more recently, besides doing work in ML Ops engineering, I did launch the YouTube channel, which, I have to say thank you to you and Luke for promoting it, because wow.

That was awesome to hit 1,000 subscribers in the first week of launch, I think all thanks to you guys. So I'm doing that, I'm also doing a bunch of technical writing, and I'm developing some educational resources on ML Ops, because it's an area I'm very passionate about, especially for people who are kind of like me: they didn't really go the traditional route, they didn't have other options. So, you know, for you listeners who feel like you fit into that mold, I have a lot of stuff for y'all.

[00:03:51] Ken: Amazing. So maybe it would be helpful just to define what ML Ops is. I know it stands for ML Operations, but beneath that kind of opaque name, how can we more broadly define it?

[00:04:04] Miki: Yeah, absolutely. So I view machine learning operations as the practice of both developing platforms and infrastructure to make deploying machine learning applications easier, but also, really, the focus on end-to-end responsibility and accountability, and getting that codified as much as possible.

So you're not focused on, like, the nitty gritty of the data science process, for example. You're really more focused on: how do I, as an ML Ops engineer, enable data science teams to innovate and deploy at scale? So that's the entire focus that I'm interested in.

I do have a blog post coming out, probably within the next week or so, potentially, depending on when this is released, talking more about it. But it really is a domain and practice that still looks a lot like DevOps, you know, there's a lot of things that I would say are sort of derivative. It's almost like DevOps, just applied to the quirkiness of the ML life cycle and ML products.

[00:05:13] Ken: Amazing. So, for example, a day in the life: what would an ML Ops engineer, or someone in ML Ops, do? What would they be focusing on that might be different from a data scientist or data analyst or data engineer?

[00:05:28] Miki: Yeah, absolutely. And I think it's good to point out that not all companies will have an ML Ops team, and the reality is that that kind of investment is honestly not the right strategy for all companies. So there is this guy ... I'm blanking on how to pronounce his last name. He has this GitHub project, a repository called You Don't Need a Bigger Boat, and a theme that he really emphasizes is this idea of reasonable-scale ML Ops. And, you know, much of the narrative and discussion...

It's really being driven by companies like Google, Apple, and Facebook, where, you know, they were the first ones to solve some of these big data challenges. But the reality is that the way they operate, the scale they operate at, and their kind of concerns are not gonna be the same as, for example, a Fortune 500 company that maybe made its money in other industries, like e-commerce, retail, or medical, or even these small startups that are very data savvy.

But they don't have the same kind of resources to buy compute and storage. And so ML is gonna look different at each of these companies. In terms of, for example, a company that does have a full-on ML Ops team, like MailChimp does, here's what the day to day or the week looks like.

It really is a combination of, one, building out platforms or tooling to, you know, facilitate a lot of this end-to-end automation, which is a big part of it. We'll rely very heavily on open source tools. Actually, I'm not necessarily gonna drop any of those names right now, but there are some very popular tools out there for monitoring, for experiment tracking, for model registries, that we are either looking at or investing in.

Because we think it's very important for enabling the data scientists to innovate. So we'll do a combination of open source tooling. We also have a managed cloud provider; we specifically use GCP. You know, and we try to essentially bridge a lot of these gaps and smooth out the rough edges so that data scientists can just focus on...

You know, working with the business partners and legal and product, understanding what the business need is, making sure that they're getting access to the data, making sure that they're able to train their models. And then, you know, once they've written the tests that they need, really being able to package it into a CI/CD kind of pipeline.

So CI/CD stands for continuous integration, continuous deployment. The longer version of it is CI/CD/CD, which is continuous integration, continuous delivery, continuous deployment. In continuous integration, the focus is on building and testing the code and merging it as quickly as possible, with this idea that you wanna catch bugs early on. With continuous delivery, that's basically where you release the code to the repository or the code base, which then, in continuous deployment, gets pushed into production.

So deployment is where it will actually, finally get in front of users. And we also wanna make sure there's all this monitoring and health checks. I kind of see continuous integration and continuous delivery as the "we're trying to get it to work and we're trying to get it released" part.

And then continuous deployment is the "we are now putting it out into the world for people to use" part, and also making sure that if it fails, we can roll it back really quickly, before people notice, ideally.
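To make that distinction concrete, here is a minimal sketch, in Python, of the three stages described above. Everything in it is a hypothetical placeholder (the image tag, the service URL, the deployment commands); a real pipeline would live in a CI system's configuration rather than in one script, and MailChimp's actual setup is not shown anywhere in this episode.

    # Hypothetical sketch of CI, continuous delivery, and continuous
    # deployment as three stages. All names and URLs are invented.
    import subprocess
    import sys
    import urllib.request


    def continuous_integration() -> None:
        # CI: build and test the code on every push, to catch bugs early.
        subprocess.run([sys.executable, "-m", "pytest", "tests/"], check=True)


    def continuous_delivery(version: str) -> str:
        # Delivery: package the tested code into a releasable artifact,
        # e.g. a container image pushed to a registry.
        image = f"registry.example.com/ml-service:{version}"
        subprocess.run(["docker", "build", "-t", image, "."], check=True)
        subprocess.run(["docker", "push", image], check=True)
        return image


    def is_healthy(url: str) -> bool:
        # Health check against a (hypothetical) /health endpoint.
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False


    def continuous_deployment(image: str, previous_image: str) -> None:
        # Deployment: put the artifact in front of users, and roll back
        # quickly if it fails, ideally before anyone notices.
        subprocess.run(["kubectl", "set", "image", "deploy/ml-service",
                        f"ml-service={image}"], check=True)
        if not is_healthy("https://ml-service.example.com/health"):
            subprocess.run(["kubectl", "set", "image", "deploy/ml-service",
                            f"ml-service={previous_image}"], check=True)

The point of the sketch is the separation of concerns: integration proves the code works, delivery produces something deployable, and deployment (with its health check and rollback) is the only stage users ever see.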

[00:09:18] Ken: Amazing. And so, tell me, I just thought of this sort of analogy, and I want to know if it hits home, cause I love analogies. I also love food. So something I think about is: maybe data scientists are more like a head chef who is experimenting. They're creating new dishes and they're trying to put them out there. Once you have this new dish, if you wanna serve it to people, you have to create a system to produce it reliably and get it out there.

And put it in front of people, and there's quality control. You have someone that's tasting stuff, you have someone that's making sure that everything that goes out of the kitchen is ready to be served. And essentially the whole process of taking that recipe from "oh, we made it" to making it consistent, making it reliable, and checking on it is the process of ML Ops. Is that a reasonable proxy?

[00:10:11] Miki: Yeah, absolutely, absolutely. And if you think about it, you know, the first step is that companies need to be practicing good DevOps. Just straight off the bat, if they don't have that kind of culture already, they're not gonna succeed at the ML Ops level, because ML Ops really is this combination of DevOps and an awareness and understanding of the machine learning life cycle.

You know, and the thing is, for example, if something goes bad in the kitchen, potentially someone gets poisoned. They die. That is very bad, absolutely very bad. When a model in production goes bad, it depends on what that model is. So let's say, for example, we're a medical device company and you're dealing with a population of people with chronic conditions.

You could be really hurting people on a very, very large scale. The ubiquity of machine learning models in everyday consumer devices means that, you know, not only does the company take on risk if something happens, yes, but it also means that everyday users of these applications can be harmed, and they won't even know it.

And especially too, when we think about mental health. Like, wasn't it Facebook that did that experiment with emotions, right? Where they, I think, tried to curate people's feeds to either be more depressing or more happy. And given the emphasis on mental health nowadays, that we have that kind of human testing, for them to have done that...

And for people to not have known that they were participating in it, who knows what kind of damage was happening in that time. Now, the nice thing was that it was a short period, but let's say, for example, you know, you have a recommendation service, or anything like that; it could be really, really bad.

So, you know, companies should probably appreciate ML Ops not just because it'll help them, you know, improve innovation, cut down on toil, and create new products and features, but because that set of practices can also help them live up to some of the corporate responsibility values that they might be practicing.

[00:12:35] Ken: Fascinating. So, I mean, it seems like there's a lot of possible negative ramifications if these systems are not put in place, if we're not doing a good job of constantly checking in on these things. You've mentioned DevOps a couple times. Obviously that comes from the software engineering side, and you've described ML Ops as being very similar. How is ML Ops different from DevOps? I know I've asked a couple people this question before on the podcast, and I've gotten different answers every time, so I'm interested to hear your thoughts.

[00:13:05] Miki: Yeah. I mean, I see ML Ops as, I don't wanna say a subset of DevOps, but an extension of it. So DevOps set down the foundation, for example, for collaborative communication. It wasn't just, Okay, this is how you should set up a repository, and you need to have these tests that automatically run on your code every time you push it.

DevOps was also about the culture and practice of: how do we reliably, you know, deliver software in a more scalable way, in a way that wasn't just burning people out, and that was breaking down silos, you know? So all of that is incredibly important, especially in machine learning products, where, you know...

Like, I've been in a lot of situations where I've had to educate both software engineers about what the machine learning process actually looks like, and data scientists about what the software development life cycle looks like. And then, you know, there's also this internal developer experience that you're trying to cultivate as an ML Ops engineer, because at the end of the day, you're still building these linkages.

And if the data scientist can't use the tooling that you're building, or the software engineer runs into issues with bugs or data getting slipped into the models... So, you know, I've been in situations where communication is so important. ML Ops takes a lot of the same practices.

The tooling landscape is gonna look a little bit different, but I really think it's an extension of DevOps, adapting to the specific quirks of machine learning products. You know, like, for example, the fact that you need to have code, data, and model, whereas for traditional software, you didn't really have the model.

And even then, the data was important, but it wasn't driving the model, and the model wasn't driving the code, you know? So, I don't know, I think, once again, you still need to have really great DevOps practices in order to have a good ML Ops practice.

[00:15:27] Ken: So, you mentioned ML Ops engineers a second ago. Who does ML Ops? Is it just ML Ops engineers? Is it machine learning engineers? Is it data scientists sometimes? Is it data engineers sometimes? Who does that responsibility fall on, or is it even a project manager's responsibility in some sense?

[00:15:43] Miki: Yeah, that's a really great question. And I mean, it just really depends. I think, for example, you can be applying ML Ops practices without necessarily having a team there. But I do think, for example, in my role, we work very, very closely with the teams that would roughly correspond to data engineers as well as data scientists.

So it's not like there's only two out of three of us in the room; we really form this kind of triangle, right, where each point is equally important. In terms of who is responsible for ML Ops, it depends on the company. In bigger companies, where you've seen a longer-term investment in data and data science, you'll see distinct teams, right.

In startups or companies that are a little bit more midsize, a lot of times those responsibilities end up being shared by multiple teams. Usually you'll still have either a data engineering team or a data science team come first: then you have the data engineering team, then the data science team, and then usually the company will start investing in an ML Ops team, if they feel it's really necessary.

[00:16:51] Ken: So it seems like ML Ops can be done by some group of people. Yeah. But it's also kind of supposed to be done by everyone, right? Data scientists are supposed to have a certain role in the process that sits underneath ML Ops.

Like, maybe they design the model and they deploy it into some infrastructure or some system. Or maybe they're not deploying it, but they're giving it to someone to deploy, or it has to be in a certain format; it has to be accessible to the next person up the chain of command. So in theory, ML Ops is sort of this all-encompassing thing, but some people are just implementing it. Is that correct?

[00:17:32] Miki: Yeah. So, a really good way I like to think about scope of responsibility is what's called the RACI matrix, right? Where it's Responsible, Accountable, Consulted, Informed. Some teams are responsible and accountable for driving it, for signing off on building the infrastructure; usually that's gonna be the ML Ops team, right. A lot of other teams are either consulted, where, you know, you'll ask them for feedback, but they're not really driving the project, versus teams that are informed.

And I feel like data engineering teams kind of fall more on the accountable and consulted sort of range, and then the data science team will usually fall on the consulted and informed end of that.
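As a toy illustration of that split, here's one way the RACI idea could be written down in Python. The stage names and letter assignments below just echo Miki's rough description, not any official matrix.

    # Toy RACI matrix for an ML pipeline. R = Responsible, A = Accountable,
    # C = Consulted, I = Informed. Assignments are illustrative only.
    RACI = {
        "infrastructure": {"mlops": "RA", "data_eng": "C",  "data_sci": "I"},
        "data_pipelines": {"mlops": "C",  "data_eng": "RA", "data_sci": "C"},
        "model_training": {"mlops": "C",  "data_eng": "I",  "data_sci": "RA"},
        "deployment":     {"mlops": "RA", "data_eng": "C",  "data_sci": "C"},
    }

    def who_is_responsible(stage: str) -> list[str]:
        # Return the roles marked Responsible for a given stage.
        return [role for role, raci in RACI[stage].items() if "R" in raci]

    print(who_is_responsible("model_training"))  # ['data_sci']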

And I think it just depends on every part of every stage. So, for example, let's talk about model training, right? What's the responsibility of the ML Ops team versus the data science team? So in my opinion, and this is not necessarily one that's shared by everyone...

...but in my personal opinion, data scientists are absolutely responsible for the model. They need to understand: what is the model supposed to be predicting? What algorithm are they using to train it? How are they doing the hyperparameter tuning, all that good stuff, picking the algorithm?

What metrics are they looking for? I don't know if an ML Ops engineer can be responsible both for the infrastructure and for all of that. So, for example, as an ML Ops engineer, the way I might facilitate model training is I'll ensure that the development environment is standardized.

So, for example, if they're using SageMaker, or GCP's Vertex AI notebooks, formerly known as AI Platform, all that good stuff. With each of those instances, right, you can choose the type, you can choose a GPU or TPU or whatnot. It's my responsibility as an ML Ops engineer to first make sure that the development environment that they're using actually matches what we're able to provide them in production. But I'm not necessarily going to be responsible for making sure that their model is performing where they need it to, or even that, for example, the model is unbiased, right?
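A tiny, hypothetical sketch of what codifying that dev/prod parity might look like; the spec fields, versions, and accelerator names here are all made up for illustration, not anything from MailChimp's stack.

    # Hypothetical dev/prod parity check: declare one environment spec and
    # fail loudly if a notebook environment drifts from what production
    # can actually provide. Field values are invented examples.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EnvSpec:
        python: str        # e.g. "3.9"
        framework: str     # e.g. "tensorflow==2.9.1"
        accelerator: str   # e.g. "nvidia-t4"

    PROD = EnvSpec(python="3.9", framework="tensorflow==2.9.1",
                   accelerator="nvidia-t4")
    DEV_NOTEBOOK = EnvSpec(python="3.9", framework="tensorflow==2.9.1",
                           accelerator="nvidia-t4")

    # Run this in CI before any training job is allowed to kick off.
    assert DEV_NOTEBOOK == PROD, f"dev/prod drift: {DEV_NOTEBOOK} != {PROD}"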

I think you do have to separate some of these concerns: what is the data scientist uniquely positioned to do and contribute to, and same with ML Ops engineers. And what I've found is that with a lot of the ML engineers I know, it's possible that many of them, you know, used to be data scientists and they decided to develop more of an engineering skill set and move into that role. Whereas with a lot of the ML Ops engineers that I know out there...

Most of them tend to have more of a software engineering background. So they're still, honestly, learning the ML life cycle: the feature engineering, the training, the experimenting with it. They're not gonna have a deep understanding of, Okay, the metrics we would use for a classifier versus a regression model, or things like that.

[00:20:54] Ken: This episode is brought to you by Z by HP, HP's high compute, workstation-grade line of products and solutions. Z is specifically made for high performance data science solutions, and I personally use the ZBook Studio and the Z4 Workstation. I really love that Z workstations can come standard with Linux, and they can be configured with the data science software stack. With the software stack, you can get right to work doing data science on day 1 without the overhead of having to completely reconfigure your new machine.

Now back to our show. Interesting. So it seems like ML Ops, in theory, could be done by software engineers who are maybe adopting, or getting familiar with, more of the data- and machine learning-specific tool set.

It seems to me that ML Ops is very much about the tools and the technologies that are relevant for production, less so than maybe the finer math and the statistics and those types of things. Yeah, absolutely. So where does ML Ops stand now in terms of maturity? I think that might be a very difficult question, because, you know, get 20 data scientists or data engineers in a room and they would...

They would completely disagree on how mature those things are. But where do you view ML Ops in terms of maturity, and where do you expect it to be in maybe the next five years?

[00:22:18] Miki: Yeah, absolutely. I mean, there have been some really good blog posts written about this very topic. For example, Mihail Eric wrote "MLOps Is a Mess But That's to Be Expected"; I think that's the name of the blog post. And there's been a few others, but I think the consensus is that ML Ops right now is in a very nascent stage. So what do I mean by that? Well, let's take a look at DevOps, right?

So, 2013 to 2015 or 2016: these were really, really magical years. Can you actually guess what technologies were released during that time, 2013 to 2015?

[00:23:03] Ken: What types of technologies?

[00:23:05] Miki: Tools that might be relevant to ML Ops or to DevOps, actually.

[00:23:10] Ken: Maybe like Docker. I'm trying to think of anything.

[00:23:15] Miki: Yes, we can go with that. So Docker, Kubernetes, what else? PyTorch, Keras, Jenkins. There's a bunch of others where that was the first version, the v1, that was released. Yeah. That was, like, how many years ago?

I actually have a Twitter ... so I need to go back and refresh the dates. But if you think about it, today, this is 2022, so that is only seven years. And yet you can't toss a stone in any direction on Google search results, right...

...about what you need to do to productionize a data science model, or any kind of application, without Docker being the first thing that comes up, right? So it's incredible. There are so many of these different technologies, and there's more: when you see the list of all the technologies or projects released between 2013 and 2016, most of them are foundational to what we think of as DevOps, but also to the deep learning work that's currently being done, right?

So, you know, when you think about that, that's only about seven years ago, and yet we already have such a robust understanding of how to use those tools. Whereas a lot of the ML Ops tools, I think, have only been around for maybe the last two or three years, maybe four or five. And they tend to be very specialized.

So I think the thinking will continue to evolve, and I think the tools will continue to mature. I don't know if anyone wants to be in the state where they're using a single tool for every little part of the pipeline. There will be some maturation and shakeout, such that as an industry we'll probably align on at least a few tools in each category, instead of a list of a hundred or two hundred tools, which is almost where it's at right now.

[00:25:05] Ken: What would be some of the most common ML Ops tools that people, you know, in theory should be familiar with if they're interested in this domain?

[00:25:13] Miki: Yeah, absolutely. So let me first start off by listing the tools and technologies that you should know regardless of whether you're getting into ML Ops, right?

So the first one is, I get this question a lot, where people ask: do I really need to know a cloud vendor or provider? And it's like, yes, absolutely. No one is buying data centers anymore. You have to deploy to the cloud. If you can develop and deploy in the cloud, that's even better, but you have to know a cloud provider.

A lot of people use Amazon, a lot of people use GCP, and some people use Azure, you know. But you have to know one, right. The second set of technologies is, you know, base container technology. So that's really gonna be Docker, and Kubernetes as well, I think, is really important.

Especially too, for example, both GCP and AWS have managed versions of Kubernetes, right? So in general, if GCP and Amazon have said, Okay, we're gonna do a managed version of this technology, it's absolutely important and absolutely worth looking at. And also, I don't know if people should try to learn all the cloud providers at once.

They should just pick one. And the reality is that whatever they learn on one, they can kind of transfer to the other, right. The third thing is that you need to be strong in some higher-level language, whether it's Python, Java, C++, Scala, you name it. I've looked at about 50 different ML Ops engineer job descriptions in the past couple weeks, just because I want to build out a blog post on the skills people need.

And that is just what people ask for. I'm sorry, R users, but R is almost never mentioned in any of the job descriptions for ML Ops. For data science, yes, but not for ML Ops. So someone should have at least one core language under their belt. They should also know SQL, too, you know. So those are important.

Once we start getting out of that, some technologies that people really get excited about: MLflow is a big one that people really love. There are three or four main modules that MLflow focuses on. One is experiment tracking. The second one is a model registry.

And the third, I think, is serving. So that's a really useful one for people to take a look at. Another popular one is FastAPI; people love that for writing asynchronous APIs for their projects. That's a really good one. I'm trying to think... Weights & Biases for experiment tracking. Let's see, Pachyderm and DVC for data versioning. I mean, there's a lot of them out there, for sure.
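For a flavor of the experiment tracking and model registry modules she mentions, here is a minimal MLflow sketch. The mlflow calls (set_experiment, start_run, log_param, log_metric, mlflow.sklearn.log_model) are the library's standard Python API; the experiment name, model choice, and data are placeholders, not anything from the episode.

    # Minimal MLflow experiment-tracking sketch.
    # Requires: pip install mlflow scikit-learn
    import mlflow
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    mlflow.set_experiment("demo-classifier")  # placeholder experiment name
    with mlflow.start_run():
        model = LogisticRegression(C=0.5).fit(X_train, y_train)
        mlflow.log_param("C", 0.5)  # experiment tracking: hyperparameters
        acc = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_metric("accuracy", acc)  # experiment tracking: metrics
        # Log the model artifact; passing registered_model_name here would
        # also register it, which is the model registry module.
        mlflow.sklearn.log_model(model, "model")

From there, the logged model can be loaded back by its run URI for serving, which is the third module she alludes to.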

[00:28:15] Ken: So what separates, say, MLflow? I know it's an open source platform, correct? A lot of them are, but some of them are privatized. What separates an open source platform from a privatized one, and why would someone use one over the other?

[00:28:32] Miki: Yeah, yeah, absolutely. So, Okay. In some cases, the managed, hosted version of a popular open source project shouldn't differ too much, other than that a lot of times it'll have a nicer user interface, and also it'll...

...store the data and the results on the website or the service. And that's basically how they charge you and how they monetize: they charge you for the data, right? Sometimes this can be a very useful relationship, because a company that offers a hosted or managed version of a product...

So, for example, Flyte is a project that's out there; they're a model orchestration tool that's akin to, I think, Airflow for ML models. I feel so bad, I should know them, cause I did a hackathon on their stuff, so I should know them, I'm sorry. But, for example, now they're doing a hosted, managed version called Union AI.

And basically, with all these open source projects, a lot of times they were initially developed at the big companies like Uber, Facebook, Google, or Apple, and then either they couldn't find a way to monetize it internally, or they decided to release it, such that now companies can do managed, hosted versions of them.

So sometimes that's a really great relationship, because the companies who are trying to make money off these projects are invested in, you know, putting engineering time and development towards them, right? Because everyone's seen open source projects where they sort of trend and then die off, cause no one wants to maintain them anymore.

So at least if you have a company that's behind it, they're saying, Hey, you know, we're willing to put our horses behind it; we're willing to put money and engineering time towards it. In a lot of cases, it is worth it. And with the open source project itself, people can try it out for free.

So I really encourage people: if there's a project out there and it seems reasonably well adopted, just go try it for free. If at a certain point, you know, you're part of a team and the team has decided, Okay, we wanna actually bring this into our tool chain now...

Even then, I would still look at how it plays with your cloud provider. So, for example, I know MLflow can be hosted on GCP, and I think you can store the experiments on GCP as well, either using BigQuery or one of their other data storage solutions.

There are other projects where it doesn't play as well, you know. So I think that is a consideration, ultimately: if your team is thinking about buying into that project, who's gonna operate it and who's gonna maintain it, you know.

[00:31:36] Ken: Awesome. And so, obviously we talked a lot about model maintenance and those areas. How does ML Ops touch data engineering, if at all? I'm also probably gonna interview Ben Rogojan and Zach Wilson about, you know, data engineering, and I'll probably ask them a similar question. For people that don't know, we're having another meetup this week brought to you by Bright Data, so special thanks to them for making it possible to have all these incredible guests, actually in person this time. It's gonna be so exciting. But to the question: how does ML Ops fit with the data engineering side? Does it touch the data pipeline and that type of stuff, or is it a little bit separate from that?

[00:32:15] Miki: Yeah, absolutely. So, you know, I would be so curious to see more talks or content about this, because there is a little bit of a bias towards every discipline or role speaking to how important their role is.

So in all the content out there, you see ML Ops peeps going, like, our role is super important and encompasses all these other things, and, you know, vice versa. But, for example, currently in my role, for part of the time, we were actually technically part of the data engineering org.

You know, and in a lot of places there's a very tight, collaborative relationship. So where do the boundaries begin or end? For our data engineering teams, the data scientists are the primary consumers, but they're not just focused on the data scientists. A lot of times they're working on data for the wider company initiatives.

And this includes analytics and metrics for the business teams, like marketing, sales, customer success. It's not just machine learning models. So they're responsible for the data warehouses, for the data. And occasionally there are some tables that they monitor more closely, because they're related to finance, or they're metrics and calculations that really should not be changing.

Right. Where their responsibility kind of ends, a little bit, is when the data scientist starts doing feature engineering and acquiring the data that they need for their models. And at that point, usually from a tooling perspective or an infrastructure perspective, the ML Ops boundary begins sort of around there, at the feature engineering.

And then it kind of ends at deploying to production. But even then, for example, if we have a machine learning model that's a live service, for example a recommendation service, right, we'll work with data engineering, and we'll also work with the front-end team, cause the front-end team ultimately needs to make an API call to the model.

Or they can also fetch pre-computed results, right? So those are cases where we'll work very closely with data engineering to make sure that, for example, we're not screwing up a bunch of the tables, you know, but also that it is performant and customers do get the most value out of the models.
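To picture that hand-off, here is a hypothetical sketch in Python of the kind of live recommendation endpoint she describes, where the front end either calls the model or fetches pre-computed results. Every endpoint name, payload field, and data value here is invented for illustration.

    # Hypothetical recommendation service: the front end POSTs a user id,
    # and we serve pre-computed results when available, otherwise fall
    # back to a (stubbed) live model call.
    # Requires: pip install fastapi uvicorn
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    # Stand-in for a table of pre-computed recommendations.
    PRECOMPUTED = {"user-123": ["product-9", "product-4"]}

    class RecRequest(BaseModel):
        user_id: str

    def live_model_predict(user_id: str) -> list[str]:
        # Placeholder for invoking the trained model in production.
        return ["product-1"]

    @app.post("/recommendations")
    def recommend(req: RecRequest) -> dict:
        recs = PRECOMPUTED.get(req.user_id) or live_model_predict(req.user_id)
        return {"user_id": req.user_id, "recommendations": recs}

    # Run locally with: uvicorn service:app --reload
    # (assuming this file is saved as service.py)

The pre-computed branch is why the data engineering collaboration matters: someone's batch pipeline has to keep that table fresh, while the ML Ops side keeps the live path performant.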

[00:34:58] Ken: Yeah, it's funny. I mean, it seems to me like ML Ops has more in common with data engineering than it does with pure data science, because it's the process of systematizing. It's making sure things don't fail; it's monitoring, which is really important, related to data quality and those types of things. And even though it probably touches more data science, there's a lot of parallels within those domains, in the thinking, if not necessarily the tools. It's a pretty fascinating relationship that's built in there.

[00:35:32] Miki: Yeah, absolutely. And it was so interesting, because Andrew Ng a while back, and you know I have mad respect for him, did this one talk called Data-Centric AI. And what was really fascinating was when he did the poll of the audience members.

Right. And he asked: what role do you think would be the easiest to pivot into ML Ops? And it was really fascinating. The result was that the number one role that people thought would make the easiest pivot into ML Ops was data science, was data scientist.

And I think the last was DevOps. And I thought that was wildly inaccurate, at least from my experience. I know for me, if I had tried making the jump from... well, I guess I did kind of pivot from data science to ML Ops, but I did spend a lot of that time actually working on...

...productionizing machine learning models and building up the data pipelines for this one startup I was working at during COVID, based in real estate tech. But I don't feel like it's a really easy transition, because I think the easiest transition is a data scientist going into ML engineering.

I think the much easier role to pivot into ML Ops from is DevOps. And it's really interesting: I see a lot of the roles out there in the job descriptions for ML Ops, and I don't know if someone really entry level could immediately jump into an ML Ops engineering role.

[00:37:04] Ken: So let's dive into that a little bit more. It seems like a lot of these skills you kind of have to learn on the job, because ML Ops, in some sense, is associated with large systems and systemic thinking. Can you refine that for me?

[00:37:23] Miki: Yeah, absolutely. And actually, I will refer back to that same talk I was just criticizing, the data-centric AI talk that Andrew Ng gave.

So it was fascinating, cause he had this anecdote where he was saying he had worked with this one big cloud provider. He did not name it, but everyone was guessing it was one of the big A's, you know, where the product team had made it a strategic marketing point to say that you needed big data to do machine learning and data science.

When the reality is that you probably don't; not all models need really, really big data to be effective, right? And I think it's the same thing with how to implement good ML Ops practices. I think someone who is trying to build a portfolio to make that pivot can probably implement a lot of the practices in their own projects, you know. And it goes back to Jacopo's reasonable-scale ML Ops. With big companies, for example Kmart, Target (Target's still around), Kroger, or these other companies where, you know, their main bread and butter is not data, but they want to incorporate data to extend and improve their core business offering, right.

For them, they have a lot of resources at their disposal, but they might not have a data-driven culture, right? So they can sort of buy the tooling that they need, but for them the concern is building up the culture and the people skills. Whereas for a small startup that is very data savvy and ML native...

...they probably have some headcount, but they're really constrained in terms of what they can buy, you know? So I think you can practice ML Ops at different levels. And there are definitely a lot of these maturity models out there, where people can get close to the functionality of, let's say, a three or a four on a five-level ML Ops maturity model in their own projects.

[00:39:37] Ken: That's fair. I think maybe the misconception I had, or maybe it's not a misconception, but where my brain was, is that ML Ops becomes increasingly valuable at scale. Yes, absolutely. So it's useful to have in a small scenario, and it can be doable in a small scenario. Yeah. But where you're getting the biggest dividends, and where people are focusing on it the most...

...are when we have to do these things at scale. And so, in terms of someone landing an entry-level ML Ops position, why do you think that that's so inherently difficult? Now, you touched on it a little bit, but I'm interested in maybe a more detailed answer.

[00:40:11] Miki: Yeah. Let me, I guess, refine that a little bit further. I think people can post entry-level ML Ops jobs; I think it is very tricky to be successful as an entry-level hire in ML Ops, unless you, for example, are joining a team or company where they have a very, very strong ladder and hierarchy of expectations for engineers.

So, for example, in a lot of companies, right, you have a junior engineer, you have a senior engineer, you have staff, principal, and also other levels, right? For a junior engineer, a lot of times the expectation is that you are given a pre-scoped task and you can execute on it. At the senior engineer level...

...you can sort of start defining your tasks and your project and execute on it. And then staff, a lot of times, is, you know, you're leading the strategy initiatives, you're helping and mentoring others. So there probably are junior engineer roles or entry-level roles out there; you'll only be working on a very small part of the stack, though.

To have a little bit more responsibility, you'd either have to be with a startup or you'd have to be a more senior engineer, and a huge part of it is because you need to have multiple skills under your belt. So, for example... I'm on the hiring committee for MailChimp, actually.

So I, myself, with about, you know, 13 or 14 other people, were trying to help design the culture and values panel. I've also been doing a lot of hiring for both the ML Ops team and for the data services team. So I've sat through tons of these interviews, and what we found was that we had a few different buckets of candidates.

There was the unicorn bucket, where, for example, the candidate had a great software engineering skill set and had also been involved in model training and deploying models: that was maybe one out of every 10 or 15 candidates that we found. A lot of times, the rest of the split was 50% software engineers who didn't have really any experience with the machine learning life cycle process...

...and then the other bucket was data scientists who didn't really have experience deploying models to production; maybe they would hand off a model to an ML engineer or something like that. And so what that says to me is that it's not even a capability issue; it's just that it takes a lot of time and investment to be good at so many different areas of the business. So at some point, if you're a candidate trying to get into a job, or if you're a hiring manager or a team that's trying to hire, and I think data engineering has similar difficulties, you sort of have to pick: what is the thing I'm willing to teach and onboard...

...the engineer that I hire on? A lot of times, we will prefer engineering candidates who we can then help onboard and teach the ML life cycle, because it's just a much harder skill set to find, and to nab before the big tech companies give them those big juicy offers, you know. But it could also be that as the industry matures, as the tooling matures, we'll see more people cross those intersections.

[00:43:46] Ken: Yeah. I think that's a very interesting contrast with perhaps a more traditional data science role. You know, a lot of the time with data scientists, you're indexing really heavily on communication.

You're indexing really heavily on maybe some of the software skills. And it seems like with this, it's: Hey, we want people that can do good engineering. Yeah. Not client facing, necessarily. So it's like, Okay, this is a really important part of the skill set and we need to focus on it.

And I think it's important to realize that in different positions, as well as within different companies, you can index on these different things. That's a pretty fascinating one for me. So, we've talked offline quite a bit, and I've encouraged you, or I've been a big proponent of you creating some educational content around ML Ops: you know, a course, YouTube videos, these types of things.

And I'm interested in what you found from starting to think about putting together a curriculum, or more of a learning pathway, down this domain. Is there anything that really stood out to you, where you're like, Wow, I didn't realize it would be this easy or this hard to put something together?

[00:45:03] Miki: Yeah, absolutely. So, something that I feel like I've brought to every single job, but that really stuck out in the most recent pivot to ML Ops, was how important the user experience of ML Ops tooling is, and specifically, when we're designing tools or infrastructure or platforms... For example, it's like writing code, right?

You can write code in a really clever one-liner function, but code is also meant to be read by people, meant to be used by people, meant to be developed and maintained. And it's the same thing with tooling. So that, to me, is really the underdeveloped area in ML Ops.

I feel like if more companies invested more time in the user experience of their tooling, they would get so much farther with adoption. And it was really fascinating, because I see some of this online, too, on LinkedIn, where people are like, Oh, well, data scientists are really bad at coding, and they almost use that to justify why we need ML Ops and monitoring.

And I think that's a very lazy way of looking at it. Like, Oh, I'm gonna blame this other group because they just can't get with the program, or something. You know, the reality is that in all these different roles in data, right, we all have to specialize at some point.

And the reality is that for a lot of data scientists, what they're incentivized by is the uniqueness and the creativity of the things they're able to produce, right? Really, I feel like where they should be getting paid and incentivized is on innovation. For an ML Ops team...

...I feel like our incentive is to provide as smooth an experience as possible, both because, you know, when you have a resilient and robust and scalable pipeline, yes, that's all great, more people can use the product, it's less faulty, all that good stuff, but more importantly, it makes it a better experience for the people who are trying to innovate.

Cause it's very hard: you're trying to create these really cool new products with, who knows, maybe the latest DALL·E or whatever, right? You have to keep an eye on the research; you have to keep an eye on the R&D that's going on in other companies.

But at the same time, you're now having to troubleshoot Kubernetes pods failing, and that's an awful place to be, you know? So I feel like that is super, super important. And I didn't realize how many people would resonate with that, the whole "you hear our pain" from a lot of data scientists, because I think a lot of times data scientists are so used to being told, Oh, it's cause you're a bad coder. But the reality is that, look, we need to design interfaces...

...so that they're easy for people to use. So that, to me, was absolutely fascinating. But also the whole phrase, right: the more things change, the more they stay the same. I feel like, for me, even making the transition, a lot of the challenges I was running into in knowledge and understanding were foundational software development, or an understanding of how web technology works, or distributed systems.

It was really, I don't wanna say basic, but foundational stuff. I feel like if you have a strong base in, for example, understanding distributed systems, understanding data, understanding good DevOps practices, it just becomes so much easier. And so when I realized I was starting to have issues there, I thought, Okay, well, maybe other people are having issues too.

And so that's where I want to target a lot of this educational content: towards people who have an entry point into one part of the stack or the process, right? It could be data science, it could be, you know, wherever, but who are being asked to operate at these increasing levels of complexity.

And they don't even know why, you know? That is so important for me, and I'm hoping the content kind of gets at that.

[00:49:31] Ken: I think that's, how do I put this... we don't realize things until we actually experience them or get data on them. Yeah. I mean, speaking to what you were describing with user interfaces, I didn't realize... so, for example, Anaconda.

On the computer, I always use the command prompt; I've never used the GUI, ever. But the vast majority of people, from polling, actually prefer the GUI. Yeah, right. And to me it's like, Oh, I would've never thought of it that way, but I guess it is actually easier. Maybe I'm a purist, you know, I'm not out here using Vim or anything, but there is some element of what we see and perceive that changes once we actually get exposure to what the vast majority of other people are doing, and then you naturally understand it.

And I think it's exciting how involved you are in this and how much interest you have in growing yourself as an engineer. To be able to apply that to educational resources is really, really exciting. I'm also interested in your experience, in general, creating more content and what that's been like. You know, obviously I'm someone who's a huge proponent of branding and...

...creating something for yourself. I think in the last episode I talked about how that's, frankly, a really important part of recession planning for me: having as many options as possible in revenue streams and those types of things. But I'm interested in your purpose behind creation. Like, why do you enjoy doing it? Why do you put stuff out there?

[00:51:11] Miki: Yeah, absolutely. And, going back to that whole recession-proof thing, I think in the last three or four weeks I've seen at least 20 companies announce major layoffs. And, I mean, that is terrifying. You know, I am really grateful that I have been building up an emergency fund.

Right. That was something that one of my early mentors encouraged me to do early on, and to automate it. Like Ramit Sethi's I Will Teach You to Be Rich, I love his stuff; automating a lot of that has been super helpful. You know, and as well, it's really fascinating, because right now ML Ops is an area that's still changing.

And I don't know if there is a bulletproof recipe for eyeballing or anticipating changes. You kind of have to just be ready for any opportunities that come your way. So for me, the content creation really has been about building a body of work that I can point to, both for future employers but also for potential clients, to say, Hey, you know, I understand these areas very deeply.

And I'm showing you my understanding, you know? And that has been the big motivator: creating a body of work. Especially because, you know, when we work in these big companies or these big teams, it's really hard to point and say, I did this, I moved the needle.

And so for me, having a body of work that exists outside of my main nine-to-five means that, one, I can always point future employers and clients to it. But also, it's that personal sense of satisfaction. I'm sure so many people experienced burnout during the last two, three years.

I know I was burnt out at many points in time, and, I mean, burnout can happen for a variety of reasons. It can happen from overwork, but it can also happen from not having meaning and purpose in your day to day. You work on these projects, you see them get scrapped by a company, you then get laid off.

And then, you know, after you've put in five, ten years, I mean, where's the gratitude? Where's the, I dunno, the pension or whatever, right?

[00:53:36] Ken: Yeah. I had a fairly harrowing point in my life where I realized that I was pretty replaceable, in a company's eyes. And that, to me, is the scariest thing, because, you know, I work really hard.

I give my all to a company, and yeah, I know from running my own businesses, sometimes it's just business and you have to let someone go who's maybe an asset, but not for the direction or the vision that you're going in. And that's scary, but it's also exciting to say, Hey, what can I do to give myself power?

What can I do to give myself opportunities going forward that don't conflict with the company mission, right? Absolutely. You can be a really good employee and you can create outside stuff. Yeah. And frankly, a lot of that stuff looks really good for the company: Oh, they have these incredible employees who are speaking at these events, who are producing these other valuable pieces of information. And I think a lot of people get wrapped up in, Oh yeah, this is the company I work for, this is what it is. But that is not diversifying very well in any sense.

[00:54:45] Miki: No, it is not. It is such a difference in the generational mentality and attitudes. Every time I switch jobs, I swear my parents have, like, many heart attacks. They're like, Oh my God, not again, another change on LinkedIn. That's how my mom found out that I had quit my job at Teladoc in the middle of the quarantine situation, even though we were offered a year's guarantee. And I was just like, you know what? I can't handle it. I was constantly being gaslit by my manager at that point, you know; it was soul sucking.

I had to do twice-daily check-ins on the status of my work. Every day I went in going: I know the mission is supposed to be to help people with chronic conditions, which is a very valuable mission, absolutely, especially since this was the population being impacted by COVID. But I go in there every day and I feel a little bit dumber and less valuable.

And so even though my parents couldn't understand it, I told them I'd quit to go work on the startup that I had been doing during the weekends and nighttimes. But it was really more that I had quit so I could not just make the pivot to ML Ops, but so that I could regain a sense of self-worth. When you have a bad working relationship, it's like a traumatic romantic relationship.

It's so toxic, you know. So, I mean, for me, Ramit, I think it was Ramit who wrote about the tripod of stability, and I really like that idea, where it's like: at least three points in your life need to be absolutely stable, like on Maslow's hierarchy of needs, whether it's the roof over your head or making sure you have an emergency fund. Being able to have an identity outside of work, too, is super important.

If my job goes, I am still Miki. I'm still serving up hot takes on YouTube and LinkedIn and Twitter, still yelling at people there. But if my company stays, I'm also the same person, you know, and there's room to grow from there, for sure.

[00:56:52] Ken: Yeah, room to grow I think is a massive thing, and just giving yourself the opportunities to expand and go beyond that. I mean, a lot of people forget that we work for the majority of our lives, as much as we do anything else except for maybe sleep. And to me, if you're doing something that you don't enjoy for that long, it can be very damaging.

And, you know, of course we're unbelievably grateful to have the opportunity to work at all. But if you also have the opportunity to take time off, to pivot and change careers and positions, why would you be so grateful for that specific job when you've worked hard and done all these things to create other opportunities that you can then pursue?

So I think a lot of people, especially our parents' generation, they're like, Oh, you should be grateful for the work and the stability, you should be grateful for all these things. But at the same time, what if you have the opportunity to pivot, to find something that's a better fit for you?

It's kind of idiotic not to pursue that, or not to go down those paths, just because this company gave you an opportunity. No, that company will cut you loose if things aren't going well and keeping you isn't in their best business interest. Again, especially at larger companies, there's this more cold business logic.

But that is sort of the way of the world. Why are we as individuals scrutinized a little bit more when we make a cutthroat business decision than a business is when they make one?

[00:58:37] Miki: Yeah, absolutely. I do feel like in America we have this, especially the middle class, and I consider myself middle class.

We have a really weird relationship with companies and the 1%, where we'll just sort of bless everything they do, and then we're like, Oh no, but if we do the same, that's terrible. It's this weird thing of being both beholden to, you know, big corporations and the 1%, but at the same time wanting to be them. It's a weird sort of subservient envy. And I understand it, but I also don't understand it, in a way.

[00:59:22] Ken: Yeah. You know, I think we scrutinize individuals more carefully than we do organizations because we can't really personify organizations. So there's something going on right now with professional golf, it's probably completely off your radar, but a new league that's backed by the Saudi government has come up.

[00:59:42] Miki: Oh, okay. I see the problems there.

[00:59:44] Ken: And they're offering a lot of money to individual golfers, hundreds of millions of dollars, to come play on their tour. And so a lot of people are very critical of the individual players who are going and taking that money. But if you also look at it, the US government has had a really longstanding relationship with the Saudi government, and a lot of US companies have had those relationships too. And so we're giving this extra level of scrutiny to these individual players who are making that choice.

For, you know, a lot of money. I mean, they're already making a lot of money, and that's probably the conflicting thing, but it's just interesting to see how differently it's portrayed when we talk about the companies, where we don't really even think about it, right? Yeah. We're probably boycotting the oil and gas companies for different reasons right now.

We're not boycotting them for doing business with Saudi Arabia, but we're, like, rioting in the streets over these individuals who have made this very public decision to go and play for a lot of money. And I'm not gonna say whether that's a good or a bad thing. But I think what's important is we're seeing the difference between that individual criticism and that global company criticism, and I have no real explanation for why that might be, you know?

[01:01:02] Miki: Yeah. I mean, it's interesting too, because with something like the most recent Supreme Court decisions, what we saw was tech companies saying, Okay, well, we're gonna expand our medical benefits coverage, and I know my company has done this too, to cover out-of-state travel if you need to get, you know, certain services and all that. And it was fascinating because I've seen some LinkedIn comments where people are like, the companies are paying you not to have families.

It gets a little bit weird, but basically one poster was like, Oh, you know, these tech companies are paying you to not have families instead of helping pay for you to have a family with IVF treatments and all that. And in my head I'm like, well, a lot of these tech companies that are offering the expanded coverage already offer IVF services.

Like, I know every company I've worked at... Interesting, I did not know that. Yeah, well, okay, not every company, but the last two or three big companies I've worked at, they've had coverage for fertility treatments, for example, among other things.

Like, I think Facebook was maybe one of the first companies to offer egg freezing services as well. So a lot of these tech companies that are offering expanded coverage, to, you know, get an abortion or what have you, they already offer benefits to grow your family. And in my head, I'm like, look, at the end of the day, companies are trying to compete for talent.

The way I look at it is that any benefit they add is not something I should be grateful for. It's them competing for my attention, or not my attention, but, you know, a worker's attention. Right. It's not out of the kindness of their heart necessarily, right?

Exactly. At the end of the day, we're all kind of having to look out for our own best interests, right? So when a company offers a high salary and benefits and all that, it's not that, I mean, I am grateful that I'm in a position in life such that I can be choosing the opportunity, but I'm not gonna offer additional gratitude to a company for basically saying, we'll bid more for your time than this other company.

You know? And that's a really fascinating thing with our parents' generation: the minute you say that, they're like, you are so full of it, you are so conceited. And it's like, no, I am just not willing to take a lot of the nonsense that their generation had to deal with.

Especially my mom, as a woman in the workplace, there was some nonsense where, you know, I don't feel like she was as valued for her contributions. And for me, it's that weird thing of, I want to honor their sacrifice, but sometimes honoring that sacrifice doesn't mean doing exactly what they want me to do.

It means living life to the fullest and taking advantage of the opportunities that are presented to me, even if it makes them so uncomfortable when I switch jobs. So uncomfortable.

[01:04:16] Ken: Yeah. Well, I think that's sort of the crux of it: if you do have a valuable skill set, or you've established yourself, you can switch jobs, you can take these quote-unquote risks, you do have these other opportunities. And for our parents' generation, and also in different cultures, that's not something that's necessarily viable. I mean, there's this whole 996 movement going on in China, where in order to be competitive in the workforce, you have to work that schedule.

Yeah. I think it's 9:00 AM to 9:00 PM, 6 days a week, or something along those lines, right? Yeah. And, you know, it's just interesting to see the different dynamics and different cultural things associated with that. And something we also talked about offline was that working in different countries, the populations are so much more homogeneous, right?

If you're working in China or India, it's kind of likely that everyone you're working with is also Chinese or Indian, and even in, like, Germany or Sweden or something, the people you work with are a lot more similar to you than they are different. And in the US, you get people from all different types of backgrounds.

All different types of experiences, all different types of behavior patterns. And for people that are coming from other countries and working here, I think that's probably the biggest difference you'd see. And it can be shocking. Like, how do you integrate with those types of things, right? How do you not only work, but also be respectful of other people and their cultures, because they are inherently different from yours?

[01:06:03] Miki: Yeah. And I mean, it was so fascinating with my work on the hiring panel, helping to develop and lead some of the interviews for data engineers and ML Ops engineers.

We had candidates where, you know, so the first panel is usually what we call the culture and values panel. And we're looking for candidates that are a culture add, not a culture fit, where it's like, are you exactly the same as all the people who are already there? Instead, we want a candidate who we feel can bring something new or different to the table in a very positive way. We still wanna screen out for red flags, you know, but in that regard, our criteria tend to be a little bit more holistic.

Right, while still trying to be fair. And it was fascinating because we had some candidates where, so we're asking, like, how do you contribute to DEI? What does DEI mean to you? And then they're like, Okay, well, first can you define it? We're like, Okay, yeah, we can do that, sure.

You know, diversity, equity, and inclusion, right? And they're just like, yeah, well, sorry, can you give me an example of what that looks like? And it's like, we want you to give us an example of what that looks like. You know, some candidates were very honest. They were like, look, we actually haven't worked with people of different genders, orientations, religions, and all that other stuff.

And other candidates gave us examples where the optics weren't great because, oh God, I'm cringing, I'm remembering some of these stories and I'm just really cringing, they had done something that would've been very offensive, or was derogatory or discriminatory, or something like that.

You know, it was interesting, it was mind-boggling. And it's also kind of fascinating because there are so many of these policy decisions going on that impact the workplace. I also see a lot of these posts on LinkedIn where, when we're talking about DEI in the workplace, people are like, this does not belong on LinkedIn, you know, da, da, da, right?

Like, this belongs on Facebook. And I think there are a lot of valuable reasons why you would want a diverse workforce. But think about candidates who are, for example, coming to the US, where this is maybe the first time they've worked with people of a different gender, different orientation, or what have you.

I think the biggest gotchas are the communication patterns and also what's considered a respectful attitude. So, for example, I've had some engineers who have just been, I'm not gonna say at my current job, right, but I've interacted with some engineers who were very rude and kind of had this attitude of, Oh, you sort of just got here because you sweet-talked your way into the role.

Like, Oh, you don't really deserve the role that you're in. Or even just aggressively trying to hammer their point. And it's like, look, okay, on an individual level I can try to help coach them on that. But outside of communicating with me, they're potentially burning a lot of bridges, because the reality is that everyone wants to work in a safe, respectful, collaborative environment.

And if you engage in behavior that immediately puts people's backs up, not only are you not getting the benefit of that collaborative environment, but you're sort of destroying the psychological safety of it, you know? Yeah. Because the thing is, for all of us, life happens, right?

We all have stuff that comes up. So, for example, if you have a culture of psychological safety on an engineering team, it means you can raise your voice to say, Hey, I think these things are gonna be a concern. You can give feedback, and you can learn and grow from that experience.

You can ask for a mental health day, right? I think all of us have needed a mental health day at least once or twice in the last year. I know I have. You know, the psychological safety aspect is so important, and respectful communication is such a crucial part of that.

[01:10:37] Ken: Yeah. I think it's important to note that this is, in some sense, fairly specific to the US. But a lot of people wanna work in the US. And, you know, we're not sitting here judging other people's cultures, right? We're saying that if you do wanna work in the US, it behooves you to research and understand. And just like you were saying, you're not gonna judge a candidate for being like, look, I haven't worked in this domain before.

As long as those candidates have done their homework and they're receptive to, Okay, this is what it's like to work at this company, this is what it means to be a part of these types of teams, that's really the valuable thing. And so I think the takeaway here is, if you are someone who is international and maybe coming from a more Eastern culture, just realize and do your research about what it means.

Like, what does diversity, equity, and inclusion mean in these companies? Because there are very few big tech companies where that is not a major focus. And even if you're doing those things just because you want to have access to a role, as long as you're abiding by the system and you're treating everyone okay, that's acceptable.

You should ideally have a moral reason to do it as well, but in some sense it's still okay just to comply. If you treat people in a way that conforms to the societal norms and the company norms, you're not gonna piss anyone off, right? And you're not gonna create the kind of environment you were describing.

[01:12:09] Miki: Yeah. I think everyone wants to show up to work and feel valued and feel safe and feel like they can do the best work that they can, right? Though, well, first off, the minute a company says, we're like family, it's like, Ooh, red flag, red flag, run away.

But seriously though, I think, you know, work doesn't have to be your life, and your coworkers don't have to be your friends, but we're all professionals. At the end of the day, we all want to achieve something in our work. It could be a promotion; it could also just be coasting, too, for some people. But I don't think anyone wants to show up feeling demeaned, feeling angry, feeling like their contributions don't matter, feeling like they're being discriminated against. I think that's a horrible position to be in.

[01:13:00] Ken: Yeah. I think the way that simplifies for me is looking at it sort of like, I think it was John Locke who had the social contract, right? We give up some freedoms, or we abide by certain norms, to be able to interact in society as a whole.

And on a more micro level, you do that same thing in a company. Like, you don't say certain things that might be offensive, in order to get along with your coworkers. You treat your boss in a certain way, you treat your fellow employees in a certain way.

So as not to disrupt things. And, you know, in some sense there is room to question some of these things, and it's important to have feedback, but as a mature, calculated conversation rather than this aggressive rebellion. But if you want to aggressively rebel or do something else, you can go to a different company, or find a company that has a contract that makes more sense with your values and your systems.

And honestly, there's a lot of money going around, so in some sense you're paid a little bit to accept that contract as well. Miki, I think this was an awesome conversation. Is there anything else you wanted to touch on? Anything you wanted to dive into before we ...

[01:14:17] Miki: No, I think this was great. And we're just at the start of a really fun week, so I'm looking forward to everyone coming in and being able to engage with them. Thank you again, Bright Data, for the opportunity to do that.

[01:14:30] Ken: Yeah. A lot of fun interviews, a lot of really cool conversations coming up, a lot of really great content coming out this week. So I'm very excited, you know, Miki, along with I think at least 14 or 15 other people. So stay tuned and get excited.

[01:14:47] Miki: Bye everyone.
