Ken Jee

How He Breaks Down Complex Machine Learning Research for YouTube (Yannic Kilcher) - KNN Ep. 95

Updated: May 11, 2022


Today I had the pleasure of interviewing Yannic Kilcher. Yannic is a YouTuber covering state-of-the-art machine learning research topics. He has a Ph.D. from ETH Zurich and is currently the CTO of DeepJudge, a LegalTech NLP startup. In this episode, we learn how Yannic decided on a Ph.D. in AI, how he is able to make advanced research so digestible, and the reason why he wears sunglasses on camera. I hope you enjoy the episode; I know I enjoyed our conversation.

 

Transcription:

[00:00:00] Yannic: If something was in a newspaper, that was, like, really, really good information, or in an email, right? But now an email arrives and some Nigerian prince writes you that you've inherited some money. People know that not everything that's written is real. People know that a picture of, you know, the front page of Vogue isn't real. And I think the deepfake era will just introduce another medium. Like, say, hey, you know, talking-head footage isn't necessarily real, right? If it's suspicious, you know, be careful.

[00:00:41] Ken: Today, I had the pleasure of interviewing Yannic Kilcher. Yannic is a YouTuber covering state-of-the-art machine learning research topics. He has a PhD from ETH Zurich and is currently the CTO of DeepJudge, a LegalTech NLP startup. In this episode, we learn how Yannic decided on a PhD in AI, how he's able to make advanced research so digestible, and the reason why he wears sunglasses on camera.

I hope you enjoy this episode. I know I really enjoyed our conversation. Yannic, thank you so much for coming on the Ken's Nearest Neighbors Podcast. I love your YouTube channel. That's one of the main places where I find out about the upcoming trends in machine learning and AI. And I'm so glad that I can have you on to talk about your experience.

First of all, learning the data domain and getting an advanced degree in it, but also your story of creating content and making it as accessible as possible to people as well. It's a vision I think we both share, and I'm again really excited that we could have a dialogue here; I'm certain everyone is going to find this fascinating.

[00:01:41] Yannic: Thank you. I'm excited to be one of your nearest neighbors.

[00:01:45] Ken: I love it. Perfect. Well, you know, the first thing I like to do to get everyone familiar a little bit with your story is I'd love to hear about the first time you had exposure to data, the first moment where you realized, well, this could be a profession, or this is something I could spend a large portion of my life doing. Was that a pivotal moment, or was it sort of a slow progression over time?

[00:02:11] Yannic: I'm not really super duper sure where it started, but I'd say pretty late in life for me. I was certainly drawn to technical things as a child, like computers in general, but I had no one in the family, like zero academics or technical people or anything like this.

My family is a family of postal mail, essentially postal administrators and transport people. So nothing technical there. Then I went to university; I started studying medicine but switched over to computer science after a year. And I got slowly introduced. I think I had a pattern recognition class in my bachelor studies.

And then for my master's, I switched over to ETH, which is this fairly well-known university in Europe, in mainland Europe I think one of the largest technical universities, and it happened to be in my country. So that was good. And there, it turned out that somehow all the lectures I chose were machine learning related.

And I didn't consciously try to do that. I was just like, wow, that sounds interesting, oh, this sounds interesting. And all of a sudden I had, like, three computer vision classes and probabilistic graphical models and big data and data mining. So that was, yeah, I'd say, a slow progression, and then deep learning was just starting and I had the opportunity to do a PhD in the matter. So that was kind of my way into AI and data science.

[00:04:01] Ken: I think it's always interesting. It seems like you sort of followed the path, I wouldn't say of least resistance, but the one where you're like, oh, this seems interesting, I have an appetite for it, I have at least some prerequisites, go for it.

And you sort of pull on the thread, and eventually you unravel the entire ball of yarn. And I love that approach. For a lot of people breaking into these things, it's like, if you follow your interests, you're inevitably going to end up somewhere, right? Interests don't usually end up circular.

You don't end up where you started by pulling on a thread. You end up with at least something new, something in your hands and something to experiment with. And I think so many people try to put the cart before the horse. It's like, oh, I see the outcomes of working in this field and I want to get there.

And then they realize that they didn't like the field to begin with; they just liked the outcome. And, you know, it doesn't seem like your approach was outcome focused. It was actually very process focused. And I think that's really a beautiful thing, actually.

[00:05:01] Yannic: Yeah, definitely not. And I see this, that a lot of people want to become data scientists nowadays just because it's, I guess, cool and sought after.

And there are a lot of jobs, which is, you know, all the more power to the people who educate themselves to get a good job. But that definitely wasn't me. I was like, well, okay, this seems interesting. And then I did that, and I was kind of lucky that it coincided with the deep learning boom.

[00:05:29] Ken: Yeah. Well, I think for anyone who has the appetite to go and pursue a PhD in anything, you cannot approach it with just the end in mind. I know quite a few people who have dropped out of PhD programs because they're like, I don't enjoy doing this enough to stick with it for three, four, or five years.

And, you know, with that in mind, I'd love to hear the story of your PhD, how it went. I realize you finished about a year ago, but a lot of people I know are considering: is it worth doing a PhD? And obviously, I would hope we both agree it's like, well, it depends on your situation, it depends on your appetite for learning, it depends on a lot of things. But I would love to break down your experience and what you got out of that.

[00:06:12] Yannic: Yeah, to the people who are considering it: at least in machine learning, it is really a matter of whether you like doing ML research. There are definitely careers available without the PhD.

So if you're just considering it career-wise, you're probably not going to have the best of times, because you have to enjoy the process, as you say. I slid into it relatively, also kind of a path of least resistance. I really didn't know what to do after my master's. I enjoyed my master's thesis, which was in convex optimization.

I got promised deep learning, but it turned out to be convex optimization. And then I just spent, like, a summer playing video games, and then I slowly ran out of money. So I was like, okay, what am I going to do now? Well, I guess I could do a PhD. That seemed pretty interesting. So I contacted my professor, and I was very lucky again that he had just started out, like, a year prior and was building up the research group, and therefore there were spaces available for doing PhDs. And he already knew me and knew that I was sort of capable from the master's thesis. So I essentially slid right in there. This was definitely a good opportunity, because nowadays it's quite hard to get into a machine learning PhD. It's easier if you already know the people there, but you know, if there was an open position, we'd get a three-digit number of applicants who were all really good.

Right, I mean, some better, some worse, but we'd get a huge number of applicants for each position. So again, I realized how lucky I was kind of after the fact. So I started, and then I had no idea what to do. I think I only understood that my goal was to write and publish papers well after a year into the PhD.

So I was like, okay, I'm going to do some research. And then slowly I kind of learned by osmosis from my surroundings what I had to do. I see people coming into the PhD who know exactly what's going on, right? They're like, okay, I'm going to do this, I'm going to publish there, this is my niche. I think it took me a long time to find that, anything that I could, you know, research and dive into. But then eventually I found it, in the space of GANs first, which everyone did at the time, but then adversarial examples. And yeah, I also teamed up with some other great people during that time.

And finally I got my papers published, and with the drive of my professor saying, come on, finish, that's how I finished.

[00:09:25] Ken: That's awesome. Something I do want to ask you and touch on: you said there's such demand for these positions in PhD programs, but from what I understand, the number of machine learning or AI focused PhD programs is very limited.

You know, a lot of schools offer programs in computer science where there's, like, a concentration in machine learning, or statistics where there's some sort of concentration, but true-blue AI and machine learning PhD programs are very few and far between. Do you see more of them coming in the future, or is it something where there's an administrative burden to create these programs?

So there just aren't really going to be that many, but you can sort of get one under the guise of another.

[00:10:12] Yannic: It's probably both a bit. So there is an administrative burden, but I think it is being overcome. There are more and more programs springing up in various places that are offering PhD programs or even master's programs in AI or in machine learning, things like this.

It just took a while to catch up to the trend and to really be clear that, okay, this is a thing that's going to stay and not just a passing hype, and therefore the universities, I guess, finally caught on and invested more resources and created these programs.

Yeah, it takes a while, but I think they are coming out now. Europe, I only know Europe, I don't know what's going on in the rest of the world, but Europe now has a number of programs across different universities really focusing on machine learning.

[00:11:07] Ken: It's interesting. I think Europe is actually a little bit further ahead than the U.S. is on that front. Maybe not in the master's programs, but definitely in the further education programs. I've met a couple of friends who've done master's degrees even in AI, which is not really common in the U.S. at all, which I find pretty interesting.

I think at a broader level, I would like to know if the education system in Europe is more flexible or less flexible than in the U.S., and it might vary by country, which is a benefit. I mean, I'll ask you.

[00:11:44] Yannic: Yeah, I have little clue. I mean, I don't have great insight into the U.S. education system. I have a couple of friends who've gone to college there, but at the end of the day, I'm not really sure. I'm not even really sure how it compares across European universities. I have an N equals one experience, which was pretty good on my part. Yeah, I'm not competent to make these kinds of comparisons.

[00:12:12] Ken: No problem. I think that's something I have to do more research into, and maybe actually collect some data on. That would be a fun little one. You've mentioned that in pursuing a PhD you're doing research, and I'm interested, and I kind of know, but for the audience, I think they'd be really interested in understanding what the difference is between machine learning or AI research and practice.

I think that's something where, you know, I do practice, I do virtually no research, and you probably have a pretty good mix of both. And what are some of the constraints, or what are the things that people should know about these two different types of work within the data domain?

[00:12:54] Yannic: Yeah, they are quite different from each other.

If you do machine learning in practice, I think most of the focus, what people usually do in practice, goes into the data. It's easily the part where you can gain the most; it's the most low-hanging fruit: getting good data, cleaning up your data, post-processing, being really smart about it.

And then the ML techniques, I want to say they're often quite, no, I want to say basic. And by that I don't mean simple, but standard. Let's say standard, that's a better term. Because if I take, let's say I'm in NLP, right? I take something like BERT, which is an established model, right?

It is well understood how to fine-tune it, and so on. I can get maybe a 2%, a 5%, maybe a 10% boost by really putting the newest and latest research efforts into that. Or I can get a 50% boost by cleaning up my data. Right. And the practitioner will focus on what's most pragmatic, what's most relevant,

what gives them the output they want, and the understandability. So I think, yeah, practical ML looks different from the research. In research, you're really trying to understand one specific aspect of the ML space. And that could be anything. It could be, you know, data quality, but it could also be, whatever, how does the introduction of batch norm affect the curvature of the loss landscape, something like this. And then you're trying, bit by bit, tiny piece by tiny piece, to make this global understanding a bit larger. So I think the difference there is quite staggering, in that most PhD theses you will never be able to directly, let's say, apply to a lot of practical circumstances, even though most of them claim state of the art on some benchmark.

[00:15:06] Ken: I think that's really fascinating. I mean, to me, I've always been intimidated by the research side, because it's a little bit like, where do you start? Right? I have to choose something that I want to optimize.

I have to choose something I want to improve or understand better. Do I start with the math? Do I start with examples and practice? I would imagine you can approach these things in multiple ways. Do you have a process? Like, when you were doing your PhD research, how did you settle on the problems?

How did you narrow down the scope enough that you felt like you could make some reasonable progress without getting bogged down in all of the other stuff that comes with focusing on a very specific problem?

[00:15:44] Yannic: Well, I'm a very crappy academic; I don't advertise my processes. However, I can tell you what I've found, what I've observed in others to work pretty well. And that is to be guided by your interests, right? The most successful academics even strategically choose their interests. But most people go with their interests, and then

they go in a direction. And then it's really about getting to know every single thing there is about that particular topic. And as you do that, you start narrowing it down, until you really have a little piece where you can say: I know everything that exists on the planet, every single paper that has been written on this very tiny particular topic,

everything around it. If there's math involved, I know all the proof techniques that are in this tiny subfield. And at that point, you should also have a good understanding of what the open questions are, and that will be easy to figure out, because you'll be reading the papers and the books and whatnot and having conversations about these things.

So at that point, you know what the open questions are. And then it takes a little bit of, it's almost like an art, and that's where an experienced professor or an experienced post-doc comes in, who can say: okay, this one of the open questions could be tackleable with the tools we have now, right?

You always want to be at that edge of knowledge, and you want to tackle something that you are pretty sure is doable. You never know, but you want to tackle something where, well, the probability is high that if I focus, I can make a bit of progress. I think that is the most successful process of doing research that I have observed in my peers.

[00:17:45] Ken: I think it's really interesting how business people or companies view PhDs versus what you just described. Historically, from hiring and a lot of these things, people say, oh, you know, this PhD must know everything about this field.

They spent so much time studying it. But what you just described is that this PhD knows an incredible amount about this very small, specific part of the field that they studied to do their research. And I think that's a really interesting observation: yes, as a PhD, you know how to specialize in something, and you might be able to scale specialization, right?

You might be able to specialize in something else really well over a long time. But it's not like you're coming in with this vast breadth of knowledge; it's more this really specific thing. And I think there's a lot of confusion in hiring and with companies around that specific area. I don't know if there's anything to say beyond that, but it's a very interesting observation from what you're describing.

[00:18:44] Yannic: Yeah. A PhD is mostly a certificate that you can stick with a hard problem for a long time, one that isn't even really specified. That is a PhD. And then the knowledge that comes with it is kind of irrelevant. Certainly your specific niche field will be irrelevant. Of course, by osmosis you'll know quite a bit about the surrounding topics, but it is in no way a certificate of expertise in, like, the general field of machine learning or something like this.

[00:19:15] Ken: Yeah. Well, something you did describe pretty in-depth is that when you're learning about a specific area, you're collecting a ton of research. You're reading all the papers that are out there. You're really diving into the topic. I'm interested at a broader level, because it's very relevant to your career, YouTube, which we're going to talk about soon.

How do you, one, find research that is relevant to a topic? What are the best resources for that? And two, how do you go about digesting that research? How do you intake it? How do you go through it?

[00:19:47] Yannic: So I used to read arXiv. I used to read all of arXiv, like, not all of arXiv, but on arXiv you have these different lists, right?

So one is machine learning, one is computational learning or learning and something, I'm not even sure, and one is computer vision, and all the new papers would be released on these lists. And so every day you could go and see what the new papers were.

I would download all of them and put them into a folder. I did that in the morning. Then I went on the train, which took an hour from where I lived to the university, and I would read all of the titles of all of the new papers. It was still possible at that time. It is not possible anymore.

I mean, okay, some people do that, but it's crazy how much research there is. So I think everyone relies on some combination of filtering mechanisms. I personally still every now and then look at an arXiv feed. I look at the usual places like Reddit and Twitter, and there are various pages like Arxiv Sanity, pages that just kind of distill things down to what you find interesting and also what other people find interesting. And I think by this network of topic filters combined with social filters, you can get a fairly good idea of what's going on if you follow it, right? But it is an inherently noisy process. I miss 99% of all the research.

And so does everyone else in the field. Then again, I think a lot of times, if something's really kind of breakthrough-ish or a big deal, it usually surfaces in multiple places. So yeah, I'm pretty sure I'm not missing those things, but every now and then I'm for sure missing some kind of hidden gem that would be super interesting to examine.
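The daily triage Yannic describes, scanning the new arXiv listings and keeping only titles that match your interests, can be sketched in a few lines. This is an illustrative sketch, not his actual tooling: the sample feed, the keyword list, and the `filter_titles` helper are all made up for the example. In practice you would fetch a real Atom feed from the arXiv API at `http://export.arxiv.org/api/query`; here a tiny embedded feed keeps the sketch self-contained.

```python
# Minimal keyword filter over an arXiv-style Atom feed.
# A real run would download the feed, e.g.:
#   http://export.arxiv.org/api/query?search_query=cat:cs.LG&sortBy=submittedDate
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by the arXiv API

# Hypothetical sample response standing in for a downloaded feed.
SAMPLE_FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Scaling Laws for Neural Language Models</title></entry>
  <entry><title>A Survey of Graph Databases</title></entry>
  <entry><title>Adversarial Examples in Computer Vision</title></entry>
</feed>"""

def filter_titles(feed_xml: str, keywords: list[str]) -> list[str]:
    """Return entry titles containing any of the given keywords (case-insensitive)."""
    root = ET.fromstring(feed_xml)
    titles = [e.findtext(f"{ATOM}title", "") for e in root.iter(f"{ATOM}entry")]
    lowered = [k.lower() for k in keywords]
    return [t for t in titles if any(k in t.lower() for k in lowered)]

# Keep only papers touching the topics I currently care about.
hits = filter_titles(SAMPLE_FEED, ["adversarial", "scaling"])
print(hits)
```

Swapping in a social filter, as he mentions, would just mean adding another predicate, for example intersecting this list with titles trending on Reddit or Twitter.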

Yeah, the ML field has become as much about self-promotion and marketing as about research. Big companies are really good at that, obviously, but I think there has also emerged a new class of academics who are good at it, and I think that's cool. And digesting it, I mean, a lot of digesting is just filtering: what is relevant and what is not, what I'm interested in and what I'm not. And then once I have something that I find interesting, I just commit to making a video about it. And that's kind of my way of forcing myself to read something very thoroughly, because I feel really crappy making a video about something that I didn't really read.

Sometimes I won't read the abstract, sorry, the appendix, because the paper might already be super long, or I just don't have time to read the appendix, or I don't expect something interesting to be there. But then I already feel very on edge, because I could say something like, oh, they didn't even test whatever.

And there could be, you know, two pages in the appendix testing exactly that. And, I mean, you also have the defense of saying, well, it's not in the main paper. So, but yeah, that forces me to read a paper very thoroughly until I really can explain it. And I guess forcing myself to do that has been a good way of digesting research.

[00:23:34] Ken: I really love that accountability factor, sort of accountability to yourself. I mean, I've found tremendous value in trying to teach things, because if I can't teach it and explain it, I probably don't understand it that well. And I'm glad someone else has tapped into that. The things you're teaching, frankly, are significantly more complex than the things that I'm teaching and talking about.

But at the same time, I think that is something everyone can take away, whether it's like, hey, I'm just going to write a blog post on every new algorithm that I learn, for my records, but I'm also putting it out there. There's that accountability factor. There's also a feedback factor, right? Where even if people on YouTube, for example, aren't very nice about it,

you're still getting constructive feedback about where you might've missed something, and you're able to do it better over time, which is incredible. I do want to dive more into your process for videos and research. But before that, I definitely want to ask about, you mentioned that, okay, from a commercial perspective, companies are producing research.

PhDs are producing research at universities. Whereas the bulk of research coming from the data domain, I think, is very different from, like, the medical domain, right? Where you have three main journals, and if you're not published in one of these three journals, there is some skepticism about your work, I would imagine.

And from what I've seen, the data domain is a little different: research is coming from everywhere. You have DeepMind, you have Google, you have Nvidia, you have all these places. Is there, like, a central governing body that vets these things? Compared to other professions, other domains that are heavily research focused, it seems just very different, and I'm interested in what you think of that.

[00:25:16] Yannic: There is a surprisingly similar construction, in that the machine learning community usually publishes in conferences, not in journals. There are a few journals, and they're becoming more common now, but mainly it's still a few conferences a year. So depending on your particular subfield, there might be two, three, or four conferences a year.

And it's pretty much the same deal: if your paper's not published there, you're not considered top tier or whatnot. In machine learning, this is typically ICML, the International Conference on Machine Learning, which is really big, and the Neural Information Processing Systems conference. These are the two really big ones.

And then there are various others that specialize more. There are big ones for computer vision, big ones for natural language processing, and so on. But it's very much the same. So that's where it's published. The other end is where it's coming from, and a lot of research is now coming from corporate research labs.

It's a bit skewed in the statistics, because we always have these statistics on who publishes the most papers at these conferences, and Google is always really big, and DeepMind and Facebook, Meta I guess, are always really big. And then the universities kind of trail, but then there are also a lot more universities than these big company labs.

So yeah, I can't really tell, but there's definitely a big industrial component to ML research.

[00:26:58] Ken: That's awesome. You know, something I think is really fascinating: I also like to read a lot of research about, you know, the human body, like fasting and a bunch of stuff like that. And the constraint with those is that you have to have

a very strict research methodology around how you treat people and all of those things, and that's significantly less prevalent in our domain. And so the amount of research you can publish is dramatically higher in terms of volume. I mean, obviously there are still more medical research publications out there and things that have been published.

But I would imagine they'll be eclipsed pretty soon just because of the ease of access, which is awesome. If someone, for example, did not have a PhD and wanted to get into research, do you have any recommendations on how they would approach that, whether it's at one of these corporations or just on their own?

[00:27:54] Yannic: The hurdle of getting into machine learning research has been lowered significantly in the last years, and that's really something that's cool about this field: pretty much anyone can get into it. Just to the point before about the journals, I want to amend that the landscape of publishing itself has really changed in machine learning.

In other fields, if your paper isn't in one of these journals, it's kind of like no one even knows about it. Now with COVID, I think preprints, kind of unpublished research, have become a bit more prevalent in medicine. In machine learning, I don't even look at conference publications; it's something you need for your impact factor or for graduation, or I guess in the companies they get bonuses when they publish at these conferences.

But no one cares, like, no one actually cares. People care when the research is out, when the preprint is out, and they're skeptical enough to evaluate it for themselves. There's also the other point that the reviewers at these conferences are just, like, the same people, right?

There are so many research papers that the chance that a reviewer is really perfectly knowledgeable, to the point where they could find the little mistakes and criticize, it doesn't happen. The review process, as we know, is kind of a random process. Really good papers do get accepted, really bad ones get filtered out.

But all of that bulk in the middle is sort of like a coin flip, and therefore there's really no difference to looking at a preprint and deciding for yourself: do I find this believable or not? And yeah, so if people want to get into research, maybe if you're not in it at all, the practical way is a good one.

So go to, whatever, the TensorFlow or PyTorch websites, do a bunch of tutorials, and just kind of follow that. There are online communities that talk about research, that even do research. There is a big movement of open science, open source research going on at various levels. So there are lots of places, and once you get started, you'll easily find those places. Yeah, I guess it's super easy, and you can find it easily as well.

[00:30:34] Ken: That's awesome. You know, I think one of the interesting things about having a lot of hyper-specialized research out there is that, as we both know, it's pretty easy to, not necessarily fudge numbers, but use data that's conducive to your means, or sample in a certain way that makes your model look like it's performing incredibly well when on other tests it isn't. I think there was some drama around that fairly recently with one of the new models, I'm not going to name the company, but it's one of the large companies, that they put out.

And, you know, I guess in this domain that's kind of okay, because the community will validate it for you. There's really low risk in putting some new things into production, or at least into testing and exploring them on your own, and people will be pretty vocal if it doesn't match the reality of what's going on.

On the other hand, if you're telling people to, for example, take a new drug, there are a lot harsher consequences around that. I mean, do you still think that this is a big problem, or do you think that market forces are enough to cover a lot of it, and that there's an active enough community that they'll basically, you know, shun the research if it really isn't consistent with what people are seeing?

[00:31:55] Yannic: I mean, there are various aspects there, I think. Yeah, as you say, the research in machine learning is largely such that people can check it for themselves. And the community has learned to become skeptical of new claims of a new best model and whatnot. We know that, you know, in order to publish, you need to present yourself in the best light possible, which means you go out and you select your data sets

according to the strengths of your model, you creatively choose the seeds for your random number generator, and if you don't get the results you want, you can just run the experiment again. There's so much you can do. But I don't think it is viable to push on that, to make that go away.
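The seed-picking effect Yannic describes can be shown with a toy simulation: if a noisy experiment is rerun under many seeds and only the best run is reported, the reported number is biased upward versus an honest average. Everything here is synthetic; the `true_accuracy` and noise level are made-up assumptions for the sketch, not measurements from any real model.

```python
# Toy illustration of "creatively choosing your random seed":
# compare an honest average over seeds with the cherry-picked best run.
import random

def run_experiment(seed: int, true_accuracy: float = 0.70, noise: float = 0.03) -> float:
    """Pretend training run: the model's true skill plus seed-dependent noise."""
    rng = random.Random(seed)
    return true_accuracy + rng.gauss(0.0, noise)

scores = [run_experiment(seed) for seed in range(20)]
honest = sum(scores) / len(scores)  # what reporting the mean over seeds looks like
cherry = max(scores)                # what reporting only the best seed looks like

print(f"mean over 20 seeds: {honest:.3f}")
print(f"best single seed:   {cherry:.3f}")
```

The gap between the two numbers is pure selection bias; nothing about the "model" improved between the runs, which is exactly why re-running until the result looks good is misleading.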

Just because of the nature of the field. I do believe people, people being skeptical and people sort of validating it for themselves, whatever a paper says is still the best way of doing that. That being said, there has emerged some research over the years. It's always been the case. More so in recent years that it's just kind of too large for the regular people that check right there, there is models with hundreds of billions of parameters that require a cluster just to run once.

And, you know, don't even talk about training them; that requires millions of dollars of investment. Right? So it is kind of a problem once it gets to that level, but still, you'll have at least a few competing big players in the market that keep a check on each other, or at least try to compete in that market.

So ultimately, you know, I as a consumer will go to the company that has the better model. So it's not that there aren't concerns, but I'm kind of a believer in being as open as possible: publishing stuff, putting knowledge out there, putting code out there, and then people go and see for themselves.

[00:34:14] Ken: I really like that. I think that, you know, there are constraints. I'm interested in whether you think that eventually there will be some form of monopoly on these types of things, or if there is enough competition that we will still see advancement without anyone owning everything.

I mean, as you mentioned, it seems like there are fewer than 10 major, major players in the AI advancement space. I think that's a roughly fair statement. Do you think that there will be more players in the future, or fewer, based on accessibility of compute and some of these other factors?

[00:34:57] Yannic: This is really hard to tell. I have no idea. There are multiple possible futures. The trend right now is certainly to build bigger and bigger models, and they do perform better the bigger we build them, and that is amazing. But it also means that ultimately economies of scale kick in, and that naturally leads to monopolies.

And the monopoly is accentuated because there are also just very few places that have certain types of data to work with that no one else has. Right. And so there's definitely a winner-takes-all element to this. But it could also very much be that we see this scaling-up approach reaching its limits.

And it might very well be that some invention comes our way that changes things and just mixes up the whole field. But I don't know. It's really hard to tell. Yeah.

[00:36:04] Ken: I mean, that is the beauty of technology: there could literally be a technological evolution, probably not tomorrow, because we'd at least hear rumblings of it, that could transform everything overnight.

And, you know, it probably wasn't a fair question for me to ask, knowing that there are infinite possible outcomes there, but that was an awesome answer, and I really appreciate it. I think something that we don't have to touch on too much, but that's interesting to me, is data quality and access

as it relates to those massive models. I mean, it seems like as of right now, the U.S. and Europe are pretty far ahead in terms of technology resources. But inevitably, I think they will be very far behind in terms of access to data compared to a place like China, which has a tremendous amount of access to data, data that we don't feel ethically

okay with collecting in the U.S. and Europe. And I'm interested if you see any implications of that on the models. I mean, obviously there's going to be a tipping point when technology does catch up. I think I know what would have to happen for that to happen, and it wouldn't be great for the entire world. But, you know, is that something that's on your mind frequently, or is it something just like what you were describing before, that, hey, anything could happen? We could have a technology change where the amount of data and the specificity of it isn't really relevant anymore.

[00:37:39] Yannic: Yeah, I have no idea. Right. I mean, there are too many variables right here. There's also the effect that, as countries get more prosperous and so on, people will also start demanding more,

let's say, privacy and control over their data and whatnot. The outlook on that is not something that is generally on my mind, like global geopolitical movements. It's certainly something I think about, but it's not like I worry about it. Yeah.

[00:38:16] Ken: Spoken like a true AI PhD: not enough data about it. I completely agree. I think it's a fun thought exercise, but, you know, for any one of us to understand exactly what the implications of that are is very difficult. I would love to transition into talking more about your YouTube journey. You mentioned that that's one great way to keep you accountable.

It's one great way to make sure that you're actually focusing on your research. Before we talk more about the channel and the specifics, I am very interested in the sunglasses. You wear sunglasses in effectively all of your videos. You're wearing them right now. What is the story there?

[00:39:02] Yannic: Yeah. I made these videos, and they were just me kind of talking over a paper presentation. So there was never a face in them, just my voice. I actually made some videos even before that, where I just sit there regularly and talk. But then at some point I got connected into this group that would eventually become the Machine Learning Street Talk podcast that we have.

And at that time I was just kind of worried about putting my entire face out there, because deepfakes were just coming up, and I thought, you know, if I'm going to put hours of my face talking out there, it'd be pretty easy to deepfake me into anything, right, and make me do anything.

Nowadays, the technology has advanced so much that probably a single picture suffices, so that's kind of out the door anyway. Like, the two videos I made without glasses are already enough to deepfake me into anything. But it just kind of stuck, because people would kind of recognize me by the glasses.

Actually, I think these were the very first ones I wore; they were just lying around. And then I had some that were super mirrored. I thought they were really cool, but they also reflected everything. Like, you could see exactly what my screen was, which is fine for me, right, but it was just really distracting.

So now I have the black ones. But yeah, it's a branding thing, and it's kind of annoying, because now I have to code with sunglasses on. That's how branding works. So, you know.

[00:40:53] Ken: I personally love it, and I can relate, not entirely, but with the glasses reflection. When I'm filming, if I have lights, it drives me insane.

I think it's probably worse for me, because you can see my eyes and you see the reflection over my eyes, and at least for me, when I'm reviewing the edits, it's like I need to reshoot it because of X, Y, Z. So yeah, I think that's a really fascinating and practical story, where it was like branding.

It didn't intend to be branding, but it became branding, and those are the fun types of stories as well. Now, you did touch on deepfakes. We talk a lot about how scary deepfakes can be, but we've also talked offline about some of the benefits that deepfakes can have. Can you talk a little bit about the uses for that technology which might be really beneficial rather than just overwhelmingly scary?

[00:41:51] Yannic: Yeah. I mean, in terms of usefulness, I'm pretty sure people can think of much more interesting, useful cases, but one that's, let's call it, mildly useful in the entertainment industry is that, for example, an actor could just sell their face without actually going and acting. Right?

So you could have someone else act for you, and then you just put your face on there and you get a license fee or something like this. And it goes further than that, obviously. I'm pretty sure there are medical applications for deepfakes, there are humanitarian applications, and therapeutic applications and whatnot.

But I'm generally of the philosophy that any technology can be used for good and bad, and for almost every technology, it's kind of undecided which one outweighs the other; it's more a function of the environment. There are some exceptions, like, you know, atomic bombs and so on. I mean, nuclear technology as such, no, but once you build something that's called an atomic bomb, then you're like, well, that thing's going to destroy stuff.

But for the bulk of technology, I believe you can always use it for good and bad, and that's just it. And I believe in humanity, and I believe that humanity will inevitably invent or find more good things to do with tech, just from the historical record; that tended to be what happened, because there's just more fun, more profit, in making good things than bad things. So, yeah.

[00:43:39] Ken: Yeah, that makes sense. So, you know, it's funny, you could probably have solved your initial sunglasses problem with some iteration on deepfakes, right? If you do some sort of generative fake.

[00:43:53] Yannic: Yeah. And with respect to deepfakes, I have to say, I don't think it's that scary, because it's just another medium that we can't trust anymore. You know, people used to trust that if a letter was written, that was solid information. If something was in a newspaper, that's really, really good information, or an email, right?

But now there's an email, and some Nigerian prince writes you, and you've inherited some money. People know that not everything that's written is real. People know that a picture of, you know, the front page of Vogue isn't real. Like, people know. And I think the deepfake era will just introduce another medium. Like, say, hey, you know, talking-head footage is not necessarily real, right? If it's suspicious, you know, be careful. And people have made that adjustment for so many mediums in the past. It's just one more, and yeah, that's it.

[00:44:57] Ken: I'm interested in what the next iteration of the medium that we trust will eventually be. You know, if we can't trust video, what is the future of that? Is it some sort of...

[00:45:08] Yannic: It's probably multiple corroborating angles of video from different sources, right? And then you're like, okay, the chance that it's all faked consistently across the different sources is kind of low.

[00:45:24] Ken: Interesting. Now I will start filming in two camera angles. Incredible stuff. You know, I want to make sure we save some time for how you got started with YouTube creation. So again, you talked about how it is a great medium for you to actually learn stuff and communicate it, but where did it all start? You know, how did you get going? And I mean, you've obviously found an incredible audience. How has that process been for you?

[00:45:56] Yannic: I just had to read some obscure papers in reinforcement learning, and for some reason I thought, well, if someone else needs to read them, you know, they might find an explanation of them useful. So I made these videos and I uploaded them to YouTube.

And I mean, predictably, not really anyone watched them. I think that was 2017 or 2016, no, 2017, something like this. So it'd be another three years before anyone really paid any attention to my channel at all. And I just uploaded these videos because, as I said, it forces me to read the papers, and I kind of saw that there was a gap.

There's lots of content for beginners in machine learning, and then there are research talks, right, talks from research conferences. There was nothing really in between. So I thought, you know, if someone wants to get from a master's or bachelor's level understanding to the latest research, I could provide that. And yeah, that's what I did.

[00:47:07] Ken: Well, I think there's a really unique role that you're filling, where it's advanced enough that people are going to understand it and want to dive in, but you're not skirting over any details. So, you know, another channel that I love is Two Minute Papers, right? They do a great job, but I think for an advanced practitioner, that's probably just scratching the surface.

That's an invitation for you to actually just read the research, whereas with a lot of your videos, if I really want to dive in, yes, I can read the research, but I walk away with a fundamental understanding of the benefits and some of the drawbacks. And you still leave it open to say, hey, you know, pursue this further.

Something I've noticed you doing more recently is bringing on the authors of papers. And you had mentioned to me offline that that can present some form of problem. Can you talk me through that and how you're resolving some of those challenges? I wouldn't say conflict of interest, but the challenges that go along with it.

[00:48:08] Yannic: Yeah. Well, at the beginning I thought no one would answer, right? But then, interestingly, a lot of people were like, oh yeah, we'd love to come on and talk about our research. So that was really cool. And I still think it's a giant privilege to have people on to talk to me about their papers.

So I enjoy that thoroughly. The interview videos are not doing as well as the paper reviews themselves for now, so I still have to figure out how to make them more useful to the audience. But I'm mindful of the fact that it's a huge privilege. And yeah, there are problems.

First of all, there is the me problem. I'm not a scheduler. I'm not a conscientious person. I'm not a person that is organized or anything like this. So actually organizing people coming on, where everyone needs this ahead of time and that, is a challenge for me. But, you know, if I focus, I can sort it out. The other problem I had was that everyone's so nice.

Right? Everyone's really nice. Well, most people, I'd say. Most people are really nice, and they really believe in their research. They can also present it in a good way. And I'm also not too disagreeable a person in an interview, especially if I host it; I want to be a nice host. So it's quite hard for me to be critical.

When a paper's author is right there, I'm like, oh, this is so cool. So I had to play around with this a little bit. I also asked a lot of questions of my audience and got a lot of feedback, also from my Discord community. And now I've figured out that if I do the paper review first, I can be as critical as I want there.

Then I bring on the authors, and they can respond to all of the criticism. I think that is sort of the best of both worlds. And I thought I was really smart before, because I used to have the authors on first and then record the paper review. So I felt like, well, I can just let the authors explain their paper to me.

But then, yeah, it'd be a dick move, right, if I raised criticism in the paper review after the interview was over, when they didn't have a chance to respond. So I just ended up not being critical at all, which was kind of a missing element. So now I've settled on this new mode of doing things, and I'm pretty sure I'll iterate again as I figure out how to do these things in the best way.

[00:50:56] Ken: Well, I think iterating and asking the community are all really relevant and data-driven things that you're doing. And I mean, I don't think there's any wonder why you've had success in that space and been able to tell such incredible stories and get such incredible guests.

I think the last thing I want to touch on: you had mentioned that the scheduling and such is not your area of expertise, per se, and, you know, you're doing a lot. You're producing a lot of videos, you're working full-time, which we didn't even really touch on that much. But how do you find balance, or how do you try to balance your time? Or, you know, do you even find balance across all of these domains?

[00:51:36] Yannic: Oh, I don't. It's kind of a mess, and I need to slow down. I'm very well aware of that fact. I'm just trying to find a good way of doing that. I'm trying to find ways of outsourcing stuff, which I'm also really bad at. And I'm bad at saying no to things, which I also need to improve, and yeah, I'm bad at taking time off or something like this.

It is just not sustainable, which I'm very well aware of. So I guess sooner or later I'll have to figure something out. But don't take work-life balance lessons from me; it's not a good idea. I would not recommend it at all. Yeah.

[00:52:24] Ken: Well, first, I'm grateful that you took the time and said yes to coming on the podcast here. Second, I really appreciate how candid you are about the fact that that isn't one of your strengths.

[00:52:36] Yannic: Yeah. I enjoy it, right? Like, I enjoy the things that I do, which I think helps. If you slave away just because you want to make more money, and you actually don't like your job and so on,

I think that is a lot worse than being overloaded with the things that you like to do. Also, it's not that you asked me and I was like, oh God, no, I can't say no. It's like, yeah, that'd be cool to do. So I think that helps a lot, just doing something you like.

[00:53:09] Ken: I really like that. And, you know, I think for a lot of people, figuring out how to get a lot of the things that you like on your plate, versus feeling obligated to do a lot of things that you don't want to do, is at least some of that equation, some of that formula. Right? I mean, you haven't necessarily burned out recently, and probably won't in the foreseeable future, and that's because you're working on things that you care about.

And I would imagine that for the things you really don't want to do, you don't have too much trouble saying no as well. So, yeah, I think you're in a great situation with all of the opportunities you have, even having to say no to great opportunities because you have so many of them, which is something that we're both fortunate to experience.

And I would love for everyone who comes on, and everyone possible, to be in that situation as well. So that's all I have in terms of questions today. I had so much fun speaking with you. Where can people learn more about you? What are you working on right now? This is your time to share with the world what's going on.

[00:54:21] Yannic: Yeah, thanks for having me. This was really great. I don't have too much to say. I have a YouTube channel that is findable under my name, which is weird to spell. But if you search for Kilcher, actually, no, I'm by far not the most famous Kilcher on the planet. I have extended family, actual relatives of mine, that have a reality TV show, and one is a pop star and so on. So yeah, but if you search machine learning, you'll probably find me.

And other than that, for people who aren't in machine learning that much, it's really an open field. There are so many resources, and it is really easy to get into. Also, the coding bar has never been lower. It has gotten tremendously easier every year in recent years just to get in, to get something going, to train your own models, to explore the existing models. There are so many pre-trained models available that you can build cool stuff with.

Yeah, so the hurdle is really low to get into the field and to do cool stuff. And if people have maybe been scared of it or something like this, then don't be; it's very accessible. And I encourage everyone to try it.

[00:55:54] Ken: Incredible stuff. Thank you so much again, and I really appreciate the time. I think the audience will love this episode.

[00:56:00] Yannic: Sure, thank you very much.
