How to Solve Algorithms Problems & Coding Challenges
I think dynamic programming can be very intuitive, if we actually make nice gradual progress through the material, right? A lot of students have this habit of attempting one of the very difficult dynamic programming problems without going through the necessary steps of really understanding the material first, right? And it goes without saying, you definitely need to know dynamic programming if you want to do well in those data structures and algorithms interviews.
You really need to nail those kinds of problems. Now, that being said, what problems are we going to tackle throughout this course? Here are a few examples. One question I could ask you is to compute the 40th number of the Fibonacci sequence, which sounds like a very easy problem. I could also give you something different, such as a grid, and ask you to count the number of different ways of moving through that grid. Or, given a set of coins, how can we make 27 cents using as few coins as possible? A final example: given a set of substrings, what are the possible ways to construct a target string? And all of these questions really fall under the umbrella of dynamic programming. I think that's why this topic has such a bad reputation, or why it's considered so hard: at first glance, these problems seem completely different, as if there were no underlying mechanics we could use to deal with them all. But the short answer is that we actually can, if we adopt the right way of thinking about these problems.
That being said, let's go over the overall format of this course. In this course, I think the key to our success is to actually visualize all of these algorithms, right? So we're going to spend a lot of time visualizing things with animations, as well as drawing things on the whiteboard.
All the heavy lifting in an algorithm interview is really done when you come up with that picture, right? The easy part is describing that process and then translating it into some code. The hard part is designing the algorithm in the first place, isn't it? So we're going to draw things out to make sure we understand the structure of the problem, as well as to work out a solution. Then we'll implement that solution in some code, and we'll probably go back and forth until we end up with an algorithm that runs in an efficient amount of time. And it goes without saying, we're also going to analyze the time and space complexity of all of our solutions.
I will write my code in JavaScript, but you'll find it very easy to translate our solutions into the language of your choice. In this course on dynamic programming, we're going to divide the material into two main parts. Part one is going to be about memoization, and part two will be about tabulation. If you don't know what those two words mean, don't worry, we'll get to all of that. We'll take all of these initial steps together, and I think you're going to feel that we learn all of these things through a very logical progression, almost as if we're discovering these algorithms ourselves, rather than me just telling you what the algorithms are. In terms of prerequisites, I won't assume that you know anything about dynamic programming, but I will assume that you understand some basic recursion, as well as some basics of complexity analysis, so you're familiar with big O notation.
And I'm sure we'll be able to review some of that notation as we go. I want us to get really comfortable with this new topic, and so we'll start by attacking a problem that you've probably seen in the past; that is, I want to solve a Fibonacci problem. And for us, we'll work with a particular flavor of the Fibonacci problem.
What I want to do is write a function, fib of n, which takes in a number as an argument and returns the nth number of the Fibonacci sequence. And just to review, how does the sequence work? Well, the first and second numbers of the sequence are simply 1. To generate any later number of the sequence, you just add the previous two.
So, for example, these are the first few numbers of the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, 21, and so on. What I'm saying is that a number's position in the sequence matters. In other words, if I asked you for the seventh Fibonacci number, you should answer 13, right? Because the seventh number of the sequence is 13. And how do we actually compute that 13? Well, it makes sense: it's the sum of the previous two numbers, 5 and 8, which gives me 13. So that's a quick review of the sequence itself. And to really warm us up for what we're doing later in the course, I want you to implement this recursively; that's really at the heart of today's topic.
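The rule described above, that each number is the sum of the previous two, can be sketched in a few lines of JavaScript. This snippet is just an illustration of how the sequence is generated, not part of the course's recursive solution:

```javascript
// Generate the first eight Fibonacci numbers iteratively:
// start with 1, 1, and each next number is the sum of the previous two.
const sequence = [1, 1];
while (sequence.length < 8) {
  const len = sequence.length;
  sequence.push(sequence[len - 1] + sequence[len - 2]);
}
console.log(sequence); // [1, 1, 2, 3, 5, 8, 13, 21]
```

Notice that the seventh number (position 7, 1-indexed) is indeed 13.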
Why don't we get things started by quickly writing the recursive implementation of the Fibonacci function? You've probably written this a few times in your programming career; it often appears as one of the first examples of recursion. So we're just going to do the old classic here. I want to take in a number, and I want to return that number of the Fibonacci sequence. As we might expect, the base case concerns, you know, the first two numbers of the sequence. In other words, if I'm given an n less than or equal to 2, all I have to do is return 1. I wrote it this way because, hey, the first two numbers of the Fibonacci sequence are both 1. But in the general, recursive case, all I have to do is return the Fibonacci number right before the one I'm asked for, plus the Fibonacci number before that, right? That captures the recursive nature of Fibonacci: to compute a particular Fibonacci number, you take the sum of the previous two numbers in the sequence. Now, we should test our code for correctness, which I can do by making a few calls to this fib function. I'll try fib of 6, 7, and 8, and I should get back the answers 8, 13, and 21, respectively, okay? So let's give that a shot.
I'll run this with node, and there we have it: 8, 13, and 21. So this is the classic implementation of Fibonacci; you may have seen it many times before, and we solved it recursively. Now, what I actually want to do is give this fib function a big number. So what if I asked for, let's say, I don't know, the 50th Fibonacci number? That seems reasonable enough. But when I give this code a shot, it seems like the first three calls to fib work fine, I get 8, 13, and 21, but the fourth call never actually finishes; my program just hangs. And this is a big problem with this implementation of Fibonacci.
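The classic recursive implementation walked through above can be sketched like this in JavaScript, following the lesson's description: a base case returning 1 for any n less than or equal to 2, and a recursive case summing the two previous Fibonacci numbers.

```javascript
// Classic recursive Fibonacci, as described in the lesson.
const fib = (n) => {
  // Base case: the first two numbers of the sequence are both 1.
  if (n <= 2) return 1;
  // Recursive case: sum of the two previous Fibonacci numbers.
  return fib(n - 1) + fib(n - 2);
};

console.log(fib(6)); // 8
console.log(fib(7)); // 13
console.log(fib(8)); // 21
// A call with a large argument, such as fib(50), would take an
// extremely long time with this implementation.
```

This is correct but, as the lesson goes on to show, far too slow for large inputs.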
Obviously, this Fibonacci function needs some work. Let's move over to the drawing board. OK, so it's clear that our Fibonacci function is correct: it gives the right results. However, if we give it a sufficiently large n, it slows to a crawl. In other words, our function is correct, but it has some inefficiency. Eventually, we definitely want to improve upon this recursive implementation of Fibonacci. But to do that, it's very important that we identify exactly where there's room for improvement.
And to do that, I think we should draw a few things out. This is something I think students really need to work on. Students have a habit of trying to juggle everything in their heads, and that works for some of the simplest problems. However, when we want to tackle complex problems, if we just try to hold all of that information mentally, without tools like pen and paper, or marker and whiteboard, or chalk and chalkboard, you're going to drop some of the finer details of these structures.
So I want us to be very organized, and we'll draw out how you should think about a problem like Fibonacci. For my drawing, let's say I wanted to trace through what happens when we make the call fib of 7, that is, asking for the seventh number of the Fibonacci sequence. I know that, eventually, I should get back 13, OK? 13 is the seventh number of the sequence.
So I'll keep that goal in mind. In my drawing, when I make the call fib of 7, I represent it by drawing a circle with my value of n inside. So, thinking about this call to fib of 7, what is it going to do? Well, I know 7 is not a base case, is it?
Seven is not less than or equal to 2. So this call is going to branch out into a few more recursive calls. Specifically, on the left hand side, I'm going to call n minus 1, which is 6. On the right, I'm going to do n minus 2, which is 5. And at this point, I apply the same logic to the other nodes of the structure. If I look at the node of 6 and branch from there, it will have a left child of n minus 1, so 5, and it will also have a right child of n minus 2, so 4. You can start to see a pattern, where this recursive structure just shows up as a tree, which is really neat.
So you'll see that the nodes I've marked in red are actually base cases, right? Those nodes have values of 2 or 1, and I know those function calls will return immediately.
More importantly, it means they don't branch out into any more calls. So I don't continue expanding the tree from those nodes. Instead, I look at the nodes that aren't base cases, right? Those are these nodes in yellow.
So I'll continue to branch out this tree, but never branch further from the base cases. At this point, I've built out my entire tree, stopping the expansion whenever we hit a base case. So this is actually the full recursive tree.
Remember that the numbers inside the nodes here represent the n we were called with. That being said, now that we have this view, how exactly does this tree calculate the Fibonacci answer? OK, so let's start breaking it down right here. Let's say I look at some node, specifically this base case node of 2, right?
I know this node is a base case, so it's supposed to return a value of 1 according to my base case. And when we say return, what that actually means is returning to your caller, in other words, going back to your parent node. So this node of 2 is going to return 1 to its parent of 3. Similarly, this node of 1 on the right hand side is also a base case, so it will also return 1. Both of the values they return go back to the parent of 3, and that 3 is going to add those two values together.
One plus one is two. And that matters a lot, because we know that the third Fibonacci number is 2. So we can continue this process. OK, this node over here is also a base case, so it returns 1. And now the parent node of 4 is going to sum the values of both of its children: 2 plus 1 is 3. That also makes sense, because the fourth Fibonacci number is 3. So maybe you get the picture; let's speed things up now. I know all of these base cases are going to return 1 to their parents. And every parent node whose children are both ready, that is, both of its children have returned, is just going to add those values together.
And this process continues all the way up the tree, doesn't it? We're just adding our left and right children. Once we return to the top of our tree, the root, we get our final result of 13, which makes a lot of sense, because in the beginning we said that the seventh Fibonacci number is indeed 13. So now that we have a strong understanding of how to visualize this fib function, what do we actually know about its performance? What do we know about its time complexity? You might have heard people mention that the classic recursive implementation of fib has a time complexity of 2 to the n. And while that's the case, do you really understand the reason? The reason fib is going to be 2 to the n in its time complexity is hidden inside this picture. What's a little unfortunate about this picture, however, is its asymmetry: the tree isn't perfectly balanced.
I think that's one big reason why students have a really hard time convincing themselves that such a function has a 2 to the n time complexity. So here's what we'll do: why don't we warm up by going through some more basic examples of time complexity? And I promise we'll come back and answer that Fibonacci question. So let's warm up a bit. Suppose I gave you this foo function. Notice that it's different from our fib function, isn't it? It's similar in that it's recursive, but the function is kind of arbitrary; it doesn't actually calculate or solve any particular problem.
So if I want to understand how this foo function behaves, let's draw it out.
Let's say I make a top level call of foo of 5. I know 5 is not a base case, so it's going to call n minus 1; that is, 5 is going to call 4, 4 calls 3, 3 calls 2, 2 calls 1, and here we've really hit a base case. If you look at the number of calls I've made, I made exactly five function calls. Which makes sense, because of our base case, where we stop once we hit a number less than or equal to 1, and at every recursive step we just subtract one from our current value of n. So in total, I have five calls here.
But if I generalize this for an arbitrary input, I know that in the long run I'm going to make about n different function calls recursively; that is, I have to evaluate O of n different function calls. While we're at it, let's take a look at space complexity. Well, you may have heard in the past that when we analyze the space complexity of our recursive functions, we must include any additional stack space that our function calls take up. When we make a recursive call, we add a frame to the call stack, and those frames must be tracked by our computer. And since we have about five, or in general n, different function calls added to the stack before we hit our base case, you can see that the space complexity of this code is also O of n. So in total, we're looking at O of n time and O of n space for this function. Pretty straightforward, isn't it?
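The lesson doesn't show foo's code on screen here, but from the description it is a minimal recursive chain on n minus 1. A sketch, with a call counter added purely for illustration, might look like this:

```javascript
// Arbitrary recursive function from the warm-up: it computes nothing,
// it just recurses on n - 1 until n <= 1.
// fooCalls is an illustrative counter, not part of the described function.
let fooCalls = 0;
const foo = (n) => {
  fooCalls++;
  if (n <= 1) return;   // base case
  foo(n - 1);           // single recursive call, one step down
};

foo(5);
console.log(fooCalls); // 5, one call each for 5, 4, 3, 2, 1
```

The straight chain of n calls is exactly why both the time and the stack space are O of n.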
Let's look at a more involved function. So let's say I gave you this bar function now. It's another arbitrary function, and it's very, very similar to foo. The only difference you should notice is that when we make a recursive call, instead of n minus 1, we do n minus 2.
So how exactly does this change the time complexity of this function? Let's say I wanted to trace through it, and I made a top level call of bar of 6. I know 6 is going to call 4, 4 is going to call 2, and 2 is going to call 0, which really hits the base case. So it's very similar to our previous example, except we see that from one call to the next, we take a bigger step, right? In a way, we can say that we're stepping down twice as fast on every recursive call.
That will actually halve the number of recursive calls we need. So I guess we might be tempted to say that the time complexity of this is n divided by 2. But a keen observer will note that, according to our big O understanding of time complexity, we can remove any multiplicative constants, and one half times n is still on the order of n.
So it simplifies nicely to just an O of n time complexity. Using the same exact logic, we can also say that the space complexity from the stack is O of n. OK, so let's take stock for a bit. I showed you two functions that are very similar; they really only differ in how they make their recursive calls, right? One did n minus 1, the other did n minus 2.
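As with foo, bar's code isn't shown on screen here, but a sketch matching the description, again with an illustrative call counter that isn't part of the described function, could be:

```javascript
// Arbitrary recursive function that steps down by 2 each call.
// barCalls is an illustrative counter added for this sketch.
let barCalls = 0;
const bar = (n) => {
  barCalls++;
  if (n <= 1) return;   // base case
  bar(n - 2);           // single recursive call, two steps down
};

bar(6);
console.log(barCalls); // 4: the chain is 6 -> 4 -> 2 -> 0
```

Roughly n/2 calls, and since big O drops the constant factor of one half, this is still O of n time and O of n stack space.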
But in the grand scheme of things, we saw that they belong to the same complexity class, right? Both of these functions have O of n time and O of n space. So after these two examples, you might see why I wanted to bring them up; maybe you're ready to take the logical leap and draw some conclusions about our classic recursive Fibonacci function.
That being said, I don't want to skip a step. I want to be super methodical, because if we pay the cost of understanding fib right now, and I mean really, deeply understanding fib, it's going to pay off when I slam you with some of the more difficult problems later in the course. So let's be nice and organized here, and take a look at another function. Let's say I gave you this dib function. Again, it's arbitrary; we don't care what it computes. What do we care about? Well, it now has two recursive calls inside of it, and they both do n minus 1.
How should we visualize it? Well, this is similar to our initial fib drawing, where if we start with some initial call, let's say dib of 5, 5 is going to branch out to exactly two children, right? Because 5 is not yet a base case. And for this function, it does n minus 1 for its left child, and also n minus 1 for its right child. So the next level is all 4s, the level after that all 3s, then 2s, and then the 1s that actually hit our base case here. It's a really nice, pretty symmetric tree, isn't it? OK, so that's the visualization for our dib function.
But what does this tell us about the time complexity? Something you'll hear me say a lot in this course is that when we deal with a quote unquote new problem, or a new pattern, what we're really trying to do is leverage our past experience, right? So when I look at this tree structure, I try to notice something familiar. Is there some piece here that I can recognize, so I can feel really comfortable and build upon my previous learnings? Where inside this drawing can I find a structure we've already seen?
Boop, right here. If I look at this path I've highlighted in yellow, it's a path starting from the root node and going down to some base case; I just picked the leftmost path. And what's really cool about this structure is that, looking at just the yellow nodes, it's the same linear structure we saw earlier, isn't it?
If I start from the root, it just goes 5, 4, 3, 2, and 1. And so I know that, in general, based on my initial input of n, the length of this path, that is, the number of nodes highlighted in yellow, is going to be about n distinct nodes. If I adopt some tree vocabulary, I can also say that the height of this tree is n. The height of a tree is really the distance from the root node to the farthest leaf; in this case, that means the distance from our top level call of 5 all the way down to the base case, which is going to be exactly 5 here. Something else you may hear: we can say that the number of levels in this tree is also n. That term is pretty straightforward, isn't it?
When we say number of levels, a level is just a collection of nodes that are the same distance from the root. So for example, here in yellow I've highlighted level zero, this is level one, this is level two, this is level three, and so on. But if I rewind things a bit, I see that at the very top level there's one node, and on the next level there are two nodes.
On the next level there are four nodes, then eight nodes, then sixteen nodes. See the pattern? So let's try to generalize it. I know that whenever we make some top level call to dib, we'll have one node at the top level. To get the number of nodes on the next level, we simply multiply by two. The level after that is also multiplied by two, and then multiplied by two again for the next level. And I do this a total of about n different times, right? Because I know that the height of this tree, or the number of levels in this tree, is exactly n.
And so what we can conclude here is that, to get the total number of nodes, that is, the total number of calls made by this recursive function, you just take the number 2 and multiply it by itself approximately n times. And that's really the definition of an exponent, isn't it? It's the same as 2 to the nth power. So we can say that this tree structure, this recursive function, has an O of 2 to the n time complexity. Great.
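A sketch of dib, matching the lesson's description of two n minus 1 calls per invocation, makes this exponential call count concrete. The call counter is my addition for illustration:

```javascript
// Arbitrary recursive function with TWO recursive calls per invocation,
// both on n - 1, forming a full binary tree of calls.
// dibCalls is an illustrative counter added for this sketch.
let dibCalls = 0;
const dib = (n) => {
  dibCalls++;
  if (n <= 1) return;   // base case
  dib(n - 1);           // left child
  dib(n - 1);           // right child
};

dib(5);
// Levels have 1, 2, 4, 8, 16 nodes: 31 total, which is 2^5 - 1.
console.log(dibCalls); // 31
```

So dib(n) makes roughly 2^n calls, which is why its time complexity is O of 2 to the n, even though each individual call does constant work.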
So we recognized that this dib function has a 2 to the n time complexity. But what do we know about its space complexity? I think a common mistake I've seen people make is to automatically assume that the space complexity of a recursive function will be the same as its time complexity.