
LeetCode Problems Summary

Problem 1.0
https://leetcode.com/problems/product-of-array-except-self/description
Topics: Array, Prefix Sum
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1204250450/

Takeaways:
The second (prefix product) array can be replaced by an iteratively updating variable

Analysis + Comments:
Initial idea was to create two 1-D arrays, one for the prefix product and another for the suffix product. The
product except self at index i is then the product of the prefix product before it and the suffix product after it
(from these arrays). Also, you need at least one array aside from the input array for solving in the intended
manner. That array in this question is supposed to be the output array itself. Did it in O(n) time and O(1) aux
space. My initial approach using 2 arrays was O(n) time and O(n) space.
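
A minimal C++ sketch of the O(1) auxiliary-space idea above (the output array holds the prefix products and a single running variable replaces the suffix array); the function name and signature are illustrative, not the exact submission:

#include <vector>
using std::vector;

// Output array first holds prefix products, then a running suffix
// product is folded in from the right.
vector<int> productExceptSelf(const vector<int>& nums) {
    int n = (int)nums.size();
    vector<int> ans(n, 1);
    for (int i = 1; i < n; ++i)
        ans[i] = ans[i - 1] * nums[i - 1];   // product of nums[0..i-1]
    int suffix = 1;                          // product of nums[i+1..n-1]
    for (int i = n - 1; i >= 0; --i) {
        ans[i] *= suffix;
        suffix *= nums[i];
    }
    return ans;
}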

________________________________________________________________________________

Problem 2.0
https://leetcode.com/problems/rotate-array/description/
Topics: Array, Math, Two Pointers,
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1173410423/

Takeaways:
Modulus can be used to keep the rotation within the range of the array. In C++, the result of modulus is
negative only if the dividend is negative. Hence for counterclockwise rotation (k<0), k%n will provide a
suitable k, with a negative sign signifying rotation in the opposite direction.

Analysis + Comments:
The three approaches I can think of: 1) Shift every element one position to the right and put the last element
in the first position, then repeat for k steps: O(k*n) time. 2) Copy the elements which will be overwritten into a
temp array, overwrite them, then copy those elements to their new positions: O(n) time but O(k) space.
3) Reverse the entire array, then reverse the subarrays of size k and n-k individually: O(n) time and O(1)
aux space. The current submission solution is quite sloppy, you might need to remaster it.
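
A minimal C++ sketch of approach 3 (triple reversal); the handling of negative k follows the modulus note in the takeaway and is an illustrative assumption, since LeetCode itself only passes non-negative k:

#include <algorithm>
#include <vector>
using std::vector;

// Reverse the whole array, then reverse the first k and last n-k parts.
void rotateRight(vector<int>& nums, int k) {
    int n = (int)nums.size();
    if (n == 0) return;
    k %= n;                      // keep rotation within range
    if (k < 0) k += n;           // normalize a counterclockwise rotation
    std::reverse(nums.begin(), nums.end());
    std::reverse(nums.begin(), nums.begin() + k);
    std::reverse(nums.begin() + k, nums.end());
}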

________________________________________________________________________________
Problem 3.0
https://leetcode.com/problems/running-sum-of-1d-array/description/
Topics: Array, Prefix Sum,
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1173414127/

Takeaways:
Prefix[i] = Arr[i] + Prefix[i-1]; for i-1>=0

Analysis + Comments:
This is extremely basic. Got a good time ranking but a poor memory ranking despite using O(1) auxiliary
space. Maybe this requires something goofy.

________________________________________________________________________________

Problem 4.0
https://leetcode.com/problems/best-time-to-buy-and-sell-stock/description/
Topics: Array, Dynamic Programming
Difficulty: Easy
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1173442304/

Takeaways:
A stack is good for questions where you track the difference between two things and those things can be
updated based on some condition.

Analysis + Comments:
I used a stack for solving this in O(n), basically pop and replace whenever a condition is fulfilled, else
keep pushing. Need to try out the "sliding-window" approach for this, whatever that means.
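
The submission above uses a stack; for comparison, a minimal sketch of the common single-pass alternative that tracks the minimum price seen so far (not the author's submitted method):

#include <algorithm>
#include <climits>
#include <vector>
using std::vector;

// Best profit at day i is prices[i] minus the cheapest earlier price.
int maxProfit(const vector<int>& prices) {
    int best = 0;
    int minPrice = INT_MAX;
    for (int p : prices) {
        minPrice = std::min(minPrice, p);     // cheapest buy so far
        best = std::max(best, p - minPrice);  // profit if we sell today
    }
    return best;
}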

________________________________________________________________________________

Problem 5.0
https://leetcode.com/problems/two-sum/description/
Topics: Array, Hash Table
Difficulty: Easy
Status: Completed, Revisit
Solution Link: https://leetcode.com/problems/two-sum/description/
Takeaways:
Taking care of hashmaps with duplicate keys.

Analysis + Comments:
Time: the O(n^2) approach is the regular one of traversing every other element for every element and
checking if the pair satisfies the target condition. The O(nlogn) approach is sorting, keeping track of new
vs old indices in a hash table, and then checking whether the sum of the candidate pair exceeds the target or
not. The O(n) hash table attempt is the thing I submitted; it is quite messy and needs improvement because,
despite it being O(n), this solution only beats 50% in time.
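
A minimal sketch of the O(n) hashmap idea (value -> index, inserting only after the lookup so duplicate values are handled); names are illustrative:

#include <unordered_map>
#include <vector>
using std::vector;

// For each element, check whether its complement was already seen.
vector<int> twoSum(const vector<int>& nums, int target) {
    std::unordered_map<int, int> seen;          // value -> index
    for (int i = 0; i < (int)nums.size(); ++i) {
        auto it = seen.find(target - nums[i]);
        if (it != seen.end()) return {it->second, i};
        seen[nums[i]] = i;                      // insert after the lookup
    }
    return {};                                  // no pair found
}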

________________________________________________________________________________

Problem 6.0
https://leetcode.com/problems/first-missing-positive/description/
Topics: Array, Hash Table
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1214122447/

Takeaways:
MEX: minimum excluded value. O(n) time: for O(1) space we use the input array itself as a hash set;
this is only possible because in an array of length n, if no elements are missing, the array will be [1, n].
We can use this property, treat the elements as indices, and mark the elements at those indices as
negative after some preparation (this preserves their value but still prevents data loss). However, this
approach can lead to data loss of negative values (because they are being replaced by n+1). The best
method here would be to use cycle/cyclic sort, after which nums[i] should hold i+1. For the first i where
this is invalid, the answer would be i+1. This approach would prevent data loss; the data would only get
reorganized. For the O(n) space solution, an unordered set is good for marking and then rechecking whether
something exists or not, if we know exactly what we are looking for.

Analysis + Comments:
Got the best approach for this question in the second attempt. Going to the root of the problem is
extremely important; dividing what we have, what we can do and what we cannot do can highlight all
possible approaches. Here, to find the first missing positive in a vector of size n, we use the approach
where the input array itself acts as a hash set. This is only possible because, logically, in a vector of size n
with no missing numbers, the elements would be [1, n], which luckily corresponds with our indices [0, n-1].
Hence we iterate through the array and mark the element at index nums[i]-1 as visited
(if the index is valid, 0 <= newIndex <= n-1) by making it negative. The only problem here is that the input
array can contain negative numbers. To fix this, we do some "data cleaning". The question only asks us for
the smallest positive number, hence negative numbers are useless to us and it won't matter if their data is
lost. Therefore, before doing the marking step, we replace every non-positive number (<=0) with n+1
(because that would lead to an invalid index in our second step). After both these steps, we just iterate
through the array and return i+1 for the first unmarked index i. The O(n) time, O(n) space attempt used a set
to mark all the positive elements in the input and then ran a loop from i = 1; the first element missing from
the set is the answer. Despite being O(n) time, this approach beats only 20%, so there has to be a better
solution. The question wants an O(1) auxiliary space solution from us. Another method is to sort the input
vector in O(nlogn) and then traverse it: if the gap after a positive element is greater than 1, the answer is
that element + 1.
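
A minimal sketch of the in-place marking approach described above (replace non-positives with n+1, then negate the slot each value points to); cyclic sort, which the takeaway prefers, would look different:

#include <cmath>
#include <vector>
using std::vector;

int firstMissingPositive(vector<int>& nums) {
    int n = (int)nums.size();
    for (int& x : nums)
        if (x <= 0) x = n + 1;                  // "data cleaning": can never be the answer
    for (int i = 0; i < n; ++i) {
        int v = std::abs(nums[i]);
        if (v >= 1 && v <= n && nums[v - 1] > 0)
            nums[v - 1] = -nums[v - 1];         // mark "v is present"
    }
    for (int i = 0; i < n; ++i)
        if (nums[i] > 0) return i + 1;          // index i was never marked
    return n + 1;                               // array was exactly [1, n]
}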

________________________________________________________________________________

Problem 7.0
https://leetcode.com/problems/sort-colors/description/
Topics: Array, Two Pointer, Sorting
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1174067880/

Takeaways:
Counting sort can solve this in O(n) with 2 passes. For a single pass, use the DNF (Dutch National Flag)
algorithm, which uses three pointers, 2 of which keep track of where the 0s should end and where the 2s
should start (since everything in between will be 1s), and the remaining pointer is for traversal. For testing
algos like these, make multiple cases and try to find exceptions and conditions for each case, i.e. what should
happen, where should stuff go, and make sure these cases are built off of some concrete initial assumption or
pattern. Here we know that to the left of our l pointer we will only have 0s and to the right of r we will only
have 2s. We need to take care of the other cases then.

Analysis + Comments:
This question reminded me of counting sort, where the elements being sorted have a constraint. Here
the elements are 0, 1, 2. Hence implementing counting sort with an array of size three, utilizing indices
0, 1 and 2, was a natural conclusion for an O(n) time and O(1) space approach. However, this approach
uses 2 passes, and the question asks for 1 pass.
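
A minimal sketch of the single-pass DNF idea from the takeaway: everything left of low is 0, everything right of high is 2, and mid scans the unknown middle region:

#include <algorithm>
#include <vector>
using std::vector;

void sortColors(vector<int>& nums) {
    int low = 0, mid = 0, high = (int)nums.size() - 1;
    while (mid <= high) {
        if (nums[mid] == 0)      std::swap(nums[low++], nums[mid++]);
        else if (nums[mid] == 2) std::swap(nums[mid], nums[high--]); // re-check the swapped-in value
        else                     ++mid;                              // a 1 stays in the middle
    }
}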

________________________________________________________________________________

Problem 8.0
https://leetcode.com/problems/3sum/description/
Topics: Array, Two Pointer, Sorting
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1177768767/

Takeaways:
When needing to skip elements in a (sorted) array while going forward, put a condition for skipping
when arr[i] == arr[i-1]. And for skipping while going backwards, put a skip condition for arr[i] == arr[i+1].
This makes it such that when the first element (in a sequence of duplicates) is encountered, it won't be
skipped, allowing the remaining sequence to still be used for computation. The skips will start
happening after this. If this condition is not handled properly, it leads to the skips happening first and the
computation happening at the end, which basically results in a smaller number of total cases considered.
Only iterating a pointer when necessary is a smart move as well; it divides the problem into proper
subproblems.

Analysis + Comments:
The brute force approach was achieved but it was O(n^3), which would result in time limit exceeded. To
fix this, I had initially thought of using a three-pointer approach with leftPointer = 0, rightPointer = n-1 and
middlePointer = rightPointer-1. The middle pointer placement was unintuitive as it resulted in a weird
loop and hence some cases could not be passed. Since we sorted our array in O(nlogn), setting an
iterating leftPointer to decide a target which the other two pointers then try to meet, in a binary-search-like
squeeze, is a good approach. Of course, the proper skip conditions to remove duplicates need to be
considered. This approach results in a total time complexity of O(n^2). Also, is there a better option than
using a do while(); loop for implementing the conditional skip?
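
A minimal sketch of the sort + two-pointer squeeze with the duplicate-skip conditions from the takeaway (the skips here use while loops rather than a do-while):

#include <algorithm>
#include <vector>
using std::vector;

vector<vector<int>> threeSum(vector<int> nums) {
    std::sort(nums.begin(), nums.end());
    vector<vector<int>> res;
    int n = (int)nums.size();
    for (int i = 0; i < n - 2; ++i) {
        if (i > 0 && nums[i] == nums[i - 1]) continue;      // skip duplicate anchors
        int l = i + 1, r = n - 1;
        while (l < r) {
            long long sum = (long long)nums[i] + nums[l] + nums[r];
            if (sum < 0) ++l;
            else if (sum > 0) --r;
            else {
                res.push_back({nums[i], nums[l], nums[r]});
                ++l; --r;
                while (l < r && nums[l] == nums[l - 1]) ++l; // skip duplicate left values
                while (l < r && nums[r] == nums[r + 1]) --r; // skip duplicate right values
            }
        }
    }
    return res;
}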

________________________________________________________________________________

Problem 9.0
https://leetcode.com/problems/maximum-subarray/description/
Topics: Array, Divide and Conquer, Dynamic Programming
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/problems/maximum-subarray/submissions/1176633576/

Takeaways:
Always try the brute force approach first, since it makes optimizing easier, instead of going for the
optimized solution first. The O(n) solution for this uses Kadane's algorithm, which is a natural
conclusion from optimizing this from O(n^3) -> O(n^2) -> O(n). The DP approach for this is also intuitive, but
the extremely important thing is knowing what is being stored in dp[i]: here, the max sum of a subarray
ending at i. This makes storing the answer easy, and at the end we can return the maximum element of this
array as our answer (as it is not necessary that the max subarray ends at the last index).

Analysis + Comments:
Three solutions are applicable here, all with O(1) space. The first solution with O(n^3) time is the brute
force approach, where all subarrays are enumerated and their respective sums are compared to find the
maximum sum. The second solution, which optimizes over this, is O(n^2); here you don't need to recompute
the sum for a subarray family with the same starting element (and different ending elements) since each can
be found by adding the current element to the previous sum. The best solution for this is O(n) using
Kadane's algorithm. I was extremely close to blindly figuring out this algorithm, had I started solving
this problem from the brute force approach. This solution assumes that if the running prefix sum is negative,
it can be discarded (reset) since it will only contribute negatively to the maximum sum. Otherwise, if
the running sum is greater than the max sum, the max is updated.
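
A minimal sketch of Kadane's algorithm as described: the running sum restarts whenever extending it would be worse than starting fresh, which is equivalent to discarding a negative prefix (assumes a non-empty input, as the problem guarantees):

#include <algorithm>
#include <vector>
using std::vector;

int maxSubArray(const vector<int>& nums) {
    int best = nums[0];
    int running = 0;
    for (int x : nums) {
        running = std::max(running + x, x);  // extend the prefix, or restart at x
        best = std::max(best, running);
    }
    return best;
}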

________________________________________________________________________________

Problem 10.0
https://leetcode.com/problems/reverse-linked-list/description/
Topics: Linked List, Recursion
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/problems/reverse-linked-list/submissions/1176651813/

Takeaways:
The recursive approach is the most intuitive for me here; it solves in linear time but takes O(n) stack space.
The most optimal approach would be to use 3 pointers (curr = head, prev = null and next = curr.next)
and, while curr is not null, point each curr to its prev.

Analysis + Comments:
Aware of both the recursive and iterative approaches here, but the iterative one needed to be reconsidered
for O(n) time; as far as I remember we use 2 pointers, one for head and one for tail. Iterative solution done,
uses 3 pointers.
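
A minimal sketch of the iterative three-pointer reversal (a local ListNode is defined here for self-containment; LeetCode supplies its own definition):

struct ListNode {
    int val;
    ListNode* next;
    ListNode(int v) : val(v), next(nullptr) {}
};

// curr walks the list, prev trails behind, next saves the remainder.
ListNode* reverseList(ListNode* head) {
    ListNode* prev = nullptr;
    ListNode* curr = head;
    while (curr != nullptr) {
        ListNode* next = curr->next;  // save the rest of the list
        curr->next = prev;            // point the current node backwards
        prev = curr;
        curr = next;
    }
    return prev;                      // new head
}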

________________________________________________________________________________

Problem 11.0
https://leetcode.com/problems/trapping-rain-water/description/
Topics: Array, Two Pointers, Dynamic Programming, Stack, Monotonic Stack
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1195997142/

Takeaways:
The proper order of updating and subtracting can prevent unnecessary if-else cases. Also, when looking at
problems like these, find the relation at a single index / node and then extrapolate it to the entire data
structure. Prefix and suffix data structures (like a prefixSum or a prefixMax array) are king when looking
at problems which require a previous solution of some kind (DP logic). Here, the use of a prefix / suffix
array can also be eliminated if we traverse in the proper manner (because the answer only cares
about the min).

Analysis + Comments:
Spent a ton of time on figuring out the relation between the water stored and the neighbouring elevations.
Was looking at water chunks at multiple indices at a time and hence could not figure out the general
relation; please don't do that. When looking at the water chunk stored at a single index, it is the difference
between the minimum of (max elevation to its right, max elevation to its left) and the current elevation. If this
difference comes out to be negative, make it zero (since water stored at a point cannot be less than 0). To
accomplish this, 2 arrays were used which maintained the value of the current maximum element when
traversing from left to right and right to left. Hence this approach takes both O(n) space and time. Now,
a solution in the same time and O(1) space is possible as well: since our arrays were only being used to
store the most recent value, they can be replaced by a variable and a pointer. The nature of taking the
minimum of the two elevations is exploited here. Since we are taking the minimum, if we move the pointer
which has the lower max associated with it, we can guarantee that that value will be the result of
min(leftMax, rightMax). Also, if we update the respective max before subtracting from the current height
(when the current height is greater than the previous max height), we won't have to worry about negative
water stored values.
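
A minimal sketch of the two-pointer O(1)-space version described above; note the max is updated before subtracting, so the per-index contribution can never go negative:

#include <algorithm>
#include <vector>
using std::vector;

int trap(const vector<int>& height) {
    int l = 0, r = (int)height.size() - 1;
    int leftMax = 0, rightMax = 0, water = 0;
    while (l < r) {
        if (height[l] <= height[r]) {                // left side is the binding minimum
            leftMax = std::max(leftMax, height[l]);
            water += leftMax - height[l];            // safe: leftMax >= height[l]
            ++l;
        } else {
            rightMax = std::max(rightMax, height[r]);
            water += rightMax - height[r];
            --r;
        }
    }
    return water;
}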

________________________________________________________________________________

Problem 12.0
https://leetcode.com/problems/maximum-product-subarray/description/
Topics: Array, Dynamic Programming
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1215256793/

Takeaways:
Understanding what values need to be saved to get our current answer is really important; that made this a
DP solution which only needs O(1) space. Also, previousMin*currentElement becoming the currentMax and
previousMax*currentElement becoming the currentMin in case of a sign flip is intuitive.

Analysis + Comments:
The first two approaches [O(n^3) and O(n^2) time] were similar to #9 maximum subarray sum. The
best solution has an extremely intuitive logic and I was *this* close to figuring it out. The thing is, since
our input array can have negative values as well, a negative value can transform the minimum subarray
product into the maximum subarray product (and vice versa). Hence at each index, we need to be aware of
the min and max subarray products which were present at the previous index (these values are initialized
with 1, the multiplicative identity). Then, the current max and min subarray products are calculated:
currentMax = max(currentElement, max(currentElement*prevMax, currentElement*prevMin)). The
same is true for currentMin (all Max are replaced by Min). We are comparing the current element on its
own (assuming it to be its own subarray) along with it multiplied by the previous min and max (in case of
sign flips, assuming it to be part of some previous subarray). The max of ans and currentMax is stored in
ans. After all this is done, prevMin and prevMax are updated to currentMin and currentMax, because for the
next index, these values will be the "previous" values. O(n) time, O(1) space solution.
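
A minimal sketch of the O(n) time, O(1) space min/max tracking described above (assumes a non-empty input; products are kept in int, as the problem constraints allow):

#include <algorithm>
#include <vector>
using std::vector;

int maxProduct(const vector<int>& nums) {
    int prevMax = 1, prevMin = 1;        // multiplicative identity
    int ans = nums[0];
    for (int x : nums) {
        int curMax = std::max({x, x * prevMax, x * prevMin});
        int curMin = std::min({x, x * prevMax, x * prevMin});
        ans = std::max(ans, curMax);
        prevMax = curMax;                // today's values become "previous"
        prevMin = curMin;
    }
    return ans;
}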

________________________________________________________________________________

Problem 13.0
https://leetcode.com/problems/maximum-sum-circular-subarray/description/
Topics: Array, Divide and Conquer, Dynamic Programming, Queue, Monotonic Queue
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1195581083/

Takeaways:
Constraints are strict. Even though the order of my brute force was 10^8, the leading coefficient being 9
instead of 4 (about 9*10^8 operations against a ~4*10^8 budget) meant I was getting TLE. Finding
maximumCircularSubarray as the maximum of the (regular) maximumSubarray and TotalArray - (regular)
minimumSubarray was crazy. Consider dividing your current (unapproachable) problem into parts of
previously solved problems (with hopefully optimized solutions) to progress if feeling stuck.

Analysis + Comments:
My first attempt was to use a brute force approach (similar to the one used in #9). We save the running
sum of the array for each subarray set sharing the same starting index to make sure we get this
solution in O(n^2). Also, for the "circularity", the % operator was used along with a condition that the end
index of a subarray must not wrap around onto the start index. The problem with this approach was that
since the constraint on the length of the array was 3*10^4, n^2 caused it to go over the computation limit of
roughly 4*10^8 operations per second. To reduce the time complexity, the solution to the regular maximum
sum subarray question (Kadane's algorithm) was utilized. The twist was that since the answer here can be a
subarray which loops around, to account for that, TotalSum - minSubarraySum (which was found
using a modified version of Kadane's) was used. The answer here will be max(maxSubarraySum,
TotalSum - minSubarraySum). One special case where this fails is when every element is negative;
then this would give 0 as the answer, which is not a valid subarray. Hence in the cases where every
element is negative (and therefore maxSubarraySum itself is negative), we just return
maxSubarraySum.
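
A minimal sketch combining both Kadane passes in one loop, with the all-negative fallback described above (assumes a non-empty input):

#include <algorithm>
#include <vector>
using std::vector;

int maxSubarraySumCircular(const vector<int>& nums) {
    int curMax = 0, maxSum = nums[0];
    int curMin = 0, minSum = nums[0];
    int total = 0;
    for (int x : nums) {
        curMax = std::max(curMax + x, x);  maxSum = std::max(maxSum, curMax);
        curMin = std::min(curMin + x, x);  minSum = std::min(minSum, curMin);
        total += x;
    }
    // All-negative case: total - minSum would be an empty (invalid) subarray.
    return maxSum < 0 ? maxSum : std::max(maxSum, total - minSum);
}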

________________________________________________________________________________

Problem 14.0
https://leetcode.com/problems/house-robber/description/
Topics: Array, Dynamic Programming
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1176880705/

Takeaways:
When choosing either A or B, consider which case will lead to a better output (in this
case, maximum) and adjust accordingly. In case of DP/memoization, save that output so that it can be used
afterwards.

Analysis + Comments:
Started with the initial recursive brute force approach, which is of the order O(2^n) and was getting
TLE. Then I tried to reduce the time taken by introducing memoization using a hashmap. This was done
by storing the computed output in a hashmap (for each index). In each recursive call, it is checked
whether a value exists in the hashmap or not. If yes, the value is used directly from the map, else it is
recursively computed. After this, the best solution was using DP with the memoization logic as the base,
which resulted in the fastest solution with 100% of users beaten w.r.t. time.
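
A rolling-variable sketch of the DP described above (same recurrence as the memoized version, but keeping only the last two answers rather than a full table):

#include <algorithm>
#include <vector>
using std::vector;

int rob(const vector<int>& nums) {
    int prev2 = 0;   // best loot up to house i-2
    int prev1 = 0;   // best loot up to house i-1
    for (int x : nums) {
        int cur = std::max(prev1, prev2 + x);  // skip house i, or rob it
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}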

________________________________________________________________________________

Problem 15.0
https://leetcode.com/problems/merge-two-sorted-lists/description/
Topics: Linked List, Recursion
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1182647558/

Takeaways:
Carefully consider the base cases so that redundant ones can be eliminated (like the head1 == null &&
head2 == null base case here). Also, the "backwards" recursive call seems to be more intuitive when
solving problems where you need to return something. The "forward" recursive calls are better when no
values need to be returned and you are traversing forward.

Analysis + Comments:
Like most other linked list questions, this problem was more intuitive to solve with recursion when
compared to the iterative approach. The submitted solution is the "backwards" recursive approach
where we do the recursive call first, assume it has done its job and then work on the small problem.
Here, there were two base cases, one for each of the linked list heads becoming null, where the other
address is returned. By design, if both head addresses are null then NULL is automatically returned.
The recursive call and small step were done based on a comparison. If head1's data is <= head2's data,
then we do the recursive call on head1.next and head2 (head2 was only compared with
head1, it still needs to be compared with head1.next and so on) and the returned address is stored in
head1.next to "link". Vice versa for the else case. Then, depending on the branch entered,
the corresponding head is returned.

________________________________________________________________________________

Problem 16.0
https://leetcode.com/problems/power-of-two/description/
Topics: Math, Bit Manipulation, Recursion
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1272568424/

Takeaways:
If a number is a power of 2, then its binary form has only 1 bit set (e.g. 16 is 10000).
Hence if, after unsetting the last set bit, the resultant number is zero, the number was a
power of 2. For unsetting, use either n & (n-1) or [n - (n & -n)]. Take care of overflows and the 0 edge case.
Take care of the possible values being stored when multiplying in a loop, to prevent integer overflows. Also do
learn about the non-iterative approach (it probably uses bit manipulation, since that is a tag here).

Analysis + Comments:
Used a loop approach, had to put a condition to prevent integer overflow since we are multiplying by 2
for each iteration. Still need to discover the approach which does not use loops or recursion
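
A minimal sketch of the loop-free bit trick from the takeaway: a power of two has exactly one set bit, so clearing the lowest set bit must leave zero.

// n > 0 rules out zero and negative inputs before the bit test.
bool isPowerOfTwo(int n) {
    return n > 0 && (n & (n - 1)) == 0;
}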

________________________________________________________________________________
Problem 17.0
https://leetcode.com/problems/missing-number/description/
Topics: Array, Hash Table, Math, Binary Search, Bit Manipulation, Sorting
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1181044885/

Takeaways:
A + B = (A ^ B) + 2*(A & B); bit manipulation helps prevent integer overflow

Analysis + Comments:
The first approach was to sort the array in O(nlogn) time and then compare each element with its next
one (if it exists, obviously); if the difference is greater than 1 then the missing element is the current
element + 1. The other mathematical approach is to find the running sum of the array and then compare
it with the expected sum of the first n natural numbers [n*(n+1)/2]. The missing number will be
expectedSum - actualSum, and this approach is O(n). One flaw with this is that since we are calculating a
running sum, it can cause overflow. Hence the best approach for this would be to use bit manipulation.
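
A minimal sketch of the overflow-free bit manipulation approach: XOR all indices 0..n and all values together, so matched pairs cancel and only the missing number survives.

#include <vector>
using std::vector;

int missingNumber(const vector<int>& nums) {
    int res = (int)nums.size();          // start with n itself
    for (int i = 0; i < (int)nums.size(); ++i)
        res ^= i ^ nums[i];              // matched index/value pairs cancel out
    return res;
}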

________________________________________________________________________________

Problem 18.0
https://leetcode.com/problems/longest-increasing-subsequence/description/
Topics: Array, Binary Search, Dynamic Programming
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1290217478/

Takeaways:
Consider the DP array properly before submitting the solution. In this case, unlike most other DP
questions, the last element of the DP array is not the answer here; it is the maximum element of the
array. Also, DP[i] = 1 + maximum(DP[j] for j < i with nums[j] < nums[i]). Use binary search for the nlogn
approach, where we create a "fake" LIS array and update it accordingly. Basically lower_bound(start, end,
num) - start for the index. lower_bound returns an iterator to the first element greater than or equal to val,
which makes it perfect for this question since we want to replace the first value >= current val in our
"ascending array".

Analysis + Comments:
Since this question deals with subsequences, we can use recursion to find subsequences. A subsequence
can be created by branching into 2 cases, one where the current element is considered and the other where
it is not. However, before implementing this approach I had to stop because it would be O(2^n),
which will cause TLE because of the constraints. Hence the DP approach here was considered, where
the ith element of the DP array tells the length of the longest increasing subsequence when the input array
is considered up to the ith element. Before creating each entry, the array and DP array before the current
element are considered and, if certain conditions are fulfilled, the entry for the current DP array element is
made based on that condition. This leads to an O(n^2) solution, which is decent enough
considering the input constraints. The final answer is the max of the DP array and NOT the last
element. Apparently, it can be solved in O(nlogn), which I assumed would use divide and conquer. For
the nlogn answer, the intuition is that we maintain an array where we push num (from our actual
array) if it is greater than the previous value. If not, we find the value just greater than or equal to it and
replace that value. This is done because in this manner, we maintain the smallest possible numbers inside
our array while the length stays the same. If the numbers are smaller, the next one has a greater chance of
being larger and hence increasing the length of the subsequence (this is the equivalent of making another
array where that element and all the elements smaller than it are considered). Finding this element is done
using binary search or lower_bound.
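
A minimal sketch of the O(nlogn) lower_bound approach described above: tails[k] holds the smallest possible tail of an increasing subsequence of length k+1.

#include <algorithm>
#include <vector>
using std::vector;

int lengthOfLIS(const vector<int>& nums) {
    vector<int> tails;
    for (int x : nums) {
        auto it = std::lower_bound(tails.begin(), tails.end(), x);
        if (it == tails.end()) tails.push_back(x);  // x extends the longest run
        else *it = x;                               // x tightens an existing tail
    }
    return (int)tails.size();                       // length only; not the actual subsequence
}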

________________________________________________________________________________

Problem 19.0
https://leetcode.com/problems/remove-duplicates-from-sorted-array/description/
Topics: Array, Two Pointers
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1202336204/

Takeaways:
The 2 pointer approach is the best choice here since it completes its job in minimum time and space,
while being a "clean" algorithm (since the one used here is just a modified version of selection sort).

Analysis + Comments:
This question requires us to do 2 things: 1st, return the number of unique elements (k), and 2nd, change
the input array in such a way that the first k elements are the unique elements themselves (in their original
order). There are multiple O(n) solutions here. The first is to copy and count all the unique elements (by only
pushing if nums.at(i) != nums.at(i-1)) into an array and then copy them back into the original vector. This
takes O(n) space as well. Another solution uses a 2 pointer approach, where a pointer (i) is initialized at the
beginning and the other is initialized at i+1. The second keeps iterating until it reaches an element which is
not equal to the one at i, and then swaps. In the next iteration the first pointer increments and the second
pointer resumes from where it left off. This is like a modified version of selection sort which completes its
work in O(n) time and in place.

________________________________________________________________________________

Problem 20.0
https://leetcode.com/problems/palindrome-number/description/
Topics: Math, Bit Manipulation, Recursion
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1184929313/

Takeaways:
Modulus is the goat for extracting the last digit of a number

Analysis + Comments:
The OG question. The first consideration, according to the test cases, is that a negative number won't
ever be a palindrome because the reverse of -121 is 121-, so a case needs to be made for that. Also,
the number is just compared with its reverse, and the number is reversed using the modulus method:
extract the last digit, multiply the reversed number by 10 and add the extracted digit to it.

________________________________________________________________________________

Problem 21.0
https://leetcode.com/problems/merge-sorted-array/description/
Topics: Array, Two Pointers, Sorting
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1186866554/

Takeaways:
Inserting elements using comparison between 2 arrays leads to an efficient solution. Also, any
stragglers need to be checked for as well.

Analysis + Comments:
The easy but inefficient solution is to merge the arrays normally and then sort the result in (m+n)log(m+n).
The better solution is to merge the arrays using 2 pointers, where both arrays are traversed
at once, the smaller element is inserted and its pointer is incremented. After that, the
remaining elements need to be checked for insertion.

________________________________________________________________________________

Problem 22.0
https://leetcode.com/problems/diameter-of-binary-tree/description/
Topics: Tree, Depth-First Search, Binary Tree
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1187947612/

Takeaways:
The diameter of a binary tree is given by max(leftHeight + rightHeight, max(leftDiameter, rightDiameter)).
Also, to reduce time complexity by not having to calculate the height of every subtree, we return a pair
which holds both the subtree's diameter and its height. The height of the new tree is max(leftHeight,
rightHeight) + 1.

Analysis + Comments:
Returning multiple things to reduce time complexity is smart; here we are returning a pair<int, int>
which has both the height and diameter of the subtree. Diameter means the length of the longest possible
path in a tree. It is given by max(leftHeight + rightHeight, max(leftDiameter, rightDiameter)). This is
because the diameter can either be an existing path from the right or left subtree, or it can be a new
path which goes from the left to the right subtree. Also, the new height of the tree is given by the max of the
left and right heights + 1, since height only cares about the taller subtree and 1 is added because of the
extra node. O(n) approach because all nodes need to be visited at least once.
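
A minimal C++17 sketch of the pair-returning DFS described above (a local TreeNode is defined for self-containment; LeetCode supplies its own):

#include <algorithm>
#include <utility>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

// Returns {height, diameter} of the subtree so heights are never recomputed.
std::pair<int, int> heightAndDiameter(TreeNode* node) {
    if (!node) return {0, 0};
    auto [lh, ld] = heightAndDiameter(node->left);
    auto [rh, rd] = heightAndDiameter(node->right);
    int height   = std::max(lh, rh) + 1;                 // taller child + this node
    int diameter = std::max(lh + rh, std::max(ld, rd));  // path through root, or a child's best
    return {height, diameter};
}

int diameterOfBinaryTree(TreeNode* root) {
    return heightAndDiameter(root).second;
}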

________________________________________________________________________________

Problem 23.0
https://leetcode.com/problems/find-bottom-left-tree-value/description/
Topics: Tree, Depth-First Search, Breadth-First Search, Binary Tree
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1188805540/

Takeaways:
Level order traversal is goated when we need to solve a problem which requires acting on the "rows" of
the tree separately. Here, it was used since we need the leftmost node of the "last" row.

Analysis + Comments:
Level order traversal was utilized to isolate the first element of every level. Level order traversal is a type of
breadth first search, but it is a more literal representation of the tree. Hence a queue is used and all
children of a node (if they exist) are added to the queue. Also, after adding the root node, a NULL is
added to mark a new line. While traversing, once the popped node is null, that means we need to go to
a new line. Hence the next node is saved in a variable since it is a possible answer for the leftmost
value in the last tree row. But if the queue is empty, that means the tree has been traversed. If not empty,
another NULL should be pushed so that while traversing we know when to switch to a new line. O(n)
solution since every node is traversed once; relatively easy if you know level order traversal of a tree.

________________________________________________________________________________

Problem 24.0
https://leetcode.com/problems/even-odd-tree/description/
Topics: Tree, Breadth-First Search, Binary Tree
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1189765319/

Takeaways:
Honestly, dividing this problem into 2 parts, level order traversal + conditional array comparison (asc or
desc) based on the level index (%2 == 0 or %2 != 0), simplified this by a ton. Also, somebody in the
discussion was suggesting another method of level order / breadth first traversal, where all the
elements of the current level (until the next null, which usually means the current queue size) are stored in
an array (while their children are simultaneously queued). This gives us complete access to the entire "row"
of elements at once.

Analysis + Comments:
Aside from storing the entire "level" as an array, another important thing here was just storing the
previous value to check for an asc or desc order. This question was not that complex when compared to
other tree / tree-adjacent questions.

________________________________________________________________________________

Problem 25.0
https://leetcode.com/problems/maximum-odd-binary-number/description/
Topics: Math, String, Greedy
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1190584196/

Takeaways:
The maximum odd binary number is obtained by shifting all the 1s except one to the left and then placing the
remaining 1 at the rightmost position to make it odd.

Analysis + Comments:
Easy question. For a number to be odd, the last digit must be odd. Hence for a binary number to be
odd, the last "bit" must be set (1 in this case).

________________________________________________________________________________

Problem 26.0
https://leetcode.com/problems/squares-of-a-sorted-array/description/
Topics: Array, Two Pointers, Sorting
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1191174838/

Takeaways:
Two pointers are extremely useful when the array is already sorted, here used to compare the squares
of the largest positive and smallest negative numbers to find the largest square.

Analysis + Comments:
Obviously, squaring all the elements and then sorting can easily solve this problem in O(nlogn) time
and in place. However, a smarter approach uses 2 pointers, one at the start and another at the
end. Since we know the array is sorted in ascending order, the largest element on squaring will be either
the last element or the first element. The larger of these is stored at the end of an output array and its
respective pointer is moved. This yields an O(n) time solution. Since this question asks us to return the
output in another array, we can consider this approach to be O(1) space as well.

________________________________________________________________________________

Problem 27.0
https://leetcode.com/problems/remove-nth-node-from-end-of-list/description/
Topics: Linked List, Two Pointers
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1192877592/

Takeaways:
Reference variables are super useful when you need to "activate" certain modes of a recursive
function. Here the regular mode was forward traversal, and once this was completed, the reference
variable was switched and the function changed into a backwards-traversing node-removal checker. I still
don't know whether this approach is better or the 2 pointer one.

Analysis + Comments:
Clearly there exists a simple O(n) approach where the length of the list is calculated, then length-n
is computed, and then that node is traversed to and removed. But to solve this in "one" pass, some
goofy stuff has to be done. Some people were suggesting using two pointers (fast + slow): the fast one
traverses quicker to find the length of the list, and then the slow one can remove the node in the
same iteration (at least this is what I infer this approach to be). But what happens if the node to be
removed is behind the slow pointer? My approach abused a reference variable (fromLast = 0). First the
entire list is traversed with fromLast = 0. Once null is reached, fromLast is incremented and the function
returns. This means we are currently at the node which is 1st from the last. Now the entire list is
traversed back (by exiting the function stack) and fromLast is incremented. Once n == fromLast, the
node is removed (and fromLast is still incremented, because we don't want to remove the other nodes).
This approach is 1 pass, but that pass is a front-and-back scan.

________________________________________________________________________________

Problem 28.0
https://leetcode.com/problems/bag-of-tokens/description/
Topics: Array, Two Pointers, Greedy, Sorting
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1193770292/

Takeaways:
Two pointers again proving goated when the array is sorted and movement / selection needs to be
done based on a certain condition.

Analysis + Comments:
Sorting the input array makes a greedy approach possible. According to this approach (which is
lowkey common sense), the total score can be maximized if we buy power using small tokens and then
use that power on big tokens. Therefore two pointers (one at the start, one at the end) make for an intuitive
solution. This runs until the window becomes invalid (aka startIndex > endIndex). Also, the edge case
described in the question should be accounted for. After the greedy approach clicked and a diagram to
account for the test cases was made, this approach was relatively easy.

________________________________________________________________________________

Problem 29.0
https://leetcode.com/problems/minimum-length-of-string-after-deleting-similar-ends/description/
Topics: Two Pointers, String
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1194335098/

Takeaways:
StartIndex, EndIndex, move while the characters are the same (until the pointers cross), keep track of the
movement length and subtract it from the total length. Also, the comparison here is made with the relevant
index (+1 if moving ahead, -1 if moving behind) before the movement. This is because this question involves
"removing matches" and hence that made more sense to me than moving and then comparing with the
relevant indices (-1 if moving ahead and +1 if moving behind).

Analysis + Comments:
The question should have been marked as easy, ngl. Like extremely easy, once the possible cases are
taken care of (which are explicitly told in the question itself).

________________________________________________________________________________

Problem 30.0
https://leetcode.com/problems/linked-list-cycle/description/
Topics: Hash Table, Linked List, Two Pointers
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1195532768/

Takeaways:
Given enough tries, the tortoise and hare algorithm (slow and fast pointer) can be used to solve this in
O(n) time and O(1) space, despite the values being discrete.

Analysis + Comments:
The obvious approach for O(n) time was to use a set to store the addresses of the nodes visited. If the node
visited already exists, then we return true for a cycle. This is O(n) space. Now for a constant space
solution, a slow and fast pointer algorithm was suggested, which I need to look more into since I still
have a hard time believing that tortoise and hare will work for discrete values (fast might skip over
slow?). It kinda makes sense but I need to do it more on paper. // UPDATE: WORKS EVERY TIME DUE
TO RELATIVE VELOCITY

________________________________________________________________________________

Problem 31.0
https://leetcode.com/problems/middle-of-the-linked-list/description/
Topics: Linked List, Two Pointers
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1196604209/

Takeaways:
Fast and Slow for 1 pass, find length and move length / 2 for 2 pass.

Analysis + Comments:
The only thing which needs to be carefully considered is the termination condition of the loop (cancel if
end.next == null or end.next.next == null). The mid node is returned based on whichever of these
conditions terminated the traversal.

________________________________________________________________________________

Problem 32.0
https://leetcode.com/problems/spiral-matrix/description/
Topics: Array, Matrix, Simulation
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1196876402/

Takeaways:
Marking "coordinate's" using pointers is helpful, draw a diagram and figure out how they react to
movement. Also keep track of stuff being printed so you know when to exit. This just makes it easier to
avoid edge cases.

Analysis + Comments:
I remember solving this way back; I had covered the matrix with int_max flags and was
replacing every element I traversed over with int_max. All I had to do was turn right upon encountering
a flag. That approach won't work in a non-isolated scenario like LeetCode, so I had to resort to the
regular "4 pointer" approach which marks the starting row and column and the ending row and column.
Based on the portion covered, these pointers were moved around to mark the remaining region. Also, I
was keeping track of the number of elements printed so far, and if nE >= n x m that means everything
has been printed and we can exit the loop.

________________________________________________________________________________

Problem 33.0
https://leetcode.com/problems/count-elements-with-maximum-frequency/description/
Topics: Array, Hash Table, Counting
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1197477151/

Takeaways:
Using a frequency array in place of a hashmap when desirable constraints are provided, for T R U E S P E
E D, and calculating the answer in two parts and returning their product.

Analysis + Comments:
Since a reasonable constraint was given here on the array elements, a frequency array was used in
place of a hash map for P U R E S P E E D. The O(n) solution uses this freq array along with the
previousMaxFreq and the number of elements with that frequency. If another element with the same
max frequency (as prevMaxFreq) is found, then numberMaxFreq is incremented. If a number with a
frequency greater than that is found, then prevMaxFreq is updated to that and numberMaxFreq is set to one,
since we have a new desired solution. The answer is prevMaxFreq*numberMaxFreq.

________________________________________________________________________________

Problem 34.0
https://leetcode.com/problems/minimum-common-value/description/
Topics: Array, Hash Table, Two Pointers, Binary Search
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1198343606/

Takeaways:
Again, 2 pointers the goat when conditional movement is required in sorted arrays.

Analysis + Comments:
Using the fact that both arrays are already sorted, the two pointers approach is used here again for an
O(n+m) time, O(1) space solution. Since the minimum common element would appear first, the pointer with
the smaller value is moved ahead. If both have the same value, said value is returned. Another possible
approach, which is O(nlogn), is to traverse one array (preferably the smaller one) in a linear fashion and then
binary search for each element in the other array.

________________________________________________________________________________

Problem 35.0
https://leetcode.com/problems/intersection-of-two-arrays/description/
Topics: Array, Hash Table, Two Pointers, Binary Search, Sorting
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1199444056/

Takeaways:
Boolean version of freqArray is optimal when you need an element just once (only care about its
existence).

Analysis + Comments:
Again, because of the reasonable element constraints, a boolean version of a freqArray (elementExists)
was used here. While traversing the first array, all elements were marked. While traversing the second
one, the existsArray was checked. If the element existed, it was pushed into the answer array (which could
be the first array since we don't need it anymore). Also, elementExists for that value was set to false
because we only wanted each element just once.

________________________________________________________________________________

Problem 36.0
https://leetcode.com/problems/custom-sort-string/description/
Topics: Hash Table, String, Sorting
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1200276761/

Takeaways:
A frequency array is busted if you are aware of the constraints on the input. The order array
(initial) approach was a great idea as well, but in this case the frequency array yielded a better result.
Typecasting using char - 'a' to set the index of 'a' as 0 (b = 1, c = 2, ...) is helpful as it reduces the size of
the array being used.

Analysis + Comments:
Two approaches for this question. Since we are given constraints on the characters in both the input
and order strings (26 lowercase characters), we can use an array of size 26 to store information about them
(by typecasting like int index = char - 'a', which maps 'a' to index 0 and 'z' to 25). Now, the information
being stored depends on the approach taken. The initial approach was O(n^2) time and O(1) space
(and the sorting was done in place). It used the array to store the order in which characters appear
(e.g. if order is "cba", then the array will be 2 1 0 ~ ~ ~ ...). Then, when traversing the actual string,
a bubble-sort-type algorithm was used to swap the current character with the next if the order in the
array dictates it. Due to this sorting, this approach is O(n^2). The better approach, which is O(n) time
and O(n) space, uses the array to store the frequencies of all the characters. Then it iterates over
the order string and appends each character the number of times it is present in the freq array
(decrementing its frequency): a WHILE is used. Once this is done, the original string is iterated and all the
remaining characters (with a freq > 0) are appended once each: an IF is used.

________________________________________________________________________________

Problem 37.0
https://leetcode.com/problems/remove-zero-sum-consecutive-nodes-from-linked-list/description/
Topics: Hash Table, Linked List
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1201792646/

Takeaways:
Firstly, please start reading the questions properly. Secondly, start thinking about ways to derive the
key you need from the hashmap. In this case, some math and common sense: using the current prefix
sum + the current node's value to find the exact prefix sum of the node to delete. Using a hashmap with the
prefix sum as the key and the node's address as the value was the perfect solution here.

Analysis + Comments:
The brute force approach, which is similar to the maximum subarray sum brute force approach,
takes O(n^2) time and O(1) space. The O(n) time approach is kind of like Kadane's in max
subarray sum. The smart thing here was using a hashmap (which pushes this towards O(n)
space), with the prefix sum as the key and the node's address as the value. While traversing, check if
an entry for the current prefix sum already exists in the map (which takes O(1)). If it doesn't, create
it. Else, start deleting nodes from the node after the previously existing node (which was found in the
prefix map). When a node is deleted, make sure to remove its entry from the map as well (this is to
prevent bad dereferences once a node is deleted). To delete a node's entry from the map in O(1), use the
prefix sum: create a variable which starts from the map entry of the previously existing node (after
which stuff has to be deleted) and add each deleted node's data to get that node's prefix sum. Using this
value, the node can be removed from the map. The edge case where the prefix sum is 0
needs to be considered, because that means the entire sublist from the head to the current node needs to
be removed. In this case the head would be updated as well.

________________________________________________________________________________

Problem 38.0
https://leetcode.com/problems/find-the-pivot-integer/description/
Topics: Math, Prefix Sum
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1202200055/

Takeaways:
Binary Search does not always need an array to search. Here the search space is the number line from
1 -> n itself and binary search is possible here because the first n natural numbers will always be
sorted.

Analysis + Comments:
The best approach here is to consider the numbers 1 to n as the search space, since the first n natural
numbers are always sorted. Then binary search can be applied with start as 1, end as n, and mid as
the number being considered. Then, using n*(n+1)/2, the sums on the left and right of mid can be calculated,
and if they are equal, mid can be returned. Else, depending on whether leftSum or rightSum is greater, start
is changed to mid+1 or end is changed to mid-1. This solution takes constant space and O(logn) time. There
is another O(1) time approach applicable here as well, which uses math (sum of natural numbers on the left
= sum of natural numbers on the right).

________________________________________________________________________________

Problem 39.0
https://leetcode.com/problems/binary-subarrays-with-sum/description/
Topics: Array, Hash Table, Sliding Window, Prefix Sum
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1204227034/

Takeaways:
The number of subarrays for an array of length n is n*(n+1)/2. Even though this was not used, the logic that
adding the current (valid) window length each time leads to the total number of valid subarrays was.
Doing T(x) = T(<= x) - T(<= x-1) to reach the solution was something new, unexpected and smart. Another
thing learnt while solving this is that the order in which operations are done in a loop can be the difference
between bad_alloc and the optimal solution, literally. So think extremely hard about that (e.g. here the
current value was always added first, then the window's validity was ensured, and after that the important
operations were performed).

Analysis + Comments:
Again, by design, the brute force approach for this question (similar to maximum sum subarray) was attempted first, which is O(n^2) time and O(1) space. And as expected, since the upper limit on n is 3*10^4, we get TLE by a small margin. Therefore, for a more optimized approach, a variation on the sliding window was attempted. The super smart thing about this approach: say we want to find the answer for exactly 2 (which is hard). To do this, we find the solution for <= 2 (call it S1) and for <= 1 (call it S2), and then the solution for exactly 2 is S1 - S2. This works because counting subarrays whose sum is less than or equal to a given value is very easy. Using a sliding window, leftPointer and rightPointer were maintained. The value at the right pointer was added to currentWindowSum. If the window became invalid (cWS > goal), the leftPointer was moved ahead until it became valid again (cWS -= nums.at(leftPointer); leftPointer++). Then the length of the current window was added to the total number of subarrays (rightPointer - leftPointer + 1), because adding the window length at every step gives the total number of valid subarrays. Then the rightPointer was incremented. Also, apparently there is another approach possible with a prefixSum hashmap using the difference from the goal
(which takes O(n) aux space), but I was already too tired to try that. NEXT TIME BABY.
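
A rough sketch of the "at most" trick in C++ (the helper name atMost() is mine; the outer signature mirrors the LeetCode stub):

#include <vector>
using namespace std;

int atMost(const vector<int>& nums, int goal) {       // subarrays with sum <= goal
    if (goal < 0) return 0;
    int count = 0, left = 0, windowSum = 0;
    for (int right = 0; right < (int)nums.size(); ++right) {
        windowSum += nums[right];
        while (windowSum > goal) windowSum -= nums[left++];
        count += right - left + 1;                     // windows ending at right
    }
    return count;
}

int numSubarraysWithSum(vector<int>& nums, int goal) {
    return atMost(nums, goal) - atMost(nums, goal - 1);   // exactly goal = (<= goal) - (<= goal-1)
}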

________________________________________________________________________________

Problem 40.0
https://leetcode.com/problems/contiguous-array/description/
Topics: Array, Hash Table, Prefix Sum
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1205076414/

Takeaways:
Sliding window is not always applicable, especially when multiple cases can happen (the window could expand ahead or behind) for an O(n) solution. Creative use of a prefix array / hashmap can transform the current problem into a previously solved problem (for example a custom prefix sum where 1 = +1 and 0 = -1). Reverse engineering the problem from something we need into something we can compute is useful in cases where all other solutions are not landing.

Analysis + Comments:
Considering I had to solve for a subarray, I assumed that sliding window would be applicable here. Unfortunately, this question showed me the limitations of sliding window. Here, we don't know which choice to make because we can't see the "future" (seeing the future usually involves recursion, which in this case would be worse than n^2). We don't know whether to move the left pointer ahead to "shorten" our subarray for it to meet the condition, or to move the right pointer ahead to look for more options to make our subarray valid. Also, since the input size is up to 10^5, the brute force solution which is common for subarrays (which usually takes O(n^2)) will straight up not work here. My initial approach was to make a (regular) prefix sum hashmap (with prefix sum as key and index as value). If sum = len/2, that means the numbers of 1s and 0s are equal. Then we assume that index to be the left or right pointer (of the subarray) and we check the other pointer using the formula len = right - left + 1. If the value at the other pointer is valid, we store the maximum. I was able to pass ~350 test cases with this, but there was a slight problem (which was expected): I was not sure whether to consider the first occurrence of that prefix sum or the last, because there were logical fallacies in both cases (we wouldn't know if the "other" pointer is a one or a zero). Another solution using a prefix sum hashmap is to not use the "regular" prefix sum, but instead add 1 when we encounter a 1 and subtract 1 when we encounter a 0. At every index, the existence of that "prefix" sum earlier in the map is checked, and if it exists, that means between that index and the current index the numbers of 1s and 0s are equal, and the subarray length can be found. Also, the stored index should not be updated, since we want the largest subarray and updating it would cause our earlier (longer) candidate to get lost. Another approach (which I implemented) was to calculate the difference between the number of ones and zeros at every step and store that difference in a hashmap along with the first index where it occurred. If the current difference is already stored in the map, the earlier "prefix" can be removed from the current subarray to get a valid subarray, and the resultant length is obtained by subtracting the indices. These solutions are O(n)
time and space.
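
A small C++ sketch of the +1/-1 running-difference version that I implemented (variable names are just illustrative):

#include <vector>
#include <unordered_map>
#include <algorithm>
using namespace std;

int findMaxLength(vector<int>& nums) {
    unordered_map<int, int> firstIndex;   // running difference -> earliest index it was seen at
    firstIndex[0] = -1;                   // empty prefix
    int diff = 0, best = 0;
    for (int i = 0; i < (int)nums.size(); ++i) {
        diff += (nums[i] == 1) ? 1 : -1;
        auto it = firstIndex.find(diff);
        if (it != firstIndex.end())
            best = max(best, i - it->second);  // equal 1s and 0s between the two positions
        else
            firstIndex[diff] = i;              // only keep the earliest occurrence
    }
    return best;
}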

________________________________________________________________________________
Problem 41.0
https://leetcode.com/problems/insert-interval/description/
Topics: Array
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1206461749/

Takeaways:
A complex and unapproachable problem (at first) can be broken down into its most rudimentary cases and then be efficiently solved using a simple array and if/else cases. You just need to take care of the cases themselves (whether to use equal or less-than-equal) and the edge cases. Using boolean flags to account for these is also extremely helpful; this question used 2 sets of flags.

Analysis + Comments:
At the end it all comes down to simple if/else statements and an array. Managed an O(n) time, O(1) space solution; all it took was careful examination of every if/else case possible. Basically, keep traversing until the start of the newInterval is either smaller than the current interval or can fit inside it. Corresponding to these cases, either insert that element or the first element of the currentInterval into a newVector. Then do the same with the end of the newInterval. Once a suitable end is found and inserted in newVector, insert newVector itself into the answer. Don't stop traversing the intervals vector though, since there are still possible intervals left in it. Keep adding them to the answer vector by running another while loop, which picks up from where the previous iterator left off. One interesting thing is that based on the type of overlap, after inserting the newVector into the answer, the iterator either will or will not be incremented, to account for the overlap. Also, to handle edge cases, a boolean is kept which tracks whether newVector has been inserted or not. If it hasn't been inserted (that means the interval belongs at the end), the interval is appended to the end of the ans vector. My solution code could be a bit cleaner though. Another way to handle overlapping intervals: left = min(newInterval start, interval[i] start), right = max(newInterval end, interval[i] end); this makes the code easier to
read.
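
A sketch of that min/max merge variant (cleaner than my flag-based submission; I renamed LeetCode's insert to insertInterval so the snippet stands alone):

#include <vector>
#include <algorithm>
using namespace std;

vector<vector<int>> insertInterval(vector<vector<int>>& intervals, vector<int> newInterval) {
    vector<vector<int>> ans;
    int i = 0, n = intervals.size();
    while (i < n && intervals[i][1] < newInterval[0])      // strictly before the new interval
        ans.push_back(intervals[i++]);
    while (i < n && intervals[i][0] <= newInterval[1]) {   // overlapping: merge via min/max
        newInterval[0] = min(newInterval[0], intervals[i][0]);
        newInterval[1] = max(newInterval[1], intervals[i][1]);
        ++i;
    }
    ans.push_back(newInterval);
    while (i < n) ans.push_back(intervals[i++]);           // strictly after
    return ans;
}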

________________________________________________________________________________

Problem 42.0
https://leetcode.com/problems/minimum-number-of-arrows-to-burst-balloons/description/
Topics: Array, Greedy, Sorting
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1207476110/

Takeaways:
Always try to sort the intervals (primarily based on the starting point) in these types of interval questions; it makes approaching them relatively easier. Also practice overloading operators to pass them as custom comparators for STL function calls on objects. Visualizing intervals on a number line will also help a lot while solving these types of questions.

Analysis + Comments:
This is another example of an interval question disguised under the balloon fluff. The thing with interval questions is that sorting them generally helps (usually sorting based on the starting point). I used the C++ STL sort (which works in O(nlogn)) on the vector and overloaded operator() to take vectors as arguments and return vector1[0] < vector2[0]. A loop is run which increments the arrow count first (because at least 1 arrow will be fired to shoot 1 balloon) and then checks for overlapping intervals. An interval is a valid overlap if its start point is equal to or inside the current interval. Since all intervals are sorted based on the starting point, the start will always be inside; hence only the end needs to be verified via (j = i+1) points.at(j).at(0) <= endingPoint, with endingPoint = min(endingPoint, points.at(j).at(1)) (this just updates the end if it becomes smaller than the current one). Then we can start iterating again from the interval which did not overlap. Another approach is to start with number of arrows fired = n (since at most n arrows will be fired for n intervals) and decrement it by 1 every time an overlap is detected, because a separate arrow would not need to be fired for that interval. Apparently there is a greedy solution for this but I'm not aware of it (this
could be the one people were talking about in the comments).
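
A condensed C++ sketch of the sort-then-group-overlaps approach described above (comparator written as a lambda instead of an overloaded operator()):

#include <vector>
#include <algorithm>
using namespace std;

int findMinArrowShots(vector<vector<int>>& points) {
    sort(points.begin(), points.end(),
         [](const vector<int>& a, const vector<int>& b) { return a[0] < b[0]; });
    int arrows = 0, i = 0, n = points.size();
    while (i < n) {
        ++arrows;                                      // one arrow for the current balloon
        long long end = points[i][1];
        int j = i + 1;
        while (j < n && points[j][0] <= end) {         // overlapping balloons share this arrow
            end = min(end, (long long)points[j][1]);   // shrink the common overlap
            ++j;
        }
        i = j;                                         // resume from the first non-overlapping balloon
    }
    return arrows;
}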

________________________________________________________________________________

Problem 43.0
https://leetcode.com/problems/task-scheduler/description/
Topics: Array, Hash Table, Greedy, Sorting, Heap (Priority Queue), Counting
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1208398819/

Takeaways:
Toggling data between 2 data structures that perform similar tasks differently, using each for a separate purpose. Also abusing the input constraints to get (effectively) constant-time access of top() from a priority queue. Also learnt that if you are thinking of an over-engineered solution, you are probably going in the wrong direction, since even if you reach the solution, it won't be elegant and effective.

Analysis + Comments:
Interesting use of 2 data structures: a max-heap priority queue (which is the default priority queue in C++) and a regular queue. Since we are only dealing with the minimum number of intervals required, we don't need to track the actual tasks; their frequencies suffice (stored in a frequency array). The "math" (not even math, it's logic) here is that the tasks present with high frequency should be run first, and the tasks with lesser frequencies should be run in the gaps created. To do this a priority queue is used. As we know, the top element (max heap, max element) can be extracted from the queue in O(logn). The best thing here is that the number of distinct elements is limited (26 letters = 26 task types), hence that is basically O(log26) = O(1). Once we get the most frequent element, we decrement it by one (because an interval has been used to complete that task) and push it into a regular queue. That queue stores both the remaining frequency and the time after which that task will be available again (given by current time + n). Because of this, at the start of every interval, we need to check if the front element of the queue has its specified time == current interval, and keep popping such elements from that queue back into our priority queue (because they are ready to be worked on again since n time has passed). Since we are using a priority queue, the elements will automatically rearrange based on their frequency. Also, using a queue for the waiting list is extremely intuitive: a queue is a FIFO structure, and since we
are pushing and pulling elements based on time (intervals), it will always remain sorted.
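
A sketch of the two-structure simulation in C++ (I'm using current time + n + 1 as the "ready again" timestamp; the exact offset depends on how you define the interval, so treat this as one possible convention):

#include <vector>
#include <queue>
using namespace std;

int leastInterval(vector<char>& tasks, int n) {
    vector<int> freq(26, 0);
    for (char t : tasks) ++freq[t - 'A'];
    priority_queue<int> ready;                         // max-heap of remaining counts (<= 26 entries)
    for (int f : freq) if (f > 0) ready.push(f);
    queue<pair<int, int>> cooling;                     // {remaining count, time it becomes ready}
    int time = 0;
    while (!ready.empty() || !cooling.empty()) {
        ++time;
        if (!cooling.empty() && cooling.front().second <= time) {
            ready.push(cooling.front().first);         // cooldown over, back into the heap
            cooling.pop();
        }
        if (!ready.empty()) {
            int f = ready.top(); ready.pop();          // run the most frequent available task
            if (--f > 0) cooling.push({f, time + n + 1});
        }                                              // else: this interval is idle
    }
    return time;
}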

________________________________________________________________________________

Problem 44.0
https://leetcode.com/problems/merge-in-between-linked-lists/description/
Topics: Linked List
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1208750303/

Takeaways:
Main takeaway here is that a drawing / good figure can simplify a question tenfold.

Analysis + Comments:
This question should have been marked as easy. Straightforward solution in O(n+m) time and O(1) space. You just need to take care of the starting values. Let count start from 1 and run to a-1, and then let it continue till b. This gives us the node before the sublist we need to remove and the node after it, allowing for easy removals and insertions. Also, do not delete nodes on LeetCode using the delete keyword; some weird exception happens. You are probably not allowed to do that for security reasons.

________________________________________________________________________________

Problem 45.0
https://leetcode.com/problems/next-permutation/description/
Topics: Array, Two Pointers
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1209725679/

Takeaways:
Dividing the problem into manageable subproblems reigns supreme again. Also, the next permutation logic was figured out intuitively (more in depth in the comments for this). Iterators are goated: the iterator at the ith index of a vector can be obtained by vector<int>::iterator it = num.begin() + i, where num.begin() is the iterator for the 0th index and num.end() is the iterator one past the last index. Look for more specific operations, since here reverse() yields better results than sort() while doing basically the same job.

Analysis + Comments:
My second attempt at this question; the logic for this was straight up not hitting during my first try. What I basically did here was divide the array into 2 parts and iterate from the right side of the array to the left (i = n-2 -> 0; n-2 is chosen because a suffix of size 1 won't require any condition to be checked). The check is done on the right part, namely whether it is in descending order or not (done via nums[i] >= nums[i+1]). If it is not in descending order, then a valid next permutation is possible within that suffix itself. Hence once such an i is found, we find an element from i+1 -> n-1 which is JUST greater than the element at i, and swap these two elements. This sets the next valid value at position i. Now we need to arrange the remaining suffix (i+1, n-1) in ascending order because the element with the highest weight has been changed (e.g. when we go from 2 3 1 to 3 1 2: because 2 changed to 3, the remainder should be in the "smallest" possible state, which is ascending order). If no such i is found, that means the entire input vector was in descending order and hence the next permutation would be purely ascending, which can be easily acquired by sorting the entire vector. This solution is O(nlogn) time because of the sorting and O(1) space. I thought this was probably the most efficient solution, but I had not looked more in depth. NEVERMIND, FOUND A BETTER SOLUTION. Since we swap the element at i with an element just greater than it, the suffix from i+1 to n-1 stays in descending order after the swap, so we don't need to actually sort it; we can just reverse it to get ascending order. All sort() calls are
replaced by reverse() calls and hence the time complexity changes from O(nlogn) to O(n), which is crazy.
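
The reverse()-based version, roughly (standard next-permutation logic, matching the description above):

#include <vector>
#include <algorithm>
using namespace std;

void nextPermutation(vector<int>& nums) {
    int n = nums.size(), i = n - 2;
    while (i >= 0 && nums[i] >= nums[i + 1]) --i;   // first index breaking the descending suffix
    if (i >= 0) {
        int j = n - 1;
        while (nums[j] <= nums[i]) --j;             // rightmost element just greater than nums[i]
        swap(nums[i], nums[j]);
    }
    reverse(nums.begin() + i + 1, nums.end());      // suffix is still descending; reverse -> ascending
}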

________________________________________________________________________________

Problem 46.0
https://leetcode.com/problems/rotate-image/description/
Topics: Array, Math, Matrix
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1209979323/

Takeaways:
First major takeaway: for a clockwise rotation of a matrix, first take the transpose, then reverse each row. Second major takeaway: try doing the thing you want to do, but starting from the last element. That leads to fewer extra variables and hence simplifies code. An example here: instead of shifting topLeft to topRight, then saving topRight and so on, we just save topLeft and move bottomLeft into topLeft. Now we can easily move bottomRight into bottomLeft and so on without incurring any data loss.

Analysis + Comments:
Similar to the "rotate array" question, this one has multiple valid approaches here, the difference being
that all those approaches have the same time complexity here. The only constraint given to use in this
question was that the replacement must be inplace, i.e. another matrix cannot be made. The first
solution was to replace the elements by 1) saving thetop left element in a variable 2) replace topleft by
bottom left, bottom left by bottom right, bottom right by top right, top right by top left (which) was saved
in an element. Now the same thing is done for the element to the right of top left and so on. Also the
right most element of the toprow should not be moved as it is the topright and is already taken care of.
Then the same thing is done for the matrix inside this, until no more matrices are left. Another
approach, which is analogous to rotate array by 2 reversals: we first take the transpose of the matrix
(swapping i,j for j,i) and then reversing each row. This leads to 1 clockwise rotation.
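
The transpose + row-reversal approach is only a few lines in C++:

#include <vector>
#include <algorithm>
using namespace std;

void rotate(vector<vector<int>>& matrix) {
    int n = matrix.size();
    for (int i = 0; i < n; ++i)                 // transpose: swap (i, j) with (j, i)
        for (int j = i + 1; j < n; ++j)
            swap(matrix[i][j], matrix[j][i]);
    for (auto& row : matrix)                    // reversing each row completes the 90 degree clockwise turn
        reverse(row.begin(), row.end());
}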

________________________________________________________________________________
Problem 47.0
https://leetcode.com/problems/palindrome-linked-list/description/
Topics: Linked List, Two Pointers, Stack, Recursion
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1210808164/

Takeaways:
Interesting things can be done to do something in (technically) O(1) space, like exploiting recursive
stacks, splitting something in half and reversing and even storing stuff in a constant datatype (integer,
size_t)

Analysis + Comments:
The O(n) time and space solution is easy, since a linked list can be reversed in O(n) time using recursion, saved somewhere and then compared to the current list. For an O(1) space solution, one idea is to traverse to the middle of the list (using a fast and slow pointer) and then reverse the remaining list. Now traverse both halves and compare the elements; if the list was a palindrome then both halves should be identical. A case for an odd number of elements has to be considered. Another approach: since each node value is constrained to the range 0-9, we can build 2 integers from the list's digits, one in normal order and one reversed, and compare them at the end; if they come out the same, the list is palindromic. But this might lead to integer overflow, since the length of the linked list can be up to 10^5 (maybe use size_t, though even that would overflow for long lists). Another approach (which might actually be O(n) space instead of O(1) because it uses the space provided by the recursive call stack to store arguments) is to use recursion along with a reference variable. The reference variable (a pointer towards the head) stays at the head, meanwhile the recursion traverses till the tail. Once it reaches the last element, the values at the head and tail are compared. Then head is moved to head->next and the tail-side call returns true (if the values are the same) and the stack unwinds to the previous call; in a way we move backwards from the tail to the node previous to it. The same comparison is done until the entire LL is traversed
(both back to front and front to back).
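
A sketch of the middle-split + reverse idea (assuming a simplified stand-in for LeetCode's ListNode; the reversed back half is compared against the front half):

struct ListNode { int val; ListNode* next; ListNode(int v) : val(v), next(nullptr) {} }; // stand-in for LeetCode's ListNode

bool isPalindrome(ListNode* head) {
    ListNode *slow = head, *fast = head;
    while (fast && fast->next) {                 // slow lands at (or just past) the middle
        slow = slow->next;
        fast = fast->next->next;
    }
    ListNode* prev = nullptr;                    // reverse the second half in place
    while (slow) {
        ListNode* nxt = slow->next;
        slow->next = prev;
        prev = slow;
        slow = nxt;
    }
    for (ListNode* front = head; prev; prev = prev->next, front = front->next)
        if (front->val != prev->val) return false;   // for odd lengths the middle harmlessly compares with itself
    return true;
}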

________________________________________________________________________________

Problem 48.0
https://leetcode.com/problems/reorder-list/description/
Topics: Linked List, Two Pointers, Stack, Recursion
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1211410690/

Takeaways:
For questions where we have to compare the first and last parts of a linked list, divide the list into two (remember to set the next of the tail of the first list to NULL to prevent stragglers). Then the second list can be traversed and comparisons can be done, based on the question. Special consideration needs to be given to the middle point though.

Analysis + Comments:
Interesting question, got 2 approaches which were O(n) time and O(1) space. The first approach was to split the linked list in half using slow and fast pointers (make a clean separation; terminate the split lists with NULL to prevent goofy dereferences), and then reverse the second list. Now these two lists are cross-linked to get the result. This approach could have been cleaner; I ended up using too many functions, but that helped in bugfixing / edge case testing. The other approach uses a stack / recursion-type function like the approach in #47, using a reference variable for head traversal while reverse traversal is done using recursion. The cross-linking is then done from both sides again, but that might get a bit tricky around the midpoint. Still worth trying once.

________________________________________________________________________________

Problem 49.0
https://leetcode.com/problems/merge-k-sorted-lists/
Topics: Linked List, Divide and Conquer, Heap (Priority Queue), Merge Sort
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1211496020/

Takeaways:
PRIORITY QUEUE FTW. It provides an ordered way of comparing all the smallest available elements of each list and taking the smallest one out of them in a small amount of time. Pushing an element into the queue only takes O(logk) time for k lists. Just be sure to not push NULLs into the queue, since that's just wasted space and the comparison function (and top()) won't work properly. The final tail will already have its next as NULL since it was the last node of its respective list.

Analysis + Comments:
The basic brute force would be to push all elements into an array (which takes n*k time), sort that array (nk*log(nk) time) and then pull all elements into a linked list and return its head. This is an O(nk*log(nk)) solution (with O(nk) space as well). Another approach: since all lists are already sorted, we can apply the "merge 2 sorted lists into a sorted list" algorithm 2 lists at a time. Depending on how the 2 lists are selected, the time complexity varies. If we merge the ith list with the (i+1)th list, we will perform ~k iterations total and inside each iteration we will do n, 2n, ... operations, hence the time complexity comes out as O(n*k^2). However, if at every level we merge lists pairwise (merge sort type beat), then the time complexity comes out as O(nk*logk). The approach which I used was a min heap (priority queue); unironically one of my best and cleanest solutions. The priority queue is first used to store the heads of each linked list (by pushing all non-NULL heads), which takes k*logk time. The priority queue's data type is the node pointer itself, and a custom comparator is used which compares node values (min heap, the node with the smallest value is available first in O(1)). Now, the node from the top of the queue is popped, its next node is pushed into the PQ, the previous tail is connected to the popped node and the tail is moved ahead. Since all nodes need to be visited at least once and the max size of the PQ is always <= k, the total time complexity here is O(nk*log(k)).
Space complexity is O(k) for the same reason.
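
The priority queue version, roughly (min-heap keyed on node value; only non-NULL pointers ever go in; ListNode is a simplified stand-in):

#include <vector>
#include <queue>
using namespace std;

struct ListNode { int val; ListNode* next; ListNode(int v) : val(v), next(nullptr) {} }; // stand-in for LeetCode's ListNode

ListNode* mergeKLists(vector<ListNode*>& lists) {
    auto cmp = [](ListNode* a, ListNode* b) { return a->val > b->val; };   // min-heap on value
    priority_queue<ListNode*, vector<ListNode*>, decltype(cmp)> pq(cmp);
    for (ListNode* head : lists)
        if (head) pq.push(head);                 // never push NULLs
    ListNode dummy(0), *tail = &dummy;
    while (!pq.empty()) {
        ListNode* node = pq.top(); pq.pop();     // smallest available head across all lists
        tail->next = node;
        tail = node;
        if (node->next) pq.push(node->next);     // feed in the next node of that list
    }
    return dummy.next;
}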
________________________________________________________________________________

Problem 50.0
https://leetcode.com/problems/reverse-nodes-in-k-group/description/
Topics: Linked List, Recursion
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1211616896/

Takeaways:
First of all, using the recursive call stack to store state does indeed count as auxiliary space usage. Also, when dealing with an iterative approach, create appropriate variables (properly named to avoid confusion) for proper management. Using recursion alongside iteration is always an option, since recursion is conventionally intuitive when dealing with linked lists, and saving "previous heads" in the call stack can be useful later. The iterative part is used where we don't need to return anything (finding the section of the list which needs to be reversed).

Analysis + Comments:
Used the normal reverse-linked-list code (O(n) time) inside a mainRecursive function. The main function basically keeps track of head, increments tail and counts until i == k. Then it saves the node next to tail in nextNode and calls the reverse function on head -> tail. nextNode is then connected to the new tail (which was the previous head) and this mainRecursive function is called again with nextNode passed in place of head. The base case here is to return head if head == NULL || head->next == NULL, because a single node can't be reversed anyway. The function also returns head (the new head node after reversal) and on the receiving end, this is connected to the next of the new tail. This is basically the best of both worlds, since the traversal to find the section of the list to reverse is done via iteration, and recursion is used to return heads of the reversed sections to the previous function calls. This approach is O(n) time and O(n/k) space. Careful consideration has to be taken such that NULL sections are not accessed. The iterative approach is similar and has the same time but O(1) space. There, the main function is replaced with an iterative function with multiple variables which need to be changed accordingly (returnHead, prevTail, prevHead etc.) and a special case has to be made for the first reversal, where the return head is replaced with the reversed head.

________________________________________________________________________________

Problem 51.0
https://leetcode.com/problems/find-the-duplicate-number/description/
Topics: Array, Two Pointers, Binary Search, Bit Manipulation
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1212491712/

Takeaways:
I'm sorry Floyd's Cycle Detection algorithm (Hare-Tortoise), I was not aware of your game. That algo can be used to detect the cycle here (and almost everywhere) in ~O(n). Also, mathematically, the distance from the start to where the branch happens = the distance from where the pointers meet to where the branch happens. This can be used to find the node where the branch happens (i.e. the node addressed by 2 other nodes, i.e. the duplicate). The other approach involves binary search on the answer (which is also a good approach but asymptotically slower).

Analysis + Comments:
Hare-Tortoise will always work for cycle detection, no matter the input, since the hare and tortoise will collide. This is because the hare moves ahead by 2 and the tortoise by 1, so the relative distance between them (once both are in the loop) decreases by 1 each step. And since every number is divisible by 1, the distance between the hare and the tortoise will eventually become 0 and the two will meet. This is also the reason why one moves ahead by 2 and the other by 1 (so that the difference between their speeds is 1). Also, the most important thing from this question is the inference: let the distance from start to branch = p (branch = node which can be reached from 2 other nodes), let the distance from the stop (where hare and tortoise meet) back to the branch = x, and let cycle size = C. The slow pointer covers a total distance of p + C - x. The fast pointer covers an extra cycle, since it meets slow by looping over the cycle: p + 2C - x. Since the distance covered by the fast pointer will always be double that of the slow pointer, we get 2*(p + C - x) = p + 2C - x => p = x. Hence the distances are the same. Therefore if we start a new slow pointer from the start and resume our previous slow pointer, they will eventually meet at the branch, which is the node pointed to by 2 other nodes and hence the duplicate in this case (we are using indices as nodes and their values as the nodes they dereference). This solution is ~O(n) time and O(1) space. There is also a binary search solution where we do binary search on the answer space: we go to mid and count the number of values smaller than or equal to mid in the array (this works because the constraints specify values in the array range over [1, n]). If that count is less than or equal to mid, then the duplicated element lies in [mid+1, end]; else the duplicate lies in [start, mid-1]. This has a time complexity of O(nlogn). The bit manipulation solution is also O(nlogn): first we find the frequency of each bit over the regular range 1..n, then we find the frequency of each bit in the input array and subtract the former from the latter. The bit positions which come out positive (non-zero) are extra in our input array, and those bits are set high in our answer. Converting that binary number back to decimal gives the
answer.
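
The Floyd's cycle version in a few lines (treating the array as index -> nums[index] links):

#include <vector>
using namespace std;

int findDuplicate(vector<int>& nums) {
    int slow = nums[0], fast = nums[0];
    do {                                   // phase 1: find a meeting point inside the cycle
        slow = nums[slow];
        fast = nums[nums[fast]];
    } while (slow != fast);
    slow = nums[0];                        // phase 2: restart one pointer from the entrance
    while (slow != fast) {
        slow = nums[slow];
        fast = nums[fast];
    }
    return slow;                           // cycle entry = node pointed to twice = the duplicate
}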

________________________________________________________________________________

Problem 52.0
https://leetcode.com/problems/get-the-maximum-score/description/
Topics: Array, Two Pointers, Dynamic Programming, Greedy
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1213653109/

Takeaways:
This question is proof of how much visualization can affect problem solving skills. The DP approach was exponentially better than both the pure recursive and the memoized approach. The iteration logic was similar to merging 2 sorted arrays. Take the maximum of the previous running sums and add it to the answer in case of an intersection. Setting the runningSums to 0 was also smart, since they are only useful between intersections and lose their meaning after that.

Analysis + Comments:
One of the harder problems I've encountered. The brute force solution took some time to come up with. The main thing was that it was a recursive solution which involved using a boolean path1, which indicated whether we are currently on nums1 or not. If path1 was true (we were on nums1), then the following was done: bring the other index (index2) forward until the element at it is either equal to or greater than nums1[index1]. Then forwardSum was calculated based on a recursive call (nums1, nums2, index1+1, index2, path1). Here path1 is passed as-is since we did not switch tracks. If nums1[index1] == nums2[index2], that means we can switch tracks, hence forwardSum2 was calculated via (nums1, nums2, index1, index2+1, !path1). Here !path1 is passed because the path is switched and we are not on path1 anymore. Now, the maximum of these 2 sums is taken, added to the current element (nums1[index1]) and returned. Vice versa is done when path1 is false. This is clearly a heavy-time approach (~O(2^n)); it can be somewhat improved by memoization to prevent duplicate calls, but the far better solution is still true bottom-up DP. It involves calculating simultaneous running sums for both arrays, where the index of the array with the currently smaller element is incremented. If both are the same (i.e. nums1[index1] == nums2[index2]), there is a possibility of a switch. Hence we take the maximum of the current running sums, add it to the current element and to the answer, set both running sums to zero and increment both indices. At the end, we have to check for stragglers (like in merge 2 sorted arrays). The visualization is like a DNA
double helix where the intersection is the point where both arrays have a common element.
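
A sketch of that bottom-up two-pointer version (signature mirrors the LeetCode stub; the modulo is applied only at the end since the running sums should fit in long long for the given constraints):

#include <vector>
#include <algorithm>
using namespace std;

int maxSum(vector<int>& nums1, vector<int>& nums2) {
    const long long MOD = 1000000007;
    long long sum1 = 0, sum2 = 0, total = 0;
    size_t i = 0, j = 0;
    while (i < nums1.size() && j < nums2.size()) {
        if (nums1[i] < nums2[j]) sum1 += nums1[i++];
        else if (nums1[i] > nums2[j]) sum2 += nums2[j++];
        else {                                         // common value: a possible switch point
            total += max(sum1, sum2) + nums1[i];       // take the better track up to here
            sum1 = sum2 = 0;                           // running sums reset after the intersection
            ++i; ++j;
        }
    }
    while (i < nums1.size()) sum1 += nums1[i++];       // stragglers on either array
    while (j < nums2.size()) sum2 += nums2[j++];
    total += max(sum1, sum2);
    return (int)(total % MOD);
}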

________________________________________________________________________________

Problem 53.0
https://leetcode.com/problems/find-all-duplicates-in-an-array/description/
Topics: Array, Hash Table
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1213184811/

Takeaways:
When the element constraints of an array are given as [1, n], then most probably something can be done inside the array itself. To preserve the values but still check if an element has been visited or not, we use absolute values of elements as indices and minus signs as visited flags. Also, for any problem regarding elements in a specified range like this (missing, duplicate etc.), the Cyclic Sort pattern can be used.

Analysis + Comments:
Obvious solutions aside (hashing, brute force, STL sort), if we look at the constraints provided by the question, the elements will always lie in the range [1, n] for an array of size n. That means we can index into our array using the values themselves, if while accessing we subtract 1 from each element to get an index. The regular hashmap (freq array) approach for this question (which takes O(n) extra space) can be folded into the original input array itself, given the second constraint: an element can appear once or twice (or zero times of course). Using the original array itself to mark values, we don't want the data present to get lost, hence instead of modifying the values, we modify the sign. At nums[i], we take the absolute value present there and subtract 1 to get a new index. If the value present at the new index is already negative, that means it has been visited during some earlier iteration, so that value appeared twice and it can be pushed into our output. This approach basically combines normal array iteration (i++) with using values at indices to traverse (used in #51) to get the desired output. Also noticed the CYCLIC SORT
pattern.
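
The sign-marking trick, roughly:

#include <vector>
#include <cstdlib>
using namespace std;

vector<int> findDuplicates(vector<int>& nums) {
    vector<int> ans;
    for (int i = 0; i < (int)nums.size(); ++i) {
        int idx = abs(nums[i]) - 1;          // value -> index (values are in [1, n])
        if (nums[idx] < 0)
            ans.push_back(abs(nums[i]));     // already marked once, so this value is a duplicate
        else
            nums[idx] = -nums[idx];          // flip the sign as a "visited" flag
    }
    return ans;
}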

________________________________________________________________________________

Problem 54.0
https://leetcode.com/problems/diagonal-traverse/description/
Topics: Array, Matrix, Simulation
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1213247478/

Takeaways:
Don't forget to pass vectors by (const) reference to functions, and check the arguments in case you are unable to find errors elsewhere. Dividing this question into 2 smaller problems (upwards diagonal print and downwards diagonal print) and alternating them based on diagonalNumber % 2 == 0 worked well, with the function arguments derived from i + j = diagonalNumber.

Analysis + Comments:
Good question, was able to understand the relation between the variables easily. Dividing the question into 2 functions made the work easier and more approachable. Number of diagonals in an n * m matrix = n + m - 1. Note the conditional updates of i and j (they only update while they are < n-1 and < m-1 respectively, because their max values are n-1 and m-1). The best thing to do for questions like these is to mark all the elements' coordinates in a diagram (0,0; 0,1; ... etc.), which makes it easier to derive a relationship between the different variables. Time = O(N), Space = O(1), where N is the total number of elements.

________________________________________________________________________________

Problem 55.0
https://leetcode.com/problems/diagonal-traverse-ii/description/
Topics: Array, Sorting, Heap (Priority Queue)
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1213368699/

Takeaways:
Firstly, pairs can be passed directly by {first, second}; I was not aware. Also, using an offshoot of BFS here to further optimize the space complexity from O(n) to O(n^1/2) is smart. If you are in the leftmost matrix column, push the index pair below you and the one to your right (if valid); else only push the index pair to your right (if valid).

Analysis + Comments:
After solving the previous Diagonal Traverse question, I had expected this one to have a simple solution where I'd follow similar traversal rules (of i+j == diagonalNumber) and check if that element exists or not; if yes then it'd be added to the vector, else we move on. The problem is that this way we would go through 10^5 * 10^5 iterations in the worst case, which is extremely bad because if the input matrix is extremely jagged, then we would be wasting most of those iterations. This is why that approach was getting TLE'd by LeetCode even though the logic was correct. For an actual approach, where we only traverse the existing elements, an unordered map can be created for diagonalNumber:vector mappings. While traversing the matrix, we push every element into the vector belonging to its diagonal number (i+j) in the map. The maximum diagonal number is also tracked during this. Then we iterate over the diagonal numbers and concatenate all the vectors together in our output. This approach is O(n) time and space. The best approach here: if we tilt our head (45deg ccw) slightly, we see that this is basically a type of breadth-first traversal. A queue of type pair<int, int> (i, j) can be created and {0, 0} pushed. Now BFS is done (like in level order traversal, except instead of using NULLs as flags for the next line we check the current length of the queue before popping and decrement till that length becomes zero): if j == 0, we push the index pair below and the one to its right (if they are valid). For other j's, we only push the index pair to the right (to prevent overlapping). During every iteration, the value present at the index pair at the front of the queue is pushed into the output vector. Because here the max queue length is about the length of a diagonal, space
complexity = O(n^1/2), time = O(n).
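
A sketch of the BFS version (push downwards only from the first column, always push to the right when it exists; needs C++17 for the structured binding):

#include <vector>
#include <queue>
using namespace std;

vector<int> findDiagonalOrder(vector<vector<int>>& nums) {
    vector<int> ans;
    queue<pair<int, int>> q;
    q.push({0, 0});
    while (!q.empty()) {
        auto [i, j] = q.front(); q.pop();
        ans.push_back(nums[i][j]);
        if (j == 0 && i + 1 < (int)nums.size())
            q.push({i + 1, 0});                      // start of the next diagonal
        if (j + 1 < (int)nums[i].size())
            q.push({i, j + 1});                      // next element along the current diagonal
    }
    return ans;
}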

________________________________________________________________________________

Problem 56.0
https://leetcode.com/problems/search-a-2d-matrix/description/
Topics: Array, Binary Search, Matrix
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1214258962/

Takeaways:
Non-decreasing means increasing (ascending) with repeated values allowed. A 2D (n*m) array can be "converted" into an equivalent 1D array with index i,j corresponding to index i*m+j. The opposite can also be done, where a 1D index is converted into a pair of 2D (n*m) indices by i = index/m, j = index%m. This is helpful because our input matrix is basically a 1D sorted array with the rows attached end to end. Also, this can be solved by considering the matrix to be a BST (by tilting your head 45deg clockwise), the root being the last index of the top row.

Analysis + Comments:
Visualization of the matrix as an array was done similarly to how static 2D arrays in C++ are stored linearly (with indices being i*m+j). The inverse relation can be found as well, and binary search is performed on a "virtual" 1D array of indices 0 -> m*n-1 (the 1D array is not actually created, to save space), with the single index converted into the respective i,j coordinates by index/m and index%m. Time = O(log(m*n)), space = O(1). The more interesting approach is viewing this matrix as a BST with the top-right corner as the root. This was inferred from the way we want to search for an element (either by going to its left or
downwards, which when converted into a tree is like the left or right subtree).
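
The "virtual 1-D array" binary search, roughly:

#include <vector>
using namespace std;

bool searchMatrix(vector<vector<int>>& matrix, int target) {
    int n = matrix.size(), m = matrix[0].size();
    int lo = 0, hi = n * m - 1;                    // indices of the flattened array
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        int val = matrix[mid / m][mid % m];        // map the 1-D index back to (row, col)
        if (val == target) return true;
        if (val < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return false;
}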

________________________________________________________________________________
Problem 57.0
https://leetcode.com/problems/string-to-integer-atoi/description/
Topics: String
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1214311735/

Takeaways:
Get the integer value of a numeric char by subtracting '0'. The same can be done for letters by subtracting 'a' or 'A' respectively. Also, when solving a question with multiple cases like this, or a question which has many moving parts (which can be defined separately), using functions for each separate task makes the code neater and easier to debug.

Analysis + Comments:
This question was not medium-worthy, it was just pesky. My code looks ugly because I didn't even bother to divide it into functions; if I had, then debugging would have been WAYYY easier. Had to store the output number in an array first, just in case it became an invalid number.

________________________________________________________________________________

Problem 58.0
https://leetcode.com/problems/letter-combinations-of-a-phone-number/description/
Topics: Hash Table, String, Backtracking
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1214334771/

Takeaways:
Recursive brute force is the GOAT for finding "all possible" somethings. Here both approaches are valid: choosing a character and going ahead, and assuming the recursive call returns an answer and integrating it into a bigger answer before returning.

Analysis + Comments:
The "going ahead" approach involves taking a string and filling it wth the different combinations as
recursive calls are made. Here we are modifying a single string as we move forward. When we reach
the end (i == digits.size()-1), that string is pushed inside the answer vector. The other approach is
making the recursive call first, assuming it provides you with some small answer, and crossing
characters of current string with the outbut by appending the output string at the end of all possible
current characters and returning the resultant vector. The base case here would be when digits = "",
then we will push an empty string insde the vector and return that string. Some chanes have to be
made such that the input case of digits = "" is valid according to leetcode.
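
A sketch of the "choose and go ahead" backtracking version (the keypad table and helper name are mine):

#include <vector>
#include <string>
using namespace std;

void build(const string& digits, int i, string& cur,
           const vector<string>& pad, vector<string>& out) {
    if (i == (int)digits.size()) { out.push_back(cur); return; }   // one full combination built
    for (char c : pad[digits[i] - '2']) {       // letters mapped to this digit
        cur.push_back(c);                       // choose
        build(digits, i + 1, cur, pad, out);    // go ahead
        cur.pop_back();                         // un-choose (backtrack)
    }
}

vector<string> letterCombinations(string digits) {
    if (digits.empty()) return {};              // LeetCode expects an empty vector here
    vector<string> pad = {"abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"};
    vector<string> out;
    string cur;
    build(digits, 0, cur, pad, out);
    return out;
}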
________________________________________________________________________________

Problem 59.0
https://leetcode.com/problems/subarray-product-less-than-k/description/
Topics: Array, Sliding Window
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1215095239/

Takeaways:
Try to use mathematical / logical inferences as much as possible to reduce the number of operations. Here, number of subarrays ending at endIndex = (endIndex - startIndex + 1) was used (to account for the missed subarrays), alongside a sliding window over all valid start and end indices, for an O(n) time solution. If this inference was not known then the solution would have reverted back to O(n^2). This was only possible because of the constraint that all elements are positive. Using a while() to move the startIndex, instead of an if, is also important.

Analysis + Comments:
The regular O(n^2) approach is as expected, and just as expected we were getting TLE'd (barely). Looking at the constraints, we know all elements are positive integers, so for a given subarray "window" with a fixed startIndex, its product will only increase as we increment the endIndex. For each (start, end) index pair, we count the valid subarrays ending at end as end - start + 1. This tells us the possible subarrays in the window which include the element at the end index. We initialize our endIndex with the same value as startIndex. The loop runs while the valid condition holds (i.e. subarrayProd < k). When it doesn't, to decrease our product we move the start index ahead (and do subarrayProd /= nums[start], because that element is no longer part of our window) until the condition becomes valid again. Then the previous loop continues normally. Obviously, careful consideration has to be put into the order of these operations: first subarrayProduct is updated with the element at endIndex, then its validity is checked and startIndex is updated accordingly (another constraint here is that startIndex <= endIndex, else the window becomes invalid). This gives us either a valid window, or an empty window (which means that for that endIndex there are no valid subarrays and start > end). Then the formula for the number of subarrays between the start and end index is applied and we increment end. This approach is O(n) time and O(1) space. Another approach, which is O(nlogn) time, uses a prefix product array alongside binary search: for each end index we binary search for a start such that prefix[end]/prefix[start] < k, and then the same math inference can be applied. The only consideration is that the prefix product can overflow and we have
to use logs to keep the values in range.
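
The sliding window version, roughly (note the k <= 1 guard, since every element is at least 1):

#include <vector>
using namespace std;

int numSubarrayProductLessThanK(vector<int>& nums, int k) {
    if (k <= 1) return 0;                          // products are >= 1, so nothing can be < k
    int count = 0, left = 0;
    long long prod = 1;
    for (int right = 0; right < (int)nums.size(); ++right) {
        prod *= nums[right];                       // grow the window with the right pointer
        while (prod >= k) prod /= nums[left++];    // shrink from the left until valid again
        count += right - left + 1;                 // valid subarrays ending at right
    }
    return count;
}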

________________________________________________________________________________

Problem 60.0
https://leetcode.com/problems/maximal-square/description/
Topics: Array, Dynamic Programming, Matrix
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1215535314/

Takeaways:
The most important thing here was first figuring out some sort of brute force solution. Then, to optimize it, knowing what exactly to store in the DP array was important (the DP state). Here, since our input is a matrix, we used a dp matrix and, according to the problem statement, we stored the max possible side (this is the only information relevant to us). Space can be further optimized by keeping only 2 rows of the dp matrix at a time, because while filling in a cell we only need to check those two.

Analysis + Comments:
Brute force was going crazy, O(n^5) approximately. To optimize this, a dp matrix was used. dp[i][j] stores the size of the maximum square containing all 1s with its bottom-right cell at i,j of the input matrix. The base case of the dp was filling the top row and column, since we know the max size of a square there can either be 0 or 1, depending on whether the input has 0 or 1. After that, we traverse the input matrix. If we find a 0 at i,j then dp[i][j] becomes 0, because no such square is possible there. If we find a 1, we look at the dp cells (i-1, j-1), (i-1, j), (i, j-1), take the minimum of those 3, add 1 and store it at dp[i][j], because since we want a square, we are constrained by the smallest side. At the end, we find the
maximum value stored in this dp matrix and return its square (since we want area).
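
The DP described above, as a sketch (full n*m dp matrix; the 2-row space optimization is left out for clarity):

#include <vector>
#include <algorithm>
using namespace std;

int maximalSquare(vector<vector<char>>& matrix) {
    int n = matrix.size(), m = matrix[0].size(), bestSide = 0;
    vector<vector<int>> dp(n, vector<int>(m, 0));     // dp[i][j] = max side with bottom-right at (i, j)
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
            if (matrix[i][j] == '1') {
                if (i == 0 || j == 0) dp[i][j] = 1;   // first row / column: side is at most 1
                else dp[i][j] = 1 + min({dp[i-1][j-1], dp[i-1][j], dp[i][j-1]});
                bestSide = max(bestSide, dp[i][j]);
            }
    return bestSide * bestSide;                       // area
}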

________________________________________________________________________________

Problem 61.0
https://leetcode.com/problems/length-of-longest-subarray-with-at-most-k-frequency/description/
Topics: Array, Hash Table, Sliding Window
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1216072523/

Takeaways:
The typical sliding window template applies here; an unordered map for storing the frequencies is optimal, since the elements have no range constraints. Avoided using an extra variable for the subarray length by using j-i+1 instead. Also, using a hash map for storing important values along with a sliding window can be the key.

Analysis + Comments:
One thing to think about was whether to update j first and then satisfy the condition by moving i, or vice versa. My approach first restores the frequency <= k condition and then accounts for the frequency of j, but the opposite order is just as valid. The only thing which will change is the while conditions, which can be easily checked by dry runs. This solution is the most optimal at O(n) time, O(n)
space.
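
The window, roughly (I believe the LeetCode stub calls this maxSubarrayLength, so treat the name as an assumption):

#include <vector>
#include <unordered_map>
#include <algorithm>
using namespace std;

int maxSubarrayLength(vector<int>& nums, int k) {
    unordered_map<int, int> freq;                  // element -> count inside the window
    int best = 0, left = 0;
    for (int right = 0; right < (int)nums.size(); ++right) {
        ++freq[nums[right]];
        while (freq[nums[right]] > k)              // the newest element is the only possible violator
            --freq[nums[left++]];
        best = max(best, right - left + 1);
    }
    return best;
}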

________________________________________________________________________________

Problem 62.0
https://leetcode.com/problems/subarray-sum-equals-k/description/
Topics: Array, Hash Table, Prefix Sum
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1216191965/

Takeaways:
Sum of subarray [i, j] = prefixSum[j] - prefixSum[i-1]. Prefix sums are super helpful when we want to figure out sums between 2 points, and if stored in a hash map, their previous occurrences / counts of previous occurrences can be easily checked. Using this we can figure out the prefixSum[i-1] we need, which would be prefixSum[j] - goal. This expected prefix sum can be looked up in the map, and its count tells how many times it has previously occurred, and hence the number of possible subarrays ending at j.

Analysis + Comments:
Similar to #61 in some ways, but different in others. Like the above question, where we stored the frequency of each element in a map, here the frequency of each prefix sum is stored in a map. The difference is that above, a sliding-window-type approach was used, because we could predict the "movement" of the window based on the conditions (if the frequency of some element is greater than the limit, then move the left side of the window until it is within range, else you are free to move the right side until there is some problem). Here, we want the number of subarrays with sum = k. The above approach would only have been valid if only positive inputs were allowed. Here instead, we have to use prefix sums, which are stored in a map along with their frequencies. Once we reach an element and add it to the prefix sum, we subtract our goal from it and store that somewhere (expected). Now, we look for expected in our map; if it exists (with some frequency) then we add that frequency to our answer. This is because prefixSum[j] - prefixSum[i-1] gives the sum of the subarray with indices [i, j]. The base case is map[0] = 1, and we need to store/update the frequency of each prefix sum in the map as we go. Since we are storing the current prefix sum in a variable, we only need O(n) extra space for the hash map; O(n) time is the best
solution.
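
The prefix-sum frequency map in code, roughly:

#include <vector>
#include <unordered_map>
using namespace std;

int subarraySum(vector<int>& nums, int k) {
    unordered_map<long long, int> freq;     // prefix sum -> how many times it has occurred
    freq[0] = 1;                            // base case: the empty prefix
    long long prefix = 0;
    int count = 0;
    for (int x : nums) {
        prefix += x;
        auto it = freq.find(prefix - k);    // an earlier prefix of this value closes a sum-k subarray
        if (it != freq.end()) count += it->second;
        ++freq[prefix];
    }
    return count;
}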

________________________________________________________________________________

Problem 63.0
https://leetcode.com/problems/minimum-path-sum/description/
Topics: Array, Dynamic Programming, Matrix
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1216269260/

Takeaways:
Even though the problem seems daunting at first, drawing a diagram and coming up with a recursive brute force trivialized this problem. Also, when reasoning about the recursion, think of a mid-sized example for dry runs; this prevents dry runs from scratch and saves time. The best thing about this solution is that it's kind of intuitive, and almost all DP problems can be converted into some sort of grid problem like this.

Analysis + Comments:
Did all the approaches, starting with brute force. Here we return the minimum cost over both possible moves from an index i,j (the possible moves are to i+1,j (downwards) and i,j+1 (rightwards)), i.e. we return min(pathRight, pathDown) + current. The base cases: first, if we reach the last cell (i == n-1 && j == m-1) we just return the value at that cell; second, if the index is invalid (i >= n || j >= m) we return INT32_MAX so that it never gets selected when taking the min, which effectively makes those paths invalid. This approach is exponential (~O(2^(n+m))), which is bad. To improve on this we memoize the solutions and eliminate overlapping calls. Since each i,j pair has a unique min path sum associated with it, we save these values in a 2D matrix. This solution is ~O(n*m) time and O(n*m) space. The best solution is of course DP. Here, dp[i][j] indicates the minimum path sum to reach i,j in the matrix, hence the answer will be stored in dp[n-1][m-1]. Due to the movement constraints, we already know the values in the first row and column (running sums). Then we start iterating from i = 1 and j = 1. From here, we take the minimum of the value when coming from the cell above or from the cell to the left, min(dp[i-1][j], dp[i][j-1]), and then we
add the value at i, j to this.
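
The bottom-up grid DP, roughly:

#include <vector>
#include <algorithm>
using namespace std;

int minPathSum(vector<vector<int>>& grid) {
    int n = grid.size(), m = grid[0].size();
    vector<vector<int>> dp(n, vector<int>(m, 0));       // dp[i][j] = min cost to reach (i, j)
    dp[0][0] = grid[0][0];
    for (int j = 1; j < m; ++j) dp[0][j] = dp[0][j-1] + grid[0][j];   // first row: running sum
    for (int i = 1; i < n; ++i) dp[i][0] = dp[i-1][0] + grid[i][0];   // first column: running sum
    for (int i = 1; i < n; ++i)
        for (int j = 1; j < m; ++j)
            dp[i][j] = grid[i][j] + min(dp[i-1][j], dp[i][j-1]);      // best of "from above" / "from left"
    return dp[n-1][m-1];
}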

________________________________________________________________________________

Problem 64.0
https://leetcode.com/problems/longest-common-subsequence/description/
Topics: String, Dynamic Programming
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1217279518/

Takeaways:
Make sure to clear all dependencies for the DP first, then come up with the logic. Reverse engineering
the recursive + memoization approach can be super helpful in this.

Analysis + Comments:
Coming up with the initial recursive approach was kinda tricky, but the solution basically boiled down to 2 cases. First, if the characters of both strings at the respective indices match (text1[i1] == text2[i2]), then we recursively pass i1+1 and i2+1 and whatever answer we get, we add 1 to it and return (because the "current" characters match and hence the length of the LCS can be 1 greater than the length without considering them). If the characters do not match, then we get 3 subcases: in the first we exclude the character of the first string from LCS consideration, in the second we do the same for the second string, and in the last we exclude both, i.e. we pass (i1+1, i2), (i1, i2+1) and (i1+1, i2+1) respectively. However, the last subcase is redundant, because it is automatically covered by considering the first subcase and then the second. Hence we only need to return the maximum of the first two. Once this was figured out, it was easy to memoize, similar to the minimum path sum approach: a 2D dpMatrix was used because we know the answer depends on both i1 and i2, where dp[i1][i2] stores the LCS when we consider string 1 from i1 to the end and string 2 from i2 to the end. In the true DP approach, since it is bottom-up, dpMatrix[i][j] stores the value when we consider string1 from 0 to i1 and string2 from 0 to i2. Personally, I did not create a matrix of size (n+1)*(m+1) because my dpMatrix was based on indices rather than lengths; before retrieving the value at
each dp cell, I checked for a valid index, else I used 0 as the value.
________________________________________________________________________________
Problem 65.0
https://leetcode.com/problems/count-subarrays-where-max-element-appears-at-least-k-times/description/
Topics: Array, Sliding Window
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1217158963/

Takeaways:
STL for max element: *max_element(nums.begin(), nums.end()); for a C array, *max_element(arr, arr+n). When the regular sliding window template doesn't work, think of possible offshoots to apply. Ex: solving for the opposite condition and subtracting that from the count of all possible subarrays; or reversing the condition-satisfaction order, where we move j to satisfy the condition and move i normally (usually it's the opposite). Also, always look ahead (and behind) the pointers and check if you are missing some set of answers. max_element() returns an iterator to the max element, hence we dereference it to get the value.

Analysis + Comments:
We want to return the count of subarrays which have a frequency of the max element of at least k (>= k). Firstly, I tried applying the regular sliding window template on this. However it could not work out properly because the condition was causing "missed" subarray cases. Usually in sliding window a condition is fulfilled, then we move our right pointer until that condition becomes unfulfilled. After that we move our left pointer until the condition becomes fulfilled again. Here, when we start, the condition is already unfulfilled. Hence we have to swap these conditions. First, we move j ahead until the condition gets fulfilled. Now we have a window from i to j where the max element appears at least k times. But all subarrays extending this one to the right also fulfil the condition (plus elements after j can also be the max element). Hence we add all possible subarrays to its right (which includes the i,j subarray itself): ans += n-j. After this, to include possible subarrays from the left portion, we move the left pointer and repeat this while the condition remains fulfilled. As soon as the condition gets broken, we move the right pointer to look for other candidates. Another approach is to solve for the opposite condition (all subarrays where the max appears less than k times), which can be done via a regular sliding window, and subtract that from the total number of subarrays. Since we only need to keep track of the frequency of 1 element, these solutions are both O(1) space and O(n) time.
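A short C++ sketch of the first (ans += n-j) variant described above, assuming the usual long long LeetCode signature:

#include <bits/stdc++.h>
using namespace std;

long long countSubarrays(vector<int>& nums, int k) {
    int mx = *max_element(nums.begin(), nums.end());
    int n = nums.size(), cnt = 0, i = 0;
    long long ans = 0;
    for (int j = 0; j < n; j++) {
        if (nums[j] == mx) cnt++;
        // while [i..j] is valid, every extension of it to the right is valid too
        while (cnt >= k) {
            ans += n - j;
            if (nums[i] == mx) cnt--;   // shrink from the left and re-check
            i++;
        }
    }
    return ans;
}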

________________________________________________________________________________

Problem 66.0
https://leetcode.com/problems/edit-distance/description/
Topics: String, Dynamic Programming
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1217984497/

Takeaways:
Sometimes the exact memoization logic won't work for the dp solution. Here, instead of the dpMatrix working on indices, it works on the lengths of the strings to account for the empty string exception and for proper base cases. But after that, the dp approach becomes similar to the memoized approach. Logically, the answer will be at the last cell of the dpMatrix (because that would mean the min no. of edits required when word1 and word2 are entirely considered).

Analysis + Comments:
Kind of similar to LCS, the initial recursive approach for this was lowkey tricky as well. We have 2 main cases, one where word1[i1] == word2[i2], hence no edits need to be done since the character is the same. In this case we return the answer of the recursive call (i1+1, i2+1) as it is. In the other case, where the characters don't match, there will be 3 subcases which match their respective edit operations. When we add a character to word1, that logically means we have added a character which matches word2[i2]. Hence we need to match the remaining word1 to the remaining word2, so we pass (i1, i2+1). In the case we remove a character from word1, we don't know if the next possible character will match word2[i2]. Hence here, we pass (i1+1, i2). In the case we update, we know that word1[i1] will be updated to match word2[i2]. Hence we pass (i1+1, i2+1). We take the minimum of these 3, add 1 to account for the change, and return. The memoization logic is the same. In the dp solution, dpMatrix[i][j] represents the number of changes required when the length of word1 is i and of word2 is j. This is done to account for the possibility that empty strings are passed as word1 or word2. The base case is dpMatrix[0][0] = 0 where both strings are empty, hence no changes required. dpMatrix[0][j] = j, because we know when word1 is empty, we would need to make j additions to make it equal to word2. Similarly dpMatrix[i][0] = i: when word2 is empty, we would need to make i removals from word1 to make it equal.
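Sketch of the length-based tabulation in C++ (not the submitted code, just the recurrence written out):

#include <bits/stdc++.h>
using namespace std;

// dp[i][j] = min edits to turn the first i chars of word1 into the first j chars of word2
int minDistance(string word1, string word2) {
    int n = word1.size(), m = word2.size();
    vector<vector<int>> dp(n + 1, vector<int>(m + 1, 0));
    for (int j = 0; j <= m; j++) dp[0][j] = j;   // word1 empty: j insertions
    for (int i = 0; i <= n; i++) dp[i][0] = i;   // word2 empty: i removals
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            if (word1[i-1] == word2[j-1])
                dp[i][j] = dp[i-1][j-1];                  // no edit needed
            else
                dp[i][j] = 1 + min({dp[i-1][j-1],         // replace
                                    dp[i-1][j],           // remove from word1
                                    dp[i][j-1]});         // insert into word1
        }
    }
    return dp[n][m];
}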

________________________________________________________________________________

Problem 67.0
https://leetcode.com/problems/subarrays-with-k-different-integers/description/
Topics: Array, Hash Table, Sliding Window, Counting
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1217874245/

Takeaways:
For sliding window template solutions, it is extremely important to be aware of what you are actually adding to the "ans" variable at each interval, and when you should be adding it. Dry run some random middle case and check if the ans variable has some extra cases (stuff which should not be present in the ans) or is missing some cases (stuff which should be in the ans). Then think about how you can reach the actual answer using this variable (in this case, the soln was fun(k) - fun(k-1)).

Analysis + Comments:
The final boss of the sliding window week. The first thing which came to mind was the typical sliding window template, along with a frequency map for each distinct element. While moving the forward (j) pointer, we add to the frequency of that element. Once the condition breaks (map.size() becomes > k), we run the loop for i, where we decrement the frequency of nums[i]. If the frequency of that element becomes 0, we erase it from the map itself (because that element no longer exists in the subarray). Now, when we go outside this i loop, we know our map size is <= k. But what do we add here? In regular cases we add the number of subarrays for this i,j window (j-i+1). But if we add that, that means every such subarray is considered (sizes j-i+1 -> 1 [the element just at j]). Hence it is entirely possible that these subarrays do not abide by the "exactly k" condition of the problem. What we are exactly adding here is the number of subarrays with distinct elements <= k. Hence we need to do this for <= k and <= k-1 and subtract the two to get the solution. One more optimization is to use a frequency array instead of a map because the constraints on our elements allow it. We just have to subtract 1 from nums[i] to get the index for that element in the frequency array. Along with this we can keep track of a numDistinctElements variable which gets incremented if we change a frequency from 0 to 1 (while moving j) and decremented if we change a frequency from 1 to 0 (while moving i). Time and Space O(n).
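A compact C++ sketch of the atMost(k) - atMost(k-1) trick, using a frequency array plus the numDistinctElements counter (here the values are indexed directly since they are 1..n by the constraints):

#include <bits/stdc++.h>
using namespace std;

// counts subarrays with at most k distinct values
long long atMost(vector<int>& nums, int k) {
    vector<int> freq(nums.size() + 1, 0);
    long long ans = 0;
    int distinct = 0, i = 0;
    for (int j = 0; j < (int)nums.size(); j++) {
        if (freq[nums[j]]++ == 0) distinct++;
        while (distinct > k) {                  // shrink until the window is valid again
            if (--freq[nums[i]] == 0) distinct--;
            i++;
        }
        ans += j - i + 1;                       // all subarrays ending at j with <= k distinct
    }
    return ans;
}

int subarraysWithKDistinct(vector<int>& nums, int k) {
    return (int)(atMost(nums, k) - atMost(nums, k - 1));
}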

________________________________________________________________________________

Problem 68.0
https://leetcode.com/problems/count-subarrays-with-fixed-bounds/description/
Topics: Array, Queue, Sliding Window, Monotonic Queue
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1219247884/

Takeaways:
Identifying the "bad" elements (ones which make the subarray invalid and the good ones (which make
the subarray contribute to our answer) are important. In this particular variant of the sliding window,
konwing the indices of both good and bad elements was important. On resolve, we basically add
contributino of each valid subarray by subtracting min(minIx, maxIx) which tells which of the boundary
element most recent appearnce is "lefter", since that becomes the right boundary of our validness. left
boundary would be the first valid point. Before each operatoin, maxIx, minIx is updated accordingly and
add cnotribution by: min(maxIx, minIx) - leftPointer on each iteration. In case we find a bad element, we
reset max and min ix to -1 (since they do not "appear" yet) and move leftPointer to rightPointer as well.

Analysis + Comments:
My initial logic was very close to being correct. Here, we actually needed to know the indices of the most recent good elements (elements which fulfil the condition of minK and maxK) and the most recent bad element. A bad element is any element which makes it impossible for our subarray to fulfil its condition, i.e. any element smaller than minK or greater than maxK would make it such that all subarrays which contain that element do not contribute to the answer. While iterating (using i), we look to update minIndex, maxIndex and badIndex (initially all set to -1). We consider subarrays which end at i. Contributions can only be valid (>0) if badIndex appears before both minIndex and maxIndex, and the number of possible contributions would be the difference between whichever of the two appears earlier and badIndex. In case the contribution is negative, we disregard it.
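Rough C++ sketch of this index-tracking version (the minIx/maxIx/badIx names are mine):

#include <bits/stdc++.h>
using namespace std;

long long countSubarrays(vector<int>& nums, int minK, int maxK) {
    long long ans = 0;
    long long minIx = -1, maxIx = -1, badIx = -1;
    for (int i = 0; i < (int)nums.size(); i++) {
        if (nums[i] < minK || nums[i] > maxK) badIx = i;   // out of bounds: kills subarrays through it
        if (nums[i] == minK) minIx = i;
        if (nums[i] == maxK) maxIx = i;
        long long contribution = min(minIx, maxIx) - badIx; // valid left endpoints for subarrays ending at i
        if (contribution > 0) ans += contribution;           // disregard negative contributions
    }
    return ans;
}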

________________________________________________________________________________

Problem 69.0
https://leetcode.com/problems/harshad-number/description/
Topics: Math
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/contest/weekly-contest-391/submissions/detail/1218731165/
Takeaways:
Super straightforward question and a super straightforward solution.

Analysis + Comments:
No thoughts head empty

________________________________________________________________________________

Problem 70.0
https://leetcode.com/problems/water-bottles-ii/
Topics: Math, Simulation
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/contest/weekly-contest-391/submissions/detail/1218752600/

Takeaways:
Simulation questions like these are best solved once you get a proper understanding of what needs to happen. Understand the cases and construct proper diagrams to help with this (tables which show how the values update will help the most).

Analysis + Comments:
Took me 3-5 minutes to understand what the question wanted, which is long wrt contest demands. Clearly, we can't drink from water bottles if we don't have any filled with water and hence the outer loop was while (numBottles). Inside, we add the number of bottles we drank (which in this case is equal to numBottles) to our answer variable (bottlesDrunk). And then we add this to the number of emptyBottles as well (since all these bottles are now empty). Now, if our condition for exchange, i.e. emptyBottles >= numExchange, is satisfied, then we can trade numExchange empty bottles for 1 filled bottle, i.e. numBottles++, emptyBottles -= numExchange. And due to the rules, we have to do numExchange++. Given the small constraints, this solution is effectively O(1) time and space.
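The simulation in C++, roughly as described (signature as I remember the starter code, so treat it as approximate):

#include <bits/stdc++.h>
using namespace std;

int maxBottlesDrunk(int numBottles, int numExchange) {
    int bottlesDrunk = 0, emptyBottles = 0;
    while (numBottles > 0) {
        bottlesDrunk += numBottles;          // drink every full bottle we currently have
        emptyBottles += numBottles;
        numBottles = 0;
        if (emptyBottles >= numExchange) {   // trade numExchange empties for one full bottle
            emptyBottles -= numExchange;
            numExchange++;                   // the exchange rate goes up after each trade
            numBottles = 1;
        }
    }
    return bottlesDrunk;
}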

________________________________________________________________________________

Problem 71.0
https://leetcode.com/problems/count-alternating-subarrays/description/
Topics: Array, Dynamic Programming, Sliding Window, Two Pointers
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/contest/weekly-contest-391/submissions/detail/1218731165/

Takeaways:
Regular sliding window applies here since the contribution towards the number of subarrays by j can
properly add towards the result without any missed / extra cases, if we are doing the checks for
alternating elements properly. Once this fails, we reset our window to start from where j failed, (by i = j),
assuming it to be the start of a new alternating subarray.

Analysis + Comments:
Got a good solution [O(n) time and O(1) space] using a sliding window offshoot. Basically checking if the elements are alternating (by comparing nums[j] with nums[j-1], if valid). If the alternating condition is satisfied, then we add the current subarray contribution by j (i.e. j-i+1) and move the window ahead. Else we bring i to j. Also, before this comparison, we add 1 to our answer (when i and j are the same) because we know a subarray of size 1 will always fulfil the alternating condition. Then we iterate j ahead and resume with our normal checks.
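One-pointer C++ sketch; it folds the "size-1 subarray always counts" case into the j-i+1 term (signature approximate):

#include <bits/stdc++.h>
using namespace std;

long long countAlternatingSubarrays(vector<int>& nums) {
    long long ans = 0;
    int i = 0;                                   // start of the current alternating run
    for (int j = 0; j < (int)nums.size(); j++) {
        if (j > 0 && nums[j] == nums[j-1]) i = j; // run broken, start a new one here
        ans += j - i + 1;                         // alternating subarrays ending at j
    }
    return ans;
}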

________________________________________________________________________________

Problem 72.0
https://leetcode.com/problems/minimize-manhattan-distances/
Topics: Array, Math
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1220255958/

Takeaways:
Knowing the math behind something (here, manhattan distance = |xi - xj| + |yi - yj|) and manipulating it in such an order that the number of resulting operations decreases. Here, the terms were shifted around (accounting for all the cases), which reduced O(n^2) accesses to O(n). Knowing what contributes to what (relevant to the solution) is important for optimization.

Analysis + Comments:
The manhattan distance between a pair of points i, j can be calculated by |xi - xj| + |yi - yj|. From an array of n such coordinates, there are ~n^2 pairs and hence finding the pair with maximum manhattan distance naively takes O(n^2). |a| can be opened up as +-a. Manhattan distance => |xi - xj| + |yi - yj| => +-(xi - xj) +- (yi - yj), hence we have 4 possibilities. The max manhattan distance will be the maximum of these 4 possibilities. To improve our time complexity, the iterations are changed such that instead of iterating over i and j together (which causes O(n^2)), we separate i and j, which turns this into a couple of O(n) passes. Hence (for one possibility) (xi - xj) + (yi - yj) = (xi+yi) - (xj+yj). All 4 possibilities resolve to some arithmetic combination of the sum of the coordinates or their difference. Now, the max manhattan distance is given by the maximum of sumi - sumj and diffi - diffj. A number which is the result of a difference is maximum when the left term is maximum (i here) and the right term is minimum (j). Hence we just need to find the coordinates with max sum, min sum, max diff and min diff, which is an O(n) pass. Now we take max(maxSum - minSum, maxDiff - minDiff). We also need to remember which pair of coordinates from the vector gave us these values (this is important for the later part of the question). Doing this, we can get the pair of coordinates which contributes to the final maximum manhattan distance. Knowing the points themselves is important because, while minimizing the max manhattan distance, removing one of the two points which achieve the max manhattan distance is the only removal that can reduce it. Hence we take the minimum of the above function (which calculates the maxManhattanDistance) while SKIPPING these points one at a time, which results in another couple of O(n) passes. Hence the overall solution comes out as O(n) time and O(1) space.
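A C++ sketch of the sum/diff trick (the helper name and its return shape are my own; the idea is just "find the extreme pair, then recompute while skipping each of its two endpoints"):

#include <bits/stdc++.h>
using namespace std;

// Returns {maxDistance, idxA, idxB} over all pairs, ignoring index `skip` (-1 = ignore nothing),
// using max/min of (x+y) and (x-y).
array<int,3> maxManhattan(vector<vector<int>>& pts, int skip) {
    int maxS = INT_MIN, minS = INT_MAX, maxD = INT_MIN, minD = INT_MAX;
    int iMaxS = -1, iMinS = -1, iMaxD = -1, iMinD = -1;
    for (int i = 0; i < (int)pts.size(); i++) {
        if (i == skip) continue;
        int s = pts[i][0] + pts[i][1], d = pts[i][0] - pts[i][1];
        if (s > maxS) { maxS = s; iMaxS = i; }
        if (s < minS) { minS = s; iMinS = i; }
        if (d > maxD) { maxD = d; iMaxD = i; }
        if (d < minD) { minD = d; iMinD = i; }
    }
    if (maxS - minS >= maxD - minD) return {maxS - minS, iMaxS, iMinS};
    return {maxD - minD, iMaxD, iMinD};
}

int minimumDistance(vector<vector<int>>& points) {
    auto best = maxManhattan(points, -1);
    // removing a point only helps if it is one of the two endpoints of the current max pair
    return min(maxManhattan(points, best[1])[0],
               maxManhattan(points, best[2])[0]);
}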

________________________________________________________________________________
Problem 73.0
https://leetcode.com/problems/shortest-subarray-with-or-at-least-k-ii/description/
Topics: Array, Sliding Window, Bit Manipulation
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1224682629/

Takeaways:
In case of OR operations, a bitArray can be used where we keep track of all bits we have encountered (in the window). When adding elements to our window, we increment the corresponding bits and when removing, we decrement the corresponding bits. We actually count the number of set bits at each position instead of taking the actual OR, and when we want to convert our bitArray into its valid integer number, we treat it as regular binary where all non-zeros are 1s. Also, note the use of modular functions which can do multiple things based on a parameter; for example here, the function which increments the bit counts can also be used to decrement them. For decreasing the window, check for i <= j (checking while reducing) vs i < j (checking after reducing).

Analysis + Comments:
Since we know that (at least for positive numbers here) OR will only increase, i.e. it is monotonic and will
always increase if the size of the subarray increases. Hence we can apply some sort of sliding window
here, where we increase the j (right part of window) and while our condition is satisfied (OR in window
>= k) we update ans and increment i to decrease the size of our window. The problem lies here, usually
when decreasing the size of a window we have to perform an operation (like subtraction) so that our
window reflects the change where the element at previous i is excluded. There is NO inverse operation
of OR (like - for + and / for *). Hence to fix this, we have to maintain a bitArray, where we count the
number of bits at each location. When a new element is included in window, we increment all the bits
corresponding to that number (ex if original bitArray = 0 1 1 0 and new element is 3 then updated
bitArray = 0 1 2 1), similarly when the number leaves the window, we decrement its corresponding bits
from the bitArray. Another function can be used which converts bitArray into its corresponding integer
number. Here it will assume all nonzero numbers to be 1 (since other values won't make sense in
binary and are just a byproduct of the nature of us keeping "COUNT" of all bits which we have
encountered). Time comp: O(n), space comp: O(1).
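Sketch of the bit-count window in C++ (the update lambda doubles as increment/decrement via its second parameter, as mentioned in the takeaway; signature approximate):

#include <bits/stdc++.h>
using namespace std;

int minimumSubarrayLength(vector<int>& nums, int k) {
    int n = nums.size(), ans = INT_MAX;
    vector<int> bits(32, 0);                 // count of each bit inside the window

    auto update = [&](int num, int add) {    // add = +1 on entering, -1 on leaving the window
        for (int b = 0; b < 32; b++)
            if ((num >> b) & 1) bits[b] += add;
    };
    auto windowOr = [&]() {                  // treat every non-zero count as a set bit
        long long val = 0;
        for (int b = 0; b < 32; b++)
            if (bits[b] > 0) val |= (1LL << b);
        return val;
    };

    for (int i = 0, j = 0; j < n; j++) {
        update(nums[j], +1);
        while (i <= j && windowOr() >= k) {  // condition holds: record and shrink
            ans = min(ans, j - i + 1);
            update(nums[i], -1);
            i++;
        }
    }
    return ans == INT_MAX ? -1 : ans;
}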

________________________________________________________________________________

Problem 74.0
https://leetcode.com/problems/shortest-subarray-with-or-at-least-k-i/description/
Topics: Array, Sliding Window, Bit Manipulation
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1219376497/

Takeaways:
OR increases monotonically, just like prefix sum when all elements are positive.

Analysis + Comments:
A regular brute-force application of the above approach over all subarrays in O(n^2), which is possible due to the constraints.

________________________________________________________________________________

Problem 75.0
https://leetcode.com/problems/length-of-last-word/description/
Topics: String
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1219897802/

Takeaways:
Don't overcomplicate solutions and in some cases (when required) try traversing from the end instead.

Analysis + Comments:
Extremely easy question: the normal approach is to start from i = 0 and traverse, counting the length of each word and updating it as new words are encountered. Another approach which is slightly faster is to do the same thing but from the end of the string. This saves us extra iterations. Also this was my first LC AC done using C#, so I was goofing around in general.

________________________________________________________________________________

Problem 76.0
https://leetcode.com/problems/minimum-levels-to-gain-more-points/description/
Topics: Array, Prefix Sum
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1221278944/

Takeaways:
Prefix / suffix sum arrays can usually be replaced by some sort of variable or a mathematical calculation: e.g. a prefixSum array can be replaced by a running variable. In the same loop, if we want to know the suffixSum (for the part ahead of the current index) and we are aware of the totalSum, we can just do: totalSum - prefixSum.

Analysis + Comments:
The brute force method is to check for every index, assuming D played till that index and the remaining levels were played by B; at the first index where D > B we return i+1 (because we want the number of levels). This approach would be O(n^2) time however. Another logic which came to me was to use a suffixSum array. This would let us know that if we are at i, then the remaining array sum is suffix[i+1] (O(n) time but O(n) space). But if we think about it more, D and B are playing the same game, so together they collect every point in it. Points(D) + Points(B) = Total points. Hence once we have the totalPoints which can be acquired (if one person plays the complete game) and the points of D (which can be stored in a single variable like a running sum), we know the points B would have at that stage (total - D). Hence after every step of D, we can compare if D > total - D. If this is true we return i+1, and after the loop is over, we return -1 (no such case is possible). Another thing is that this loop will run for i <= n-2 because both players should play AT LEAST 1 level.
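The running-sum comparison in C++ (a sketch; D's points sit in one variable and B's are derived as total - D; function name as per the LeetCode starter as I remember it):

#include <bits/stdc++.h>
using namespace std;

int minimumLevels(vector<int>& possible) {
    int n = possible.size();
    long long total = 0;
    for (int p : possible) total += (p == 1 ? 1 : -1);   // +1 for clearable levels, -1 otherwise

    long long pointsD = 0;
    for (int i = 0; i <= n - 2; i++) {                   // both players must play at least one level
        pointsD += (possible[i] == 1 ? 1 : -1);
        if (pointsD > total - pointsD) return i + 1;     // number of levels, not the index
    }
    return -1;
}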

________________________________________________________________________________

Problem 77.0
https://leetcode.com/problems/isomorphic-strings/description/
Topics: String, Hash Table
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1221232579/

Takeaways:
Read the small notes given at the end of question carefully, they can contain important edge cases.

Analysis + Comments:
If two different characters of s were allowed to map to the same character of t, then my initial approach alone would have worked: a single 256-size array (for character indices 0-255), where the indices correspond to the ASCII values of characters in string s and we store the ASCII of the first character of string t which appears opposite them. The array is initialized with -1 to let us know when a character is appearing for the first time. However, using this we can only check whether the s -> t mapping is consistent. We need to do the same for the t -> s mapping, and if either of the two is broken we return false.

________________________________________________________________________________

Problem 78.0
https://leetcode.com/problems/maximum-points-you-can-obtain-from-cards/description/
Topics: Array, Sliding Window, Prefix Sum
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1223252879/

Takeaways:
Sliding window doesn't always have to be of a variable size; this question is an excellent example of a constant-size sliding window. Here, the parts of the array which are relevant to us are the parts which do NOT belong to the sliding window itself. Hence inverted logic like this can also apply.
Analysis + Comments:
Brute force would be either recursive, where we assume the max score comes from either side, or we would calculate all possible splits of the k picks between the left and right ends. The optimal solution here uses a constant-size sliding window (of size n-k), which moves from left to right. The area which is not covered by the window is the set of cards we pick. This is extremely intuitive if you can visualize it. Initially sumLeft is zero (since the window is completely to the left) and sumRight is the sum of the last k cards. As we slide the window, sumLeft increases and sumRight decreases. At each step we update the answer if valid (ans = max(ans, sumLeft+sumRight)).
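The same arithmetic written out in C++ (the n-k window is implicit; we just slide the split between the left prefix and the right suffix of the picked cards):

#include <bits/stdc++.h>
using namespace std;

int maxScore(vector<int>& cardPoints, int k) {
    int n = cardPoints.size();
    // start by taking all k cards from the right
    int sumLeft = 0;
    int sumRight = accumulate(cardPoints.end() - k, cardPoints.end(), 0);
    int ans = sumLeft + sumRight;
    for (int i = 0; i < k; i++) {
        sumLeft += cardPoints[i];            // take one more card from the left...
        sumRight -= cardPoints[n - k + i];   // ...and give one back on the right
        ans = max(ans, sumLeft + sumRight);
    }
    return ans;
}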

________________________________________________________________________________

Problem 79.0
https://leetcode.com/problems/minimum-size-subarray-sum/description/
Topics: Array, Binary Search, Sliding Window, Prefix Sum
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1221964031/

Takeaways:
Another goated application of sliding window and Binary Search on Answer; these two are only applicable here because our constraint (elements are always positive) makes it such that the subarraySum monotonically increases with its size. This simple approach cannot work (this straightforwardly) when negative numbers are introduced, which disrupts the monotonic nature (EX: #80).

Analysis + Comments:
Brute force here is applicable like always (for subarrays, ~O(n^2)). Now, for optimizations, we can first reduce it to O(nlogn) by using binary search on answers. Since we know the minimumSubarrayLength will lie between 1 and n (and 1 -> n are natural numbers, hence sorted), we can use these two as our starting low and high. Then we assume mid to be the length of the shortest subarray and iterate through our original array, checking whether a subarray of that size (which satisfies our sum condition) exists or not. If yes, then we save mid as a candidate answer and do: low = low, high = mid-1 (because we want the smallest possible ans). Else we try searching for a larger subarray by low = mid+1, high = high. BSoA is possible here because the constraints state that all elements are positive, hence the sum of a subarray can only INCREASE if we increase its length. After this, the best solution would be a regular sliding window, where we shrink the window (taking ans = min(ans, j-i+1) first) while our condition is satisfied. When it stops being satisfied, we start expanding it again.
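The sliding-window version in C++ (the BSoA variant is skipped here):

#include <bits/stdc++.h>
using namespace std;

int minSubArrayLen(int target, vector<int>& nums) {
    int ans = INT_MAX;
    long long sum = 0;
    for (int i = 0, j = 0; j < (int)nums.size(); j++) {
        sum += nums[j];
        while (sum >= target) {           // window valid: record it, then shrink from the left
            ans = min(ans, j - i + 1);
            sum -= nums[i++];
        }
    }
    return ans == INT_MAX ? 0 : ans;      // 0 if no such subarray exists
}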

________________________________________________________________________________

Problem 80.0
https://leetcode.com/problems/shortest-subarray-with-sum-at-least-k/description/
Topics: Array, Binary Search, Sliding Window, Prefix Sum, Queue, Monotonic Queue, Heap (Priority
Queue)
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1222266721/

Takeaways:
A deque is useful when you want to maintain data in some sort of monotonic manner; need more practice with this. Also: the 2-pointer sliding window only works when there is a monotonic relation. Window and subarray are analogous because they are both contiguous. Understood why a pure sliding window won't work. But what next? Turn the problem into a math model: at the current index we have pfx, and at some previous index i we had pfxi, so subarray sum = pfx - pfxi >= k. To make the left side as large as possible, we want the minimum pfxi: we can use a min pq with {pfx, i}, and we can keep popping the minimum while the condition is satisfied; else we move forward. The main thing here is that once an old pfx is used, there is no need to use it again, because even if it satisfies the condition later, the resulting length will be greater. This pq can be replaced with a monotonic deque as well (monotonically increasing), simulating a similar process.

Analysis + Comments:
The brute force solution here is applicable just like in the above ques (#79), but when we try to optimize it using the same techniques, we are unable to do so. This is because there are no constraints on the numbers anymore and they can be negative as well. Due to this, we can't normally slide our window, because we are not sure if increasing its size will increase the subarraySum (if the next element is positive then the sum will increase and if the next number is negative then the sum will decrease) and the same applies to decreasing the window size; this is also the reason why binary search on answers won't work here. The prefix sum is not monotonic. However, if there was a way to make it monotonic, we could cook something up. Using a deque<pair<int, long long>> storing {index, prefixSum}, we push elements from the back in such a manner that the deque remains monotonic (in an ascending manner). To make this possible, we compare the current prefixSum with the prefixSum stored at the last element: if it is greater, we push_back normally, else we pop_back the last element until the deque gets empty or the condition gets satisfied. The reason we store the index along with the prefixSum is that, due to the popping, we need a way to calculate the subarrayLength. We know that subarraySum[i, j] = prefixSum[j] - prefixSum[i-1]. Using this, we check if the subarraySum satisfies the condition ( !deQueue.empty() && ( prefixSum - (deQueue.front()).second >= k ) ), then do something analogous to ans = min(ans, j-i+1) to store the new answer accordingly. After this, we pop elements from the front while this condition remains satisfied; this is possible here because the way in which we maintain our deque makes sure that it is monotonic, so once an old prefix has given us its best (shortest) subarray, it is never needed again. Soln: O(n) time and O(n) space.
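Monotonic-deque sketch in C++ matching the description (the "subarray starts at index 0" case is handled with a separate check instead of storing a dummy prefix):

#include <bits/stdc++.h>
using namespace std;

int shortestSubarray(vector<int>& nums, int k) {
    int n = nums.size(), ans = INT_MAX;
    deque<pair<int, long long>> dq;      // {index, prefix sum up to and including index}
    long long prefix = 0;
    for (int j = 0; j < n; j++) {
        prefix += nums[j];
        if (prefix >= k) ans = min(ans, j + 1);                  // subarray [0..j]
        // pop old prefixes from the front while they give a valid subarray ending at j
        while (!dq.empty() && prefix - dq.front().second >= k) {
            ans = min(ans, j - dq.front().first);
            dq.pop_front();                                       // this prefix can never do better again
        }
        // keep prefix sums increasing: a larger-or-equal older prefix is never useful
        while (!dq.empty() && dq.back().second >= prefix) dq.pop_back();
        dq.push_back({j, prefix});
    }
    return ans == INT_MAX ? -1 : ans;
}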

________________________________________________________________________________

Problem 81.0
https://leetcode.com/problems/sliding-window-maximum/description/
Topics: Array, Queue, Sliding Window, Heap (Priority Queue), Monotonic Queue
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1458154349/

Takeaways:
Again, utilizing a deque for forcing a monotonic data structure (of the descending nature this time). The reason for utilizing a monotonic DS is the observation that any element which is smaller than and to the left of some other element will NEVER be the answer for a window. Hence the deque we maintain is desc (non-asc) to get the greatest element of a window in O(1). The important thing is to pop the greatest element once its validity expires (i.e. it moves out of the window; to do this we need to keep track of the indices of the elements as well). RE: easily got the solution, the window was obvious and at each index we need the max. Now, if a value comes into our window which is greater than some x previous values, those x values are kinda useless since they are both smaller and will expire sooner. Hence we maintain a kind of monotonic desc structure in our dq, and the ans is at dq.front().

Analysis + Comments:
Common sense brute force, where we find the maximum of each window, for all windows. This approach takes ~O(k*(n-k)) time. One observation that can be made is that the answer for a window will always be its maximum element, hence if we use a priority queue to keep track of the elements inside a window, we can get the largest element in O(1). Adding and removing elements takes O(logk) [something similar can be done using a BST]. Total complexity would be ~O((n-k)*log(k)) time and O(k) space. For the most optimal approach, another observation needs to be made. In the window, if there is a smaller element to the LEFT of a larger element, the smaller element will never be useful to us. This is because: 1) the "lifespan" of that element is shorter than that of the elements on its right. 2) During its lifespan, the greater element on its right will always be the answer instead. Hence, the best case for us would be to maintain a monotonic data structure which is descending (values should always go down). So, similar to #80, we will use a deque where the elements are always kept in a desc. manner. One extra case which needs to be checked is that the element at the front of the deque should get popped if it goes out of the window. The answer for a window will always be the element at the front (since our deque is desc, the greatest element will be at the first posn).
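Deque-of-indices sketch in C++:

#include <bits/stdc++.h>
using namespace std;

vector<int> maxSlidingWindow(vector<int>& nums, int k) {
    deque<int> dq;              // indices, values decreasing from front to back
    vector<int> ans;
    for (int j = 0; j < (int)nums.size(); j++) {
        if (!dq.empty() && dq.front() <= j - k) dq.pop_front();           // front expired (left the window)
        while (!dq.empty() && nums[dq.back()] <= nums[j]) dq.pop_back();  // smaller and older: never the answer
        dq.push_back(j);
        if (j >= k - 1) ans.push_back(nums[dq.front()]);                  // window max is at the front
    }
    return ans;
}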

________________________________________________________________________________

Problem 82.0
https://leetcode.com/problems/word-search/description/
Topics: Array, String, Backtracking, Matrix
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1222808582/

Takeaways:
Break the algorithm down into multiple parts; we know that we would need to start a search from every element of the matrix, so a row-wise traversal is a must. Also, a DFS-type search to "dig deeper" into the matrix is important because we are searching for the word in multiple directions and the answer can come from any one of them (we are not sure which). To prevent infinite recursive searches, we use a visitedMatrix to make sure we only visit unvisited cells, mark the cells we visit as such, and re-mark them as unvisited upon going back: backtracking. If we are backtracking, i.e. reverting something to its original state, we can actually use the inputMatrix itself as a "marker" of sorts, since it will be reverted to its original state after every possible search. Extremely important submission with good syntax and proper usage of cpp OR statements for pruning: short-circuit evaluation.

Analysis + Comments:
One approach is to use a recursive function, which does a sort of DFS search, matching the current cell with the current element of the string. This function is called iteratively as we traverse the matrix (row-wise). The function returns false if matrix[i][j] does not match word[index]. Also, to make sure our searches are within valid indices, we need to return false if i < 0 or i >= n or j < 0 or j >= m. To prevent revisiting cells (graph logic) we also keep track of a visitedMatrix: on visiting we mark visited[i][j] as true and upon going back we mark it false again, since we are no longer on it. If the cell we are visiting is already visited, we also go back. We only return true once the word we are searching for gets exhausted. For each cell, we recursively call the dfsSearch function on the cells to its top, left, right and bottom; the answer is the OR of all 4 recursive calls. Now, for some CRAZY optimizations. Firstly, since after marking something as visited we are also unmarking it, if instead of using a visitedMatrix we replace the character on the matrix with a '*' and revert it to its original value upon going back, then we save space (minor) but also save a lot of time (MAJOR). This is because we are not accessing a separate matrix at every step (even if the access time is O(1), we end up saving a lot of time due to the sheer number of accesses). Search pruning: not attempting a search when we know the result is not favourable/necessary for us. Though the time comp O(m*n*4^l) looks like it exceeds our constraints, that is not an accurate representation: after the first move, each cell only has 3 unvisited neighbours at most, and the branching shrinks further along the way, so the actual time taken is a lot less than this.
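Sketch of the backtracking DFS with the in-place '*' marking and short-circuited OR, in C++:

#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
    bool dfs(vector<vector<char>>& board, string& word, int i, int j, int idx) {
        if (idx == (int)word.size()) return true;                 // whole word matched
        if (i < 0 || i >= (int)board.size() ||
            j < 0 || j >= (int)board[0].size() ||
            board[i][j] != word[idx]) return false;
        char saved = board[i][j];
        board[i][j] = '*';                                        // mark visited in-place
        bool found = dfs(board, word, i+1, j, idx+1) ||           // || short-circuits: stops at the first hit
                     dfs(board, word, i-1, j, idx+1) ||
                     dfs(board, word, i, j+1, idx+1) ||
                     dfs(board, word, i, j-1, idx+1);
        board[i][j] = saved;                                      // backtrack: restore the cell
        return found;
    }

    bool exist(vector<vector<char>>& board, string word) {
        for (int i = 0; i < (int)board.size(); i++)
            for (int j = 0; j < (int)board[0].size(); j++)
                if (dfs(board, word, i, j, 0)) return true;
        return false;
    }
};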

________________________________________________________________________________

Problem 83.0
https://leetcode.com/problems/maximum-nesting-depth-of-the-parentheses/description/
Topics: String, Stack
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1223231869/

Takeaways:
Replacing data structures with variables to improve both time (since calling DS functions adds extra time) and space (obviously) in cases where some input constraint allows it (here the input is guaranteed to be a VPS).

Analysis + Comments:
A stack can be used and opening brackets can be pushed in. Once we encounter a closing bracket, we can take note of the size of the stack (ans = max(ans, size)) and pop the stack. Since we are guaranteed that the string is a VPS (valid parenthesis string), we don't even need a stack to keep count of the brackets. We can just increment count when we encounter an opening bracket, take ans = max(ans, count), and decrement count upon encountering a closing bracket. Best solution = O(n) time and O(1) space.

________________________________________________________________________________

Problem 84.0
https://leetcode.com/problems/make-the-string-great/description/
Topics: String, Stack
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1223710677/

Takeaways:
The difference between the lowercase and uppercase ASCII codes for the same letter is 32. When you want to check elements in a "browser-history" type manner (most recent first, least recent last), store the values in a STACK.

Analysis + Comments:
Initial logic was using pointers, comparing the characters at 2 places. If we compare a character with the next character and our condition is met (ASCII diff is 32), we ignore both and do not append them to our answer string. Once this is done, we move our pointer (on the input string) 1 step back (if possible). This is because it is entirely possible that the removal of the middle 2 characters causes the characters on the two sides to clash as well, so we need to check that case. Since we always want to check the elements in the opposite order to which they were added (check the most recently added first, etc.), a STACK will be used. If stack.top() and currentElement satisfy the condition, then the stack is popped; else we push the currentElement. At the end, we have to pop all the elements and append them to a string, and since the STACK is a LIFO structure, we also have to reverse the string to get our correct final value. O(n) time and O(n) space for the stack.
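Tiny C++ sketch; using the answer string itself as the stack (push_back/pop_back) avoids the final reversal, a small variation on what I described:

#include <bits/stdc++.h>
using namespace std;

string makeGood(string s) {
    string st;                                   // acts as the stack and the answer
    for (char c : s) {
        // same letter, opposite case <=> ASCII difference of exactly 32
        if (!st.empty() && abs(st.back() - c) == 32)
            st.pop_back();                       // the pair cancels out
        else
            st.push_back(c);
    }
    return st;
}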

________________________________________________________________________________

Problem 85.0
https://leetcode.com/problems/median-of-two-sorted-arrays/description/
Topics: Array, Binary Search, Divide and Conquer
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1223914500/

Takeaways:
Knowing what to run Binary Search on Answer over is very important. Here, 4 quantities depended on 1, and hence doing binary search on that one gave the most optimal result. We divide our problem into different components and find solutions for each component separately. Drawing a diagram and highlighting the parts of the answer which are relevant to us can help with visual interpretation.

Analysis + Comments:
The biggest brute force would be to merge these two arrays, sort the result and then find the median, which is O( (n+m)*log(n+m) ) time and O(n+m) space. But since these arrays are already sorted, we can merge them in O(n+m) time + O(n+m) space and then find their median. Another optimization over this, which takes even less space, is to "assume" they are merged and iterate over this imaginary merged array using an index iMerged. Now, since the median is computed differently when the number of elements is odd/even, we have to maintain 2 mids: mid1 and mid2. This imaginary array is traversed using the merge-two-sorted-arrays logic only (including the straggler cleanup at the end). One extra thing here is that since we want mid2 to be one step behind mid1, we do mid2 = mid1 at the beginning of every loop iteration; when the iteration is over, mid1 gets updated to a new value and mid2 retains the previous one. This solution takes no extra space. The best solution here would be BSoA. But on what answer? We do binary search on how many elements from nums1 will be to the left of the median (inclusive) of the "imaginary" merged array = mid1. Since we already know the total number of elements that should be on the left of the "merged" array = (n1+n2+1)/2 (the plus 1 makes this valid for both odd and even totals), we can figure out the number of elements from nums2 which will be on the left = mid2 = left - mid1. From this, we know the elements from both arrays which sit on the left and right of this "division". And if left1 <= right2 and left2 <= right1, we know our division is correct. Else we have to adjust the number of elements (mid1) according to BS logic: if left1 > right2, then the number of elements taken from the first array on the left is greater than it should be, so we do high = mid1-1; if left2 > right1, then that number is lesser, so we do low = mid1+1. We are not checking left1 <= right1 (or left2 <= right2) because these 2 arrays are already sorted internally. The ans in the case of an odd total is max(left1, left2) (since these 2 elements are closest to the division from the left and the median will be one of them) and in the case of an EVEN total, ans = (max(left1, left2) + min(right1, right2))/2 -> we also want the element closest to the division from the right. To make this work, we have to make sure that nums1 is the smaller array, hence we add a condition: if n1 > n2, we return the answer of the same function but with the vectors switched around in the parameters, so it abides by our rules. Time comp = O(log n1), where n1 is the smaller length.
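The partition binary search written out in C++ (a sketch; INT_MIN/INT_MAX stand in for "nothing on that side of the cut"):

#include <bits/stdc++.h>
using namespace std;

double findMedianSortedArrays(vector<int>& nums1, vector<int>& nums2) {
    if (nums1.size() > nums2.size()) return findMedianSortedArrays(nums2, nums1);
    int n1 = nums1.size(), n2 = nums2.size();
    int totalLeft = (n1 + n2 + 1) / 2;
    int low = 0, high = n1;
    while (low <= high) {
        int mid1 = (low + high) / 2;        // elements taken from nums1 on the left
        int mid2 = totalLeft - mid1;        // elements taken from nums2 on the left
        int left1  = (mid1 > 0)  ? nums1[mid1 - 1] : INT_MIN;
        int left2  = (mid2 > 0)  ? nums2[mid2 - 1] : INT_MIN;
        int right1 = (mid1 < n1) ? nums1[mid1]     : INT_MAX;
        int right2 = (mid2 < n2) ? nums2[mid2]     : INT_MAX;
        if (left1 <= right2 && left2 <= right1) {             // division is correct
            if ((n1 + n2) % 2 == 1) return max(left1, left2);
            return (max(left1, left2) + min(right1, right2)) / 2.0;
        }
        if (left1 > right2) high = mid1 - 1;   // took too many from nums1
        else                low  = mid1 + 1;   // took too few from nums1
    }
    return 0.0;   // unreachable for valid input
}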

________________________________________________________________________________

Problem 86.0
https://leetcode.com/problems/permutations/description/
Topics: Array, Backtracking
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1223914500/

Takeaways:
In the case of generating all possibilities (using recursion), assume you already have the answer from the next step. Now you need to figure out what you need to do to get the correct answer for this step.

Analysis + Comments:
Since we need to find all possibilities of something, recursion is very helpful here. We (for index i) assume we already have all the permutations (smallAns) for the array from i+1 to the end. Now, to form the other possible permutations, we take the current element and insert it into all possible positions in all possible strings/vectors of our smallAns. Since this approach is recursive, it is slightly slower than the iterative approach, where we call next_permutation in a loop n! times and push every answer into our output vector.

________________________________________________________________________________

Problem 87.0
https://leetcode.com/problems/minimum-remove-to-make-valid-parentheses/description/
Topics: String, Stack
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1224791644/

Takeaways:
Firstly, stacks are intuitive for parenthesis questions in some way because of their callback nature. Secondly, to append a character, string.push_back(c) and s += c are both amortized O(1); the thing to avoid is s = s + c, which copies the whole string and is O(n). Lastly, if you want to reverse a string with parentheses, you have to flip every parenthesis { ( -> ), ) -> ( } after the string reversal.

Analysis + Comments:
The first and most intuitive approach was the stack; due to its callback nature, the stack is inherently linked with parenthesis questions. And we only care about opening brackets in the stack, hence we push their indices. All opening brackets (indices) are pushed onto the stack and when we encounter a closing bracket, we check if there are any opening brackets present in the stack. If yes, then we pop one (since it has now been matched). Else, we store the index of the closing bracket (in a set/array badIndex) since there was no matching opening bracket for it. When the string has been traversed, if we have any remaining opening brackets in our stack, we push them into badIndex as well. Now, while building our answer string (which can be built while traversing left to right), if the currentIndex is present in badIndex we skip it, else we append the character normally. Also for strings, prefer string.push_back(c) or s += c to add characters, since those are amortized O(1); avoid s = s + c, which is O(n). This approach is O(n) time and O(n) space. Here we can note that the stack was only useful for "counting" the number of opening brackets present, so we can replace it with a variable numOpen. Also for this approach, to save space, we build our answer as we traverse the input string: upon encountering an opening bracket the counter increments, and for a closing bracket it decrements (if it is greater than 0), else that closing bracket is ignored. When this traversal is over, we have successfully ignored all invalid closing brackets. Now, to prune the extra opening brackets, we traverse ans (and build another ans) and count down the valid opening brackets (validOpen = totalOpen - numOpen). Once all valid open brackets have been used (validOpen == 0), we start ignoring the remaining ones and then we return this final string.

________________________________________________________________________________

Problem 88.0
https://leetcode.com/problems/delete-node-in-a-linked-list/description/
Topics: Linked List
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1225629028/

Takeaways:
A node can be deleted when just its address is given (if swapping values is allowed) in O(1) time. Instead of deleting the node, we overwrite its data with the next node's. Now, since we have 2 nodes with the value of nextNode, we can delete the actual nextNode by regular LL deletion.

Analysis + Comments:
Obviously, to properly *delete* a node from a LL (removing that specific address) we need the node before it as well: to reconnect the previous node to the next node and maintain the integrity of the LL. But the ques clearly states that we do not need to "actually" delete the node, and the value of each node is unique. Hence one way of solving this is by shifting values: copy the value from one node ahead into the current node all the way down the list, and then delete the tail node (since there would be 2 nodes with the value of the tail). This way, we have overwritten the given node and shifted the rest of the list "1 node behind". This solution is O(n). An even faster solution which comes from this logic: since we know that the given node is not the tail node, we can copy data from just the single node ahead (hence creating 2 copies of the next node; we are free to overwrite the data of the current node since we want to "delete" it anyway). Then we can do regular LL deletion of the nextNode. This takes O(1) time.
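The O(1) version in C++ (ListNode defined here just to keep the sketch self-contained; on LeetCode it comes from the starter code):

#include <bits/stdc++.h>
using namespace std;

struct ListNode {
    int val;
    ListNode* next;
    ListNode(int x) : val(x), next(nullptr) {}
};

// Copy the next node's value into this one, then unlink and free the next node.
void deleteNode(ListNode* node) {
    ListNode* toRemove = node->next;   // guaranteed to exist: node is never the tail
    node->val = toRemove->val;
    node->next = toRemove->next;
    delete toRemove;
}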

________________________________________________________________________________

Problem 89.0
https://leetcode.com/problems/valid-parenthesis-string/description/
Topics: String, Dynamic Programming, Stack, Greedy
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1225713928/

Takeaways:
Separate stacks for separate things can be used, since we are pushing indices inside them. Also, as learnt from prev questions, the numOpen counter is very useful (it increments for opening brackets and decrements for closing ones). Here, its functionality is extended for '*' as well. Since '*' is flexible, it can act as opening as well as closing (or empty), so we increment in its case too. This approach can be extended to run from both directions, checking both conditions, with the roles of the brackets inverted for the reverse pass.

Analysis + Comments:
Initial approach: since the asterisk can either be empty, an opening bracket or a closing bracket, we can do a recursive solution where we assume it to be each of these and return the OR of all of them (since we want at least 1 valid possibility). When we assume it to be an opening bracket, in the recursive call we pass numOpen+1. For empty we pass numOpen unchanged and in the case of closing, we pass numOpen-1 IFF numOpen > 0. Since this is of order ~O(3^n), we have to memoize it. Since there are 2 deciding parameters which affect the function, the current index of the string and numOpen, we make a 2D DP matrix of n^2 states. This approach is O(n^2) time and O(n^2) space. Now, for further optimization we can use stacks. The thing with stacks is that they are intuitively sensible for parenthesis questions. Here, every '(' gets pushed onto a stack and in case we get a ')' in the string we check whether that stack is empty and subsequently pop it. '*' creates some uncertainty here: it can act as all three cases. Hence we create a separate stack for it. During the first traversal, we push all '(' and '*' onto their respective stacks. Since we already know what is in each stack (the stacks are separate), we push their indices inside (this will be helpful later). In case we encounter a ')', we first check the '(' stack; if non-empty, a pair can be formed and we pop it, else we check the '*' stack with the same logic. Once this traversal is done, we have accounted for all our ')' brackets (so far, '*' has acted as ')' / ""). Now, we need to check the remaining '(' in the stack and, if any, match them with '*' to form pairs if possible. The main condition to check here is whether the index at the top of the '(' stack is greater than the top of the '*' stack, because that would mean a situation like * (, which cannot be fixed by transforming the asterisk. This solution is O(n) time and O(2n) space. The best solution here follows the numOpen logic, but alongside it uses another variable, numClosed. These two counters are driven by traversals from the start and the end respectively, and they increment in case of openings and '*' and decrement in case of closings; for the reverse traversal, '(' acts as a closing and ')' acts as an opening. If at any point either variable becomes negative, that cannot be fixed by any assignment of '*' and hence we return false. Because of '*', numOpen and numClosed are not restricted to ending at exactly 0; we just need to make sure they never go negative.
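The two-counter greedy in C++ (both passes folded into one loop by walking the string from both ends at once):

#include <bits/stdc++.h>
using namespace std;

bool checkValidString(string s) {
    int n = s.size();
    int numOpen = 0, numClosed = 0;
    for (int i = 0; i < n; i++) {
        // left-to-right: '(' and '*' can open, ')' must close
        numOpen += (s[i] == ')') ? -1 : 1;
        // right-to-left: ')' and '*' can open, '(' must close
        numClosed += (s[n - 1 - i] == '(') ? -1 : 1;
        if (numOpen < 0 || numClosed < 0) return false;   // a bracket with nothing to match it
    }
    return true;
}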

________________________________________________________________________________

Problem 90.0
https://leetcode.com/problems/longest-strictly-increasing-or-strictly-decreasing-subarray/description/
Topics: Array, Two Pointer, Sliding Window
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1529489931/

Takeaways:
Obvious sliding window, since the question name itself is about monotonic runs. The best method is to maintain both inc and dec lengths and reset the other to 1 when one's condition gets satisfied. Maintaining both prevents over-engineering the solution. Or we can just solve for increasing and then run the same thing on the reversed array.

Analysis + Comments:
Obvious n^2 approach you already know for subarrays. Here, the best approach is to maintain current lengths inc and dec = 1 (since at minimum, these lengths will be 1). Upon traversal, when the increasing condition is satisfied we do inc++ and set dec = 1 (because at that element, the length of the decreasing run has been reset to 1), and the opposite holds as well. When neither is satisfied we set both to 1. We have to take the max of our previous ans and inc / dec as required. This is a weird type of sliding window which only uses 1 pointer.

________________________________________________________________________________

Problem 91.0
https://leetcode.com/problems/lexicographically-smallest-string-after-operations-with-constraint/descrip
tion/
Topics: String, Adhoc
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1225817630/

Takeaways:
The lexicographic minimum always starts from 'a'. For cyclicity, we can either use modulus or do 'z'-s[i]+1: 'z'-s[i] gives us the distance from that char to z and the +1 wraps it around to a.

Analysis + Comments:
Due to the cyclic nature, we have two possible ways of reaching our lexicographic minimum goal (i.e. 'a') for each character: either we move forward or we move backward. Since we want the minimum amount of cost, we take the minimum of the two distances. If the remaining k is enough for this distance then we take it, else we only move backward as far as k allows (because moving forward in this case would make us lexicographically larger).

________________________________________________________________________________

Problem 92.0
https://leetcode.com/problems/minimum-operations-to-make-median-of-array-equal-to-k/description/
Topics: Array, Sorting, Adhoc
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1225793465/

Takeaways:
Even though the question asks to make the changes one at a time, we can simulate them all together by adding the required number of operations in one go. This can save a ton of time complexity. Also, after sorting, if we change our median value, depending on the direction of the change we only have to adjust values on one side of the median, which makes the total operation nlogn + n.

Analysis + Comments:
To find the median, the array has to be sorted first, so that takes O(nlogn). This problem has a slightly skewed definition of median where its index will always be n/2. Now, we need to calculate the difference between our median and the expected value, and we add this difference to our operation count. Since we have modified a middle value, our array may have become unsorted. The brute approach here would be to re-sort and repeat, but that would again take nlogn, which is not feasible here. In the case where we decreased our median value, since its value is lower than the previous value and the array was sorted, we know all elements after it are still in order relative to it. But there is a chance that elements before it have broken this condition. Hence we go there and do the minimum number of changes (i.e. bring those elements down to the median's value) to make sure the array remains sorted. The opposite holds true when we increase our median value.

________________________________________________________________________________

Problem 93.0
https://leetcode.com/problems/minimum-cost-walk-in-weighted-graph/description/
Topics: Array, Bit Manipulation, Union Find, Graph
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1230148312/

Takeaways:
Multiple mapping connections are used to reduce the time complexity to fit the constraints. First, if nodes belong to different components then clearly no walk is possible between them, hence knowing the component of each node is helpful. Also, since we want to minimize the AND, we use the property that the AND value of a number can only decrease when ANDed with others. Since we can walk over an edge multiple times, the minimum AND will be the total AND of the component itself. Next, due to the constraints and the sparseness of the graph, an adjacency list is a better suited representation than an adjacency matrix. AND values for each component are also mapped and updated before the queries by traversing the input edges, so that multiple edges between the same nodes are considered as well.

Analysis + Comments:
Boy oh boy, probably one of the hardest questions I have done. Requires multiple "logical chain connects" to solve it within the given constraints. We are given the number of possible nodes and the edges (source, dest, weight). First thing we do is create an adjacency list (cannot create a matrix because n <= 10^5; the matrix would square that and give TLE). The adjacency list is built by: u = edges[i][0], v = edges[i][1], then graph[u].push_back(v) and graph[v].push_back(u). Before this, we need to create the main vector graph of size n initialized with empty vectors: vector<vector<int>> graph(n, vector<int>(0)). Secondly, we need to find the minimum AND path for a given source and destination node. If the two nodes lie on the same component / island, we can apply the AND property, which states that AND can only decrease (because a 0 cannot become a 1 but a 1 can become a 0; this is the opposite of OR, which can only increase). Hence the minimum AND is the AND over the whole component containing source and destination. If they are in different components, no walk is possible and we push -1. So we create a map (or an array) which maps each node to its component; this is useful at the end to figure out whether a path is possible or not. This is done by DFS traversal. DFS with an adjacency list is done by: while i <= graph[sv].size() - 1, int node = graph[sv][i], then the normal DFS. Here, aside from marking visited, we also mark the component number of each node (driven by the outer loop, the usual trick for DFS traversal of disconnected graphs). After the node-to-component mapping is done, we map each component to its AND value. For this, first we choose the maximum AND mask possible: 111111..., created by (2<<20) - 1, and set this as the initial value of our compAND map. Then we traverse the input edges, look at the source node of each edge, and AND the edge weight into the current AND value of that node's component (stored in the map). This gives us the AND value for each component. At the end, we just run the queries: if source == dest, push 0; if source and dest do not belong to the same component, push -1; else push the AND value of the component.
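
A rough sketch of the component labelling + component-AND mapping described above (an iterative DFS is used here instead of the recursive one; names like comp / compAnd are illustrative):

vector<int> minimumCost(int n, vector<vector<int>>& edges, vector<vector<int>>& query) {
    vector<vector<int>> graph(n, vector<int>(0));
    for (auto& e : edges) {
        graph[e[0]].push_back(e[1]);
        graph[e[1]].push_back(e[0]);
    }
    vector<int> comp(n, -1);                        // node -> component id
    int c = 0;
    for (int s = 0; s < n; ++s) {                   // outer loop handles disconnected parts
        if (comp[s] != -1) continue;
        vector<int> st = {s};
        comp[s] = c;
        while (!st.empty()) {
            int u = st.back(); st.pop_back();
            for (int v : graph[u])
                if (comp[v] == -1) { comp[v] = c; st.push_back(v); }
        }
        ++c;
    }
    vector<int> compAnd(c, (2 << 20) - 1);          // all-ones mask as the AND identity
    for (auto& e : edges)                           // every edge weight ANDs into its component
        compAnd[comp[e[0]]] &= e[2];
    vector<int> ans;
    for (auto& q : query) {
        if (q[0] == q[1]) ans.push_back(0);
        else if (comp[q[0]] != comp[q[1]]) ans.push_back(-1);
        else ans.push_back(compAnd[comp[q[0]]]);
    }
    return ans;
}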

________________________________________________________________________________

Problem 94.0
https://leetcode.com/problems/number-of-students-unable-to-eat-lunch/description/
Topics: Array, Stack, Queue, Simulation
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1227338593/

Takeaways:
Using the provided input arrays as the data structures themselves. Entry and exit can happen only at one point in the stack, hence that becomes our bottleneck for ending the loop (if nobody left in the queue wants the sandwich on top, we stop). Knowing the bottleneck and what actually halts the simulation gives the most optimal solution (here, the number of students who want the sandwich at the top of the stack becoming 0) for determining the loop end and the subsequent result.

Analysis + Comments:
Simulation type beat. The only thing which took some time to figure out was the terminating condition, which is: nobody left in the queue wants the sandwich at the top of the stack. Then the answer is the number of students left in the queue. Hence we first count the number of students preferring each sandwich type (stored in a counter[2] array; an array is used because indexing by sandwich type makes incrementing easy). Also, there is no need to create a separate queue / stack for solving this because we can use the vectors they have provided. For the stack, we maintain a top index. If the student at the front wants the sandwich we do top++ (while top <= numSand-1), and for the queue in the same condition we do front++ as well. If the student does not want it, they go to the back of the queue via students.push_back(students[front]) and then front++. Once this loop is over, the answer is students.size() - front. In the worst case we iterate over arrays of size n, n-1, n-2, ... by looping around, hence the time complexity is ~O(n^2). This can be further optimized by iterating through the sandwich stack (since it is the bottleneck): if the number of students who want the current top sandwich is 0, the simulation won't progress anyway and we can return the remaining number of students as the answer. Improved time complexity: O(n), space: O(1).
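
A short sketch of the optimized counting version (the counter[2] idea above):

int countStudents(vector<int>& students, vector<int>& sandwiches) {
    int count[2] = {0, 0};
    for (int s : students) count[s]++;      // how many students prefer each sandwich type
    for (int sw : sandwiches) {
        if (count[sw] == 0)                 // nobody left wants the sandwich on top
            return count[0] + count[1];     // everyone still queued goes hungry
        count[sw]--;                        // someone in the queue will eventually take it
    }
    return 0;                               // every sandwich got eaten
}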

________________________________________________________________________________

Problem 95.0
https://leetcode.com/problems/maximum-depth-of-binary-tree/description/
Topics: Tree, Depth-First Search, Breadth-First Search, Binary Tree
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1226407893/

Takeaways:
Height of tree = max of the heights of the two subtrees (all subtrees in the case of a generic tree) + 1. Adding 1 because of the parent node.

Analysis + Comments:
DFS is obviously more intuitive for figuring out the height of a tree because trees are a recursive data structure. But here, BFS can also be used for figuring out the depth/height and is possibly faster because it does not involve recursive calls.
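
The recursive height logic as a tiny sketch (assuming the usual LeetCode TreeNode definition):

int maxDepth(TreeNode* root) {
    if (root == nullptr) return 0;          // empty subtree has depth 0
    return 1 + max(maxDepth(root->left), maxDepth(root->right));   // +1 for the current node
}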

________________________________________________________________________________

Problem 96.0
https://leetcode.com/problems/subsets/description/
Topics: Array, Backtracking, Bit Manipulation
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1226922785/

Takeaways:
2 possible methods for subsets. First is the recursive solution where we first copy all vectors from smallAns as-is (which simulates "not" choosing the current element) and then copy them again, but pushing the current element in first. The iterative method uses bit manipulation, where each bit's status decides whether the corresponding element is chosen in the subset or not. For the recursive method, backtracking can also be used, where we pass a temp vector and take 2 paths (one where we select the current element and one where we don't), and when we reach the end of a path the temp vector is pushed into the answer ~ similar to the combination sum solution.

Analysis + Comments:
Need to return all possible subsets, hence the goat recursion comes to mind. The approach is to do the recursive call (at i), where it returns all possible subsets from i+1 to the end. Now, to create the remaining subsets, we first copy all of those subsets (from smallAns into ans). Then we copy them again, but before pushing them into the answer vector we push the element at the current position first, which gives us all possible subsets. For an array of size n, there are 2^n possible subsets (one more than the 2^n - 1 non-empty subsequences, since subsets also include the empty one). Hence this recursive approach yields a time complexity of O(n*2^n) and a space complexity of O(n). Now, since iteration is faster than recursive calls, to further speed this up we use bit manipulation for the iterative approach. We initialize a bit array with all 0s (size = input size). bitArray[i] signifies whether nums[i] is part of the current subset (if bitArray[i] is high it is included, else not). After pushing that subset into the answer, we do binary addition of 1 on our bit array. For convenience, the bit array is stored in little-endian format (least significant bit first).

________________________________________________________________________________

Problem 97.0
https://leetcode.com/problems/symmetric-tree/description/
Topics: Array, Depth-First Search, Breadth-First Search, Binary Tree
Difficulty: Easy
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1226936820/

Takeaways:
Mirror tree and symmetry checks involve passing a recursive function which does 2 checks at once, hence 2 nodes are passed. Based on the type of check (symmetry compares inside with inside and outside with outside), left.left with right.right and left.right with right.left are passed.

Analysis + Comments:
3 approaches came to mind. The first approach is to do a BFS-type search where we create a queue and check whether each level is an even-length palindrome or not (for all levels except the first one). To maintain some sort of structure, in case a child node is null we push -1 into the queue. The second approach is to create a mirror tree and then check if these two trees are the same or not (via some DFS / BFS traversal). The last approach is to use a recursive comparison (which is intuitive for trees) which compares 2 nodes at once. We pass the left and right child. If both of these are null, then the current node is symmetric and we return true. If only 1 of these is null then obviously we have some asymmetry going on and we return false. Now, since we have done the NULL checks, we can dereference to check if the values are equal. If the values of the left and right child are not equal then obviously the tree is not symmetric and we return false. Otherwise, we recurse: first we do the same comparison on the outer side by passing left.left and right.right, then the inner check by passing left.right and right.left.
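
The paired recursive check as a sketch (assuming the usual LeetCode TreeNode definition):

bool isMirror(TreeNode* left, TreeNode* right) {
    if (left == nullptr && right == nullptr) return true;    // both empty: symmetric
    if (left == nullptr || right == nullptr) return false;   // only one empty: not symmetric
    if (left->val != right->val) return false;
    return isMirror(left->left, right->right) &&             // outer pair
           isMirror(left->right, right->left);               // inner pair
}
bool isSymmetric(TreeNode* root) {
    return root == nullptr || isMirror(root->left, root->right);
}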

________________________________________________________________________________

Problem 98.0
https://leetcode.com/problems/combination-sum/description/
Topics: Array, Backtracking
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1226954768/

Takeaways:
This specific one has no restrictions, hence we select an element (by pushing it into our currentAns vector and subtracting its value from the target in the recursive call). In case target == 0, we push currentAns into ans. In case target < 0, there is no possible way of creating that sum from the given numbers anymore and we return. After the recursive call is done, we pop() from currentAns, since this element has done its work (backtracking).

Analysis + Comments:
The classic backtracking brain opener. Backtracking is basically returning to a previous state, making
some changes and starting again. Here, following the typical backtracking formula; we pass a
"currentAns" vector which will be modified as we go along the different paths and if a certain condition
gets fulfilled, we push that into our final ans.
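
A minimal sketch of that backtracking formula (helper name and signature are illustrative):

void backtrack(vector<int>& candidates, int index, int target,
               vector<int>& currentAns, vector<vector<int>>& ans) {
    if (target == 0) { ans.push_back(currentAns); return; }     // found a valid combination
    if (target < 0 || index == (int)candidates.size()) return;  // dead end
    for (int i = index; i < (int)candidates.size(); ++i) {
        currentAns.push_back(candidates[i]);                    // choose
        backtrack(candidates, i, target - candidates[i], currentAns, ans);  // pass i again: reuse allowed
        currentAns.pop_back();                                  // un-choose (backtrack)
    }
}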

________________________________________________________________________________

Problem 99.0
https://leetcode.com/problems/combination-sum-ii/description/
Topics: Array, Backtracking
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1226964151/

Takeaways:
Sorting helps for backtracking when we want to weed out duplicates. Since the range of values is small, a count sort solution can also be implemented.

Analysis + Comments:
When a backtracking-type question says the input can contain duplicates but the output must not, we can sort the input first (O(nlogn), usually less costly than the intense recursive stuff we are about to do). Logically, the solution is similar to combination sum, recursive calls and all. We just add a small check to skip an element if its duplicate has already been tried at the same level (if i > index && nums[i] == nums[i-1]) continue; also, since an element can only be used once, in the recursive call we pass i+1.

________________________________________________________________________________

Problem 100.0
https://leetcode.com/problems/combination-sum-iii/description/
Topics: Array, Backtracking
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1226985889/

Takeaways:
Doesn't always require you to make an array; sometimes just the numbers will suffice.

Analysis + Comments:
Even more restrictive than the previous combination sums, since here a restriction on the number of
elements which sum up to a combination is also given. Hence we pass times in the recursive calls
(which decrements in each call) and if times == 0 and target == 0, then we have found a good match. If
only times == 0, then we cannot proceed more with this.

________________________________________________________________________________

Problem 101.0
https://leetcode.com/problems/combination-sum-iv/description/
Topics: Array, Dynamic Programming
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1228732170/

Takeaways:
Again, if unable to think of a DP solution, just attempt a recursive brute force first. Then think about the parameters which differ in each recursive call for the memoization approach (here 1 parameter, so 1-D DP). Then finally reverse engineer the memo into a bottom-up DP.

Analysis + Comments:
Again: recursive brute force, then memoization, and then finally DP leading to the clean solution. The brute force is similar to the other combination sum questions. Since we are calculating the total number of something, instead of needing a currentAns array our recursive function itself returns the count. If target == 0 we return 1, because there is exactly 1 way for some numbers to sum up to 0 (by including none of them). There is a loop for the recursive calls which accumulates their sum via ans += rec(). Similarly, a dp array can be used for memoization. Since the only parameter which varies across recursive calls here is target, this is a 1-D DP problem based on target; hence if dp[target] != -1, return dp[target] for top-down memoization. For proper bottom-up DP, we use a 1-D array over target and use the base case dp[0] = 1 (from our recursive call). Then we just replicate what the memoization approach did in a bottom-up manner. The only difference is that since we are actually summing counts (targets from 1 to 999), there is a chance of int overflow and hence we have to use unsigned int. Also, you need to think about the follow-up.
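
A sketch of the bottom-up 1-D DP described above (unsigned int to tolerate the intermediate overflow the follow-up hints at):

int combinationSum4(vector<int>& nums, int target) {
    vector<unsigned int> dp(target + 1, 0);
    dp[0] = 1;                                   // one way to reach 0: pick nothing
    for (int t = 1; t <= target; ++t)
        for (int x : nums)
            if (x <= t) dp[t] += dp[t - x];      // last element picked is x
    return dp[target];
}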

________________________________________________________________________________

Problem 102.0
https://leetcode.com/problems/combinations/description/
Topics: Backtracking
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1227007298/

Takeaways:
Passing i+1 in the recursive call to make sure the currently used element doesn't show up in the answer of the recursive call again, and using the size of currentAns to check when to push the final answer in.

Analysis + Comments:
nCk type question. We are given a constraint on the count, hence we keep track of that as well (if the size of currentAns == k then we push it into our ans). Since we need to return all combinations of integers from 1 to n, there should not be any duplicates, hence we follow our i+1 method. Rest is regular backtracking.

________________________________________________________________________________

Problem 103.0
https://leetcode.com/problems/time-needed-to-buy-tickets/description/
Topics: Array, Queue, Simulation
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1227376467/

Takeaways:
Again, we simulate the condition and then find the respective breakpoints. Here, the breakpoint was the value at our respective index becoming zero. Also, pushing indices into the data structure makes it easier to keep track.

Analysis + Comments:
Initially, we do the regular simulation approach to figure out what exactly the breakpoint is. This is done by using a queue and pushing indices into it (re-pushing an index if its ticket count is non-zero after decrementing). Time is incremented after each ticket decrement, and when tickets[i] == 0 with i == k, we return the time. Upon solving this, we know that the breakpoint is tickets[k] becoming 0, hence at least tickets[k] passes are needed. All the elements before (i <= k) will be reduced by min(tickets[i], tickets[k]) and that amount is added to the time. Now, since k gets depleted, elements after it do not need to be decremented fully; at most they will be decremented tickets[k]-1 times (since on the tickets[k]-th pass we have fulfilled our goal and we "break" out of the loop in the simulation approach). Hence these contribute min(tickets[k]-1, tickets[i]) each, which is added to the answer.
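
The direct O(n) computation from that breakpoint reasoning, as a sketch:

int timeRequiredToBuy(vector<int>& tickets, int k) {
    int time = 0, n = tickets.size();
    for (int i = 0; i < n; ++i) {
        if (i <= k) time += min(tickets[i], tickets[k]);      // at or before k: up to tickets[k] passes
        else        time += min(tickets[i], tickets[k] - 1);  // after k: one pass fewer
    }
    return time;
}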

________________________________________________________________________________

Problem 104.0
https://leetcode.com/problems/reveal-cards-in-increasing-order/description/
Topics: Array, Queue, Sorting, Simulation
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1228383140/

Takeaways:
Crucial question for reverse engineering simulation steps. Also can be simplified by pushing indices of the elements inside the queue/deque. The most interesting thing was that despite the step reversal, if we followed the index method we would still get the correct output.

Analysis + Comments:
Simulation; the problem statement was kind of tricky. We know that at the end we need our output to be revealed in sorted order, hence our "input" array should be arranged such that applying the given steps yields a sorted sequence. To figure out the "input" array from the sorted output, we can just reverse those steps. From the problem description we can figure out that a queue (or deque) will be used. Either that, or we can replicate how the elements will be placed by using a queue which stores the indices. Time O(nlogn) for sorting and space O(n) for the queue. Two pointers can be used here but that is neither intuitive nor optimal.

________________________________________________________________________________

Problem 105.0
https://leetcode.com/problems/merge-intervals/description/
Topics: Array, Sorting
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1228390792/

Takeaways:
The general thinking for interval questions is that after merging two intervals, the final interval is (min(start1, start2), max(end1, end2)). Also, for solving these it is good to draw some sort of number line diagram with the intervals over it, to figure out the underlying relationship (visual intuition always helps).

Analysis + Comments:
For questions regarding intervals, as previously done, we know that sorting is important. The most important thing is using proper conditions to avoid unnecessary if-else programming. After sorting the intervals (based on the interval opening), we iterate over them. At every iteration a newInterval is created: intervals[i][0], intervals[i][1], and we create j = i + 1. Now, this interval keeps overlapping while intervals[i][1] >= intervals[j][0], hence we have to merge. The new endpoint will be either the previous endpoint or the endpoint of the new interval (whichever is the maximum after the merge).
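
A compact sketch of the sort-then-merge sweep (the back of the answer vector is used here instead of an explicit j pointer):

vector<vector<int>> merge(vector<vector<int>>& intervals) {
    sort(intervals.begin(), intervals.end());         // sorts by interval opening (then end)
    vector<vector<int>> ans;
    for (auto& in : intervals) {
        if (!ans.empty() && ans.back()[1] >= in[0])    // overlaps the last merged interval
            ans.back()[1] = max(ans.back()[1], in[1]); // extend the endpoint if needed
        else
            ans.push_back(in);                         // no overlap: start a new interval
    }
    return ans;
}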

________________________________________________________________________________

Problem 106.0
https://leetcode.com/problems/remove-k-digits/description/
Topics: String, Stack, Greedy, Monotonic Stack
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1229162428/

Takeaways:
The brute force approach would be common sense: lexicographic comparison of strings with and without removing characters. We do a lexicographic comparison because we know one character would eventually be removed from the case where we didn't remove a character, so the lengths end up equal. This leads to the realization that we only remove the current character if the next one is smaller than it. Hence we want our output string to be monotonically increasing (non-decreasing) to an extent. At the end, we remove the extras while k > 0. Annoying edge cases though.

Analysis + Comments:
Here, we want to create the smallest number from a string, i.e. the lexicographically smallest string. One method is to consider 2 scenarios, one where we remove the current character and one where we don't, and pick the smaller (lexicographically, by comparing each character) of the two. We do this until our string ends and k == 0, which leads to a solution that is O(n^2). In that solution, what we are basically doing is checking if the character to the right is smaller or not; if smaller, we eliminate the current character and shift the remaining string one step left. Hence we can use this conclusion to form an O(n) solution. We will use a monotonic stack for this, since we only want characters to increase lexicographically. While stack.top() > num[i] and k > 0, we stack.pop(); this keeps the stack monotonic. If by the end k is not zero, we know the stack we have created so far is monotonic, hence to achieve the smallest string we start eliminating characters from the end while k > 0. Then we just have to remove leading zeros and our solution is done. Here, instead of sacrificing O(n) space for a stack, we can just use our answer string as the stack via ans.push_back() and ans.pop_back() to simulate stackyness; this way we won't even have to reverse the stack to get the desired string. One test case for the empty string = "0" should also be handled.
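
A sketch of the answer-string-as-stack version described above:

string removeKdigits(string num, int k) {
    string ans;                                        // kept digits, non-decreasing
    for (char c : num) {
        while (!ans.empty() && k > 0 && ans.back() > c) {
            ans.pop_back();                            // dropping the larger digit gives a smaller number
            --k;
        }
        ans.push_back(c);
    }
    while (k > 0 && !ans.empty()) { ans.pop_back(); --k; }   // leftover removals come off the end
    int i = 0;
    while (i < (int)ans.size() && ans[i] == '0') ++i;        // strip leading zeros
    ans = ans.substr(i);
    return ans.empty() ? "0" : ans;                          // the empty-string edge case
}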
________________________________________________________________________________

Problem 107.0
https://leetcode.com/problems/subsets-ii/description/
Topics: Array, Backtracking, Bit Manipulation
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1229366111/

Takeaways:
For questions where we need to do backtracking with duplicate elements in our array (and our output should not contain duplicates), we first sort the array. After that, in the backtrack loop we add the condition for skipping dupes. currentAns is pushed in at the start of each recursive call.

Analysis + Comments:
First, we need to figure out how to find subsets using the backtrack method. The play is to push currentAns first. Then we start the loop at i = index. We push the element nums[i] and then do a recursive call where we pass the index i+1 (so that the same element does not get repeated). After that, we backtrack, pop the element out, and increment. For this specific case where we do not want duplicate subsets, first we sort the array (in ascending order most probably). Then we do: if (i > index && nums[i] == nums[i-1]) continue; This makes sure that if an element's "course" has already been completed, it will not be selected as the start element for the next course.

________________________________________________________________________________

Problem 108.0
https://leetcode.com/problems/permutations-ii/description/
Topics: Array, Backtracking
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1229417652/

Takeaways:
The *(used+i-1) check needs to be applied for skipping, since we only skip if that previous duplicate element has been used as well.

Analysis + Comments:
First, we need to figure out how to find permutations using backtracking. This is similar to subsets via backtracking but with one important difference: permutations must be the same length as the original input. For that, our push condition for currentAns is if currentAns.size() == nums.size(). Also, in a permutation an element from before can now come after, hence we need to maintain a used array which keeps track of all the elements that have been used so far in our permutation (*(used+i) = true before the push, *(used+i) = false after the pop). This works when there are no duplicates. For duplicate prevention, we do the regular sorting method and then the nums[i] == nums[i-1] check; another check is needed on whether the previous element is ALSO used, via *(used+i-1). Rest is similar to the subsets solution.

________________________________________________________________________________

Problem 109.0
https://leetcode.com/problems/keys-and-rooms/description/
Topics: Depth-First Search, Breadth-First Search
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1230253396/

Takeaways:
Adjacency list traversal is slightly different from adjacency matrix traversal. For this question, we will not
run the outside loop for traversal because we just need to check if all elements are connected or not.

Analysis + Comments:
Might not be obvious, but the question is about graphs; basically we need to tell if all nodes are connected or not, and the graph input this time is in the form of an adjacency list. We basically do a DFS traversal and every time a new node is visited we do numNodes++. If at the end numNodes == rooms.size(), then all rooms have been traversed and the graph is connected, hence we return true. For adjacency list traversal (DFS), after visiting a node it obviously needs to be marked as visited. After that, we find the neighbour list of the current node via graph[startNode].size() and iterate over it, visiting the unvisited nodes.

________________________________________________________________________________

Problem 110.0
https://leetcode.com/problems/number-of-islands/description/
Topics: Array, Depth-First Search, Breadth-First Search, Union Find, Matrix
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1230265144/

Takeaways:
Different type of DFS traversal due to the nature of the input; connected nodes are nodes which are adjacent, hence recursive calls are made in the neighbouring directions.

Analysis + Comments:
Similar to #112, but the different thing is the traversal method here, which is similar to the word search in matrix question. Other than that this was mostly easy. Four recursive DFS calls for visiting the nodes up, right, down and left are made, because the island is only considered connected in these 4 directions. Here, the node stays marked even after going back since we only need to visit every node once. After we exit the entire traversal back into our original outside loop, we increment ans by 1.
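
A flood-fill sketch of that traversal (the grid itself is reused as the visited marker here, a small variation on keeping a separate visited matrix):

void sink(vector<vector<char>>& grid, int r, int c) {
    if (r < 0 || r >= (int)grid.size() || c < 0 || c >= (int)grid[0].size()) return;
    if (grid[r][c] != '1') return;    // water or already visited
    grid[r][c] = '#';                 // mark visited
    sink(grid, r - 1, c);             // up
    sink(grid, r, c + 1);             // right
    sink(grid, r + 1, c);             // down
    sink(grid, r, c - 1);             // left
}
int numIslands(vector<vector<char>>& grid) {
    int ans = 0;
    for (int r = 0; r < (int)grid.size(); ++r)
        for (int c = 0; c < (int)grid[0].size(); ++c)
            if (grid[r][c] == '1') { sink(grid, r, c); ++ans; }   // each unvisited '1' starts a new island
    return ans;
}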

________________________________________________________________________________

Problem 111.0
https://leetcode.com/problems/max-area-of-island/description/
Topics: Array, Depth-First Search, Breadth-First Search, Union Find, Matrix
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1230270034/

Takeaways:
Passing reference variables in recursive calls backtracking style to keep track of a certain something
(here, the size of the island)

Analysis + Comments:
Similar to #110 and Word Search in a matrix due to the traversal method, which is based on neighbours (top, right, bottom, left). Here, for each DFS call we create a new reference variable size which increments upon each new visit of a node (we are obviously not visiting already visited nodes, which is handled by maintaining a visited matrix). In the outside loop, we update maxSize = max(maxSize, size).

________________________________________________________________________________

Problem 112.0
https://leetcode.com/problems/number-of-provinces/description/
Topics: Depth-First Search, Breadth-First Search, Union Find, Graph
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1230323573/

Takeaways:
Basically return the number of components where the input is given in the form of an adjacency matrix. We use the code for COMPLETE DFS/BFS graph traversal (where a loop runs outside to make sure disconnected nodes get traversed as well).

Analysis + Comments:
Basically, we need to return the number of connected components in this graph. Here, the input is given in the form of an adjacency matrix instead of an adjacency list. We start from a node (by running a loop over the adjMatrix) and traverse non-visited nodes. At the end of every DFS traversal we have completely traversed a province / component, so we increment the number of provinces. When an option is given between DFS and BFS, choose DFS because of the shorter code.
________________________________________________________________________________

Problem 113.0
https://leetcode.com/problems/binary-tree-level-order-traversal/description/
Topics: Tree, Breadth-First Search, Binary Tree
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1230539468/

Takeaways:
Level order traversal using queue, pushing in NULL when another NULL is popped (and queue is non
empty) to signify a "level has been completed". Important edge case for root = NULL.

Analysis + Comments:
This is just regular BFS traversal with the added NULL pushing to signify the end of a level, nothing
special.

________________________________________________________________________________

Problem 114.0
https://leetcode.com/problems/n-ary-tree-level-order-traversal/
Topics: Tree, Breadth-First Search
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1230543740/

Takeaways:
^^A substitute for pushing NULLs is taking note of the size of the queue at the start every time before
the operations are done, and running the loop for that size only.

Analysis + Comments:
Similar to #113 binary tree level order traversal; the difference is just that instead of pushing the left and right child (if they are not NULL), we push in all of the children from the children vector.

________________________________________________________________________________

Problem 115.0
https://leetcode.com/problems/binary-tree-maximum-path-sum/description/
Topics: Dynamic Programming, Tree, Depth-First Search, Binary Tree
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1230560357/

Takeaways:
The base case was very important here. Also, returning pairs prevents duplicate calls and hence improves time complexity; the work the pair does can also be substituted with a suitable reference variable. The value being returned is the upward sum instead of the leftRightPath sum, because if we returned the leftRightPath sum then the node would be "included twice" in the path calculated at the parent. Aside from this, some sort of Kadane's logic can also be used, where we discard a path sum (and save it in ans if applicable) once it becomes negative (because it won't contribute positively to the sum again).

Analysis + Comments:
This is, in a way, similar to the diameter question and hence I applied a similar sort of logic here. We return a pair<int, int> where first is the maxPathSum for that subtree and second is the "upwardPathSum". We need these two because the max path does not necessarily have to go through the root. Since it's a binary tree, there are 2 recursive calls, for the left and right child. The first member of these pairs contains the maxPathSum of their respective subtrees (leftMax, rightMax). Since the current node can be part of a path which goes from left to right through it, we create a variable pathLeftRight = leftUpwards + (*root).val + rightUpwards. We also need currentUpwardMax (which feeds into pathLeftRight for the parent), which is max((*root).val, max(leftUp, rightUp) + (*root).val). The final pair we return has currentUpwardMax as its .second by definition. The maxPathSum for this subtree (.first) is the max of: currentUpwardMax, pathLeftRight, leftMax and rightMax. The base case was very important for this, which is {-1001, 0}. -1001 was taken as a substitute for INT_MIN due to the constraints: since every node value is bigger than that, maxPathSum will also always be greater. The upward sum is 0 because with no nodes there is no sum. Another (simpler) approach is to keep the maxPathSum in a reference variable, update it whenever applicable, and return only the max upward sum.
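
A sketch of the simpler reference-variable variant mentioned at the end (assuming the usual LeetCode TreeNode definition):

int upward(TreeNode* root, int& best) {
    if (root == nullptr) return 0;
    int left  = max(0, upward(root->left, best));    // negative branches are discarded (Kadane-style)
    int right = max(0, upward(root->right, best));
    best = max(best, left + root->val + right);      // best path that "turns" at this node
    return root->val + max(left, right);             // only one side may continue upward
}
int maxPathSum(TreeNode* root) {
    int best = INT_MIN;
    upward(root, best);
    return best;
}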

________________________________________________________________________________

Problem 116.0
https://leetcode.com/problems/maximal-rectangle/description/
Topics: Array, Dynamic Programming, Stack, Matrix, Monotonic Stack
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1235469301/

Takeaways:
Identifying the subproblem (which in turn is a hard question itself) can be crucial. Here, one can think along the lines of area and link it to maximum histogram area; these blocks look like histogram pillars anyway. Then an array is constructed which acts as the histogram base (heights are only counted for cells connected to the base) and the histogram code is called on it.

Analysis + Comments:
Firstly, the brute force. The brute force for this is similar to the maximal square question, where we iterate over the matrix. In case we find a 1, we assume that to be the start (top left) of our new rectangle and we iterate while we have 1s. Whenever a row ends (or we run out of 1s), we recalculate the area and update the max area. Also, we have to keep track of the smallest row length, because as we go down, that is what constrains our rectangle and is used to calculate the area. We stop going down once the rows are exhausted (or no longer 1s). This solution is O(n^2 * m^2). For optimization, one might think of a DP approach similar to the maximal square one, but the problem is that the square had constraints; a rectangle does not. The max area can come from a 4*1 rect or a 2*2 rect, and to figure out the new rectangle we would need to know the previous dimensions as well, hence that approach fails. Looking at the visuals, we see we need to find area similar to the Largest Rectangle in Histogram question. The height here is given by consecutive 1s in a column. We find the largest histogram area assuming every row to be the base of the histogram (if an index in the row has value zero, then the height of the histogram bar above it doesn't matter because it's a "floating" histogram, and its relevant calculations would have been done when we processed the rows above). The answer is the max of all these areas. Time O(n*m), space O(m) for the histogram array and the (monotonic) stack used to solve largest histogram.

________________________________________________________________________________

Problem 117.0
https://leetcode.com/problems/largest-rectangle-in-histogram/description/
Topics: Array, Stack, Monotonic Stack
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1231023931/

Takeaways:
1) Mark an exact thing to calculate, and calculate it for a single value. Here, for index i we calculate the area of the rectangle possible with height = heights[i]. 2) A monotonic stack is used to limit calculations in one direction and optimize the overall solution. 3) Push in the index of the replaced element instead of the current element (the stack holds pairs; the height of the current element is pushed) because the smaller histogram's rectangle can extend backwards until it encounters an even smaller histogram. Pattern: nearest smaller/larger element to the left/right: Monotonic Stack.

Analysis + Comments:
Slightly similar to trapping rain water, in the sense that the similarity lies in picking how to calculate the exact "area". In the water question, we calculated water one bar at a time, which made it easier. Here, once that was figured out the question became slightly easier: we calculate the area of the rectangle with height heights[index] starting from that index; setting this constraint makes it easier to think around the question. Now, one thing we know is that the smallest histogram bar decides the "height" of the rectangle. If all the histogram bars are in ascending order, then the rectangle area for the bar at i is heights[i] * remainingWidth (given by heights.size() - i). This makes it easy for us, and to force this nature we can use a (monotonic) stack. While the height of the current element is smaller than the element at the top of the stack, we know that the old top element's rectangle won't be able to extend any further forward (since the rectangle is decided by the smaller bar). We calculate its area by finding the width (i - stack.top().index) (the stack stores both index and height) and multiplying it with stack.top().height to find the total area spanned by that specific rectangle (and update ans accordingly). Now we can push the current histogram bar, but one thing we must not forget is that this rectangle will extend backwards (since it is smaller than the removed rectangle). Hence while pushing it in, we push the recently popped index along with the current height. At the end, we will have a monotonic stack and we do the same popping for the answer calculation. Soln = O(n) time and O(n) space. Another way to do this (using a normal stack) is to do the operation twice, once in the forward direction and once in reverse; this way we won't have to worry about a bar's backward extension area not being calculated.
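
A sketch of the monotonic stack of (start index, height) pairs described above (a sentinel height of 0 at the end flushes the stack):

int largestRectangleArea(vector<int>& heights) {
    stack<pair<int, int>> st;               // {leftmost index this height extends back to, height}
    int ans = 0, n = heights.size();
    for (int i = 0; i <= n; ++i) {
        int h = (i == n) ? 0 : heights[i];
        int start = i;
        while (!st.empty() && st.top().second > h) {
            auto [idx, hh] = st.top(); st.pop();
            ans = max(ans, hh * (i - idx)); // popped bar's rectangle spans [idx, i)
            start = idx;                    // the current (smaller) bar extends back to here
        }
        if (i < n) st.push({start, h});
    }
    return ans;
}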

________________________________________________________________________________

Problem 118.0
https://leetcode.com/problems/score-of-a-string/description/
Topics: String
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1231472760/

Takeaways:
abs(x) can be used like |x|

Analysis + Comments:
Extremely easy; the only new thing I learnt was that abs() exists as a function in C++, my dumbass was using max(a, -a).

________________________________________________________________________________

Problem 119.0
https://leetcode.com/problems/minimum-rectangles-to-cover-points/description/
Topics: Array, Sorting, Greedy
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1231465528/

Takeaways:
The greedy approach after sorting is optimal since we know that planting rectangles exactly at the starting x-coordinates of the points will be most optimal. Other than that this is just a type of interval (?) question.

Analysis + Comments:
Again, the interval intuition to sort strikes. Apparently, you don't need to pass a custom comparator when sorting based on the element at the first index of a vector (or pair), so that is good. Other than that, this question was mostly easy. We go to a point and "plant" a rectangle there. The rectangle now extends from the current points[i][0] to points[i][0]+w (notice how the y axis is completely irrelevant in this question, making this literally a regular interval question). Good job on identifying that during the contest. Then, to account for all points which can fall inside this rectangle, increment i while i <= n-1 && current+w >= points[i][0].

________________________________________________________________________________

Problem 120.0
https://leetcode.com/problems/minimum-time-to-visit-disappearing-nodes/description/
Topics: Graph, Dijkstras
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1232383186/

Takeaways:
Learnt about the use case of Dijkstra's algo (easiest implementation using a min priority queue) for the shortest path from a source to all nodes. Also, always typecast stuff coming from STL function calls since they can be size_t, and if we subtract from a 0 it will underflow into a very large number. Also, learn to read errors properly.

Analysis + Comments:
Always typecast stuff / store it in an integer when dealing with STL functions / data structures. The problem here was that nums.size() returns a value of type size_t (which makes sense for storing something big). Now, if we run a loop for i <= nums.size() - 1 and nums.size() is zero, the -1 underflows into an extremely large number (since size_t is unsigned). Hence, to prevent this, always typecast it or store it in a variable (int / long). The graph inputs are usually given in the form of an edge list by LeetCode, hence to make them usable we either build an adjacency list (good for sparse inputs) or an adjacency matrix (good for dense inputs). In case the constraints are too large, do not use an adjacency matrix since it is of the form n^2 and can lead to TLE/MLE. This question used a modified implementation of Dijkstra's (with a priority queue; greater<pair<int, int>> makes it a min-heap and it automatically orders by pair.first, so a custom comparator is not required). First, we initialized our ans vector with all -1s (so the nodes which we cannot visit are already marked -1 and we won't have to worry about them later), via ans(n, -1) where n is the length. Our adjacency list was of the type graph[u] = {v, w}, basically a vector of vectors of pairs of ints, where u is the source, v the destination and w the weight (in our case, travel time). For this Dijkstra's, we initially push in the source node with distance 0 (since we are already on that node). Then we run a loop while the queue is not empty. We take the top node, pop it, and check if the node has already been visited or has expired; if either is true we continue. Else we mark the node as visited and store the length in the ans vector: ans[node] = length. Now, we push the other neighbours of this node (by iterating over the adjacency list of this node and pushing into the queue if not visited). While pushing, we make sure to push the length as the current length plus the edge length from the graph.
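
A sketch of the modified Dijkstra's described above (assuming the disappear array from the problem; names are illustrative):

vector<int> minimumTime(int n, vector<vector<int>>& edges, vector<int>& disappear) {
    vector<vector<pair<int,int>>> graph(n);                 // graph[u] = list of {v, w}
    for (auto& e : edges) {
        graph[e[0]].push_back({e[1], e[2]});
        graph[e[1]].push_back({e[0], e[2]});
    }
    vector<int> ans(n, -1);                                  // -1 = unreachable by default
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;  // min-heap on distance
    pq.push({0, 0});                                         // {distance, node}, start at node 0
    while (!pq.empty()) {
        auto [dist, node] = pq.top(); pq.pop();
        if (ans[node] != -1 || dist >= disappear[node]) continue;   // already settled, or node expired
        ans[node] = dist;
        for (auto& [next, w] : graph[node])
            if (ans[next] == -1) pq.push({dist + w, next});
    }
    return ans;
}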

________________________________________________________________________________

Problem 121.0
https://leetcode.com/problems/find-the-number-of-subarrays-where-boundary-elements-are-maximum/
description/
Topics: Array, Binary Search, Monotonic Stack
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1232084134/

Takeaways:
Monotonic stack again proving clutch; we needed to maintain a rule and the monotonic structure helped us do that. We also used the "good pair" / subarray-sum-equals-0 logic where ans += freq, because the new element can form subarrays with itself and all of the previous existing occurrences of that number. Also, there was some use of a PBDS, but honestly I cannot look into this right now, I'll do that properly once I get the time. Whenever thinking about whether something is useful after an occurrence of something else, or about its "range" of validity / usefulness, a monotonic stack can come in clutch.

Analysis + Comments:
Need to find the number of subarrays, which are contiguous, so you might be thinking sliding window. But the problem is that one new element can form "multiple windows" with elements behind it (multiple subarrays whose boundary elements are the maximum). One thing which comes to mind is that we only care about the elements which are maximum. Hence if there is a smaller number in the subarray, and the next element we want to include is larger than it, then the smaller element can no longer be part of any subarray extending to the right; however it can still be part of answers for the stuff on its left. Hence we want to keep large values on the left and smaller on the right: a descending (monotonic) structure. And since we care about the latest occurrence of something, we use a monotonic (descending) stack. We also use a hashmap to keep count of the numbers currently in the stack, hence map[decStack.top()]-- is done before we pop and map[decStack.top()]++ is done after we push. This is important since when we push a valid element onto the stack, we check its existing count. For example, if we add a 3 and there are already two valid 3s to its left, then this 3 can form a total of 3 subarrays where the boundary elements are 3 (two with the previous ones, 1 with itself). Due to our stack's nature, we know these subarrays will be valid. Hence we do ans += valueFreq[nums[i]] after updating the values.

________________________________________________________________________________

Problem 122.0
https://leetcode.com/problems/sum-of-left-leaves/description/
Topics: Tree, Depth-First Search, Breadth-First Search, Binary Tree
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1231896065/

Takeaways:
Apply the root == NULL base and edge case testing; it also prevents null dereferences in the actual code. For the left leaf check, we can only know whether a node is "left or right" from its parent, hence these checks are done one layer above the actual leaf node itself. Also, if we know the left child is a leaf, there is no need to do a recursive call there since its answer is already accounted for.

Analysis + Comments:
Since we need to return only the sum of the left leaf nodes, a DFS-type recursive function is best (because it's easy to apply tree logic recursively). For an arbitrary node, first we check if the left node exists (leftNode = (*root).left). If (leftNode), then we check whether it is a leaf or not (leaf == both children are NULL). Before we make calculations we need the sum of left leaves from the right subtree, so we do a recursive call to the right: smallAns = recLeftLeafSum((*root).right). For the left: if (isLeftLeaf), smallAns += (*leftNode).val; since our left child is a leaf, we add its value to the recursive solution. If the left child is not a leaf, then it is possible that the left subtree instead has some left leaves, hence we do a recursive call for the left and add it to our smallAns. For the base case, since our code checks the left and right of the root node, we need a root == NULL, return 0 base case, because that means there is no tree at all. Time O(n), space O(n) because of the recursive call stack.

________________________________________________________________________________

Problem 123.0
https://leetcode.com/problems/latest-time-you-can-obtain-after-replacing-characters/description/
Topics: String, Enumeration
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1231904393/

Takeaways:
Proper consideration of all cases and when dealing with time, go from last to first.

Analysis + Comments:
Handling the different cases properly, solving back to front because that's how time works, and NOT using nested if-elses keeps the code much simpler. Still salty that this question gave me a 15 min penalty, but that's partially my fault as well.

________________________________________________________________________________

Problem 124.0
https://leetcode.com/problems/maximum-prime-difference/description/
Topics: Array, Math, Number Theory
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1231921522/

Takeaways:
1 IS NOT A PRIME; plus, Sieve of Eratosthenes for marking primes when constraints are stricter. Since we know how to maximize the distance between 2 specific elements in an array (first from the start and first from the end), the solution is a type of greedy algo.

Analysis + Comments:
Relatively easy due to the constraints provided, but my dumbass forgot that 1 is not a prime number. My approach was O(n*10): find the index of the first prime from the start and from the end and return the difference. Primality is checked by the regular i <= sqrt(n) loop (because if a number n has a factor > sqrt(n), it will already have been detected via the paired factor on the other side). If the constraints were stricter then we would have used the Sieve of Eratosthenes.
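
In case the stricter-constraints variant ever comes up, a small Sieve of Eratosthenes sketch:

vector<bool> sieve(int n) {
    vector<bool> isPrime(n + 1, true);
    isPrime[0] = false;
    if (n >= 1) isPrime[1] = false;              // 1 is NOT a prime
    for (int p = 2; (long long)p * p <= n; ++p)
        if (isPrime[p])
            for (int m = p * p; m <= n; m += p)  // every further multiple of p is composite
                isPrime[m] = false;
    return isPrime;
}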

________________________________________________________________________________

Problem 125.0
https://leetcode.com/problems/kth-smallest-amount-with-single-denomination-combination/description/
Topics: Array, Math, Binary Search, Bit Manipulation, Combinatorics, Number Theory
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1458454960/

Takeaways:
K is very large and the search space is monotonic: a dead giveaway for binary search. The main problem lies in the fact that there are multiple coins, but the number of coins <= 15 (which is a prime bitmasking hint). LCMs will cause us to over count, hence we need to remove them via inclusion-exclusion. Since we are doing inclusion-exclusion on the LCMs: based on the bitmask, take the LCM of the chosen elements, and based on the count of set bits, we add or remove the contribution. One of the hardest parts in these inclusion-exclusion problems is deciding what exactly to include / exclude. Another pain point was that the resultant number has to be a multiple of (aka divisible by) at least one of the elements, but that is handled by properly minimizing the BSoA (binary search on answer) window: if cnt < k we move l ahead, else we move r inward, and the answer ends up at l.

Analysis + Comments:
GOATED problem, finally able to solve it on my own like 6 months later. Obviously the heap approach won't work. Since we are trying to find some "kth" smallest, start guessing: a candidate value is either > the kth element or <= the kth element, so for each candidate we can check "what number" element it is. For a single coin x, a value y is the (y / x)-th element (using integer division).
________________________________________________________________________________

Problem 126.0
https://leetcode.com/problems/minimum-sum-of-values-by-dividing-array/description/
Topics: Array, Binary Search, Dynamic Programming, Bit Manipulation, Segment Tree, Queue
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1481402161/

Takeaways:
Understood the power of a sparse table for range AND. Can use that to find the smallest l and largest r such that AND[i..l] = AND[i..r] = andVal. Since AND is monotonic, we can binary search: low = i, high <= n - 1; if AND(i..mid) <= andVal, it means we need to bring the range back under. Other possible DP optimizations: a running AND (or running OR) can only take <= x distinct values, where x = number of bits, hence we could have implemented a solution with sparse states using a map. Others use the sliding window minimum (monotonic queue) for optimizing the DP transitions. Also DP optimization for range queries by making the previous state a segment tree / sparse table.

Analysis + Comments:
Key observations were the fact that AND is monotonically decreasing, hence we need to do something with that, since the naive DP will time out. Since the DP works by finding the minimum value from a range, set up a sparse table over the previous DP layer and use that to query.

________________________________________________________________________________

Problem 127.0
https://leetcode.com/problems/design-graph-with-shortest-path-calculator/description/
Topics: Graph, Design, Heap (Priority Queue), Shortest Path
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1234243475/

Takeaways:
Using different algorithms based on different constraints: if a sparse graph with fewer shortestPath calls and more addEdge calls, then Dijkstra's, because of no preprocessing. If a dense graph with few nodes and a ton of shortestPath calls but relatively few addEdge calls, then use Floyd-Warshall because of its O(1) shortest path returns (after it has done its preprocessing). Also, 1e9 can be used in C++ as a sort of pseudo +INF (because if it gets added to itself it won't overflow).

Analysis + Comments:
Despite being a hard question, this was relatively simple. The more interesting part was the constraints and the trade-offs involved. The main approaches we have are Dijkstra's and Floyd-Warshall. As we know, the regular Dijkstra's approach is used when we want to calculate the shortest distance between a source node and some other node on demand; it takes O(E log V). Dijkstra's is optimal when we have a sparse graph (which utilizes an adjacency list), the number of shortest-distance calls is small, and there are many addEdge calls, since adding an edge is just an O(1) operation because there is no preprocessing involved. Whereas Floyd-Warshall calculates the shortest distance between ALL pairs of nodes in a 2-D matrix in O(V^3). It does preprocessing, and any time a call is made to get the shortest distance between any 2 nodes, it can return that in O(1) time. Hence it is more optimal on denser graphs (with fewer nodes, since it uses an adjacency-matrix type of implementation) and when there are a ton of shortestPath operations. However, because of the preprocessing, every time a new edge is added it has to redo the preprocessing with respect to that edge, in case it has modified the shortest path between some pair of nodes. How Floyd-Warshall works is that it assumes a midpoint node between any 2 nodes. All pairs are initialized as a regular adjacency matrix (with mat[i][i] = 0 obviously, and unconnected pairs set to 1e9, basically +INF; not using INT_MAX because adding two INT_MAXs would overflow). Then i, j are traversed row-wise with different values of k, and mat[i][j] = min(mat[i][j], mat[i][k] + mat[k][j]). Basically, update the path if the current distance from i -> j is greater than i -> k -> j. This can also create new paths, because if there is no direct path between i and j it is marked as INF. When adding a new edge, this same check is done again for that specific edge. E.g. if we add a new edge {u, v} = w, we run a loop for all i, j checking if going through (u, v) is better: mat[i][j] = min(mat[i][j], mat[i][u] + w + mat[v][j]).
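
A Floyd-Warshall flavoured sketch of the class described above (edges treated as directed, 1e9 as the pseudo-infinity; the exact submitted code may differ):

class Graph {
    vector<vector<long long>> dist;
public:
    Graph(int n, vector<vector<int>>& edges) : dist(n, vector<long long>(n, 1e9)) {
        for (int i = 0; i < n; ++i) dist[i][i] = 0;
        for (auto& e : edges) dist[e[0]][e[1]] = min(dist[e[0]][e[1]], (long long)e[2]);
        for (int k = 0; k < n; ++k)                       // classic O(V^3) preprocessing
            for (int i = 0; i < n; ++i)
                for (int j = 0; j < n; ++j)
                    dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]);
    }
    void addEdge(vector<int> edge) {                      // only relax paths that use the new edge
        int u = edge[0], v = edge[1], w = edge[2];
        int n = dist.size();
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                dist[i][j] = min(dist[i][j], dist[i][u] + w + dist[v][j]);
    }
    int shortestPath(int node1, int node2) {
        return dist[node1][node2] >= (long long)1e9 ? -1 : (int)dist[node1][node2];
    }
};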

________________________________________________________________________________

Problem 128.0
https://leetcode.com/problems/surrounded-regions/
Topics: Array, Depth-First Search, Breadth-First Search, Union Find, Matrix
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1235447729/

Takeaways:
Another new type of traversal unlocked, Boundary DFS. This can probably be extended to some other
type of conditional DFS where we only start our DFS from some row or column or index or whatevs.
Extremely intuitive for this question though, find what cannot be flipped and flip the rest.

Analysis + Comments:
Considered multiple approaches for this question, all of them involving word-search type DFS traversal. The problem is that once you find a potential candidate for flipping, and later find that it is part of a block which does not need to be flipped, handling that condition becomes hard because of the visited array and related bookkeeping. The best solution for this is unironically genius. We know that regions which have even one cell connected to an edge won't be converted at all, whereas the completely landlocked regions will be flipped. Hence, we do a sort of "boundary DFS", where we initiate DFS from the Os which are present on the boundary and mark them as visited. When we are done, all regions which cannot be converted will have been marked. Then we do a regular row-wise traversal of the matrix and convert the Os which were not visited (that is, the ones which are completely landlocked) into Xs.
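
A minimal sketch of that boundary DFS, assuming the usual board of 'X'/'O' chars (the temporary '#' marker is just an assumed "reachable from the boundary" flag):

    #include <vector>
    using namespace std;

    void markSafe(vector<vector<char>>& b, int i, int j) {
        if (i < 0 || j < 0 || i >= (int)b.size() || j >= (int)b[0].size() || b[i][j] != 'O') return;
        b[i][j] = '#';                                  // reachable from the boundary -> cannot be flipped
        markSafe(b, i + 1, j); markSafe(b, i - 1, j);
        markSafe(b, i, j + 1); markSafe(b, i, j - 1);
    }

    void solve(vector<vector<char>>& board) {
        int n = board.size(); if (n == 0) return;
        int m = board[0].size();
        for (int i = 0; i < n; i++) { markSafe(board, i, 0); markSafe(board, i, m - 1); }  // left / right edges
        for (int j = 0; j < m; j++) { markSafe(board, 0, j); markSafe(board, n - 1, j); }  // top / bottom edges
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                board[i][j] = (board[i][j] == '#') ? 'O' : 'X';   // landlocked Os become Xs
    }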

________________________________________________________________________________

Problem 129.0
https://leetcode.com/problems/vertical-order-traversal-of-a-binary-tree/description/
Topics: Hash Table, Tree, Depth-First Search, Breadth-First Search, Binary Tree
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1233215911/

Takeaways:
Priority queue used instead of sorting a vector because we are pushing values from scratch, and we know there won't be a large number of overlapping values, hence better time here. Also, coordinate mapping for trees: left child = xCoord - 1, right child = xCoord + 1, and Y always increments by one. We use -minXCoord as an offset to make sure all indices for our ans vector are +ve.

Analysis + Comments:
Would have been extremely easy if not for the condition of pushing elements with the same coordinates in ascending order. If not for this, the best way to solve this would be to keep track of the x coordinate (for root = 0; when we go to the left child then xCoord-1 and for the right xCoord+1). We push the values into the vector with the same xCoord (offset = -minXCoord can be added to make all coordinates positive and prevent negative indexing in vectors). Calculating the min and max x coordinates also helped with creating the 2-D vector of the proper size at the beginning via vec<vec<int>> ans(max-min+1, vec<int>(0)). Due to the sorting requirement in the question, we have to do 2 sets of traversal. For the overlapping case, we have to keep track of the y axis as well (which always increases as we traverse down the tree; also keep in mind that both BFS and DFS can be used here). What I did for the first traversal was create a hashmap from coordinate (pair<int, int>) to a priority queue (min heap). While traversing, I made a pair of the current coordinates and pushed the value at that node into the priority queue present at that pair in the hashmap. When this was done, the hashmap was traversed in order (x from min coord to max coord, y from 0 to max y) in a 2-D matrix style and all elements were popped into the answer vector (the priority queue ensured they popped in the proper order). Again, the offset was used.

________________________________________________________________________________

Problem 130.0
https://leetcode.com/problems/sum-root-to-leaf-numbers/description/
Topics: Tree, Depth-First Search, Binary Tree
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1234574803/

Takeaways:
The regular approach was easy here, but using Morris Traversal and threaded binary trees (threading the right-most node of the left child of a node to the node itself and then un-threading) to traverse the tree in O(2*n) time and O(1) space was something new I learnt.

Analysis + Comments:
First approach was to use DFS to traverse to every leaf alongside a reference variable ans. We pass currentSum (initially 0) and at each node compute newSum = currentSum*10 + (*root).val, because we want to append the node's value at the end of our number. After this appendage, if the node is a leaf then we add newSum to our current ans. If not using a reference variable, while moving downwards we can return the newSum, and each node will add both its left sum and right sum and then return that; obviously if no node exists we return 0. This and a similar BFS approach will be O(n) time and O(n) space. If we want to reduce our space even further, we can use the concept of a threaded binary tree + Morris Traversal, which is a method to traverse a binary tree in O(2n) time (in DFS manner) but O(1) space (just a few extra variables). This basically has 2 cases. The first case is when the left node is null; then we can only move our curr to (*curr).right. If (*curr).left is not null, then we have 2 sub-cases. First we have to visit the rightmost node of (*curr).left (prev = (*curr).left and while (*prev).right != NULL). If we are visiting it for the first time, we will reach that node; now we connect this node to our current (threading, by (*prev).right = curr) and after that we move our curr one step left. When we traverse this section again, we run the loop while ( (*prev).right != NULL && (*prev).right != curr ) to reach the bottom-most right node, do our operations, disconnect the thread and move our curr one step right.
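
A quick sketch of the plain DFS version described first (TreeNode here is the usual LeetCode node, simplified):

    struct TreeNode { int val; TreeNode *left, *right; };

    // Carry the number built so far; a leaf contributes the completed number.
    int dfs(TreeNode* root, int currentSum) {
        if (!root) return 0;
        int newSum = currentSum * 10 + root->val;           // append this node's digit
        if (!root->left && !root->right) return newSum;     // leaf: contribute the full number
        return dfs(root->left, newSum) + dfs(root->right, newSum);
    }

    int sumNumbers(TreeNode* root) { return dfs(root, 0); }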

________________________________________________________________________________

Problem 131.0
https://leetcode.com/problems/binary-tree-right-side-view/description/
Topics: Tree, Depth-First Search, Binary Tree, Breadth-First Search
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1232798584/

Takeaways:
Slight imagination which reduces this to a simple BFS problem, also queue.back() returns the last
element of the queue

Analysis + Comments:
If we were to look at the tree from either the left or the right, we would basically see dots, where each dot would represent a different level. Since we are dealing with "levels", we use BFS. At the start of each BFS level, we push the value at the end of the queue into our ans via queue.back(), and then do the traversal for that level. In case we wanted the left view, we would do queue.front() instead.
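
A small sketch of the queue.back() trick (TreeNode simplified):

    #include <vector>
    #include <queue>
    using namespace std;

    struct TreeNode { int val; TreeNode *left, *right; };

    vector<int> rightSideView(TreeNode* root) {
        vector<int> ans;
        if (!root) return ans;
        queue<TreeNode*> q; q.push(root);
        while (!q.empty()) {
            ans.push_back(q.back()->val);        // rightmost node of the current level
            int sz = q.size();
            while (sz--) {                       // process exactly one level
                TreeNode* cur = q.front(); q.pop();
                if (cur->left) q.push(cur->left);
                if (cur->right) q.push(cur->right);
            }
        }
        return ans;
    }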

________________________________________________________________________________

Problem 132.0
https://leetcode.com/problems/number-of-good-pairs/description/
Topics: Array, Hash Table, Math, Counting
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1233221203/

Takeaways:
nC2 for finding number of pairs when given frequency or adding the freq to answer while traversing
itself (and before incrementing for that iteration).

Analysis + Comments:
Easy but a useful subproblem in other questions. Use a hashmap / freqArray for mapping the frequency of the different elements. When we encounter an element, we add its frequency to our ans and THEN update it, because for example if an element has appeared twice before already and is appearing again, then it can form 2 "good" pairs with those two; we update it just in case it comes again. This method allows for a 1-pass solution. Another approach is to find all the frequencies first, then iterate over the frequencies and use nC2 to calculate the number of pairs. This works because we are choosing any 2 out of n to form a pair (the order is irrelevant here due to the problem statement).
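
The one-pass version, as a sketch (the freq array size assumes the problem's small value range; a hashmap works the same way):

    #include <vector>
    using namespace std;

    int numIdenticalPairs(vector<int>& nums) {
        int freq[101] = {0};      // assumption: values fit in a small range, else use a hashmap
        int ans = 0;
        for (int x : nums) {
            ans += freq[x];       // pairs formed with all previous equal elements
            freq[x]++;            // THEN update the frequency
        }
        return ans;
    }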
________________________________________________________________________________

Problem 133.0
https://leetcode.com/problems/lowest-common-ancestor-of-a-binary-search-tree/description/
Topics: Tree, Depth-First Search, Binary Search Tree, Binary Tree
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1236685932/

Takeaways:
Unlike a regular binary tree, selective recursive calls are made in the direction where we think we will meet our answer (binary search logic). If the root's value lies between p and q, that means the node itself is the LCA and should be returned.

Analysis + Comments:
Soln is similar to #134 binary tree LCA, but here I wanted to use the BST property of searching. Hence I swapped the "main" function parameters in such a way that p is always less than q, which made future case handling easier: if ( (*p).val >= (*q).val ) return LCA(root, q, p). While searching recursively, the same rules as the binary tree LCA were followed, with the difference being that based on the root's value, one out of 3 recursive calls was made. If the root's value was between p and q, that meant q was to its right and p to its left, hence the root itself was the LCA and that was returned. Else, depending on whether our root was greater or smaller than both, recursive calls were made either to the left or to the right. Time and space comp ~O(logn), based on the structure of the tree. "I was following a path and suddenly the path split"
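
An equivalent iterative sketch of the same BST-guided idea (walking toward the side where both p and q lie):

    struct TreeNode { int val; TreeNode *left, *right; };

    TreeNode* lowestCommonAncestor(TreeNode* root, TreeNode* p, TreeNode* q) {
        while (root) {
            if (p->val < root->val && q->val < root->val)       root = root->left;   // both on the left
            else if (p->val > root->val && q->val > root->val)  root = root->right;  // both on the right
            else return root;                                   // split point (or equal to p/q) -> LCA
        }
        return nullptr;
    }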

________________________________________________________________________________

Problem 134.0
https://leetcode.com/problems/lowest-common-ancestor-of-a-binary-tree/
Topics: Tree, Depth-First Search, Binary Tree
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1236675897/

Takeaways:
Best observation is that once a matching element is found, no need to go even below, just return that
node. This makes it such that if the other node is "below" it, then we returned the correct LCA. Else the
parent case will handle it properly.

Analysis + Comments:
Took slightly longer to get the most intuitive solution for this. Basically, for any random node, the possibility is that either the nodes for which we are searching are on different sides, hence that node itself is the LCA, or they are both on one side (found by checking which side is returning NULL and which is not). In the former case we return the current node, and in the latter we return the sub-answer we get. For the base cases, the 1st is returning NULL for NULLs to prevent dereferencing NULL. The second is that whenever we see a node which matches either of the values, we return that node. This works because either the other node is below that node, in which case we returned the correct LCA, or the other node is in the opposite branch, in which case the recursive case of our parent will handle it. Time is O(n) with space being O(n) as well. There is also a brute force for this where we find the paths from the root to both the nodes (and store these paths in vectors). After that, we iterate both these vectors and the LCA is present at the index just before the paths start differing.
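
A sketch of the "return as soon as you see p or q" recursion described above:

    struct TreeNode { int val; TreeNode *left, *right; };

    TreeNode* lowestCommonAncestor(TreeNode* root, TreeNode* p, TreeNode* q) {
        if (root == nullptr || root == p || root == q) return root;   // base cases
        TreeNode* left  = lowestCommonAncestor(root->left,  p, q);
        TreeNode* right = lowestCommonAncestor(root->right, p, q);
        if (left && right) return root;     // p and q are on different sides -> this node is the LCA
        return left ? left : right;         // otherwise propagate whichever side found something
    }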

________________________________________________________________________________

Problem 135.0
https://leetcode.com/problems/number-of-visible-people-in-a-queue/description/
Topics: Array, Stack, Monotonic Stack
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1233314024/

Takeaways:
Think in terms of bottlenecks and what determines what. Also, when using a monotonic stack, updating
the answer for i while popping i out of the stack is not uncommon. But it does not mean that in the same
question you cannot update values while pushing in ~ my solution for this involves doing both.

Analysis + Comments:
We know that a tall person acts as a "bottleneck" and blocks the vision of everyone on their left. Basically, while traversing, if we see a person i with height[i] >= height[i-1], then the person at i-1 can only see the person at i, since their vision is now blocked. Also, for descending order, if height[i+1] < height[i], then everybody on the left of i can only see i, and not i+1. Based on these bottlenecks, we know that we need to maintain some sort of monotonicity (find nearest max/min to left/right or previous/next). Hence we use the monotonic stack. Testing both, decreasing makes the most sense, since if a larger person appears, the smaller ones on the left won't be able to see anybody to its right anyway. While popping to ensure monotonicity, before each pop we do ans[stack.top()]++ because the incoming person has a larger height, so the smaller ones before can see them. When we push, if the stack is not empty we do the same as well.
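
A sketch of that decreasing monotonic stack with both update points (on pop and on push):

    #include <vector>
    #include <stack>
    using namespace std;

    vector<int> canSeePersonsCount(vector<int>& heights) {
        int n = heights.size();
        vector<int> ans(n, 0);
        stack<int> st;                                  // indices, heights decreasing bottom to top
        for (int i = 0; i < n; i++) {
            while (!st.empty() && heights[st.top()] < heights[i]) {
                ans[st.top()]++;                        // st.top() sees person i, then gets blocked
                st.pop();
            }
            if (!st.empty()) ans[st.top()]++;           // the taller person on the left also sees i
            st.push(i);
        }
        return ans;
    }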

________________________________________________________________________________

Problem 136.0
https://leetcode.com/problems/add-one-row-to-tree/description/
Topics: Tree, Depth-First Search, Breadth-First Search
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1233693570/
Takeaways:
Easy question, BFS more natural here. Also, since we want to make linkage changes, just like in LL, we
stop a step before so that we can make appropriate changes.

Analysis + Comments:
For this one, since I was dealing with a layer / level of the tree, BFS felt more natural (even though this problem is entirely possible to solve with DFS). I stopped the BFS traversal just when currDepth == depth-1. Hence we reach the parent nodes (the nodes in our BFS queue) of the depth where we want to work. We create new nodes and link them with the children of the current node appropriately.

________________________________________________________________________________

Problem 137.0
https://leetcode.com/problems/sum-of-total-strength-of-wizards/description/
Topics: Array, Stack, Monotonic Stack, Prefix Sum
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1236665805/

Takeaways:
This question managed to teach me like 5-6 things: 1) Start using [] for arrays, straight up more convenient and readable. (*) can still be used when wanting to dereference objects n stuff. 2) For overlap prevention, take <= when doing from one side and < when doing from the other. 3) Also learnt about returning modulo 1e9+7. Mod for addition and multiplication is easy but subtraction is tricky: ans = (a-b) % mod; if (ans < 0) ans += mod. This is basically the faster version of ans = ((a-b) % mod + mod) % mod. Division is extremely tricky (modular inverse via Euler's / Fermat's theorem). 4) As we already know, subarraySum i->j is prefix[j] - prefix[i-1]. For this, write the cases in such a way that the value is taken as zero when i-1 < 0 to prevent bad dereferences. Also, a prefix of the prefix sum can be built and used in a similar way to find subarray sums of the prefix array. 5) Monotonic stack used for next / prev larger / smaller. To change the direction (left or right), change the direction of traversal in the array; for smaller vs larger, change the stack's nature (monotonic asc or desc). Also, for long questions like this, keep checking if the answer you are getting right now lines up with what you want, just to maintain sanity while troubleshooting.

Analysis + Comments:
The crux of this question is escaping from the natural O(n^2)-subarrays brute force. This can be done here because we know each subarray sum is multiplied by that subarray's minimum value. We have n^2 subarray possibilities but only n min-value possibilities. Hence, we again pull out the "contribution" logic and find all possible subarray sums where i is the minimum, and we do this for every i. Since we know the minimum element is a breakpoint, we find the nearest smaller element to the right and left of i (if invalid, then -1 and n). We then know that all valid subarrays will lie between these 2 indices (exclusive). Then, we cover all subarray possibilities in this range by shifting pointers in such a way that i itself is always included. Again, iterating from i to j has j-i+1 possible steps. We represent each subarray sum in terms of prefix sum differences (for O(1) retrieval of the sums), then do some minimize-Manhattan-distance type rearrangement and find out that we basically need subarray sums of the prefix array itself. Hence we build a prefixPrefixSum array and follow the same steps to find the differences and perform the calculations. Also, whenever some operation is happening, do not forget to apply the modulo to prevent overflow so you are submitting the correct answer. But don't just mindlessly spam it, since the streets are saying that this operator is pretty slow.

________________________________________________________________________________

Problem 138.0
https://leetcode.com/problems/smallest-string-starting-from-leaf/description/
Topics: String, Tree, Depth-First Search, Binary Tree
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1234647907/

Takeaways:
deque<T>::iterator itr = dq.begin() for traversing the deque in proper order (while itr != dq.end(), itr++). Also, string += curr is overloaded as push_back() and is hence O(1) (in C++), whereas string = string + curr is O(n) (where curr is a character, not a string). Also, the time/space complexity analysis is interesting here. Time is worst when the number of leaves is highest (because our comparison function will be called the most), i.e. N/2 leaves, and in this case the height of the tree would be logN. Therefore time comp: O(n/2 * logn + n), where the last n is for the DFS tree traversal (the part where we are not comparing). The worst case space would be O(n) when the tree is completely skewed.

Analysis + Comments:
The important thing is that we can NOT use divide and conquer here. This is because while comparing the strings, if the children's strings are of different lengths, we also have to consider the parent for the lexicographic comparison, which is not possible in a recursive divide-and-conquer type traversal. Hence we follow a backtracking type approach where we pass a string by reference (the current answer) and build it as we traverse the tree. Upon reaching a leaf node we compare it with the ans string and update accordingly. The problem is that we want to compare in leaf-to-root order, hence we would need to reverse the string for each comparison. This can be taxing time-wise (O(log2n) in the best case, O(n) in the worst case, depending on the tree's structure). If only there was a way to push and pop from the front of a string. Oh wait, just use a deque<char>.

________________________________________________________________________________

Problem 139.0
https://leetcode.com/problems/island-perimeter/description/
Topics: Array, Depth-First Search, Breadth-First Search, Matrix
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1235412369/

Takeaways:
Sometimes, in matrix type "graph" problems, other approaches (besides the regular DFS/BFS) can be used for the best solution, like the boundary DFS in surrounded regions, or a normal matrix traversal in this one.

Analysis + Comments:
2 major types of approaches are valid here. The first is to start a regular DFS/BFS traversal whenever we reach a piece of land on the island. The recursive call returns the perimeter contribution for that side (which is summed for all 4 sides and then returned). Since our matrix itself is surrounded by water, whenever we go out of bounds we return 1, because that side is also covered by water; we also return 1 whenever we find water in general. This approach is O(n*m) time but uses extra space due to the recursive call stack. Another approach here is slightly less intuitive but does not use that extra space: a regular traversal over the matrix. Whenever we find a piece of land, we check above, below, left and right of it (if the indices are invalid, we do smallAns++), and if they have water, then for each we do smallAns++ and at the end we do ans += smallAns. This makes it so that whenever a piece of land has water on a side, that contribution is given towards the final ans.
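
A sketch of the second (plain matrix scan) approach: every land cell contributes one unit of perimeter per side that touches water or the boundary.

    #include <vector>
    using namespace std;

    int islandPerimeter(vector<vector<int>>& grid) {
        int n = grid.size(), m = grid[0].size(), ans = 0;
        int di[4] = {1, -1, 0, 0}, dj[4] = {0, 0, 1, -1};
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                if (grid[i][j] != 1) continue;
                for (int d = 0; d < 4; d++) {
                    int ni = i + di[d], nj = j + dj[d];
                    if (ni < 0 || nj < 0 || ni >= n || nj >= m || grid[ni][nj] == 0)
                        ans++;                          // this side is exposed
                }
            }
        return ans;
    }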

________________________________________________________________________________

Problem 140.0
https://leetcode.com/problems/next-greater-element-i/description/
Topics: Array, Hash Table, Stack, Monotonic Stack
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1237213658/

Takeaways:
Monotonic stack helped to find the next greater element to the right (to the right = traversal from right to
left, greater element = stack should be desc). Also, creating an "inverse" array of nums2 proved to be
helpful.

Analysis + Comments:
Honestly, getting the best O(n) solution for this was not "easy". It required multiple mappings and a monotonic stack to get done in linear time. First, a map is made where the values of nums2 are keys and the indices are values (basically, the inverse of the array). Then, the monotonic stack is used to fill a vector which stores, at index i, the next greater element to the right of i in nums2 (default value = -1). Then nums1 is traversed; we find the index we want to query using the map and push that answer into our result.
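
A sketch of the same idea, with the small twist of keying the "next greater" map directly on values (the problem guarantees nums2 has distinct values, so this is equivalent to the index mapping described above):

    #include <vector>
    #include <stack>
    #include <unordered_map>
    using namespace std;

    vector<int> nextGreaterElement(vector<int>& nums1, vector<int>& nums2) {
        unordered_map<int, int> nextGreater;             // value in nums2 -> its next greater value
        stack<int> st;                                   // values, decreasing from bottom to top
        for (int i = (int)nums2.size() - 1; i >= 0; i--) {
            while (!st.empty() && st.top() <= nums2[i]) st.pop();
            nextGreater[nums2[i]] = st.empty() ? -1 : st.top();
            st.push(nums2[i]);
        }
        vector<int> ans;
        for (int x : nums1) ans.push_back(nextGreater[x]);   // nums1 is a subset of nums2
        return ans;
    }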

________________________________________________________________________________

Problem 141.0
https://leetcode.com/problems/count-of-smaller-numbers-after-self/description/
Topics: Array, Binary Search, Divide and Conquer, Binary Indexed Tree, Segment Tree, Merge Sort,
Ordered Set
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1346445471/

Takeaways:
The merge sort type approach is crazy. Basically we know an n^2 soln that won't pass, hence we need to go one step below: nlogn. This comes either from binary search, sorting, segment/Fenwick trees or set/ordered_set. Assume we have 2 sorted halves (in desc order, since we want smaller counts) and we are merging them into one (like in merge sort). While doing this, elements from the right half cannot gain any more answer (the recursion already counted within that half), but the answer for elements in the left half can still increase. To handle this, we iterate over both: if the right element is greater or equal, we insert it first (into merged, like in merge sort); else we insert the left element, and since the right element it was compared against was the largest remaining on that side, all remaining right elements are also smaller, hence we add the remaining size of the right half to the answer of that left element. The problem is that since both sections get sorted, the indices get jumbled; to fix this, before doing all this we push the indices along with the values.

Analysis + Comments:
Tried TOO many approaches and obviously one of them worked. The segment tree approach is self explanatory. The Fenwick tree is similar but needs some tweaking (I still need to practice both of these DS properly). The approach I had in mind was the PBDS ordered set. Since we need to find the number of elements strictly smaller than some nums[i] to its right, if we iterate right to left and add elements to an ordered multiset (comparator = less_equal<int>) and at each step do set.order_of_key(nums[i]) before inserting it, we can find the number of elements currently smaller and hence put that in the answer.

________________________________________________________________________________

Problem 142.0
https://leetcode.com/problems/maximum-subarray-min-product/description/
Topics: Array, Stack, Monotonic Stack, Prefix Sum
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1237310008/

Takeaways:
Think in terms of the contribution by each element. The prefix sum array is GOATED for finding the sum of all elements between 2 given indices in O(1), and the monotonic stack is great for finding those index ranges for the entire array in ~O(2n) when the bottleneck is the smallest / largest element (here, the bottleneck is the nearest just-smaller element).

Analysis + Comments:
My initial intuition was mostly correct. The approach for this is mostly similar to the sum of wizard strengths or the largest rectangle in histogram. Unlike the wizard one, we just need a prefix sum array because we want to find the MAX of all min-products. The min-product is constrained by the smallest element in that subarray. Hence for each element we find these ranges (using a monotonic stack; this can be done in an initial preprocessing step along with the prefix array calculation, all in about O(3N)). Then, the min-product subarray sum can be calculated for an element i in O(1) by subtracting the prefix sums at the right and left ends of the range and then multiplying the result by the element itself. Another approach is similar to the histogram question, where we "stretch back" our range based on pop-outs and then find the prefix sum (same method) and multiply it by the current element.

________________________________________________________________________________

Problem 143.0
https://leetcode.com/problems/find-array-given-subset-sums/description/
Difficulty: Hard

________________________________________________________________________________

Problem 144.0
https://leetcode.com/problems/find-the-sum-of-subsequence-powers/description/
Topics: Array, Dynamic Programming, Sorting
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1243157191/

Takeaways:
Take/not-take recursion for the subsequence was for some reason giving TLE, hence I swapped to the loop version for generating subsequences. Also, sorting as a precomputation step, because we already know that in the end we would have to sort each resultant subsequence anyway to find its power in minimum time. Instead of passing the actual array, just passing the relevant information for the next step is another optimization. Then we memoize this using a hashmap. For hashmap memoization, use the .count() method, because operator[] would otherwise insert a zero (default) value at that key, and since zero can be a legitimate value here, that can give wrong answers. It could also be properly memoized using some precomputation of all the possible minimum differences: since a min diff can only occur between 2 elements, there are n^2 candidate differences and we can brute force over all of them.

Analysis + Comments:
Crazy thought process required for this. First, since we need to find the minimum absolute difference between any two elements of a subsequence (which is defined as its power), we need the best method to do this. The normal method would be n^2, but if we sort the "subsequence", we can find the min absolute diff in a single scan, since sorting brings elements with closer values next to each other. One way to maintain the subsequence would be to pass a current "array" as done in backtracking and, once its size == k, sort it and compute this. But considering that we are sorting all these small subsequences at the end anyway, and the "power" uses absolute differences (hence the order of elements in the subsequence does not matter), how about we sort the array as a preprocessing step, which saves time as we won't have to do any extra sorting. Another optimization is that instead of passing the current array, we just pass the current minimum diff (power) and reduce k. When k == 0, that means our "imaginary" subsequence has been completed and its power is the minimum difference passed in the argument. Subsequences can be created either using the recursive take/not-take or the loop from i = index -> n-1 where we pick every element. The craziest thing here was the 3-D dp, which was done using a hashmap (because one of the dp states is the difference between 2 numbers, which can be any random value and hence cannot be indexed in an array).

________________________________________________________________________________

Problem 145.0
https://leetcode.com/problems/number-of-subarrays-with-bounded-maximum/description/
Topics: Array, Two Pointers
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1237493483/

Takeaways:
When wanting to find between a range, calculate for less than equal to the higher end, then for less
than equal to lower end -1 (to substitute for less than) and then subtract these 2. This works when we
are considering stuff which works according to set theory (like the number of subarrays in this case).

Analysis + Comments:
My initial approach of counting subarrays abiding by the constraint by marking 2 points (the first appearance of an abiding element and the last), then adding to the ans on each valid iteration via ans += last - first + 1, and resetting in case a greater element appears, was not working, and I have no idea as to why. It makes absolute sense in my head but it was straight up not working. Hence I had to go with the trusted approach of counting for <= right, then for <= left - 1, and then subtracting the two (set theory).
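
A sketch of that set-theory counting trick: count subarrays whose max is <= bound, then take the difference.

    #include <vector>
    using namespace std;

    // Number of subarrays whose maximum is <= bound.
    int countMaxAtMost(vector<int>& nums, int bound) {
        int count = 0, len = 0;                 // len = length of the current run of elements <= bound
        for (int x : nums) {
            len = (x <= bound) ? len + 1 : 0;
            count += len;                       // len new subarrays end at this position
        }
        return count;
    }

    int numSubarrayBoundedMax(vector<int>& nums, int left, int right) {
        return countMaxAtMost(nums, right) - countMaxAtMost(nums, left - 1);
    }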

________________________________________________________________________________

Problem 146.0
https://leetcode.com/problems/find-all-groups-of-farmland/description/
Topics: Array, Depth-First Search, Breadth-First Search, Matrix
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1237078994/

Takeaways:
Regular matrix traversal is more optimized than DFS/BFS because extra stack/queue space is not being used. Also, marking in the visited array how far you should skip ahead is something new, and it fit well with the question's constraint that the farmland comes only in rectangular shapes.

Analysis + Comments:
Following the pattern of non-DFS/BFS traversal questions, the best solution for this also does not want a regular DFS/BFS traversal. If we did that, we would start a matrix traversal (row-wise) and then, whenever we find an unvisited land[i][j] == 1, start a DFS traversal and keep track of the min/max i and j, which is useful for finding the top-left and bottom-right corners. But here we are not utilizing the question constraints properly, which state that all farmland groups are guaranteed to be rectangles. Using that, we can solve this with just regular matrix traversal. Once we find a suitable land cell, we traverse it (col-wise, j++ while valid) to find its "length". Then we do the same going down (row-wise, i++ while valid) to find its height, and while doing this we mark the length in the visited array for each cell. At the end, we know we need to push startI, startJ, startI + height-1, startJ + length-1. The changes in visited are important because when we are traversing the matrix and meet a part of this rectangle again, we know how much we should skip ahead to get clear of it.

________________________________________________________________________________

Problem 147.0
https://leetcode.com/problems/recover-the-original-array/description/
Difficulty: Hard

________________________________________________________________________________

Problem 148.0
https://leetcode.com/problems/find-original-array-from-doubled-array/description/
Topics: Array, Hash Table, Greedy, Sorting
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1237586770/

Takeaways:
Start using hashmaps. I know they can feel cheap, but they are the right tool for finding frequencies (for big elements, values with no exploitable pattern, etc.) and for checking whether something has occurred previously or not. When the problem deals with some sort of ordering of values, greedy + sorting helps as well.

Analysis + Comments:
Blud forgot hash maps exist. Basically, the "original" array cannot exist if the changed version has an odd number of elements (straight up not possible since every element gets a doubled copy). Hence return empty in that case. Now for even lengths, let us sort the array first. Since we know all elements are non-negative, the doubled elements are guaranteed to be at least as large as the originals. Firstly, we map the frequency of each element. Then we re-traverse the input array with curr = changed[i]. If the frequency of curr is zero, it has already been consumed (either as someone's double or alongside its own double), so we do nothing. Otherwise, if the frequency of 2*curr in the map is non-zero, we push curr into the ans and decrement the frequency of both (so they do not get counted again). If we find an element whose double does not exist in the map but which itself still has a non-zero count, then it is an outlier: break the loop and return an empty array (since changed[] is not a valid doubled array).

________________________________________________________________________________
Problem 149.0
https://leetcode.com/problems/array-of-doubled-pairs/description/
Topics: Array, Hash Table, Greedy, Sorting
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1238901520/

Takeaways:
In questions like these, try sorting to force some sort of "rule" on the array. This makes the later checks with a hashmap easier. Here, we knew that after sorting, for positives the numbers we encounter first will be the non-doubled ones, and for negatives, the numbers we encounter first will be the doubled ones. Also, before dividing by 2, do the odd-number check for negatives to see if they even can be the x2 of some other number.

Analysis + Comments:
Similar to find original from doubled array. We need to check if the doubled pairs exist or not, hence we need to maintain some sort of frequency map. Also, to impose some ordering, one needs to sort the input array, and then traverse it. If the current element has a non-zero frequency, we first check whether it is positive or negative. If positive (and since the array is sorted, we arrived at the original number before its "doubled version"), we check if its x2 version exists in the map (freq > 0). If not, then return false. For negatives, since a "smaller" negative number has a greater absolute value, we have to do this check by dividing by two (the current element is the 2x version of the -ve number with smaller magnitude); we also need to do the odd-number check first: if it is odd then just return false, since multiplying an int by 2 cannot result in an odd number. If we somehow reach the end of the array, that means the entire array consisted of doubled pairs.

________________________________________________________________________________

Problem 150.0
https://leetcode.com/problems/next-greater-element-ii/description/
Topics: Array, Stack, Monotonic Stack
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1237395008/

Takeaways:
Normal next greater is easy; for the circular version, just assume a virtual copy of the array appended next to it, which is accomplished by iterating from 2n-1 to 0 and using % n for each dereference. The final answers are only stored/updated for i <= n-1 because that is the only part that actually exists.

Analysis + Comments:
Finding the next greater element to the right with the regular method is intuitive to me because of the monotonic stack (shadow casting and stuff), but the circular thing was throwing me for a loop. Tried if-else programming my way through it, but that proved to be bad and hard to debug. The best solution is to assume a copy of the array stuck to the right of our original array; hence the last index is 2*n-1. We traverse our i from the end to 0 and follow the rules for building a monotonically decreasing stack. Since our "virtual array" uses virtual addresses, we redirect each virtual address to the real thing by using % n. The values are only stored in the ans vector when i <= n-1, because that's when the actual values start coming in. The i's are always pushed in though.
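
A sketch of the circular version with the virtual second copy (pushing values rather than indices, which works just as well here):

    #include <vector>
    #include <stack>
    using namespace std;

    vector<int> nextGreaterElements(vector<int>& nums) {
        int n = nums.size();
        vector<int> ans(n, -1);
        stack<int> st;                                  // values, decreasing from bottom to top
        for (int i = 2 * n - 1; i >= 0; i--) {
            int cur = nums[i % n];                      // redirect the virtual index to the real one
            while (!st.empty() && st.top() <= cur) st.pop();
            if (i <= n - 1) ans[i] = st.empty() ? -1 : st.top();   // only real indices store answers
            st.push(cur);
        }
        return ans;
    }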

________________________________________________________________________________

Problem 151.0
https://leetcode.com/problems/next-greater-element-iii/description/
Topics: Math, Two Pointers, String
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1237556197/

Takeaways:
Number into digit array, next permutation on it, then back into a number. Next permutation is done by travelling from the end while the order (read from the back) stays ascending; stop just before it becomes descending (if it never does, the whole number is already the largest permutation, so reverse the entire array). Then re-iterate from the end and mark the first element which is just greater than the element that broke the order; swap these two and then reverse the previously ascending suffix. C++ STL reverse = reverse(start pointer, start pointer + size) (i.e. the second argument points one step past the last element).

Analysis + Comments:
Basically this is just a more tedious version of next permutation. Convert the number into an array (max size 10 digits, keep track of the actual size of the number), do next permutation, then convert the array back into a number. If the result is smaller than the original number or greater than INT_MAX (hence store it in a long long), return -1. Else return the number.
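
Since std::next_permutation already implements the "find pivot, swap, reverse suffix" procedure, the whole thing can be sketched on the digit string directly:

    #include <string>
    #include <algorithm>
    #include <climits>
    using namespace std;

    int nextGreaterElement(int n) {
        string s = to_string(n);
        if (!next_permutation(s.begin(), s.end())) return -1;   // already the largest arrangement
        long long val = stoll(s);                               // long long guards against overflow
        return val > INT_MAX ? -1 : (int)val;
    }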

________________________________________________________________________________

Problem 152.0
https://leetcode.com/problems/next-greater-element-iv/
Topics: Array, Binary Search, Stack, Sorting, Heap (Priority Queue), Monotonic Stack
Difficulty: Hard
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1289261012/

Takeaways:
Setting up a defined interaction between 2 data structures is helpful: here we know we want next greater, so a monotonic stack for the first one is the natural pick. But using a min heap for the second one as a sort of "second life" holding area, and comparing its top against the element being pushed into the stack, was something new. Good intuition for using 2 different data structures: when you need to access something which was previously stored, but on some other ordering basis.

Analysis + Comments:
Cool solution. You know "next greater element" means a monotonic stack for sure. The problem here is that we need to find the second next greater element (the obvious n^2 brute force holds but will TLE). So my intuition was to keep a sort of stack which holds elements without any next greater element yet (2 HP). As soon as a greater element appears, we start sending stuff to some other structure (?) which is 1 HP. The catch is that this second thing should not be a stack, as the "greater" ordering does not hold up there. Basically, we want a DS which has the smallest value on top (a min heap), and while pushing values into our 2 HP stack, we first check against the top of the heap: while the heap's top is smaller than the incoming element, we keep popping from the heap and recording its answer. After that, we transfer stuff from the 2 HP stack to the 1 HP heap if necessary.

________________________________________________________________________________

Problem 153.0
https://leetcode.com/problems/count-subarrays-with-median-k/description/
Difficulty: Hard

________________________________________________________________________________

Problem 154.0
https://leetcode.com/problems/find-if-path-exists-in-graph/description/
Topics: Depth-First Search, Breadth-First Search, Union Find, Graph
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1238112105/

Takeaways:
Disjoint sets are crazy fast when you need to consider questions which involve finding the components of a graph, checking if a path exists or if traversal is possible. After the set is built, it can return the ultimate parent in ~O(1).

Analysis + Comments:
The first common-sense approach was to create a graph (adj list) and then try traversing from the source. If I am able to reach the destination, then source and destination are connected; else not. But this was giving ~1s time. To improve upon this, I learnt and implemented a disjoint set for this graph, and then the ultimate parents of both source and destination were checked (via findParent; all edges were added via union). If they are the same, then yes, these nodes are connected; else no. This gave like 1/5th the original time.
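
A minimal DSU sketch (path compression + union by size) used for that connectivity check:

    #include <vector>
    #include <numeric>
    #include <algorithm>
    using namespace std;

    struct DSU {
        vector<int> parent, sz;
        DSU(int n) : parent(n), sz(n, 1) { iota(parent.begin(), parent.end(), 0); }
        int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }   // path compression
        void unite(int a, int b) {
            a = find(a); b = find(b);
            if (a == b) return;
            if (sz[a] < sz[b]) swap(a, b);      // union by size
            parent[b] = a; sz[a] += sz[b];
        }
    };

    bool validPath(int n, vector<vector<int>>& edges, int source, int destination) {
        DSU dsu(n);
        for (auto& e : edges) dsu.unite(e[0], e[1]);
        return dsu.find(source) == dsu.find(destination);   // same ultimate parent => connected
    }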
________________________________________________________________________________

Problem 155.0
https://leetcode.com/problems/checking-existence-of-edge-length-limited-paths/description/
Topics: Array, Two Pointers, Union Find, Graph, Sorting
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1238201794/

Takeaways:
Whenever dealing with graphs being dynamically built, think DISJOINT SET UNION. The most impressive things about this soln were: 1) Sorting the edges and queries based on edge weights / limits, so we know whether the currently built graph can connect the 2 nodes in the query. 2) Since the answer is sensitive to the original index of the query (before sorting), we push the index at the end of each query to know where to place its answer. 3) The 2-pointer approach to traversing the sorted edges and queries.

Analysis + Comments:
Basically, we need to check that every edge on the path between the nodes in each query is less than the limit specified. FW will not work because it straight up returns only the min distance between any two nodes; you have no idea about the actual edges on the path (since it's dp, plus the constraints don't allow for it anyway). What one can think of is: since the weights for the edges are given in the input edge list, if all edges which are less than some query limit are used to build a graph, then we can check whether the two query nodes are connected with those edges or not. If not, then such a path is not possible. Doing this in a naive way will exhaust too much time, but if we sort both the edge input and the queries by edge weight / limit, we can do something. Of course, we know that the answer we return has to be in the same order as the input queries, hence before the sort we append another element (i) to each query which signifies its index (so after sorting we know where to put its answer). Since this question involves knowing the state of a "dynamic graph" at a given instance of time, the best data structure to maintain the graph is a DSU. We traverse all queries. While the edge index is valid and the edge weight is less than the current query limit, we keep adding the nodes into our union. Once the edge weight exceeds the limit, we check whether, with the current graph, the nodes in the query are connected. If yes (that means the nodes are connected using only edges with cost less than the limit, so every edge on that path is also below the limit), the answer is pushed at the RESPECTIVE index. This question is also somewhat like merge 2 sorted arrays, which is lowkey goated.

________________________________________________________________________________

Problem 156.0
https://leetcode.com/problems/count-the-number-of-special-characters-i/description/
Topics: String, Hash Table
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1238377663/

Takeaways:
Freq array goated as always. No need to create 2 of them, as a single 256-size one is good when dealing with both caps and smalls (no need to convert indices either), and at the end you can just traverse a selected region to get your final ans.

Analysis + Comments:
Basically, a freq array (for all ASCII, [256]) and count the freq of each character. Traverse this from 65 to 90 (checking just the capitals is okay, since we need to see if a character appears in both upper and lowercase). If freqArray[i] && freqArray[i+32], then that letter appears in both upper and lower case and hence we can do ans++.

________________________________________________________________________________

Problem 157.0
https://leetcode.com/problems/count-the-number-of-special-characters-ii/description/
Topics: String, Hash Table
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1238375482/

Takeaways:
Read the question more carefully, don't over-engineer solutions, and think about what exactly best "represents" the problem statement. Here, the first and last occurrence indices were more intuitive for figuring out the solution than the frequency.

Analysis + Comments:
A slightly more specific version of the above question, but it requires a change in logic. First, read the question properly: "letters" means unique letters, while "characters" means individual (possibly repeated) occurrences in the string. Because of this misunderstanding, I lost a ton of time during the contest and my original approach was unnecessarily intricate. Properly thought about, we need 2 things: whether a letter occurs in both lower and upper case, and whether its last lowercase occurrence comes before its first uppercase occurrence. To map the first, we can set a default value of -1, and for the second, we update indices: for lowercase, each occurrence updates the index, and for uppercase, only the first occurrence does (2 index arrays are used, one for lowercase and one for uppercase). Once these indices are mapped, the arrays are traversed (i corresponds to c in one and C in the other). If both values are greater than or equal to zero, then the letter exists in both capital and small form. Then we check if the value in the lowercase array is less than the value in the uppercase array. If yes, then the last occurrence of c is before the first occurrence of C, and hence every lowercase occurrence of that character happens before the first uppercase occurrence. Ans is incremented.

________________________________________________________________________________
Problem 158.0
https://leetcode.com/problems/minimum-number-of-operations-to-satisfy-conditions/description/
Topics: Array, Dynamic Programming, Matrix
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1239229724/

Takeaways:
Try thinking of the absolute brute force solution first. In a matrix question like this, if it gets hard to visualize, squash the matrix into a 1-D array, try thinking of solutions for that, and then extend that solution to 2-D.

Analysis + Comments:
When in doubt whether a problem can be solved greedily or not, just think of ways to prove the greedy wrong. It is always easier to prove a greedy wrong than to prove it correct, and coming up with counterexamples is important as well. One way of solving this: for a column j, if we try replacing all of its cells with every possible digit (0 - 9), we can find the total cost we will incur. Now, given the digit we are replacing the current column with, the next column should be replaced with some other digit (to make sure the second property holds). Hence, when recursing for the next col, we make sure the current digit does not get passed again. We can see the time comp for the plain recursion is O(9^n), which can skyrocket real quick, hence we need to memoize. Each recursive call depends on 2 things: the digit with which to replace all cells, and the column itself. Hence our dp matrix will be a 2-D matrix of the type dp[col][digit]. Another important thing is that the main function will also run a loop over all possible digits for the first column for the recursive call.

________________________________________________________________________________

Problem 159.0
https://leetcode.com/problems/find-edges-in-shortest-paths/description/
Topics: Depth-First Search, Breadth-First Search, Graph, Priority Queue, Shortest Path
Difficulty: Hard
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1239768557/

Takeaways:
Do not put unnecessary continues in Dijkstra's. Check for visited after popping the node and, if visited, continue; if not, mark it as visited, do the appropriate work and push its edges into toVisit if their endpoints are NOT visited. Also, Dijkstra's does indeed find the distance to EVERY node, which is used here, AND Dijkstra's can be re-run from different sources (which is used here as well). A path can be divided into generic sections around an edge: source to u, and v to destination (we already know u to v is an edge of weight w).

Analysis + Comments:
An extremely unique application of the SSSP (single source shortest path) algo: Dijkstra's. On first look, it appears that FW would be used here since we need to do something with an edge being on a shortest path or not, but never forget that the information about the actual path is lost in Floyd-Warshall, and the time complexity here does not allow for FW anyway. Plus, the question asks us to consider all shortest paths from node 0 to n-1 (multiple shortest paths can exist with the same distance). This reinforces Dijkstra's here (which abides by the time condition as well). After creating the adjList (Dijkstra's = adjList, FW = adjMatrix), think of some arbitrary path 0 to n-1 and an edge u, v with weight w. If that edge is part of a shortest path, then for some shortest distance 0 to u (x) and v to n-1 (y), x + w + y = SHORTEST DIST. Also, do NOT forget that the split can be 0 to v and u to n-1 as well. If w fulfills either condition then it is indeed part of SOME shortest path. Hence, we need the shortest paths from 0 to every node and from n-1 to every node (found via Dijkstra's, by changing the source) and stored in 2 arrays. This precomputation can then be pulled out when we need the distances while traversing the edges for the answer vector. The most important thing in this question was coming up with the two-direction approach for Dijkstra's and exploiting its SSSP nature.
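
A sketch of just the final per-edge check (dist0, distN and shortest are assumed names for the results of the two Dijkstra runs, from node 0 and from node n-1 respectively; unreachable nodes hold a large INF value):

    #include <vector>
    using namespace std;

    vector<bool> markShortestPathEdges(vector<vector<int>>& edges,
                                       vector<long long>& dist0,
                                       vector<long long>& distN,
                                       long long shortest) {
        vector<bool> ans;
        for (auto& e : edges) {
            int u = e[0], v = e[1]; long long w = e[2];
            bool onPath = (dist0[u] + w + distN[v] == shortest) ||   // path crosses the edge as u -> v
                          (dist0[v] + w + distN[u] == shortest);     // or as v -> u
            ans.push_back(onPath);
        }
        return ans;
    }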

________________________________________________________________________________

Problem 160.0
https://leetcode.com/problems/open-the-lock/description/
Topics: Array, Hash Table, String, Breadth-First Search
Difficulty: Medium
Status: Completed, Revisit
Solution Link: https://leetcode.com/submissions/detail/1239333185/

Takeaways:
The fact that trees/BFS themselves are not that complex but their applications are so well disguised in questions like these is genuinely scary. Be on the lookout for applications of stuff like this. Trees / tree traversal can be used in stuff like combination / permutation modelling and the like.

Analysis + Comments:
The fact that this is a trees / graphs question in disguise is genuinely impressive. The tree nature comes from the fact that our starting combination (0000) can move into 8 other combinations (back and forward for each slot), and each of those 8 can move into 8 others. Since we can move 0000 to 0001 and then move 0001 back to 0000, we know there will be overlapping subproblems, and we do not need to go to 0000 again since we need the SHORTEST path. Hence we maintain a visited hashmap. Since visited is for knowing where not to go (again), we can mark the deadend nodes as visited as well. Another important thing is how to traverse this tree. Since each "move" is marked by a level, level order traversal makes sense here. If q.front() == target, return the level; else continue the traversal: find all combinations possible from the current one (by moving each wheel in both directions) and push them into q if valid. Once a level is completed, increment the level. The time complexity is actually bounded by the number of possible combinations and not O(8^n), because we visit each combination only once.

________________________________________________________________________________

Problem 161.0
https://leetcode.com/problems/balanced-binary-tree/description/
Topics: Tree, Depth-First Search, Binary Tree
Difficulty: Easy
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1238865335/

Takeaways:
Return pairs / objects carrying multiple pieces of information to prevent repetitive calls (changed O(n^2) to O(n)).

Analysis + Comments:
Recursive exploitation of the tree structure again. For a binary tree to be balanced, both its left and right subtrees should be balanced themselves first. Then, the difference in heights of left and right should not be greater than 1. To prevent repetitive calls for checking the height, a pair (bool, int) is returned which contains the relevant sub-information. If all checks pass, then we return true + the height of the current tree.
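
A sketch of that single-pass check returning (isBalanced, height) pairs:

    #include <algorithm>
    #include <cstdlib>
    #include <utility>
    using namespace std;

    struct TreeNode { int val; TreeNode *left, *right; };

    pair<bool, int> check(TreeNode* root) {
        if (!root) return {true, 0};
        auto L = check(root->left);
        auto R = check(root->right);
        bool ok = L.first && R.first && abs(L.second - R.second) <= 1;
        return {ok, 1 + max(L.second, R.second)};        // height is never recomputed
    }

    bool isBalanced(TreeNode* root) { return check(root).first; }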

________________________________________________________________________________

Problem 162.0
https://leetcode.com/problems/validate-binary-search-tree/
Topics: Tree, Depth-First Search, Binary Search Tree, Binary Tree
Difficulty: Medium
Status: Completed
Solution Link: https://leetcode.com/submissions/detail/1240095627/

Takeaways:
The BST's main properties are that its inorder traversal is sorted, and that the root is greater than the largest value in its left subtree (the right-most node there) and smaller than the smallest value in its right subtree (the left-most node there).

Analysis + Comments:
There are multiple ways of checking whether a tree is a BST or not. The first method comes from an inherent attribute of the BST, which is that the inorder traversal results in a sorted array. Hence at the end, the array can be checked, and if it is sorted then the BST is validated. The other method is to first check whether the left and right subtrees themselves are BSTs or not. If yes, then we need to check that the root node's value is greater than the largest value in the left subtree and smaller than the smallest value in the right subtree. To prevent unnecessary re-traversal like in other tree problems, we can return pairs or something to save time. Another method, which involves "going down", is to set intervals. Initially these will be -infinity to +infinity, but as we traverse leftwards, we tighten the right end of the interval, and as we traverse rightwards, we tighten the left end. The node must lie within this interval, else the tree is not valid.
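
A sketch of the interval (min/max bound) method; long long bounds sidestep edge cases at INT_MIN / INT_MAX:

    #include <climits>

    struct TreeNode { int val; TreeNode *left, *right; };

    bool valid(TreeNode* root, long long low, long long high) {
        if (!root) return true;
        if (root->val <= low || root->val >= high) return false;
        return valid(root->left, low, root->val) &&      // going left tightens the upper bound
               valid(root->right, root->val, high);      // going right tightens the lower bound
    }

    bool isValidBST(TreeNode* root) { return valid(root, LLONG_MIN, LLONG_MAX); }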

________________________________________________________________________________

Problem 163.0
https://leetcode.com/problems/merge-bsts-to-create-single-bst/description/
Difficulty: Hard

________________________________________________________________________________

Problem 164.0
https://leetcode.com/problems/create-components-with-same-value/description/
Difficulty: Hard

________________________________________________________________________________

Problem 165.0
https://leetcode.com/problems/minimum-height-trees/description/
Topics: Depth-