Date: September 28 2021
Summary: An overview of asymptotic notation and time complexity
Keywords: #asymptotic #notation #complexity #bigo #masters #archive
The complexity of an algorithm is often calculated by counting its primitive operations (a worked tally is sketched after the list below)
Primitive operations:
- Basic memory and reference management
- Simple comparisons
- Basic arithmetic
  - Addition
  - Subtraction
  - Multiplication
  - Division
  - Modulo
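To make "counting primitive operations" concrete, here is a minimal sketch (the function name `count_primitives` is mine, not from the note) that tallies the additions and assignments performed while summing an array:

```julia
function count_primitives(arr)
    ops = 0
    total = 0       # one assignment
    ops += 1
    for x in arr
        total += x  # one addition + one assignment per element
        ops += 2
    end
    return total, ops
end

total, ops = count_primitives([1, 2, 3, 4, 5])
println("sum = $total, primitive operations ≈ $ops")  # ops grows linearly with N
```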
Big-O (denoted as $O$) refers to the tightest worst-case analysis with regard to either
- Execution time
- Space used (in memory or disk)
It always assumes the maximum number of iterations
Constants get dropped from the notation, as they are negligible compared to growth towards infinity
Example: $O(2N) \rightarrow O(N)$
Lower-order terms get dropped from the notation for the same reason as before
Example: $O(N^2 + N) \rightarrow O(N^2)$
Dropping constants is theoretically justified because constants do not grow towards infinity
In practice, however, these constants can noticeably affect how an algorithm performs
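A minimal sketch of this point (the function names are my own): both functions below are $O(N)$, but the second makes two passes over the data, so its constant factor is roughly twice as large:

```julia
one_pass(arr) = sum(arr)             # ≈ N additions → O(N)
two_pass(arr) = sum(arr) + sum(arr)  # ≈ 2N additions → still O(N)

arr = rand(10_000_000)
one_pass(arr); two_pass(arr)  # warm up the JIT before timing
@time one_pass(arr)  # roughly half the runtime of...
@time two_pass(arr)  # ...the two-pass version, despite identical Big-O
```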
$O(1)$: Performance does not scale with input size
An example is having a list and returning the first item in the list:

```julia
mylist = [1:5...]  # splats the range into the vector [1, 2, 3, 4, 5]
first(mylist)      # takes the same time no matter how long mylist is
```
$O(N)$: Performance scales linearly with input size
An example is summing all elements in an array:

```julia
mylist = [1:5...]
summed_values = sum(mylist)  # visits every element once
```
$O(\log N)$: Performance scales logarithmically with input size
The base of the logarithm doesn't matter due to the change-of-base identity, $\log_a N = \frac{\log_b N}{\log_b a}$, since different bases differ only by a constant factor
It can intuitively be thought of as the running time being proportional to the number of times the input can be cut down (Stack Overflow Explanation)
Another way to think about it is that the time ($t$) goes up linearly while the input size ($N$) goes up exponentially (Stack Overflow Explanation)
An example of this behavior is here:
Time ($t$) | $N$
---|---
0 | 1
1 | 10
2 | 100
3 | 1000
4 | 10000
5 | 100000
Where the proportion can be stated as: $t \propto \log_{10} N$
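A quick sketch checking this proportion against the table values:

```julia
# For each table row, recover t from N via the base-10 logarithm
for t in 0:5
    N = 10^t
    println("t = $t, N = $N, log10(N) = $(round(Int, log10(N)))")
end
```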
The following exercises can be found here
What is the time and space complexity of:
```julia
a = 0
i = 0
while i < N        # first loop: runs N times
    a = a + rand() # rand() draws a single random Float64
    i += 1
end

b = 0
j = 0
while j < M        # second loop: runs M times
    b = b + rand()
    j += 1
end
```
Answer: $O(N + M)$ time, $O(1)$ space
Explanation: Since we measure complexity by the worst-case count of primitive operations, there could be $N$ and $M$ loop iterations executed in this code, for $O(N + M)$ total. As no additional space is utilized as the input grows, the space complexity is constant: only a fixed number of variables are defined regardless of $N$ and $M$.
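As a sanity check, here is a hedged sketch (instrumentation mine, not part of the exercise) that counts the loop iterations directly:

```julia
function exercise1_ops(N, M)
    iterations = 0
    a, i = 0.0, 0
    while i < N
        a += rand()
        i += 1
        iterations += 1
    end
    b, j = 0.0, 0
    while j < M
        b += rand()
        j += 1
        iterations += 1
    end
    return iterations  # always exactly N + M
end

exercise1_ops(100, 50)  # returns 150
```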
What is the time complexity of:
```julia
a = 0
i = 0
while i < N
    j = N          # j resets for every outer iteration
    while j > i    # inner loop: runs N - i times
        a = a + i + j
        j -= 1
    end
    i += 1
end
```
Answer: $O(N^2)$
Explanation: Both loops are dependent on $N$. The outer loop iterates $N$ times, and for each value of $i$ the inner loop iterates $N - i$ times, so the total work is $N + (N - 1) + \dots + 1 = \frac{N(N+1)}{2}$, therefore resulting in a time complexity of $O(N^2)$
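A small counting sketch (my own, for verification) that tallies the inner-loop iterations:

```julia
function exercise2_ops(N)
    iterations = 0
    i = 0
    while i < N
        j = N
        while j > i
            j -= 1
            iterations += 1
        end
        i += 1
    end
    return iterations  # N(N + 1) / 2
end

exercise2_ops(10)  # returns 55 == 10 * 11 ÷ 2
```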
What is the time complexity of the following code:
```julia
i = N ÷ 2          # integer division; outer loop runs from N/2 to N
k = 0
while i <= N
    j = 2
    while j <= N   # j doubles each pass → about log2(N) iterations
        k = k + N ÷ 2
        j *= 2
    end
    i += 1
end
```
Answer: $O(N \log N)$
Explanation: As $j$ doubles on every pass, it grows exponentially, so the inner loop only runs about $\log_2 N$ times. Furthermore, there are $\frac{N}{2} + 1$ iterations of the outer loop, such that the total time complexity would be $O(\frac{N}{2} \log N)$, which is then simplified to $O(N \log N)$. I got this wrong initially because I did not account for the outer loop contributing a time complexity of $O(N)$.
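For verification, a hedged counting sketch (instrumentation mine):

```julia
function exercise3_ops(N)
    iterations = 0
    i = N ÷ 2
    while i <= N
        j = 2
        while j <= N
            j *= 2
            iterations += 1
        end
        i += 1
    end
    return iterations  # ≈ (N / 2) * log2(N)
end

exercise3_ops(64)  # 33 outer iterations * 6 inner iterations = 198
```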
What does it mean when we say that algorithm X is asymptotically more efficient than Y?
Answer: Algorithm X will always be better for large inputs
Explanation: When we consider an asymptote in terms of an algorithm, we also consider that algorithm's "growth" as inputs get larger. Meaning, if an algorithm is efficient at an asymptote, then by the nature of asymptotic analysis it is "good" in the worst case for sufficiently large inputs.
Addendum: I got this wrong when thinking about asymptotic notation, as I failed to consider growth. I thought X would be better for all inputs to that algorithm, but that would not necessarily be so for smaller inputs to X.
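To make that concrete, here is a sketch with hypothetical constant factors: suppose X costs $100N$ steps and Y costs $N^2$ steps. Y wins on small inputs, but X wins once $N$ grows past the crossover point:

```julia
cost_x(N) = 100 * N  # asymptotically better: O(N)
cost_y(N) = N^2      # asymptotically worse: O(N^2)

for N in (10, 100, 1_000)
    println("N = $N: X = $(cost_x(N)) steps, Y = $(cost_y(N)) steps")
end
# N = 10:   X = 1000,   Y = 100       (Y is cheaper)
# N = 100:  X = 10000,  Y = 10000     (break-even)
# N = 1000: X = 100000, Y = 1000000   (X is cheaper)
```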
What is the time complexity of the following code:
```julia
a = 0
i = N
while i > 0
    a += i
    i ÷= 2     # integer halving, so the loop runs about log2(N) times
end
```
Answer: $O(\log N)$
Explanation: The loop variable $i$ is halved on every iteration, so the loop body only executes about $\log_2 N$ times; the running time is therefore directly proportional to $\log N$.
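A quick sketch (helper name mine) counting the halvings:

```julia
function halving_steps(N)
    steps = 0
    i = N
    while i > 0
        i ÷= 2
        steps += 1
    end
    return steps  # floor(log2(N)) + 1
end

halving_steps(16)  # 5 steps: 16 → 8 → 4 → 2 → 1 → 0
```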
What best describes the useful criterion for comparing the efficiency of algorithms?
Answer: Time and Memory
Explanation: Time dictates how long a program takes to run, and memory dictates how much data the program can handle while running
How is time complexity measured?
Answer: By counting the number of primitive operations in an algorithm on a given input size.
Explanation: Each primitive operation is generally assumed to cost "one" unit of time.
What will be the time complexity of the following code?
<!-- NOTE: SKIPPING FOR NOW -->

```julia
i = 0
while i < N
    i *= k
end
```
What will be the time complexity of the following code?
<!-- NOTE: SKIPPING FOR NOW -->

```julia
value = 0
i = 0
j = 0
while i < n
    while j < i
        value += 1
        j += 1
    end
    i += 1
end
```
Algorithm A and B have a worst-case running time of $O(N \log N)$ and $O(N)$, respectively. Therefore, algorithm B always runs faster than algorithm A.
Answer: False
Explanation: Worst-case bounds only describe growth and hide constant factors, so algorithm A could still be faster on smaller inputs as compared to algorithm B
Zelko, Jacob. Asymptotic Notation. https://jacobzelko.com/09242021040445-asymptotic-notation. September 28 2021.