the cedar ledge

Asymptotic Notation

Date: September 28 2021

Summary: An overview of asymptotic notation and time complexity

Keywords: #asymptotic #notation #complexity #bigo #masters #archive



Table of Contents

    1. Big-O Notation
      1. Overview
      2. Conventions
      3. Constant Complexity: \mathcal{O}(1)
      4. Linear Complexity: \mathcal{O}(n)
      5. Logarithmic Complexity: \mathcal{O}(\log(n))
      6. Exercises
    2. Amortized Analysis
  1. How To Cite
  2. References:
  3. Discussion:
Time complexity is measured in primitive operations, which include:

- Basic memory and reference management
- Simple comparisons
- Basic arithmetic
	- Addition
	- Subtraction
	- Multiplication
	- Division
	- Modulo
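
Each of these primitives is conventionally assumed to cost one unit of time. A minimal Julia sketch (the variable `x` is just for illustration):

```julia
# each statement below is a single primitive operation,
# conventionally counted as one unit of time
x = 42        # memory assignment / reference
x == 42       # simple comparison
x + 1         # addition
x - 1         # subtraction
x * 2         # multiplication
x ÷ 2         # (integer) division
x % 5         # modulo
```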

Big-O Notation


Big-O notation describes how an algorithm's resource requirements grow with the size of its input. Two resources are typically measured:

- Execution time
- Space used (in memory or disk)



Conventions

Constant multiples are dropped:

\mathcal{O}(5n) \rightarrow \mathcal{O}(n)


Lower-order terms are dominated by the leading term:

\mathcal{O}(n^{2} + 1000n - 3) \rightarrow \mathcal{O}(n^{2})
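
A quick numerical check of this simplification: as n grows, the lower-order terms of n² + 1000n − 3 become negligible next to n² (the function name `f` is just for this sketch):

```julia
f(n) = n^2 + 1000n - 3

# the ratio f(n) / n^2 approaches 1 as n grows,
# so only the dominant n^2 term matters asymptotically
for n in (10^2, 10^4, 10^6)
    println(n, " => ", f(n) / n^2)
end
```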

Constant Complexity: \mathcal{O}(1)

mylist = [1:5...]        # [1, 2, 3, 4, 5]
first_value = mylist[1]  # indexing takes constant time regardless of length

Linear Complexity: \mathcal{O}(n)

mylist = [1:5...]
summed_values = sum(mylist)  # visits each of the n elements once
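
To make the n additions behind `sum` explicit, here is a hand-rolled version (`mysum` is a name invented for this sketch):

```julia
# an explicit O(n) sum: one addition per element
function mysum(xs)
    total = 0
    for x in xs
        total += x   # executed once for each of the n elements
    end
    return total
end

mysum([1:5...])  # 15, after 5 additions
```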

Logarithmic Complexity: \mathcal{O}(\log(n))

\log_{m}(n) = \frac{\log_{2}(n)}{\log_{2}(m)} = C\log_{2}(n) \rightarrow \mathcal{O}(\log_{m}(n)) \rightarrow \mathcal{O}(\log(n))
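
The change-of-base identity can be checked numerically (the values of n and m here are chosen arbitrarily):

```julia
n, m = 1024, 8

# log base m of n equals log2(n) / log2(m): a constant multiple
# of log2(n), which is why the base is dropped in O(log(n))
direct = log(m, n)
via_base_2 = log(2, n) / log(2, m)
println(direct, " ≈ ", via_base_2)
```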

An example of this behavior is here:

(Figure omitted: time t versus input size n.)

Where the proportion can be stated as:

f(n) = \log_{10}(n^{t})


Exercises

The following exercises can be found here

  1. What is the time and space complexity of:

a = 0
i = 0

while i < N
	a = a + rand()
	i += 1
end

b = 0
j = 0

while j < M
	b = b + rand()
	j += 1
end

Answer: \mathcal{O}(N + M), \mathcal{O}(1)

Explanation: Since complexity is measured by the worst-case number of primitive operations executed, the two loops perform N and M iterations respectively, giving \mathcal{O}(N + M) time. Space complexity is constant because only a fixed number of variables are defined, regardless of input size.
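
One way to convince yourself of the \mathcal{O}(N + M) bound is to instrument the loops with a counter (`count_iters` is a helper invented for this sketch):

```julia
# count how many loop iterations the exercise's code performs
function count_iters(N, M)
    c = 0
    i = 0
    while i < N
        c += 1
        i += 1
    end
    j = 0
    while j < M
        c += 1
        j += 1
    end
    return c
end

count_iters(100, 50)  # 150 iterations: exactly N + M
```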

  2. What is the time complexity of:

a = 0
i = 0

while i < N
	j = N
	while j > i
		a = a + i + j
		j -= 1
	end
	i += 1
end

Answer: \mathcal{O}(N \cdot N)

Explanation: Both loops depend on N: for each of the N iterations of the outer loop, the inner loop runs up to N times, resulting in a time complexity of \mathcal{O}(N \cdot N).
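
Instrumenting the nested loops (with j reset to N on each outer pass) shows the quadratic growth; `count_q2` is a name invented for this sketch:

```julia
function count_q2(N)
    c = 0
    i = 0
    while i < N
        j = N
        while j > i     # inner loop shrinks as i grows
            c += 1
            j -= 1
        end
        i += 1
    end
    return c
end

count_q2(10)  # 55 = 10 + 9 + ... + 1, i.e. N(N + 1) / 2, which is O(N^2)
```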

  3. What is the time complexity of the following code:

i = N ÷ 2
k = 0

while i <= N
	j = 2
	while j <= N
		k = k + N ÷ 2
		j *= 2
	end
	i += 1
end

Answer: \mathcal{O}(n \cdot \log(n))

Explanation: The outer loop runs from \frac{n}{2} to n, contributing about \frac{n}{2} iterations. The inner loop doubles j on each pass, contributing about \log_{2}(n) iterations. The total time complexity is therefore \mathcal{O}(\frac{n}{2} \cdot \log{n}), which simplifies to \mathcal{O}(n \cdot \log(n)). I got this wrong initially because I did not account for the outer loop contributing a factor of \frac{n}{2}.
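
Counting iterations directly (`count_q3` is invented for this sketch) shows roughly (n / 2) · log2(n) steps:

```julia
function count_q3(N)
    c = 0
    i = N ÷ 2
    while i <= N            # about N / 2 outer iterations
        j = 2
        while j <= N        # about log2(N) inner iterations
            c += 1
            j *= 2
        end
        i += 1
    end
    return c
end

count_q3(16)  # 36 = 9 outer iterations × 4 inner iterations
```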

  4. What does it mean when we say that algorithm X is asymptotically more efficient than Y?

Answer: Algorithm X will always be better for large inputs

Explanation: Asymptotic comparison concerns an algorithm's growth as the input size increases. If X is asymptotically more efficient than Y, then beyond some input size X performs fewer operations than Y, so X is the better choice for large inputs.

Addendum: I got this wrong when thinking about asymptotic notation as I failed to consider growth. I thought X would be better for all inputs, but Y may still be faster on smaller inputs.
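
A sketch with hypothetical cost functions makes the crossover concrete: here X is linear with a large constant factor and Y is quadratic with none (the names `cost_x` and `cost_y` are invented for illustration):

```julia
cost_x(n) = 100n    # linear, but with a large constant factor
cost_y(n) = n^2     # quadratic

cost_y(10) < cost_x(10)        # Y is cheaper on a small input
cost_x(1_000) < cost_y(1_000)  # X wins once inputs are large
```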

  5. What is the time complexity of the following code:

a = 0
i = N

while i > 0
	a += i
	i ÷= 2
end

Answer: \mathcal{O}(\log{n})

Explanation: i is halved on every iteration, so the loop body executes about \log_{2}(n) times before i reaches 0.
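
Counting the halving steps directly (`count_q5` is a name invented for this sketch):

```julia
function count_q5(N)
    c = 0
    i = N
    while i > 0
        c += 1
        i ÷= 2      # integer halving
    end
    return c
end

count_q5(1024)  # 11 iterations ≈ log2(1024) + 1
```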

  6. What best describes the useful criterion for comparing the efficiency of algorithms?

Answer: Time and Memory

Explanation: Time dictates how long a program takes to evaluate, and memory dictates how much space it needs while evaluating.

  7. How is time complexity measured?

Answer: By counting the number of primitive operations in an algorithm on a given input size.

Explanation: Each primitive operation is generally assumed to evaluate at the cost of "one" for each operation.

  8. What will be the time complexity of the following code?


i = 1
while i < N
	i *= k
end

Answer: \mathcal{O}(\log_{k}{N})

Explanation: i is multiplied by k on each iteration, so the loop executes roughly \log_{k}(N) times. Note that i must start above 0; starting at 0 would loop forever since 0 * k = 0.
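
A counting version of this loop, with i started at 1 so it terminates (`count_q8` is a name invented for this sketch):

```julia
function count_q8(N, k)
    c = 0
    i = 1              # starting at 0 would never grow, since 0 * k == 0
    while i < N
        i *= k
        c += 1
    end
    return c
end

count_q8(81, 3)  # 4 iterations: 1 → 3 → 9 → 27 → 81, i.e. log3(81)
```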
  9. What will be the time complexity of the following code?


value = 0
i = 0

while i < n
	j = 0
	while j < i
		value += 1
		j += 1
	end
	i += 1
end

Answer: \mathcal{O}(n^{2})

Explanation: For each value of i, the inner loop runs i times, giving a total of 0 + 1 + \dots + (n - 1) = \frac{n(n - 1)}{2} iterations, which is \mathcal{O}(n^{2}).
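
Counting the work of the nested loops, with j reset on each outer pass (`count_q9` is a name invented for this sketch):

```julia
function count_q9(n)
    c = 0
    i = 0
    while i < n
        j = 0          # j reset for every outer iteration
        while j < i
            c += 1
            j += 1
        end
        i += 1
    end
    return c
end

count_q9(10)  # 45 = 0 + 1 + ... + 9, roughly n^2 / 2
```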
  10. Algorithms A and B have worst-case running times of \mathcal{O}(n) and \mathcal{O}(\log{n}), respectively. Therefore, algorithm B always runs faster than algorithm A.

Answer: False

Explanation: Asymptotic bounds ignore constant factors and only take hold for sufficiently large inputs, so algorithm A could still be faster than algorithm B on smaller inputs.

Amortized Analysis

See note here for details

How To Cite

Zelko, Jacob. Asymptotic Notation. September 28 2021.



CC BY-SA 4.0 Jacob Zelko. Last modified: May 19, 2024. Website built with Franklin.jl and the Julia programming language.