
Big-O notation: time complexity

1. Big-O notation: time complexity

Hello and welcome to Optimizing Code in Java! I'm Pavlos and I'll be your instructor for this course. I am a lead engineer at Wealthyhood, a fintech startup with tens of thousands of active users. Before that, I worked as a Senior Java Engineer at Apple, building APIs serving thousands of requests per second.

2. What is time complexity?

We'll start with one of the foundations for understanding code performance: Time Complexity and Big-O Notation. Time complexity measures how the runtime of some code grows as the input size increases. It helps us answer questions like "What will happen to my application's performance when my data grows 10x?" Note that time complexity does not measure absolute execution time, since that can vary from machine to machine.

3. Big-O notation

We use Big-O notation to describe the worst-case time complexity of an algorithm. To understand how Big-O notation works, let's look at some common complexity classes we'll encounter. O(1) means constant time - the operation takes the same time regardless of input size.

4. Understanding O(1)

To understand O(1), let's look at how `ArrayList`'s `get` method is implemented in Java. Under the hood, `ArrayList` is really just an array. When you call `get(5)`, it accesses the array at index 5 using array notation. This is why `get()` operations are O(1) or constant time - accessing an array element by index takes the same amount of time regardless of the array's size. Whether our `ArrayList` has 10 elements or 10 million, accessing a specific index is equally fast.
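To make this concrete, here is a minimal sketch of an array-backed list's `get()`, modeled on `java.util.ArrayList`. The real JDK source adds generics, growth logic, and richer error messages; the class and field names here are illustrative.

```java
// Minimal sketch of an array-backed list, modeled on java.util.ArrayList.
// Names are illustrative; the actual JDK implementation differs in detail.
class SimpleArrayList {
    private Object[] elementData;
    private int size;

    SimpleArrayList(int capacity) {
        elementData = new Object[capacity];
    }

    void add(Object e) {
        elementData[size++] = e; // growth logic omitted for brevity
    }

    Object get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("Index: " + index);
        }
        // A single array access: O(1), regardless of how many elements we hold.
        return elementData[index];
    }

    int size() {
        return size;
    }
}
```

Because `get()` is just one bounds check plus one array access, its cost does not depend on `size` at all.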

5. Big-O notation

O(n) is linear time - the runtime grows in direct proportion to the input size. Let's look at another example to understand this better.

6. Understanding O(n)

Keeping our `ArrayList` example, let's look at how the `contains()` method is implemented. When we call it, `ArrayList` needs to check if the element exists in the list. Looking at the implementation, `contains()` calls `indexOf()`, which searches through the entire array. In the worst case, it needs to check every element before determining if the object "o" is in the list. If our `ArrayList` has `n` elements, this operation might require up to `n` comparisons. That's why we classify it as `O(n)` or linear time - the time required grows in direct proportion to the size of our list.
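The delegation described above can be sketched like this. This is a simplified version of the pattern, not the exact JDK source; the names are illustrative.

```java
// Sketch of how ArrayList's contains() delegates to indexOf(), which
// scans the backing array front to back. Simplified from the JDK pattern.
class LinearSearchList {
    private final Object[] elementData;
    private final int size;

    LinearSearchList(Object[] elements) {
        this.elementData = elements;
        this.size = elements.length;
    }

    boolean contains(Object o) {
        return indexOf(o) >= 0;
    }

    int indexOf(Object o) {
        // Worst case: all n elements are compared before we can answer,
        // which is why this is O(n).
        for (int i = 0; i < size; i++) {
            if (o == null ? elementData[i] == null : o.equals(elementData[i])) {
                return i;
            }
        }
        return -1;
    }
}
```

Note the null check: like `ArrayList`, this sketch allows null elements, so it can't blindly call `o.equals(...)`.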

7. Big-O notation

Finally, `O(n²)` is quadratic time - typically seen in algorithms with nested iterations. The difference in performance becomes dramatic as data sizes grow.

8. A practical example with quadratic complexity

To understand quadratic time, here, we have some code to find a pair of numbers in an `ArrayList` that sums to a target value. As we can see, for each element at position `i`, we compare it with all subsequent elements. In the worst case, we'll need to make (n*(n-1))/2 comparisons to find our pair. We can simplify this to `O(n²)`. For a list of 1,000 elements, we might need to make up to 499,500 comparisons! This approach becomes prohibitively slow as the list size grows.
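An illustrative version of that pair-finding code is shown below (the method name and signature are assumptions, not taken from the course exercise). The nested loops are what produce the n*(n-1)/2 worst-case comparisons.

```java
import java.util.List;

// Brute-force pair search: for each element at position i, compare it
// with every subsequent element. Nested loops over the same list => O(n^2).
public class PairSum {
    // Returns the indices of the first pair summing to target, or null.
    static int[] findPair(List<Integer> numbers, int target) {
        for (int i = 0; i < numbers.size(); i++) {
            for (int j = i + 1; j < numbers.size(); j++) {
                if (numbers.get(i) + numbers.get(j) == target) {
                    return new int[]{i, j};
                }
            }
        }
        return null; // no pair found after up to n*(n-1)/2 comparisons
    }
}
```

Starting the inner loop at `i + 1` avoids comparing a pair twice, which is exactly where the n*(n-1)/2 count comes from.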

9. Why time complexity matters

Let's see why this matters as our application scales. Consider what happens when our input grows from 1,000 to 1 million items: With a constant-time algorithm, execution time stays the same! With a linear-time algorithm, it becomes 1,000 times slower - manageable but noticeable. With a quadratic-time algorithm, it becomes a million times slower - potentially turning a 1-second operation into 11.5 days! That's why choosing the right algorithm and data structure is crucial.
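The arithmetic behind those claims can be checked directly, assuming the work done is exactly proportional to n (linear) or n-squared (quadratic):

```java
// Back-of-the-envelope check of the scaling factors quoted above.
public class GrowthComparison {
    public static void main(String[] args) {
        long before = 1_000L;       // original input size
        long after = 1_000_000L;    // input size after growth

        long linearFactor = after / before;                         // 1,000x
        long quadraticFactor = (after * after) / (before * before); // 1,000,000x

        // A 1-second quadratic operation, scaled by that factor, in days.
        double days = (double) quadraticFactor / (60 * 60 * 24);

        System.out.println("Linear slowdown: " + linearFactor + "x");
        System.out.println("Quadratic slowdown: " + quadraticFactor + "x");
        System.out.printf("1 second becomes roughly %.1f days%n", days);
    }
}
```
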

10. Let's practice!

Now, let's practice!