50 changes: 34 additions & 16 deletions javascript/computer_science/space_complexity.md
@@ -13,6 +13,7 @@ This section contains a general overview of topics that you will learn in this l
- What do we mean by space complexity.
- Why is it important to consider how our algorithm uses memory space.
- How do we measure space complexity.
- What is a situation where auxiliary space analysis is useful.

### What do we mean by space complexity?

@@ -95,17 +96,7 @@ function sumObjectValues(obj) {

Here as the object size increases, the space it uses grows in a linear way.
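
As a hedged sketch (the lesson's exact code is collapsed in this diff, and the function name and body here are assumptions), a function like this might duplicate its object argument before summing, which is what makes its space usage grow with the input:

```javascript
// Hypothetical sketch, not the lesson's exact code: the function copies the
// object before summing, so its extra space grows linearly with the key count.
function sumObjectValues(obj) {
  const copy = { ...obj }; // O(N) extra space for the duplicate
  let total = 0;
  for (const key in copy) {
    total += copy[key];
  }
  return total;
}
```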

#### Auxiliary space analysis

One of the common areas that causes confusion when considering space complexity is what constitutes using space in the context of an algorithm. In an earlier example we wrote methods that duplicated an array and object argument. We did that to be explicit. But what if we'd written the method as:

@@ -119,11 +110,38 @@ function sumArr(arr) {
}
```

When a data structure is passed in as the argument, especially for languages that pass arrays by reference rather than value, it can be a bit unclear if that method considers the space used by that data structure when calculating its space complexity. We could still say this algorithm requires O(N) space, factoring in the size of the input. Or we could say that the creation of the input array belongs to whatever external routine allocated it before calling this function. This would mean that this algorithm takes up O(1) space, as all other variables are constant. This latter approach is known as "auxiliary space analysis." Using this approach, we only regard the *extra* space that an algorithm takes up, meaning the input isn't factored in. This can be situationally useful for describing algorithms that use extra space differently. Let's look at a quick example:

```javascript
function squareNumsInPlace(arr) {
  for (let i = 0; i < arr.length; i++) {
    arr[i] = arr[i] * arr[i];
  }

  return arr;
}

function squareNumsNewArr(arr) {
  const squaredNums = [];
  arr.forEach((number) => {
    squaredNums.push(number * number);
  });

  return squaredNums;
}
```

Here we have two functions that take in an array of numbers and return an array where each number has been squared. The first function directly changes the input array, while the second function builds a new array to `push()` the squared numbers into. Using traditional space analysis, both functions would take up O(N) space. But for this situation, that doesn't tell us much about the (very clear) differences between how these two algorithms use memory. It'd be better to use auxiliary space analysis and say that the top function uses O(1) extra space while the bottom one uses O(N).
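
As an aside, the new-array version could also be written with `map`, which allocates and returns a fresh array for you; the auxiliary space is still O(N) either way. A brief sketch (the function name here is our own, not from the lesson):

```javascript
// Same behavior as the new-array version above, expressed with map.
// map builds a new O(N) array, so auxiliary space is unchanged.
function squareNumsWithMap(arr) {
  return arr.map((number) => number * number);
}
```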

#### Other complexities

As we've stated, many data structures share O(N) space complexity, and therefore you won't write many algorithms with a space complexity that differs.

You do find some recursive functions that may have a different space complexity and some sorting algorithms. You normally won't have much reason to consider anything else though.
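
As a hypothetical illustration (this example is ours, not the lesson's): a recursive function can use O(N) space without allocating any data structure at all, because each pending call occupies a frame on the call stack:

```javascript
// Hypothetical example: sums an array recursively.
// Each recursive call adds a stack frame, so the call stack
// reaches O(N) depth before the base case unwinds it.
function sumRecursive(arr, index = 0) {
  if (index >= arr.length) return 0; // base case: past the end of the array
  return arr[index] + sumRecursive(arr, index + 1);
}
```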

In the last lesson one of the assignments was a link to the [Big-O cheat sheet](https://www.bigocheatsheet.com/). If you take another look at it now, you may have a better appreciation for just how amazing it is as a reference for space and time complexity. If you scroll down to the data structures and then the sorting algorithms section, you'll see it gives you the time and space complexities. Notice just how many are O(N), especially for data structures. Many sorting algorithms have just O(1) space complexity, something to keep in mind as you come across different sorting algorithms during your learning.

That's why we won't be diving into examples for other Big O notations with space complexity. We'd have to come up with convoluted examples that wouldn't represent most code you'll write. If you do come across a good real world example in your own code, then do let us know and we may consider adding it here for others to consider.

### Wrapping up

@@ -137,7 +155,7 @@ On top of these considerations, you also need to balance the readability of your

<div class="lesson-content__panel" markdown="1">

1. Read the first answer to [analyzing space complexity on stack exchange](https://cs.stackexchange.com/questions/127933/analyzing-space-complexity-of-passing-data-to-function-by-reference) for some ideas about the different ways that space can be counted.
1. This [article on recursion and space complexity](https://dev.to/elmarshall/recursion-and-space-complexity-13gc) offers a little more context to recursive functions and their space complexity.

</div>
@@ -148,4 +166,4 @@ The following questions are an opportunity to reflect on key topics in this less

- [What is space complexity?](#what-do-we-mean-by-space-complexity)
- [How do we measure space complexity?](#measuring-space-complexity)
- [What is a situation where auxiliary space analysis is useful?](#auxiliary-space-analysis)
49 changes: 34 additions & 15 deletions ruby/computer_science/space_complexity.md
@@ -13,6 +13,7 @@ This section contains a general overview of topics that you will learn in this l
- What do we mean by space complexity.
- Why is it important to consider how our algorithm uses memory space.
- How do we measure space complexity.
- What is a situation where auxiliary space analysis is useful.

### What do we mean by space complexity?

@@ -95,17 +96,7 @@ end

Here as the hash size increases, the space it uses grows in a linear way.
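
As a hedged sketch (the lesson's exact code is collapsed in this diff, and the method name and body here are assumptions), a method like this might duplicate its hash argument before summing, which is what makes its space usage grow with the input:

```ruby
# Hypothetical sketch, not the lesson's exact code: the method copies the
# hash before summing, so its extra space grows linearly with the entry count.
def sum_hash_values(hash)
  copy = hash.dup # O(N) extra space for the duplicate
  total = 0
  copy.each_value { |value| total += value }
  total
end
```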

#### Auxiliary space analysis

One of the common areas that causes confusion when considering space complexity is what constitutes using space in the context of an algorithm. In an earlier example we wrote methods that duplicated an array and hash argument. We did that to be explicit. But what if we'd written the method as:

@@ -119,11 +110,38 @@ def sum_arr(arr)
end
```

When a data structure is passed in as the argument, especially for languages that pass arrays by reference rather than value, it can be a bit unclear if that method considers the space used by the data structure when calculating its space complexity. We could still say this algorithm requires O(N) space, factoring in the size of the input. Or we could say that the creation of the input array belongs to whatever external routine allocated it before calling this function. This would mean that this algorithm takes up O(1) space, as all other variables are constant. This latter approach is known as "auxiliary space analysis." Using this approach, we only regard the *extra* space that an algorithm takes up, meaning the input isn't factored in. This can be situationally useful for describing algorithms that use extra space differently. Let's look at a quick example:

```ruby
def square_nums_in_place(arr)
  0.upto(arr.length - 1) do |idx|
    arr[idx] = arr[idx] * arr[idx]
  end

  return arr
end

def square_nums_new_arr(arr)
  squared_nums = []
  arr.each do |number|
    squared_nums << number * number
  end

  return squared_nums
end
```

Here we have two methods that take in an array of numbers and return an array where each number has been squared. The first method directly changes the input array, while the second method builds a new array to shovel the squared numbers into. Using traditional space analysis, both methods would take up O(N) space. But for this situation, that doesn't tell us much about the (very clear) differences between how these two algorithms use memory. It'd be better to use auxiliary space analysis and say that the top method uses O(1) extra space while the bottom one uses O(N).
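
As an aside, the new-array version could also be written with `map`, which builds and returns a fresh array for you; the auxiliary space is still O(N) either way. A brief sketch (the method name here is our own, not from the lesson):

```ruby
# Same behavior as the new-array version above, expressed with map.
# map builds a new O(N) array, so auxiliary space is unchanged.
def square_nums_with_map(arr)
  arr.map { |number| number * number }
end
```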

#### Other complexities

As we've stated, many data structures share O(N) space complexity, and therefore you won't write many algorithms with a space complexity that differs.

You do find some recursive functions that may have a different space complexity and some sorting algorithms. You normally won't have much reason to consider anything else though.
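
As a hypothetical illustration (this example is ours, not the lesson's): a recursive method can use O(N) space without allocating any data structure at all, because each pending call occupies a frame on the call stack:

```ruby
# Hypothetical example: sums an array recursively.
# Each recursive call adds a stack frame, so the call stack
# reaches O(N) depth before the base case unwinds it.
def sum_recursive(arr, index = 0)
  return 0 if index >= arr.length # base case: past the end of the array

  arr[index] + sum_recursive(arr, index + 1)
end
```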

In the last lesson one of the assignments was a link to the [Big-O cheat sheet](https://www.bigocheatsheet.com/). If you take another look at it now, you may have a better appreciation for just how amazing it is as a reference for space and time complexity. If you scroll down to the data structures and then the sorting algorithms section, you'll see it gives you the time and space complexities. Notice just how many are O(N), especially for data structures. Many sorting algorithms have just O(1) space complexity, something to keep in mind as you come across different sorting algorithms during your learning.

That's why we won't be diving into examples for other Big O notations with space complexity. We'd have to come up with convoluted examples that wouldn't represent most code you'll write. If you do come across a good real world example in your own code, then do let us know and we may consider adding it here for others to consider.

### Wrapping up

@@ -137,7 +155,7 @@ On top of these considerations, you also need to balance the readability of your

<div class="lesson-content__panel" markdown="1">

1. Read the first answer to [analyzing space complexity on stack exchange](https://cs.stackexchange.com/questions/127933/analyzing-space-complexity-of-passing-data-to-function-by-reference) for some ideas about the different ways space can be counted.
1. This [article on recursion and space complexity](https://dev.to/elmarshall/recursion-and-space-complexity-13gc) offers a little more context to recursive functions and their space complexity.

</div>
@@ -149,6 +167,7 @@ The following questions are an opportunity to reflect on key topics in this less
- [What is space complexity?](#what-do-we-mean-by-space-complexity)
- [How do we measure space complexity?](#measuring-space-complexity)
- [What is a situation where auxiliary space analysis is useful?](#auxiliary-space-analysis)


### Additional resources
