An array is a basic and fundamental concept in C/C++. It is a sequence of data stored contiguously in memory that can be accessed by index. Arrays are fast at indexing (using a given index to access a particular element) because the memory address of the target can be computed directly. However, the capacity of an array is fixed after its declaration.
In I2PI (Introduction to Programming I), we introduced a data structure called a "linked list", which supports appending and deleting elements dynamically. The elements are stored in a series of "nodes", where each node holds the address of the next node. A linked list is fast at appending elements but slow at indexing: to access the item at index i, you must walk i steps from the head.
In this problem, we introduce a new data structure called a "dynamic array" (abbreviated "Darray" in the rest of the statement), which supports both dynamic capacity and fast indexing. We target a simple Darray with the following three variables
and two operations
To understand the concepts of size and capacity, consider an array declaration:
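As a concrete sketch of such a declaration (the names `data`, `size`, and `capacity` are hypothetical, not mandated by the assignment), one might write:

```cpp
#include <cstddef>

// A backing store with room for 5 elements; nothing stored yet.
int *data = new int[5];
std::size_t capacity = 5; // number of slots allocated
std::size_t size = 0;     // number of slots actually in use
```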
At first, the capacity is 5, but the size is 0 because no data is stored in the array yet. Next, we push 5 elements into the array:
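A minimal sketch of what those five pushes look like, again with hypothetical names and assuming there is still room in the backing store:

```cpp
#include <cstddef>

int data[5];
std::size_t capacity = 5, size = 0;

// Append one element; assumes size < capacity.
void pushback(int value) {
    data[size] = value; // write into the first unused slot
    ++size;
}

// Push 5 elements, bringing size up to capacity.
void fill() {
    for (int v = 1; v <= 5; ++v)
        pushback(v);
}
```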
the capacity is still 5, but the size changes from 0 to 5. The array is now full, so no more elements can be appended.
If we still want to append additional elements, we allocate a new array with double the capacity and copy the data from the old array into the new one. Note that the old array must be freed to avoid a memory leak.
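The growth step described above can be sketched as follows (names are assumptions for illustration):

```cpp
#include <cstddef>

int *data = new int[5];
std::size_t capacity = 5, size = 5; // the array is full

// Double the capacity: allocate a bigger block, copy, free the old one.
void expand() {
    int *bigger = new int[capacity * 2];
    for (std::size_t i = 0; i < size; ++i)
        bigger[i] = data[i];  // copy the existing elements
    delete[] data;            // free the old array to avoid a leak
    data = bigger;
    capacity *= 2;
}
```

After `expand()`, one more element can be appended, matching the "capacity 10, size 6" state described below.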
In this case, the capacity becomes 10 and the size is 6.
You should implement the following functions based on the description above:
Note that main.cpp acts as a functional tester for your Darray. You do not need to understand it, but you should still test your Darray yourself.
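The actual function names and signatures come from the assignment's provided files; as a hedged sketch only, the three variables and two operations might fit together like this:

```cpp
#include <cstddef>

// Hypothetical Darray layout: the three variables from the statement.
struct Darray {
    int *data;            // pointer to the backing store
    std::size_t size;     // number of elements stored
    std::size_t capacity; // number of slots allocated
};

Darray darray_new(std::size_t cap) {
    return Darray{new int[cap], 0, cap};
}

// Indexing is O(1): the address is computed directly from i.
int darray_index(const Darray &d, std::size_t i) {
    return d.data[i];
}

// Pushback: grow (double capacity, copy, free) only when full.
void darray_pushback(Darray &d, int value) {
    if (d.size == d.capacity) {
        int *bigger = new int[d.capacity * 2];
        for (std::size_t i = 0; i < d.size; ++i)
            bigger[i] = d.data[i];
        delete[] d.data;
        d.data = bigger;
        d.capacity *= 2;
    }
    d.data[d.size++] = value;
}
```

Pushing a sixth element into a Darray of capacity 5 triggers exactly one expansion, reproducing the capacity-10, size-6 example above.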
Although it may seem that copying the whole array makes the dynamic array slow, we will analyze the time complexity to show that it is in fact fast. Recall Big-O notation, which we introduced in "The Josephus problem": O(2n+100)=O(2n)=O(n) means the operation takes about n steps, while O(2)=O(1) takes "constant" time. For array operations, we would like both indexing and pushback to be O(1). In the following analysis, we evaluate the amortized time complexity, which can be understood as the average cost per operation rather than the cost of a single operation: amortized time complexity = (total cost of all operations) / (number of operations).
Suppose that C0 is the initial capacity and n is the number of pushback operations. We discuss the time complexity in several cases:
From the above analysis, if the array expands i times, the amortized time complexity of appending an element is O(i+1). However, i never grows very large, because the capacity increases exponentially, so we usually regard the complexity as O(1), i.e., constant time. For instance, with C0 = 100, after expanding 20 times the capacity is 100 * 2^20 ≈ 10^8.
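The amortized bound can also be checked empirically. The sketch below (a hypothetical helper, not part of the assignment) counts every element write and every element copy performed by n pushbacks; the total stays within a small constant multiple of n:

```cpp
#include <cstddef>

// Count total element copies/writes over n pushbacks starting from
// capacity c0, simulating the doubling strategy described above.
std::size_t total_copies(std::size_t c0, std::size_t n) {
    std::size_t size = 0, capacity = c0, copies = 0;
    for (std::size_t k = 0; k < n; ++k) {
        if (size == capacity) {
            copies += size; // expansion copies every existing element
            capacity *= 2;
        }
        ++size;             // the pushback itself is one write
        ++copies;
    }
    return copies;
}
```

With c0 = 1 and n = 1000, the expansions copy 1 + 2 + 4 + ... + 512 = 1023 elements and the pushbacks write 1000 more, so the average cost per pushback is about 2, consistent with amortized O(1).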
This is handled by the main function.
This is handled by the main function.