13520 - Dynamic Array   

Description

Array vs. Linked List

An array is a basic and fundamental concept in C/C++. It is a series of data arranged contiguously in memory and accessed by index. An array is fast at indexing (using a given index to access a certain element), because the memory address of the destination can be calculated directly. However, the capacity of an array is fixed after its declaration.

In I2PI (Introduction to Programming 1), we introduced a data structure called a "linked list", which supports appending and deleting elements dynamically. The elements are stored in a series of "nodes", where each node points to the address of the next node. It is fast at appending elements but slow at indexing: to access the item at index i, you need to move i steps from the head to reach it.

Dynamic Array

In this problem, we introduce a new data structure called a "dynamic array" (abbreviated to "Darray" in the following statement), which supports both dynamic capacity and fast indexing. We target a simple Darray which has the following three variables

  1. size: the number of elements stored in the array
  2. capacity: the maximum number of elements that can be stored in the array
  3. *data: the pointer which stores the address of the array

and two operations

  1. pushback: append an element to the back
  2. indexing: access an element by a given index

To understand the concept of size and capacity, consider an array declaration:

int data[5];
// or
int *data = new int[5];

At first, the capacity is 5, but the size is 0 because no data is stored in the array yet. Next, we push 5 elements into the array:

for (int i = 0; i < 5; i++) data[i] = i*i;

Now the capacity is still 5, but the size has changed from 0 to 5. Therefore, no more elements can be appended to the array.

If we still want to append additional elements, we allocate a new array with double the capacity and copy the data from the old array to the new one. Note that the old array must be freed to avoid a memory leak.

In this case, the capacity becomes 10 and the size is 6.

Implementation

You should implement the following functions based on the description above:

  1. int& operator[](int): access data as with a plain array. Users (and main.cpp) will not access any index greater than or equal to size.
  2. void pushback(int x): append the element x
  3. void clear(void): clear the array (set size to 0) so that subsequent pushbacks will place elements in data[0], data[1], and so on.
  4. int length(void): return the current size.
  5. void resize(void): double the capacity and copy the data.
  6. ~Darray(): destructor

Note that main.cpp acts like a functional tester for your Darray. There's no need to understand it. You should test your Darray by yourself.

// function.h
class Darray {
    public:
        Darray() {
            capacity = 100;
            size = 0;
            data = new int[capacity];
        };
        ~Darray();
        int& operator[](int);
        void pushback(int x);
        void clear(void);
        int length(void);
    private:
        void resize(void); // double the capacity
        int *data;
        int capacity;
        int size;
};
// usage
Darray arr;
for (int i = 0; i < 5; i++) arr.pushback(i*i);
arr[2] += 100 + arr[3];

for (int i = 0; i < arr.length(); i++)
    cout << arr[i] << ' ';             // Print: 0 1 113 9 16
cout << endl << arr.length() << endl;  // Print: 5
arr.clear();

cout << arr.length() << endl;          // Print: 0
arr.pushback(9487);

cout << arr.length() << ' ' << arr[0] << endl;  // Print: 1 9487

More Notes: Time Complexity of Dynamic Array

Although it seems that copying the whole array would make a dynamic array slow, we will analyze the time complexity to show that it is in fact fast. Recall Big-O notation, which we introduced in "The Josephus problem": O(2n+100) = O(2n) = O(n) means that the operation takes about n steps, while O(2) = O(1) takes "constant" time. For array operations, we wish to have O(1) indexing and pushback. In the following analysis, we evaluate the amortized time complexity, which can be understood as the average cost per operation rather than the cost of the total sequence of operations: amortized time complexity = (complexity of total operations) / (number of operations).

Suppose that C0 is the initial capacity and n is the number of pushback operations. We discuss the time complexity in several cases:

  1. Expand 0 time, n <= C0.
    Since there's no expand operation, the total time complexity is O(1)*n=O(n). The amortized time complexity is O(n)/n=O(1).
  2. Expand 1 time, C0 < n <= C1, where C1 = 2*C0.
    Push C0 items to array: O(C0)
    Expand and copy: O(C0), since there're C0 elements to be copied.
    Push n-C0 items to array: O(n-C0)
    The total time complexity is O(C0)+O(C0)+O(n-C0) = O(n+C0). Since C0 < n <= C1, O(2C0) < total time complexity <= O(3C0). The amortized time complexity therefore ranges over [O(3/2), O(2)); we identify it by its upper bound: O(2).
  3. Expand 2 times, C1 < n <= C2, where C2 = 2*C1.
    The amortized time complexity is O(3). You can verify it on your own.

From the above analysis, if the array expands i times, the amortized time complexity for appending an element is O(i+1). However, i will not be very large, because the capacity grows exponentially, so we often regard the complexity as O(1), or constant time. For instance, with C0 = 100, after expanding 20 times the capacity becomes 100*2^20 ≈ 10^8.


 

Input

This is handled by the main function.

Output

This is handled by the main function.

Sample Input

Sample Output

Partial Judge Code

13520.cpp

Partial Judge Header

13520.h
