Data Structures and Algorithms

For this assignment, write all time complexities in the form `O([k -> k])`, define what each of your variables represents, and do not use `n` as a variable. Some examples of incorrect time complexities are: `[k -> k]`, `O[k -> k]`, or `O(k)`.

Suppose we want an ordered, variable-size data structure that gives quick access to all elements.

Specifically, we need to be able to:

- insert elements at one end.
- get the `k`th element.

In Racket, lists support insertion (and deletion) at one end in constant
time, but getting the `k`th element takes `O([k -> k])` time. This means
lists alone won't be sufficient. Arrays (known as vectors in Racket)
support accessing elements in constant time, but are fixed in size.
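
As a quick illustration of this trade-off, here is a small Racket sketch (the complexity notes are the standard guarantees for these operations, not measurements):

```racket
#lang racket

;; Lists: constant-time insertion at the front...
(define lst (cons 0 (list 1 2 3)))   ; O([k -> 1]) per cons

;; ...but getting the kth element walks k links.
(list-ref lst 3)                     ; => 3, O([k -> k])

;; Vectors: constant-time access, but the size is fixed at creation.
(define vec (vector 1 2 3))
(vector-ref vec 2)                   ; => 3, O([k -> 1])
```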

The fixed size of arrays is a major limitation. Once an array is full,
we can no longer insert elements. To solve this problem, we start out with a
vector of a certain size (assume a starting size of 1) and replace it with a
bigger array when it is full. Here is a pseudocode description of this
flexible array:

```
makeFlexibleArray():
    currentArray = new array[1]
    nextUnusedIndex = size of currentArray

insertAtEnd(elem):
    if nextUnusedIndex > size of currentArray:
        newArray = new array[size of currentArray + 1]
        copy each element of currentArray to newArray
        nextUnusedIndex = size of currentArray + 1
        currentArray = newArray
        insertAtEnd(elem)
    else:
        currentArray[nextUnusedIndex] = elem
        nextUnusedIndex = nextUnusedIndex + 1

get(k):
    get kth element of currentArray
```

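For concreteness, here is one possible Racket rendering of this pseudocode (a sketch only, not required code; Racket vectors are 0-indexed, so the next-unused index starts at 0 rather than at the size of the array):

```racket
#lang racket

(struct flex (vec next) #:mutable)   ; backing vector + next unused index

(define (make-flexible-array)
  (flex (make-vector 1) 0))

(define (insert-at-end! fa elem)
  (cond
    [(>= (flex-next fa) (vector-length (flex-vec fa)))
     ;; Full: allocate an array one slot bigger, copy everything, retry.
     (define new-vec (make-vector (+ (vector-length (flex-vec fa)) 1)))
     (vector-copy! new-vec 0 (flex-vec fa))
     (set-flex-vec! fa new-vec)
     (insert-at-end! fa elem)]
    [else
     (vector-set! (flex-vec fa) (flex-next fa) elem)
     (set-flex-next! fa (+ (flex-next fa) 1))]))

(define (get fa k)
  (vector-ref (flex-vec fa) k))       ; 0-indexed in this sketch
```
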
Answer the following questions:

- What is the worst-case time complexity of each operation?
- Is the amortized complexity of a sequence of `k` insert operations any better?

Fortunately, there is a way to improve the amortized cost per operation of
the flexible array. We change `insertAtEnd` so that the array doubles in
size when it is full.

```
makeFlexibleArray():
    -- unchanged

get(k):
    -- unchanged

insertAtEnd(elem):
    -- the ONLY change: the line
    --     newArray = new array[size of currentArray + 1]
    -- is replaced by
    --     newArray = new array[size of currentArray * 2]
```

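In the Racket sketch above, this is likewise a one-line change inside `insert-at-end!`:

```racket
;; grow by doubling instead of by one slot:
(define new-vec (make-vector (* (vector-length (flex-vec fa)) 2)))
```
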
Answer the following question:

- What is the amortized cost per operation of `k` insert operations?

Suppose we want to extend our data structure to also support removing elements at the end. Specifically, we add the following operation:

```
deleteLast():
    delete last element of currentArray
```

To conserve memory, we may also want to shrink the array if it gets too empty. Here are two choices:

- When the array is half-full, shrink its size to half of what it was.
- When the array is one-fourth full, shrink its size to half of what it was.

Shrinking the array takes constant time. However, you should assume that shrinking notifies the operating system that the structure now occupies less space, so the OS is free to allocate the space beyond the end of the shrunk region. That is, growing the array again later must copy all elements into a fresh allocation.

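To make the mechanics concrete, here is a sketch of `deleteLast` with a pluggable shrink rule (illustration only; `shrink-trigger` is a name we made up: 1/2 for the first option, 1/4 for the second). Following the cost model above, shrinking is pure bookkeeping, so the sketch tracks the live capacity in a separate field rather than allocating a smaller vector. The analysis itself is left to you.

```racket
#lang racket

;; Flexible array whose capacity is tracked separately from the backing
;; vector, so that shrinking is a constant-time bookkeeping step and
;; only growing copies elements.
(struct flex (vec cap next) #:mutable)

(define shrink-trigger 1/2)  ; or 1/4 for the second option

(define (delete-last! fa)    ; assumes the array is non-empty
  (set-flex-next! fa (- (flex-next fa) 1))
  (when (<= (flex-next fa) (* shrink-trigger (flex-cap fa)))
    ;; Constant time: just declare the tail of the region free.
    (set-flex-cap! fa (max 1 (quotient (flex-cap fa) 2)))))

;; insert-at-end! would now compare next against cap rather than the
;; vector's length, and growing past cap must allocate a fresh vector
;; and copy all live elements, exactly as described above.
```
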
Answer the following question:

- Which option is better, and why? Give an example of a sequence of operations and the corresponding amortized analysis to support your claim.

Suppose we need slightly different functionality from our data
structure. Instead of looking up the `k`th element, we need to be
able to search for elements.

Specifically, we need to be able to:

- Insert an element
- Determine whether a given element is in the structure

One way to do this is to keep an array of elements in sorted order.
Constant-time access allows us to quickly search for elements using
binary search in `O([k -> log k])` time, but inserting elements is slow,
`O([k -> k])`, due to the requirement that the array remain sorted: when
we add an element, all the elements after it in sorted order must be
shifted one array index to the right.

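As a refresher, a binary search over a sorted Racket vector might look like this (a sketch; each probe halves the remaining range, which is where the `O([k -> log k])` bound comes from):

```racket
#lang racket

;; Returns #t iff elem occurs in the sorted vector vec.
(define (vector-binary-search? vec elem)
  (let loop ([lo 0] [hi (vector-length vec)])   ; search the range [lo, hi)
    (and (< lo hi)
         (let ([mid (quotient (+ lo hi) 2)])
           (cond
             [(= (vector-ref vec mid) elem) #t]
             [(< (vector-ref vec mid) elem) (loop (+ mid 1) hi)]
             [else (loop lo mid)])))))

(vector-binary-search? (vector 2 5 7 8 10 12 20 42) 10)  ; => #t
```
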
Here is a better design for a data structure that provides this functionality:
Keep a list of arrays, numbered `0` through `n`, where array `k` is of
size `2^k`. We will maintain the invariant that all arrays are either
empty or full, and all full arrays are in sorted order.

For example,

```
Array 0: [3]
Array 1: [1, 6]
Array 2: [_, _, _, _]
Array 3: [2, 5, 7, 8, 10, 12, 20, 42]
```

To insert an element, we first create an array of size 1
containing only that element. Then we iterate through the list of arrays,
starting with the smallest. If array `k` is empty, we replace it
with the full new array and we are done. Otherwise, we merge array
`k` into the new array, doubling its size. Then we set array
`k` to be empty and move on to array `k + 1`.

To insert the element `4` into the example above, we would create the
array `[4]`, then merge `[4]` with `[3]` to get `[3, 4]`, then merge that
with `[1, 6]` to get `[1, 3, 4, 6]`, and finally place `[1, 3, 4, 6]` in
position `2`, resulting in the following configuration:

```
Array 0: [_]
Array 1: [_, _]
Array 2: [1, 3, 4, 6]
Array 3: [2, 5, 7, 8, 10, 12, 20, 42]
```

The pseudocode looks like this:

```
makeSortedDict():
    listOfArrays = empty

search(elem):
    foreach array in listOfArrays:
        binary search for elem in array

insert(elem):
    newArray = [elem]
    foreach array in listOfArrays:
        if array is empty:
            replace array with newArray in listOfArrays
            return
        else:
            newArray = merge array with newArray
            array = empty
    add newArray to end of listOfArrays
```

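Here is one possible Racket rendering of this pseudocode (a sketch, not required code). It represents the structure as a list whose `i`th slot is either `#f` (empty) or a sorted vector of length `2^i`, and it reuses `vector-binary-search?` from the earlier sketch; the sort-based `merge` is for brevity, where a two-finger merge would make that step linear.

```racket
#lang racket

(define (make-sorted-dict) '())

;; Binary-search each non-empty array (vector-binary-search? is defined
;; in the sketch above).
(define (search dict elem)
  (for/or ([arr (in-list dict)])
    (and arr (vector-binary-search? arr elem))))

;; Merge two sorted vectors into one sorted vector. (sort is used here
;; for brevity; a two-finger merge runs in linear time.)
(define (merge a b)
  (list->vector (sort (append (vector->list a) (vector->list b)) <)))

;; Carry the new array up the list, merging with full slots, until an
;; empty slot (or the end of the list) is reached.
(define (insert dict elem)
  (let loop ([slots dict] [new (vector elem)])
    (cond
      [(null? slots) (list new)]                  ; ran off the end: append
      [(not (car slots)) (cons new (cdr slots))]  ; empty slot: drop in
      [else (cons #f                              ; full slot: clear it,
                  (loop (cdr slots)               ; merge, move to slot k+1
                        (merge (car slots) new)))])))

(search (insert (make-sorted-dict) 4) 4)  ; => #t
```
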
Answer the following questions:

- What is the time complexity of search?
- What is the amortized cost per operation of `k` insert operations?
- Can you think of a data structure you already know that provides this functionality with a better time complexity? Naming the structure is fine; we don't expect a detailed response.

Hand in the following:

- A printed sheet of paper with your answers clearly labelled for all of the parts, placed in the bin labelled "Timesheets" by the Fishbowl.
- Please write your name as well as your login on the sheet.

Your analysis must be in a `pdf` file on Letter-sized paper. Mathematical
content must be formatted appropriately. Please, no Arial.