## New course – Data Structures and Algorithms Bootcamp

Tired of not knowing how linked lists work?
Frustrated because you can’t speak Big O notation?

Tech companies in the Valley don’t interview engineers the same way we do in other parts of the world. They look for very specific knowledge, concepts, and experience that are heavily rooted in traditional computer science. And while you may already be a great engineer, chances are you would fail one of these interviews, not because of a lack of skill, but because of a lack of this specific understanding.

This course is about changing that.

https://www.udemy.com/data-structures-and-algorithms-bootcamp

By learning the basics of how data structures work, and what the most commonly used computer algorithms are, you will be able to:

• Answer the most commonly asked interview questions
• Ace your technical interview, and
• Land that dream job

Now I have to warn you. This course goes pretty fast, and I am assuming you already know how to program.

But if you don’t have a lot of time, and you need to get up to speed fast, this course is for you.

And who am I? My name is Jonathan Rasmusson. I am a former Spotify engineer who lived and worked in the Valley, and who, just like you, wasn’t trained in traditional computer science. I had to learn all this stuff from scratch.

But the good news is it can be learned. I am proof you can land your dream job. And in this course, I am going to show you how.

So what are you waiting for? Let’s get started.


## Dynamic programming

Dynamic programming is a method for solving a complex problem by breaking it down into several smaller, simpler ones. Take this grid path problem for example. The goal here is to count how many paths there are from the upper left-hand corner of the grid down to the bottom right. The rule is you can only move down and to the right.

A dynamic programming approach to this problem would be to note that the number of paths from the start to the end can be broken down into the number of paths from A to the end plus the number of paths from B to the end. So now we have broken our big problem down into smaller subproblems. These subproblems can then be broken down further as we walk through the grid. The recursion stops when we get to the end, where the number of paths left is 1.

One way of translating this into a recursive algorithm would be to write something like this.

DynamicProgrammingRecursive.java

```java
public class DynamicProgrammingRecursive {

    private int[][] grid;

    public DynamicProgrammingRecursive(int[][] grid) {
        this.grid = grid;
    }

    public int countPaths(int row, int col) {
        if (!isValidSquare(row, col)) return 0;
        if (isAtEnd(row, col)) return 1;
        return countPaths(row + 1, col) + countPaths(row, col + 1);
    }

    public boolean isValidSquare(int row, int col) {
        return isInBounds(row, col) && !isBlocked(row, col);
    }

    public boolean isBlocked(int row, int col) {
        return this.grid[row][col] == 1;
    }

    public boolean isInBounds(int row, int col) {
        return (row < grid.length && col < grid.length);
    }

    public boolean isAtEnd(int row, int col) {
        return grid.length - 1 == row && grid[row].length - 1 == col;
    }
}
```

DynamicProgrammingRecursiveTest.java

```java
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class DynamicProgrammingRecursiveTest {

    private DynamicProgrammingRecursive empty2x2;
    private DynamicProgrammingRecursive empty3x3;

    @Before
    public void setUp() throws Exception {

        int[][] paths2x2 = new int[][] {
            {0,0},
            {0,0}
        };
        empty2x2 = new DynamicProgrammingRecursive(paths2x2);

        int[][] paths3x3 = new int[][] {
            {0,0,0},
            {0,0,0},
            {0,0,0}
        };
        empty3x3 = new DynamicProgrammingRecursive(paths3x3);
    }

    /*
     2 1
     1 x
     */
    @Test
    public void TwoByTwoEmptyPathCount() throws Exception {
        Assert.assertEquals(2, empty2x2.countPaths(0, 0));
    }

    /*
     6 3 1
     3 2 1
     1 1 x
     */
    @Test
    public void ThreeByThreeEmptyPathCount() throws Exception {
        Assert.assertEquals(6, empty3x3.countPaths(0, 0));
    }

    @Test
    public void IsAtEnd() throws Exception {
        int row = 1;
        int column = 1;
        Assert.assertTrue(empty2x2.isAtEnd(row, column));
    }

    @Test
    public void IsInBounds() throws Exception {
        Assert.assertTrue(empty2x2.isInBounds(0, 0));
        Assert.assertTrue(empty2x2.isInBounds(0, 1));
        Assert.assertTrue(empty2x2.isInBounds(1, 0));
        Assert.assertTrue(empty2x2.isInBounds(1, 1));
        Assert.assertFalse(empty2x2.isInBounds(0, 2));
        Assert.assertFalse(empty2x2.isInBounds(2, 0));
    }

    @Test
    public void IsBlocked() throws Exception {
        int[][] paths = new int[][] {
            {0,0},
            {0,1}
        };
        DynamicProgrammingRecursive brute = new DynamicProgrammingRecursive(paths);
        Assert.assertFalse(brute.isBlocked(0, 0));
        Assert.assertFalse(brute.isBlocked(0, 1));
        Assert.assertFalse(brute.isBlocked(1, 0));
        Assert.assertTrue(brute.isBlocked(1, 1));
    }

    @Test
    public void CountBlockedPaths() throws Exception {
        int[][] paths = new int[][] {
            {0,0,0,0,0,0,0,0},
            {0,0,1,0,0,0,1,0},
            {0,0,0,0,1,0,0,0},
            {1,0,1,0,0,1,0,0},
            {0,0,1,0,0,0,0,0},
            {0,0,0,1,1,0,1,0},
            {0,1,0,0,0,1,0,0},
            {0,0,0,0,0,0,0,0}
        };
        DynamicProgrammingRecursive brute = new DynamicProgrammingRecursive(paths);
        Assert.assertEquals(27, brute.countPaths(0, 0));
    }
}
```

We just traverse down and to the right, making sure we are not out of bounds or in a blocked-off square. When we get to the end, we return 1, allowing the recursion to walk back up to the top where we started.

The isValidSquare method does the boundary checking, as well as making sure we are not in a blocked-off square. If we are blocked we return 0. If we are at the end we return 1. Else we recurse and do the calculation for the squares below and to the right.

One tip for when coding grids

Whenever you write matrix algorithms, use row and column for your variable names rather than x and y.

The reason is that row and column translate into [y][x], with the order reversed, and that is a common cause of mistakes. So when dealing with matrices, use row and column instead. It’s just easier.
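To make the tip concrete, here is a minimal sketch (the class and method names are my own, purely for illustration) showing why row/column names keep matrix indexing straight:

```java
public class GridIndexing {

    // rows select the outer array, columns the inner one,
    // so grid[row][col] reads naturally; with x/y you would have
    // to remember to write grid[y][x], which is easy to get backwards
    static int at(int[][] grid, int row, int col) {
        return grid[row][col];
    }

    public static void main(String[] args) {
        int[][] grid = {
            {1, 2, 3},
            {4, 5, 6}
        };
        System.out.println(at(grid, 1, 2)); // prints 6
    }
}
```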

Memoization

One thing you may have noticed in our previous example is that we calculate the number of paths from C to the end twice. That is something we can store for future lookups. And that’s the memoization approach.

DynamicProgrammingMemoized.java

```java
public class DynamicProgrammingMemoized {

    private int[][] grid;
    private int[][] paths;

    public DynamicProgrammingMemoized(int[][] grid) {
        this.grid = grid;
        this.paths = new int[grid.length][grid.length]; // assume square
    }

    public int countPaths(int row, int col) {
        if (!isValidSquare(row, col)) return 0;
        if (isAtEnd(row, col)) return 1;
        if (this.paths[row][col] == 0) { // 0 doubles as "not yet computed"
            this.paths[row][col] = countPaths(row + 1, col) + countPaths(row, col + 1);
        }
        return this.paths[row][col];
    }

    public boolean isValidSquare(int row, int col) {
        return isInBounds(row, col) && !isBlocked(row, col);
    }

    public boolean isBlocked(int row, int col) {
        return this.grid[row][col] == 1;
    }

    public boolean isInBounds(int row, int col) {
        return (row < grid.length && col < grid.length);
    }

    public boolean isAtEnd(int row, int col) {
        return grid.length - 1 == row && grid[row].length - 1 == col;
    }
}
```

DynamicProgrammingMemoizedTest.java

```java
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class DynamicProgrammingMemoizedTest {

    private DynamicProgrammingMemoized empty2x2;
    private DynamicProgrammingMemoized empty3x3;

    @Before
    public void setUp() throws Exception {

        int[][] paths2x2 = new int[][] {
            {0,0},
            {0,0}
        };
        empty2x2 = new DynamicProgrammingMemoized(paths2x2);

        int[][] paths3x3 = new int[][] {
            {0,0,0},
            {0,0,0},
            {0,0,0}
        };
        empty3x3 = new DynamicProgrammingMemoized(paths3x3);
    }

    /*
     2 1
     1 x
     */
    @Test
    public void TwoByTwoEmptyPathCount() throws Exception {
        Assert.assertEquals(2, empty2x2.countPaths(0, 0));
    }

    /*
     6 3 1
     3 2 1
     1 1 x
     */
    @Test
    public void ThreeByThreeEmptyPathCount() throws Exception {
        Assert.assertEquals(6, empty3x3.countPaths(0, 0));
    }

    @Test
    public void CountBlockedPaths() throws Exception {
        int[][] paths = new int[][] {
            {0,0,0,0,0,0,0,0},
            {0,0,1,0,0,0,1,0},
            {0,0,0,0,1,0,0,0},
            {1,0,1,0,0,1,0,0},
            {0,0,1,0,0,0,0,0},
            {0,0,0,1,1,0,1,0},
            {0,1,0,0,0,1,0,0},
            {0,0,0,0,0,0,0,0}
        };
        DynamicProgrammingMemoized memoized = new DynamicProgrammingMemoized(paths);
        Assert.assertEquals(27, memoized.countPaths(0, 0));
    }
}
```

Same idea as the recursive implementation, only here we store the results as we calculate them, which makes the algorithm run a lot faster.
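To see just how much the lookup table helps, here is a small self-contained experiment (the class and method names are my own, not from the course) that counts recursive calls with and without memoization on an empty grid:

```java
public class MemoCallCounter {
    static long calls;

    // brute-force path count on an empty n x n grid, counting every invocation
    static long brute(int row, int col, int n) {
        calls++;
        if (row >= n || col >= n) return 0;
        if (row == n - 1 && col == n - 1) return 1;
        return brute(row + 1, col, n) + brute(row, col + 1, n);
    }

    // same recursion, but each cell is only ever computed once
    static long memo(int row, int col, int n, long[][] cache) {
        calls++;
        if (row >= n || col >= n) return 0;
        if (row == n - 1 && col == n - 1) return 1;
        if (cache[row][col] == 0) { // 0 doubles as "not yet computed"
            cache[row][col] = memo(row + 1, col, n, cache) + memo(row, col + 1, n, cache);
        }
        return cache[row][col];
    }

    public static void main(String[] args) {
        int n = 12;

        calls = 0;
        long paths = brute(0, 0, n);
        System.out.println("brute: " + paths + " paths, " + calls + " calls");

        calls = 0;
        paths = memo(0, 0, n, new long[n][n]);
        System.out.println("memo:  " + paths + " paths, " + calls + " calls");
    }
}
```

Both versions count the same 705432 paths on a 12×12 grid, but the brute-force version makes millions of calls while the memoized one makes only a few hundred.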

Traditional Dynamic Programming Approach

A more traditional dynamic programming approach to this problem would be to start at the end and work our way backwards up the grid.

If we think about what the recursive approach does, the first concrete values it gets, other than blocked paths, are the two in the bottom right: 1 and 1.

Now, where can we go from there? If we take that recursion one step up, it may be filling in one of the next three cells.

If we start on the left, all of these cells have just one path to the bottom right. So all of these values are 1.

And if we continue doing this for the other cells, we can keep walking up and filling in values until the entire grid is complete.

DynamicProgrammingIterative.java

```java
public class DynamicProgrammingIterative {

    private int[][] grid;
    private int[][] paths;

    public DynamicProgrammingIterative(int[][] grid) {
        this.grid = grid;
        this.paths = new int[grid.length][grid.length]; // assume square
    }

    // called with the bottom-right cell, this walks backwards up the grid,
    // filling in the paths table from the already-computed cells below and to the right
    public int countPaths(int row, int col) {
        if (!isValidSquare(row, col)) return 0;
        if (isAtEnd(row, col)) return 1;
        if (isAtBeginning(row, col)) {
            paths[row][col] = 1;
        }
        if (paths[row][col] == 0) {

            int bottomCell = 0;
            int rightCell = 0;

            if (isValidSquare(row + 1, col)) {
                bottomCell = paths[row + 1][col];
            }

            if (isValidSquare(row, col + 1)) {
                rightCell = paths[row][col + 1];
            }

            paths[row][col] = bottomCell + rightCell;
        }
        return countPaths(row - 1, col) + countPaths(row, col - 1);
    }

    public boolean isValidSquare(int row, int col) {
        if (!isInBounds(row, col)) {
            return false;
        }
        if (isBlocked(row, col)) {
            return false;
        }
        return true;
    }

    public boolean isBlocked(int row, int col) {
        return grid[row][col] == 1;
    }

    public boolean isInBounds(int row, int col) {
        if (row < 0 || col < 0) {
            return false;
        }
        return (row < grid.length && col < grid.length);
    }

    // because we walk backwards, the "end" here is the upper-left corner
    public boolean isAtEnd(int row, int col) {
        return row == 0 && col == 0;
    }

    // and the "beginning" is the lower-right corner, where we start
    public boolean isAtBeginning(int row, int col) {
        return row == grid.length - 1 && col == grid.length - 1;
    }

    public void print() {
        for (int row = 0; row < paths.length; row++) {
            StringBuilder line = new StringBuilder();
            for (int col = 0; col < paths[row].length; col++) {
                line.append(paths[row][col]).append(" ");
            }
            System.out.println(line.toString().trim());
        }
    }
}
```

DynamicProgrammingIterativeTest.java

```java
import org.junit.Assert;
import org.junit.Test;

public class DynamicProgrammingIterativeTest {

    /*
     2 1
     1 x
     */
    @Test
    public void TwoByTwoEmptyPathCount() throws Exception {
        int[][] paths2x2 = new int[][] {
            {0,0},
            {0,0}
        };
        DynamicProgrammingIterative empty2x2 = new DynamicProgrammingIterative(paths2x2);
        Assert.assertEquals(2, empty2x2.countPaths(1, 1)); // start lower right
    }

    /*
     1 x
     1 1
     */
    @Test
    public void TwoByTwoOneCellBlocked() throws Exception {
        int[][] paths2x2 = new int[][] {
            {0,1},
            {0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths2x2);
        Assert.assertEquals(1, iterative.countPaths(1, 1));
    }

    /*
     6 3 1
     3 2 1
     1 1 1
     */
    @Test
    public void ThreeByThreeEmpty() throws Exception {
        int[][] paths3x3 = new int[][] {
            {0,0,0},
            {0,0,0},
            {0,0,0}
        };
        DynamicProgrammingIterative empty3x3 = new DynamicProgrammingIterative(paths3x3);
        Assert.assertEquals(6, empty3x3.countPaths(2, 2));
    }

    /*
     3 1 0
     2 1 x
     1 1 1
     */
    @Test
    public void ThreeByThreeBlocked() throws Exception {
        int[][] paths3x3 = new int[][] {
            {0,0,0},
            {0,0,1},
            {0,0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths3x3);
        Assert.assertEquals(3, iterative.countPaths(2, 2));
    }

    /*
     3 3 1
     x 2 1
     1 1 1
     */
    @Test
    public void ThreeByThreeBlocked2() throws Exception {
        int[][] paths3x3 = new int[][] {
            {0,0,0},
            {1,0,0},
            {0,0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths3x3);
        Assert.assertEquals(3, iterative.countPaths(2, 2));
    }

    /*
     1 1 1
     x x 1
     1 1 1
     */
    @Test
    public void ThreeByThreeBlocked3() throws Exception {
        int[][] paths3x3 = new int[][] {
            {0,0,0},
            {1,1,0},
            {0,0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths3x3);
        Assert.assertEquals(1, iterative.countPaths(2, 2));
    }

    @Test
    public void CountBlockedPaths() throws Exception {
        int[][] paths = new int[][] {
            {0,0,0,0,0,0,0,0},
            {0,0,1,0,0,0,1,0},
            {0,0,0,0,1,0,0,0},
            {1,0,1,0,0,1,0,0},
            {0,0,1,0,0,0,0,0},
            {0,0,0,1,1,0,1,0},
            {0,1,0,0,0,1,0,0},
            {0,0,0,0,0,0,0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths);
        Assert.assertEquals(27, iterative.countPaths(7, 7));
    }
}
```

Now the advantage of doing things this way is that we use slightly less memory. The runtime is still O(n^2), but we’ve reduced the amount of memory used slightly because we no longer lean on the call stack to hold intermediate results.

So where does this leave us?

The main takeaway of dynamic programming problems is that when you have a big problem that consists of many similar smaller ones, you can often implement them using a combination of recursion and memoization for performance.

Note also that if you look at the tests I wrote, a good place to start with any of these problems is with something really small and simple (like a 2×2 or 3×3 matrix), write tests, and then work your way up from there.

Happy coding!

Links that help

http://www.sanfoundry.com/dynamic-programming-solutions-longest-common-substring-problem/

https://stackoverflow.com/questions/4000169/getting-the-array-length-of-a-2d-array-in-java

https://stackoverflow.com/questions/12231453/syntax-for-creating-a-two-dimensional-array

## Binary Heaps

A binary heap is a heap data structure, built on a binary tree, used in sorting and priority queue algorithms.

### What is a binary tree?

A binary tree is a data structure that stores its data in the shape of a tree. At the top you have a root node, and underneath that you have at most two other nodes: one on the left and one on the right.

Binary trees are useful for all sorts of things. But one area where they really shine is in searching. When you sort a binary tree in certain ways, you can find elements much faster than if you were walking an array or linked list.

### What are heaps?

A heap is a binary tree that keeps a sorted ordering between parents and children. We call it a max heap if the largest number is on top, and a min heap if the smallest number is on top. What’s cool about max heaps is that their structure lends itself very nicely to quickly finding the highest-priority elements in a queue, which is why priority queues are often implemented using binary tree structures sorted as heaps.

### What is a binary heap?

As you’ve probably guessed, binary heaps combine binary trees with heap-ordered structures. In a binary max heap, every node’s value is at least that of its children. There is no left/right ordering or distinction in values.

Also, children further down the tree can have greater values than nodes higher up the tree on the other side. That doesn’t matter. The only thing that matters is that children are equal to or less than their parents. That’s it.

Which means every node is the root of its own sub-heap.

### Heap representation

Because the shape of a heap is so regular, it is easy to store heap nodes in an array, top to bottom, left to right. For a node at index i, the left child sits at index 2i + 1, the right child at 2i + 2, and the parent at (i - 1) / 2. And because we know where every element lies, we can reach any node’s children or parent with simple arithmetic.
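That arithmetic, and the heap property it lets us check, can be sketched like this (the helper names are my own, purely for illustration):

```java
public class HeapIndexMath {

    // for a node stored at index i (0-based, top to bottom, left to right):
    static int leftChild(int i)  { return 2 * i + 1; }
    static int rightChild(int i) { return 2 * i + 2; }
    static int parent(int i)     { return (i - 1) / 2; }

    // checks the max-heap property: every child is <= its parent
    static boolean isMaxHeap(int[] items) {
        for (int i = 1; i < items.length; i++) {
            if (items[i] > items[parent(i)]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] heap = {9, 7, 8, 3, 5, 6}; // a valid max heap, stored as an array
        System.out.println(isMaxHeap(heap)); // prints true
        // children of the root (index 0) live at indexes 1 and 2
        System.out.println(leftChild(0) + " " + rightChild(0)); // prints 1 2
    }
}
```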

## Finding the maximum

Finding the maximum is easy. It’s simply the highest element, at the top of our heap, so it is always the first element in our array.

## Inserting

When inserting, we stick the new element at the end of our tree (always reading top to bottom, left to right), which means sticking it in the last element of our array. If it is smaller than its parent, we are done.

But what happens if our element is larger than its parent? For insertions like this we walk up the tree, swapping nodes until our heap constraint (the child being of less than or equal value to its parent) is satisfied. And if we insert a number equal to its parent, we just leave it there. It’s done.

## Deletion

Say we want to delete the root of our heap (i.e. we pulled off its max value and now we need the heap to re-sort itself, or heapify). To delete the max element, we take the max node, swap it with the last leaf, and then delete the last leaf, which is easy since it is the last element of our array. Now we need to re-sort the heap, because the element now on top doesn’t represent the max.

We do this by comparing the top with each left/right child, and continuously swapping with the larger of the two. In this case the left child is larger, so we swap the root to the left. Now we repeat the process with the left-hand subtree. We keep swapping the node in question with the largest of its children until we can’t swap anymore. At this point we say the heap is heapified and sorted again. It is good to go for another extraction.

### Cool things

Another cool thing about binary heaps is that this data structure can sort itself very efficiently, i.e. it can sort itself in place. There is no need to copy all the contents somewhere else and then copy them back. When we heapify, it all happens within the existing array, simply by swapping nodes. Very cool. Very efficient.
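That in-place idea is exactly what heapsort does. Here is a minimal sketch (not the course’s code): build a max heap inside the array, then repeatedly swap the max to the end and shrink the heap by one:

```java
import java.util.Arrays;

public class InPlaceHeapSort {

    // sorts ascending: build a max heap, then repeatedly move the max to the end
    static void sort(int[] a) {
        for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, i, a.length);
        for (int end = a.length - 1; end > 0; end--) {
            swap(a, 0, end);     // move the current max out of the heap
            siftDown(a, 0, end); // re-heapify the remaining prefix
        }
    }

    static void siftDown(int[] a, int i, int size) {
        while (2 * i + 1 < size) {
            int child = 2 * i + 1;
            if (child + 1 < size && a[child + 1] > a[child]) child++; // pick the larger child
            if (a[i] >= a[child]) return; // heap property holds, we are done
            swap(a, i, child);
            i = child;
        }
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        sort(a);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 3, 4, 5]
    }
}
```

Everything happens inside the one array; the only extra storage is a temp variable for the swap.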

## Code

Here is an example heap implementation for ints in Java. (Note: despite the class name, this implementation keeps its smallest element on top, i.e. it orders itself as a min heap, which is what the tests below exercise.)

MaxIntHeap.java

```java
import java.util.Arrays;

public class MaxIntHeap {
    private int capacity = 10;
    private int size = 0;

    int[] items = new int[capacity];

    private int leftChildIndex(int parentIndex) { return 2 * parentIndex + 1; }
    private int rightChildIndex(int parentIndex) { return 2 * parentIndex + 2; }
    private int parentIndex(int childIndex) { return (childIndex - 1) / 2; }

    private boolean hasLeftChild(int index) { return leftChildIndex(index) < size; }
    private boolean hasRightChild(int index) { return rightChildIndex(index) < size; }
    private boolean hasParent(int index) { return parentIndex(index) >= 0; }

    private int leftChild(int index) { return items[leftChildIndex(index)]; }
    private int rightChild(int index) { return items[rightChildIndex(index)]; }
    private int parent(int index) { return items[parentIndex(index)]; }

    private void swap(int indexOne, int indexTwo) {
        int temp = items[indexOne];
        items[indexOne] = items[indexTwo];
        items[indexTwo] = temp;
    }

    private void ensureCapacity() {
        if (size == capacity) {
            items = Arrays.copyOf(items, capacity * 2);
            capacity *= 2;
        }
    }

    public int extractMax() {
        if (size == 0) throw new IllegalStateException();
        int item = items[0];        // grab the top
        items[0] = items[size - 1]; // copy the bottom to the top
        size--;
        heapifyDown();              // and now, because the top isn't right, we heapify down
        return item;
    }

    public void add(int item) {
        ensureCapacity();
        items[size] = item;         // put in last spot
        size++;
        heapifyUp();
    }

    public void heapifyUp() {
        int index = size - 1;       // start at the last element
        while (hasParent(index) && parent(index) > items[index]) { // walk up as long as the parent is bigger than you
            swap(parentIndex(index), index);
            index = parentIndex(index); // walk upwards to the next node
        }
    }

    public void heapifyDown() {
        int index = 0;                 // starting at the top
        while (hasLeftChild(index)) {  // as long as I have children (only need to check left: if no left, there is no right)

            // pick a direction, and get the smaller of the two child indexes
            int smallerChildIndex = leftChildIndex(index);
            if (hasRightChild(index) && rightChild(index) < leftChild(index)) {
                smallerChildIndex = rightChildIndex(index); // go right (this heap keeps the smallest on top)
            }

            // if I am smaller than both my children... then everything is good. I am sorted.
            if (items[index] < items[smallerChildIndex]) {
                break;
            } else { // we are still not in order
                swap(index, smallerChildIndex); // so swap with the smaller child
            }

            index = smallerChildIndex; // then move down to the smaller child
        }
    }

    public void print() {
        for (int i = 0; i < size; i++) {
            System.out.println(i + "[" + items[i] + "]");
        }
    }
}
```

MaxIntHeapTest.java

```java
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class MaxIntHeapTest {

    private MaxIntHeap minHeap;

    @Before
    public void setUp() throws Exception {
        minHeap = new MaxIntHeap();
        minHeap.add(6);
        minHeap.add(5);
        minHeap.add(4);
        minHeap.add(3);
        minHeap.add(2);
        minHeap.add(1);
    }

    @Test
    public void Insert() throws Exception {
        // Remember: the array walks top down / left to right
        Assert.assertEquals(1, minHeap.items[0]);
        Assert.assertEquals(3, minHeap.items[1]);
        Assert.assertEquals(2, minHeap.items[2]);
        Assert.assertEquals(6, minHeap.items[3]);
        Assert.assertEquals(4, minHeap.items[4]);
        Assert.assertEquals(5, minHeap.items[5]);
    }

    @Test
    public void ExtractMin() throws Exception {
        Assert.assertEquals(1, minHeap.extractMax());
        Assert.assertEquals(2, minHeap.extractMax());
        Assert.assertEquals(3, minHeap.extractMax());
        Assert.assertEquals(4, minHeap.extractMax());
        Assert.assertEquals(5, minHeap.extractMax());
        Assert.assertEquals(6, minHeap.extractMax());
    }
}
```


## Arrays

We use arrays all the time in programming. But did you ever wonder what goes on under the hood and how they work?

## What is an array?

An array is a contiguous area of memory consisting of equal-size elements indexed by contiguous integers. Which is just a fancy way of saying all of an array’s elements are stored together, right beside each other in memory.

### What is so about them?

Arrays are great at random access. Meaning you can easily access any element at any time so long as you know the index of the element you want.

### What are some of its weaknesses?

While most people don’t have to worry about this, arrays can be a little wasteful on space. If you create a large array and don’t use most of its elements, you are wasting space. That’s one downside.

The other is that arrays suffer from the same thing linked lists do: they aren’t great at insertions in the middle. Whenever you need to add or remove elements in the middle of an array, you need to shuffle everything around to make sure you don’t have any gaps.

### How do arrays resize themselves?

When you create an array, you usually need to give it a default size. For the Java ArrayList this size is 10.

Once you approach that limit, the array needs to make itself bigger, usually by doubling in size. It creates a new array twice the size, copies the old values over, and gets rid of the old one.
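The doubling step can be sketched like this (the class and method names are hypothetical), using `Arrays.copyOf` to create the bigger array and copy the old values over:

```java
import java.util.Arrays;

public class GrowDemo {

    // doubles the backing array once size hits capacity, copying old values over
    static int[] growIfFull(int[] items, int size) {
        if (size == items.length) {
            return Arrays.copyOf(items, items.length * 2); // new array, old values copied, rest zeroed
        }
        return items; // still room, nothing to do
    }

    public static void main(String[] args) {
        int[] items = {1, 2, 3, 4};       // full: size == capacity == 4
        items = growIfFull(items, 4);
        System.out.println(items.length); // prints 8
    }
}
```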

## An example

Here are some common array operations:

`Get(i)` returns the element at location i
`Set(i, value)` sets element i to value
`PushBack(value)` adds value to the end
`Remove(i)` removes the element at location i
`Size()` returns the number of elements

And here is an example of how an array works, minus all the error checking for array size and bounds.

### Initialization

When you create an array, you often specify some default or initial capacity. You also want to track its size, along with the data contained in the array itself.

### Get / Set

Getting and setting are the easiest operations to do on an array. All you are doing is accessing your internal data store and then returning (or setting) whatever element lies there. Normally you would do some bounds checking here too, but I have left that out for clarity.

### PushBack

This is where things get slightly more interesting. So long as your array hasn’t reached its limit, all you need do is set your new value at index size and you are done. But if you are at your limit, this is where we create a new array with double the capacity, copy the old data over, and carry on.

### Remove

And remove is kind of neat. Here you are removing an element at a specific position. To do that, you jump into your array at the position of the element you want to remove, and then slide all the other elements above it down (while not forgetting to decrease your array size).

### So what is better? A linked list or an array?

Linked lists are pretty similar to arrays in that they both have similar operations, and both could be used internally for implementing Stacks and Queues.

Where linked lists win is resource management (they use less memory), and they are better in that they don’t need to shuffle their contents around every time an element in the middle is inserted or removed.

Arrays win when it comes to random access. They are just faster at accessing elements in the middle, while linked lists need to iterate through each node until they find the one they are looking for.

So that’s a quick rundown of arrays, and how they compare to linked lists.

Happy coding.

Full listing

Source

```java
public class DynamicArray<E> {

    private Object[] data;
    private int size; // number of elements
    private int capacity;

    public DynamicArray(int initialCapacity) {
        this.capacity = initialCapacity;
        data = new Object[initialCapacity];
    }

    public void set(int i, E e) {
        data[i] = e;
        size++;
    }

    @SuppressWarnings("unchecked")
    public E get(int i) {
        return (E) data[i];
    }

    public void pushBack(E e) {

        if (size == capacity) {
            Object[] newData = new Object[capacity * 2];
            for (int i = 0; i < capacity; i++) {
                newData[i] = data[i];
            }
            data = newData;
            capacity = capacity * 2;
        }

        data[size] = e;
        size++;
    }

    public int size() {
        return size;
    }

    public int capacity() {
        return capacity;
    }

    public void remove(int i) {
        for (int j = i; j < size - 1; j++) {
            data[j] = data[j + 1];
        }
        size--;
    }
}
```

Tests

```java
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class DynamicArrayTests {

    private DynamicArray<String> array;

    @Before
    public void setUp() throws Exception {
        array = new DynamicArray<String>(2);
    }

    @Test
    public void InitialState() throws Exception {
        Assert.assertEquals(0, array.size());
    }

    @Test
    public void GetAndSet() throws Exception {
        array.set(0, "a");
        Assert.assertEquals(1, array.size());
        Assert.assertEquals("a", array.get(0));
    }

    @Test
    public void PushBack() throws Exception {
        array.pushBack("a");
        array.pushBack("b");
        Assert.assertEquals(2, array.size());
        Assert.assertEquals("a", array.get(0));
        Assert.assertEquals("b", array.get(1));
    }

    @Test
    public void PushBeyondCurrentSize() throws Exception {
        array.pushBack("a");
        array.pushBack("b");
        array.pushBack("c");
        Assert.assertEquals(3, array.size());
        Assert.assertEquals(4, array.capacity());
        Assert.assertEquals("a", array.get(0));
        Assert.assertEquals("b", array.get(1));
        Assert.assertEquals("c", array.get(2));
    }

    @Test
    public void Remove() throws Exception {
        array.pushBack("a");
        array.pushBack("b");
        array.pushBack("c");

        array.remove(1);

        Assert.assertEquals("a", array.get(0));
        Assert.assertEquals("c", array.get(1));
    }
}
```

Links that help
http://ridiculousfish.com/blog/posts/array.html