## Dynamic programming

Dynamic programming is a method for solving a complex problem by breaking it down into several simpler, smaller ones. Take this grid path-counting problem for example. The goal is to count how many paths there are from the upper left hand corner of the grid down to the bottom right. The rule is you can only move down and to the right.

A dynamic programming approach to this problem would be to note that the number of paths from the start to the end can be broken down by summing the number of paths from A to the end with the number of paths from B to the end (where A and B are the squares below and to the right of the start). So now we have broken our big problem down into smaller subproblems. These subproblems can then be broken down further as we walk through the grid. The recursion stops when we reach the end, where the number of remaining paths is 1.

One way of translating this into a recursive algorithm would be to write something like the following.

DynamicProgrammingRecursive.java

```java
public class DynamicProgrammingRecursive {

    private int grid[][];

    public DynamicProgrammingRecursive(int[][] grid) {
        this.grid = grid;
    }

    public int countPaths(int row, int col) {
        if (!isValidSquare(row, col)) return 0;
        if (isAtEnd(row, col)) return 1;
        return countPaths(row + 1, col) + countPaths(row, col + 1);
    }

    public boolean isValidSquare(int row, int col) {
        return isInBounds(row, col) && !isBlocked(row, col);
    }

    public boolean isBlocked(int row, int col) {
        return this.grid[row][col] == 1;
    }

    public boolean isInBounds(int row, int col) {
        return (row < grid.length && col < grid.length); // assume square
    }

    public boolean isAtEnd(int row, int col) {
        return grid.length - 1 == row && grid[row].length - 1 == col;
    }

}

DynamicProgrammingRecursiveTest.java

```java
import junit.framework.Assert;
import org.junit.Before;
import org.junit.Test;

public class DynamicProgrammingRecursiveTest {

    private DynamicProgrammingRecursive empty2x2;
    private DynamicProgrammingRecursive empty3x3;

    @Before
    public void setUp() throws Exception {

        int[][] paths2x2 = new int[][] {
                {0,0},
                {0,0}
        };
        empty2x2 = new DynamicProgrammingRecursive(paths2x2);

        int[][] paths3x3 = new int[][] {
                {0,0,0},
                {0,0,0},
                {0,0,0}
        };
        empty3x3 = new DynamicProgrammingRecursive(paths3x3);

    }

    /*
     2 1
     1 x
    */
    @Test
    public void TwoByTwoEmptyPathCount() throws Exception {
        Assert.assertEquals(2, empty2x2.countPaths(0,0));
    }

    /*
     6 3 1
     3 2 1
     1 1 x
    */
    @Test
    public void ThreeByThreeEmptyPathCount() throws Exception {
        Assert.assertEquals(6, empty3x3.countPaths(0,0));
    }

    @Test
    public void IsAtEnd() throws Exception {
        int row = 1;
        int column = 1;
        Assert.assertTrue(empty2x2.isAtEnd(row, column));
    }

    @Test
    public void IsInBounds() throws Exception {
        Assert.assertTrue(empty2x2.isInBounds(0,0));
        Assert.assertTrue(empty2x2.isInBounds(0,1));
        Assert.assertTrue(empty2x2.isInBounds(1,0));
        Assert.assertTrue(empty2x2.isInBounds(1,1));
        Assert.assertFalse(empty2x2.isInBounds(0,2));
        Assert.assertFalse(empty2x2.isInBounds(2,0));
    }

    @Test
    public void IsBlocked() throws Exception {
        int[][] paths = new int[][] {
                {0,0},
                {0,1}
        };
        DynamicProgrammingRecursive brute = new DynamicProgrammingRecursive(paths);
        Assert.assertFalse(brute.isBlocked(0,0));
        Assert.assertFalse(brute.isBlocked(0,1));
        Assert.assertFalse(brute.isBlocked(1,0));
        Assert.assertTrue(brute.isBlocked(1,1));
    }

    @Test
    public void CountBlockedPaths() throws Exception {
        int[][] paths = new int[][] {
                {0,0,0,0,0,0,0,0},
                {0,0,1,0,0,0,1,0},
                {0,0,0,0,1,0,0,0},
                {1,0,1,0,0,1,0,0},
                {0,0,1,0,0,0,0,0},
                {0,0,0,1,1,0,1,0},
                {0,1,0,0,0,1,0,0},
                {0,0,0,0,0,0,0,0}
        };
        DynamicProgrammingRecursive brute = new DynamicProgrammingRecursive(paths);
        Assert.assertEquals(27, brute.countPaths(0,0));
    }

}
```

We just traverse down and to the right, making sure that we are not out of bounds or in a blocked off square. And when we get to the end, we return 1, allowing the recursion to walk back up to the top where we started.

The isValidSquare method does the boundary checking as well as making sure we are not in a blocked off square. If the square is blocked we return 0. If we are at the end we return 1. Otherwise we recurse and do the same calculation for the squares below and to the right.

### One tip for when coding grids

Whenever you code matrix algorithms, use row and column for your variable names rather than x and y.

The reason is that x and y translate into grid[y][x] (row first, then column), and flipping the two is a common cause of mistakes. So when dealing with matrices, use row and column instead. It's just easier.
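For instance (a tiny sketch of my own, not from the original post), a grid with 3 rows and 5 columns is indexed row first:

```java
public class GridIndexing {

    // Writes one value into a 3x5 grid and reads it back.
    static int demo() {
        int[][] grid = new int[3][5]; // 3 rows, 5 columns
        int row = 2, col = 4;
        grid[row][col] = 7;           // row first, then column: grid[y][x], not grid[x][y]
        return grid[row][col];
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 7
        // With x/y names it is easy to write grid[x][y] by reflex,
        // which here would throw ArrayIndexOutOfBoundsException.
    }
}
```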

### Memoization

One thing you may have noticed in the previous example is that we calculate the number of paths from some squares to the end more than once. That is something we can store for future lookups. And that's the memoization approach.

DynamicProgrammingMemoized.java

```java
public class DynamicProgrammingMemoized {

    private int grid[][];
    private int paths[][];

    public DynamicProgrammingMemoized(int[][] grid) {
        this.grid = grid;
        this.paths = new int[grid.length][grid.length]; // assume square
    }

    public int countPaths(int row, int col) {
        if (!isValidSquare(row, col)) return 0;
        if (isAtEnd(row, col)) return 1;
        if (this.paths[row][col] == 0) {
            this.paths[row][col] = countPaths(row + 1, col) + countPaths(row, col + 1);
        }
        return this.paths[row][col];
    }

    public boolean isValidSquare(int row, int col) {
        return isInBounds(row, col) && !isBlocked(row, col);
    }

    public boolean isBlocked(int row, int col) {
        return this.grid[row][col] == 1;
    }

    public boolean isInBounds(int row, int col) {
        return (row < grid.length && col < grid.length);
    }

    public boolean isAtEnd(int row, int col) {
        return grid.length - 1 == row && grid[row].length - 1 == col;
    }
}
```

DynamicProgrammingMemoizedTest.java

```java
import junit.framework.Assert;
import org.junit.Before;
import org.junit.Test;

public class DynamicProgrammingMemoizedTest {

    private DynamicProgrammingMemoized empty2x2;
    private DynamicProgrammingMemoized empty3x3;

    @Before
    public void setUp() throws Exception {

        int[][] paths2x2 = new int[][] {
                {0,0},
                {0,0}
        };
        empty2x2 = new DynamicProgrammingMemoized(paths2x2);

        int[][] paths3x3 = new int[][] {
                {0,0,0},
                {0,0,0},
                {0,0,0}
        };
        empty3x3 = new DynamicProgrammingMemoized(paths3x3);

    }

    /*
     2 1
     1 x
    */
    @Test
    public void TwoByTwoEmptyPathCount() throws Exception {
        Assert.assertEquals(2, empty2x2.countPaths(0,0));
    }

    /*
     6 3 1
     3 2 1
     1 1 x
    */
    @Test
    public void ThreeByThreeEmptyPathCount() throws Exception {
        Assert.assertEquals(6, empty3x3.countPaths(0,0));
    }

    @Test
    public void CountBlockedPaths() throws Exception {
        int[][] paths = new int[][] {
                {0,0,0,0,0,0,0,0},
                {0,0,1,0,0,0,1,0},
                {0,0,0,0,1,0,0,0},
                {1,0,1,0,0,1,0,0},
                {0,0,1,0,0,0,0,0},
                {0,0,0,1,1,0,1,0},
                {0,1,0,0,0,1,0,0},
                {0,0,0,0,0,0,0,0}
        };
        DynamicProgrammingMemoized memoized = new DynamicProgrammingMemoized(paths);
        Assert.assertEquals(27, memoized.countPaths(0,0));
    }
}
```

Same idea as the recursive implementation, only here we store the results as we calculate them, which speeds the algorithm up considerably.

### Traditional Dynamic Programming Approach

A more traditional dynamic programming approach to this problem would be to start at the end and work our way backwards up to the start.

If we think about what the recursive approach does, the first concrete values it gets, other than blocked paths, are the two cells in the bottom right: 1 and 1.

Now we can go from there. If we take that recursion one step up, it will be filling in one of the next three cells.

If we start on the left, all of these cells have just one path to the bottom right, so all of these values are one.

And if we continue doing this, we can keep walking up and filling in values for the rest of the grid.
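The fill just described can also be coded as a plain bottom-up loop with no recursion at all. Here is a sketch of my own (separate from the class below) that walks the grid from the bottom-right corner back to the top-left, giving each cell the sum of the cell below it and the cell to its right:

```java
public class GridPathsBottomUp {

    // grid: 0 = open, 1 = blocked. Returns the number of down/right
    // paths from (0,0) to the bottom-right corner of a square grid.
    static int countPaths(int[][] grid) {
        int n = grid.length;
        int[][] paths = new int[n][n];
        for (int row = n - 1; row >= 0; row--) {
            for (int col = n - 1; col >= 0; col--) {
                if (grid[row][col] == 1) continue;      // blocked: 0 paths
                if (row == n - 1 && col == n - 1) {
                    paths[row][col] = 1;                // the end itself
                    continue;
                }
                int down = (row + 1 < n) ? paths[row + 1][col] : 0;
                int right = (col + 1 < n) ? paths[row][col + 1] : 0;
                paths[row][col] = down + right;
            }
        }
        return paths[0][0];
    }

    public static void main(String[] args) {
        System.out.println(countPaths(new int[2][2])); // 2
        System.out.println(countPaths(new int[3][3])); // 6
    }
}
```

Because the loops visit each cell exactly once, there is no repeated work and no call stack to worry about.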

DynamicProgrammingIterative.java

```java
public class DynamicProgrammingIterative {

    private int grid[][];
    private int paths[][];

    public DynamicProgrammingIterative(int[][] grid) {
        this.grid = grid;
        this.paths = new int[grid.length][grid.length]; // assume square
    }

    public int countPaths(int row, int col) {
        if (!isValidSquare(row, col)) return 0;
        if (isAtEnd(row, col)) return 1;
        if (isAtBeginning(row, col)) {
            paths[row][col] = 1;
        }
        if (paths[row][col] == 0) {

            int bottomCell = 0;
            int rightCell = 0;

            if (isValidSquare(row + 1, col)) {
                bottomCell = paths[row + 1][col];
            }

            if (isValidSquare(row, col + 1)) {
                rightCell = paths[row][col + 1];
            }

            paths[row][col] = bottomCell + rightCell;
        }
        return countPaths(row - 1, col) + countPaths(row, col - 1);
    }

    public boolean isValidSquare(int row, int col) {
        if (!isInBounds(row, col)) {
            return false;
        }
        if (isBlocked(row, col)) {
            return false;
        }
        return true;
    }

    public boolean isBlocked(int row, int col) {
        return grid[row][col] == 1;
    }

    public boolean isInBounds(int row, int col) {
        if (row < 0 || col < 0) {
            return false;
        }
        return (row < grid.length && col < grid.length);
    }

    // this version starts at the bottom right and walks backwards,
    // so the "end" of the walk is the upper left corner
    public boolean isAtEnd(int row, int col) {
        return row == 0 && col == 0;
    }

    public boolean isAtBeginning(int row, int col) {
        return row == grid.length - 1 && col == grid.length - 1;
    }

    public void print() {
        for (int row = 0; row < paths.length; row++) {
            StringBuilder line = new StringBuilder();
            for (int col = 0; col < paths[row].length; col++) {
                line.append(paths[row][col]).append(" ");
            }
            System.out.println(line.toString().trim());
        }
    }
}
```

DynamicProgrammingIterativeTest.java

```java
import junit.framework.Assert;
import org.junit.Before;
import org.junit.Test;

public class DynamicProgrammingIterativeTest {

    /*
     2 1
     1 x
    */
    @Test
    public void TwoByTwoEmptyPathCount() throws Exception {
        int[][] paths2x2 = new int[][] {
                {0,0},
                {0,0}
        };
        DynamicProgrammingIterative empty2x2 = new DynamicProgrammingIterative(paths2x2);
        Assert.assertEquals(2, empty2x2.countPaths(1,1)); // start lower right
    }

    /*
     1 x
     1 1
    */
    @Test
    public void TwoByTwoOneCellBlocked() throws Exception {
        int[][] paths2x2 = new int[][] {
                {0,1},
                {0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths2x2);
        Assert.assertEquals(1, iterative.countPaths(1,1));
    }

    /*
     6 3 1
     3 2 1
     1 1 1
    */
    @Test
    public void ThreeByThreeEmpty() throws Exception {
        int[][] paths3x3 = new int[][] {
                {0,0,0},
                {0,0,0},
                {0,0,0}
        };
        DynamicProgrammingIterative empty3x3 = new DynamicProgrammingIterative(paths3x3);
        Assert.assertEquals(6, empty3x3.countPaths(2,2));
    }

    /*
     3 1 0
     2 1 x
     1 1 1
    */
    @Test
    public void ThreeByThreeBlocked() throws Exception {
        int[][] paths3x3 = new int[][] {
                {0,0,0},
                {0,0,1},
                {0,0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths3x3);
        Assert.assertEquals(3, iterative.countPaths(2,2));
    }

    /*
     3 3 1
     x 2 1
     1 1 1
    */
    @Test
    public void ThreeByThreeBlocked2() throws Exception {
        int[][] paths3x3 = new int[][] {
                {0,0,0},
                {1,0,0},
                {0,0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths3x3);
        Assert.assertEquals(3, iterative.countPaths(2,2));
    }

    /*
     1 1 1
     x x 1
     1 1 1
    */
    @Test
    public void ThreeByThreeBlocked3() throws Exception {
        int[][] paths3x3 = new int[][] {
                {0,0,0},
                {1,1,0},
                {0,0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths3x3);
        Assert.assertEquals(1, iterative.countPaths(2,2));
    }

    @Test
    public void CountBlockedPaths() throws Exception {
        int[][] paths = new int[][] {
                {0,0,0,0,0,0,0,0},
                {0,0,1,0,0,0,1,0},
                {0,0,0,0,1,0,0,0},
                {1,0,1,0,0,1,0,0},
                {0,0,1,0,0,0,0,0},
                {0,0,0,1,1,0,1,0},
                {0,1,0,0,0,1,0,0},
                {0,0,0,0,0,0,0,0}
        };
        DynamicProgrammingIterative iterative = new DynamicProgrammingIterative(paths);
        Assert.assertEquals(27, iterative.countPaths(7,7));
    }

}
```

Now the advantage of doing things this way is that we do slightly less work and use slightly less memory. Runtime is still O(n^2), but because each cell's value is read straight out of the table rather than recomputed, the recursion stays shallow; done with plain loops as a fully bottom-up fill, it would not need the call stack at all.

So where does this leave us?

The main takeaway of dynamic programming problems is that when you have a big problem that consists of many similar smaller ones, you can often implement a solution using a combination of recursion and memoization for performance.

Note also that if you look at the tests I wrote, a good place to start with any of these problems is with something really small and simple (like a 2x2 or 3x3 matrix), write tests, and then work your way up from there.

Happy coding!


## Memoization

In computing, memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached results when required.

The most common example of memoization is the Fibonacci series. Because a Fibonacci implementation recursively calls itself, when calculating the series for larger numbers you end up making the same calculations over and over again.

With memoization you store the results of certain calculations along the way, and then simply lookup and return them when called again in the future.

For example, here is a naive implementation of the Fibonacci series, one that does the full recursive calculation for every element from beginning to end.

### FibonacciNaive.java

```java
public class FibonacciNaive {

    public int fib(int n) {
        System.out.println("n = " + n);
        if (n <= 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else {
            return fib(n - 1) + fib(n - 2);
        }
    }
}
```

This is an extremely slow implementation. fib(30) here takes about 19 seconds.

By contrast, here is the same series, only memoized. Here we store the result of fib(n) as we calculate it, and then reuse it in subsequent calculations.

### FibonacciMemoized.java

```java
public class FibonacciMemoized {

    // big enough for the fib(1000) timing test below
    // (the values overflow int long before n = 1000; here we only care about timing)
    private int[] memo = new int[1001];

    public int fib(int n) {
        System.out.println("n = " + n);
        if (n <= 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else if (memo[n] == 0) {
            memo[n] = fib(n - 1) + fib(n - 2);
        }
        return memo[n];
    }
}
```

This one does fib(30) in less than a second; you have to bump n up to around 1000 before it takes approximately 20 seconds. So a massive increase in computational performance.

That's the power and beauty of memoization: with it, our runtime is linear instead of exponential. Happy coding!
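A quick way to see that difference is to count calls rather than seconds. This little sketch of mine (not from the original classes) runs both versions side by side and tallies how many times each function is entered:

```java
public class FibCallCount {

    static long naiveCalls = 0, memoCalls = 0;
    static long[] memo = new long[51];

    // Plain recursion: recomputes the same subproblems over and over.
    static long fibNaive(int n) {
        naiveCalls++;
        if (n <= 1) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    // Memoized: each fib(n) for n >= 2 is computed exactly once.
    static long fibMemo(int n) {
        memoCalls++;
        if (n <= 1) return n;
        if (memo[n] == 0) memo[n] = fibMemo(n - 1) + fibMemo(n - 2);
        return memo[n];
    }

    public static void main(String[] args) {
        fibNaive(30);
        fibMemo(30);
        System.out.println("naive calls: " + naiveCalls); // 2692537
        System.out.println("memo calls:  " + memoCalls);  // 59
    }
}
```

Same answer either way (832040), but one takes millions of calls and the other a few dozen.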

Here is the test harness.

### FibonacciTest.java

```java
import junit.framework.Assert;
import org.junit.Before;
import org.junit.Test;

public class FibonacciTest {

    private FibonacciNaive naive;
    private FibonacciMemoized memo;

    @Before
    public void setUp() throws Exception {
        naive = new FibonacciNaive();
        memo = new FibonacciMemoized();
    }

    @Test
    public void Naive() throws Exception {
        Assert.assertEquals(0, naive.fib(0));
        Assert.assertEquals(1, naive.fib(1));
        Assert.assertEquals(1, naive.fib(2));
        Assert.assertEquals(2, naive.fib(3));
        Assert.assertEquals(3, naive.fib(4));
        Assert.assertEquals(5, naive.fib(5));
        Assert.assertEquals(8, naive.fib(6));
        Assert.assertEquals(13, naive.fib(7));
        Assert.assertEquals(21, naive.fib(8));
    }

    @Test
    public void Memoized() throws Exception {
        Assert.assertEquals(0, memo.fib(0));
        Assert.assertEquals(1, memo.fib(1));
        Assert.assertEquals(1, memo.fib(2));
        Assert.assertEquals(2, memo.fib(3));
        Assert.assertEquals(3, memo.fib(4));
        Assert.assertEquals(5, memo.fib(5));
        Assert.assertEquals(8, memo.fib(6));
        Assert.assertEquals(13, memo.fib(7));
        Assert.assertEquals(21, memo.fib(8));
    }

    @Test
    public void RecordTimeNaive() throws Exception {
        long startTime = System.currentTimeMillis();
        naive.fib(30);
        long endTime = System.currentTimeMillis();
        long elapsedTime = (endTime - startTime) / 1000;
        System.out.println("elapsedTime = " + elapsedTime); // 19s
    }

    @Test
    public void RecordTimeMemoized() throws Exception {
        long startTime = System.currentTimeMillis();
        memo.fib(1000);
        long endTime = System.currentTimeMillis();
        long elapsedTime = (endTime - startTime) / 1000;
        System.out.println("elapsedTime = " + elapsedTime); // 20s
    }
}
```

## Binary Heaps

A binary heap is a binary tree heap data structure used in sorting and priority queue algorithms.

### What is a binary tree?

A binary tree is a data structure that stores its data in the shape of a tree. At the top you have a root node, and underneath that you have at most two other nodes: one on the left and one on the right.

Binary trees are useful for all sorts of things. But one area where they really shine is in searching. When you sort a binary tree in certain ways, you can find elements much faster than if you were walking an array or linked list.

### What are heaps?

A heap data structure is a binary tree that is sorted. We call it a max heap if the largest number is on top, and a min heap if the smallest number is on top. What's cool about max heaps is that their structure lends itself very nicely to quickly finding the maximum element in a queue, which is why priority queues are often implemented using binary tree structures sorted as heaps.

### What is a binary heap?

As you've probably guessed, binary heaps combine binary trees with heap-sorted structures. In a binary (max) heap, every node's value is at least that of its children. There is no left/right orientation or distinction in values.

Also, children further down the tree can have greater values than nodes higher up the tree on the other side. That doesn't matter. The only thing that matters is that children are equal to or less than their parents. That's it.

Which means every node is the root of its own sub-heap.

### Heap representation

Because the shape of a heap maps so nicely onto an array, it is easy to store heap nodes in one: top to bottom, left to right. The node at index i has its left child at index 2i + 1, its right child at index 2i + 2, and its parent at index (i - 1) / 2. Because we know where every element of the array lies, we can access a node's relatives with simple arithmetic.
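For a node stored at array index i, the left child lives at 2*i + 1, the right child at 2*i + 2, and the parent at (i - 1) / 2 (integer division). These are the same helpers used in the heap class below, shown here in isolation:

```java
public class HeapIndexMath {

    static int leftChildIndex(int parentIndex)  { return 2 * parentIndex + 1; }
    static int rightChildIndex(int parentIndex) { return 2 * parentIndex + 2; }
    static int parentIndex(int childIndex)      { return (childIndex - 1) / 2; }

    public static void main(String[] args) {
        // For the root (index 0): its children live at indexes 1 and 2.
        System.out.println(leftChildIndex(0));  // 1
        System.out.println(rightChildIndex(0)); // 2
        // And both children agree that index 0 is their parent.
        System.out.println(parentIndex(1));     // 0
        System.out.println(parentIndex(2));     // 0
    }
}
```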

### Finding the maximum

Finding the maximum is easy. It's simply the element at the top of our heap, so it is always the first element in our array.

### Inserting

When inserting, we stick the new element at the end of our tree (always reading top to bottom, left to right), which means it goes in the last element of our array. If it is smaller than its parent, we are done.

But what happens if our element is larger than its parent? For insertions like this we walk up the tree, swapping nodes until our heap constraint (the child being less than or equal to its parent) is satisfied. And if we insert a number equal to its parent, we just leave it there. It's done.

### Deletion

Say we want to delete the root of our heap (i.e. we pulled off its max value and now we need the heap to re-sort itself, or heapify). To delete the max element, we take the max node, swap it with the last leaf, and then delete that last leaf, which we know is easy. Now we need to re-sort the heap, because the new top element isn't right: it doesn't represent the max.

We do this by comparing the top with each left/right child, and continuously swapping with the larger of the two. In this case the left child is larger, so we swap the root to the left. Now we repeat the process with the left-hand subtree. We keep swapping the node in question with the largest of its children until we can't swap anymore. At this point we say the heap is heapified, and finally sorted. It is good to go for another extraction.

### Cool things

Another cool thing about binary heaps is that the data structure can sort itself very efficiently, i.e. it can sort itself in place. No need to copy all the contents somewhere else and then copy back. When we heapify, it all happens within the existing array by simply swapping nodes. Very cool. Very efficient.
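As a sketch of that in-place idea (my own, separate from the heap class below), heapsort builds a max heap inside the array, then repeatedly swaps the root with the last unsorted element and sifts down, never touching a second array:

```java
import java.util.Arrays;

public class HeapSortSketch {

    static void sort(int[] a) {
        // Build a max heap in place (sift down from the last parent upwards).
        for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, i, a.length);
        // Repeatedly move the current max to the end and shrink the heap.
        for (int end = a.length - 1; end > 0; end--) {
            swap(a, 0, end);
            siftDown(a, 0, end);
        }
    }

    static void siftDown(int[] a, int i, int size) {
        while (2 * i + 1 < size) {
            int child = 2 * i + 1;                                    // left child
            if (child + 1 < size && a[child + 1] > a[child]) child++; // pick the larger child
            if (a[i] >= a[child]) return;                             // heap property holds
            swap(a, i, child);
            i = child;
        }
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        sort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
```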

### Code

Here is an example of a heap implementation for ints in Java. (Despite the class name, the comparisons below keep the smallest element on top, so it behaves as a min heap, which is what the tests exercise.)

MaxIntHeap.java

```java
import java.util.Arrays;

public class MaxIntHeap {
    private int capacity = 10;
    private int size = 0;

    int[] items = new int[capacity];

    private int leftChildIndex(int parentIndex) { return 2 * parentIndex + 1; }
    private int rightChildIndex(int parentIndex) { return 2 * parentIndex + 2; }
    private int parentIndex(int childIndex) { return (childIndex - 1) / 2; }

    private boolean hasLeftChild(int index) { return leftChildIndex(index) < size; }
    private boolean hasRightChild(int index) { return rightChildIndex(index) < size; }
    private boolean hasParent(int index) { return parentIndex(index) >= 0; }

    private int leftChild(int index) { return items[leftChildIndex(index)]; }
    private int rightChild(int index) { return items[rightChildIndex(index)]; }
    private int parent(int index) { return items[parentIndex(index)]; }

    private void swap(int indexOne, int indexTwo) {
        int temp = items[indexOne];
        items[indexOne] = items[indexTwo];
        items[indexTwo] = temp;
    }

    private void ensureCapacity() {
        if (size == capacity) {
            items = Arrays.copyOf(items, capacity * 2);
            capacity *= 2;
        }
    }

    public int extractMax() {
        if (size == 0) throw new IllegalStateException();
        int item = items[0];        // grab the top
        items[0] = items[size - 1]; // copy the bottom to the top
        size--;
        heapifyDown();              // and now, because the top isn't right, we heapify down
        return item;
    }

    public void add(int item) {
        ensureCapacity();
        items[size] = item;         // put in last spot
        size++;
        heapifyUp();
    }

    public void heapifyUp() {
        int index = size - 1;       // start at last element
        while (hasParent(index) && parent(index) > items[index]) { // walk up as long as there is a parent and it is bigger than you
            swap(parentIndex(index), index);
            index = parentIndex(index); // walk upwards to next node
        }
    }

    public void heapifyDown() {
        int index = 0;              // starting at the top
        while (hasLeftChild(index)) { // as long as I have children. Note: only need to check left, because if there is no left there is no right

            // pick a direction, and get the smaller of the two indexes
            int smallerChildIndex = leftChildIndex(index);
            if (hasRightChild(index) && rightChild(index) < leftChild(index)) {
                smallerChildIndex = rightChildIndex(index); // go right (because this is a min heap)
            }

            // if I am smaller than both of my children... then everything is good. I am sorted.
            if (items[index] < items[smallerChildIndex]) {
                break;
            } else { // we are still not in order
                swap(index, smallerChildIndex); // so swap with the smaller child
            }

            index = smallerChildIndex;          // then move down to the smaller child
        }
    }

    public void print() {
        for (int i = 0; i < size; i++) {
            System.out.println(i + "[" + items[i] + "]");
        }
    }
}
```

MaxIntHeapTest.java

```java
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class MaxIntHeapTest {

    private MaxIntHeap minHeap;

    @Before
    public void setUp() throws Exception {
        minHeap = new MaxIntHeap();
        // add 6, 5, 4, 3, 2, 1 in descending order
        for (int i = 6; i >= 1; i--) {
            minHeap.add(i);
        }
    }

    @Test
    public void Insert() throws Exception {
        // Remember: The array walks top down / left to right
        Assert.assertEquals(1, minHeap.items[0]);
        Assert.assertEquals(3, minHeap.items[1]);
        Assert.assertEquals(2, minHeap.items[2]);
        Assert.assertEquals(6, minHeap.items[3]);
        Assert.assertEquals(4, minHeap.items[4]);
        Assert.assertEquals(5, minHeap.items[5]);
    }

    @Test
    public void ExtractMin() throws Exception {
        Assert.assertEquals(1, minHeap.extractMax());
        Assert.assertEquals(2, minHeap.extractMax());
        Assert.assertEquals(3, minHeap.extractMax());
        Assert.assertEquals(4, minHeap.extractMax());
        Assert.assertEquals(5, minHeap.extractMax());
        Assert.assertEquals(6, minHeap.extractMax());
    }
}
```

Coming from an electrical engineering background, I have never really studied some of our most basic computer data structures in depth. In this episode I am visiting the linked list and sharing some notes on how it works.

## What is a linked list?

A linked list is a data structure consisting of a group of nodes linked together in sequence via a series of pointers. Each node consists of a piece of data, along with a pointer to the next piece of data in the list. That's it!

It is one of the simplest, most basic data structures we've got, and it is used to build everything from common Lists to Stacks and Queues.
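For instance, Java's own java.util.LinkedList can serve as both a stack and a queue out of the box:

```java
import java.util.LinkedList;

public class LinkedListAsStackAndQueue {
    public static void main(String[] args) {
        // As a stack (LIFO): push and pop work on the head.
        LinkedList<Integer> stack = new LinkedList<>();
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop()); // 2

        // As a queue (FIFO): offer at the tail, poll from the head.
        LinkedList<Integer> queue = new LinkedList<>();
        queue.offer(1);
        queue.offer(2);
        System.out.println(queue.poll()); // 1
    }
}
```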

## What are its strengths?

The primary advantage of the linked list over a conventional array is that the linked list can handle the insertion and removal of elements, without having to reorganize the entire data structure.

It can do this because the linked list is just a series of nodes and pointers (which means the data doesn't have to sit in one contiguous block of memory). When it inserts a new element, all it has to do is adjust the pointers before and after where the new node is to be inserted.

Linked lists are also very fast at adding, and accessing, elements at the head of the list: O(1).

## What are its weaknesses?

The downside of the linked list is that it doesn’t allow random access. Meaning you can’t just read/write the nth element in the list. To access data in the middle of the list, you need to walk the list from beginning to end, looking for the element you are after.

Then, once you've found it, you can return it or remove it. But iterating through the elements means that insertion, removal, and random access in the middle of the list take O(n) time, meaning they depend on the number of elements (n) in your list.

## Typical operations

Here is a list of the typical operations a linked list would perform: push, pop, peek, remove, indexOf. As you can see, it looks very much like the implementation of a stack or queue.

## How does it work

Here is a walkthrough of some simple linked list operations in Java.

### Push

Here is our `Node`, which holds our data and the next-element pointer in our list. The little snippet of code built on it is the entire basis for how linked lists work: the `push` operation, which adds a new element to the top of the list.

It takes the element of data you pass in, `e`, and creates a new `Node` with the data `e` along with a `null` for the next element in the list.

And all the magic in how the linked list works is contained in two little lines. First we take the new node we just created and set its next pointer to the current head, which would be null if this is the first item on our list.

Then we take the head (which was null) and set it to be equal to our new node.

These two seemingly simple little lines still make my head hurt. I find them confusing because you are setting one thing in one line and then resetting it in the other.

But once you grok this, you basically get the linked list and all its magic. Because all we are doing with a push is taking the new node, setting its next to be the current head (which in effect pushes the current head one deeper into the stack), and then making the new node the top by assigning it to the head. Phew!
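Since those two lines carry all the weight, here they are in a minimal self-contained sketch (ints instead of generics, just to isolate the idea):

```java
public class PushSketch {

    static class Node {
        int item;
        Node next;
        Node(int item, Node next) { this.item = item; this.next = next; }
    }

    Node head;

    void push(int item) {
        Node newNode = new Node(item, null);
        newNode.next = head; // 1. point the new node at the current head (null at first)
        head = newNode;      // 2. then make the new node the new head
    }

    public static void main(String[] args) {
        PushSketch list = new PushSketch();
        list.push(1);
        list.push(2);
        System.out.println(list.head.item);      // 2 -- the last pushed item is the head
        System.out.println(list.head.next.item); // 1 -- pushed one deeper
    }
}
```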

### Pop

Popping is pretty easy. Here we check to see if we have an empty list by looking to see if the head == null. And if it isn't, we grab the next node, set it to be the new head, and decrement our size count before returning the item (the data) of the node.

Note: The API has us return the data, not the node. The Node is just an internal data structure we use to implement the list. It makes no sense to the outside world.

### Remove

Here is where things get interesting. You are going to witness both an advantage of linked lists (the ability to remove items gracefully by merely changing a few pointers) and a disadvantage (having to iterate through the elements to find the one you want).

A lot of linked list implementations start by doing checks on the head of the list first, which is exactly what we do here. First we grab the head, check to see if it contains the element we want, and if so remove it by pointing the head at the current head's next, while decrementing our size and returning true.

With the head handled, we now loop through the rest of the linked list until we find the match we are looking for. Then we do exactly what we did before with the head, only now for some arbitrary node in our list.

We grab the node, re-point the previous node's next to the node's next, and off we go.

This is both the advantage and disadvantage of the linked list. With just a few lines of code, we pulled an element from the list without having to remap or shuffle any other elements in our data structure. That is huge. And that is worth a moment of your time to appreciate.

But there is also the slight disadvantage: to find our element, we may need to loop through the entire list to the very end (in the worst case). Alas, such is life.

### The rest

The rest of the implementation is basically variations on these themes of looping through the list, finding the element you are looking for, and adjusting pointers.

## Summary

The linked list is a beautiful little data structure that powers a lot of the software you use every day under the hood. It doesn’t allow random access like an Array, but it’s a beautiful data structure for building Stacks and Lists.

To see what a professional LinkedList.java looks like, check out the implementation by Joshua Bloch in the JDK.

But to really understand it, it's always best to build these things from scratch on your own. Here is my implementation below.

## Source and test

```java
public class SingleLinkedList<E> {

    Node<E> head;
    int size;

    public void push(E e) {
        Node<E> newNode = new Node<>(e, null);
        newNode.next = head; // point the new node at the current head
        head = newNode;      // then make the new node the head
        size++;
    }

    public int lastIndexOf(E e) {
        if (head == null) return -1;

        Node<E> current = head;
        int index = 0;

        while (current.next != null) {
            if (current.item.equals(e)) return index;
            index++;
            current = current.next;
        }

        // check the last node
        if (current.item.equals(e)) return index;

        return -1;
    }

    public E peek() {
        Node<E> f = head;
        return (f == null) ? null : f.item;
    }

    public E pop() {
        // get the head
        // assign head to its next
        if (head == null) return null;

        Node<E> first = head;
        head = first.next;
        size--;

        return first.item;
    }

    public boolean remove(E e) {
        // walk the list from the beginning
        // remove and return true if found
        // return false otherwise

        if (head == null) return false;

        Node<E> prev = null;
        Node<E> current = head;

        if (current.item.equals(e)) {
            head = current.next; // make the next element the new head
            size--;
            return true;
        }

        // all others
        while (current.next != null) {
            prev = current;
            current = current.next;

            if (current.item.equals(e)) {
                // connect the previous node's next to this node's next,
                // thereby bypassing the current node we want to remove
                prev.next = current.next;
                size--;
                return true;
            }
        }

        return false;
    }

    public void set(int index, E e) {

        Node<E> newNode = new Node<>(e, null);

        if (index == 0) {
            newNode.next = head.next; // replace the head, keeping the rest of the list
            head = newNode;
            return;
        }

        // all others:
        // take the previous node,
        // attach its next to this new node,
        // attach this new node to the current node's next

        Node<E> prev = head;
        Node<E> current = head.next;

        for (int counter = 1; counter <= index; counter++) {
            if (counter == index) {
                prev.next = newNode;
                newNode.next = current.next;
                return;
            } else {
                prev = current;
                current = current.next;
            }
        }

    }

    private static class Node<E> {
        E item;
        Node<E> next;

        Node(E element, Node<E> next) {
            this.item = element;
            this.next = next;
        }
    }
}

/*
push() is neat because you are basically:

Creating a new node, assigning the old head to be the next link in the chain, and then making
the new node you just created the new head. So you are just bumping everything down one.

1. Create a new node.
2. Assign its next to the current head (which at first is null because the list is empty).
3. Then assign this newNode to the head, so it is at the front of the list.

Then when you add the next node you repeat the process.
1. Create the new node.
2. Assign its next to the current head of the list (the last element you added).
3. Then make this new node the head.

So you are constantly making new nodes, assigning their next to be the current head,
and then assigning them to be the new head (while their next points to the old head).
And you just keep doing this.

*/
```

```java
import junit.framework.Assert;
import junit.framework.TestCase;
import org.junit.Before;
import org.junit.Test;

import java.util.ArrayList;

public class LinkedListTests {

    private SingleLinkedList<Integer> sll;

    @Before
    public void setUp() throws Exception {
        sll = new SingleLinkedList<Integer>();
    }

    @Test
    public void InitalState() throws Exception {
        Assert.assertEquals(0, sll.size);
    }

    @Test
    public void lastIndexOf() throws Exception {
        sll.push(99);
        sll.push(100);
        sll.push(101);
        Assert.assertEquals(2, sll.lastIndexOf(99));
        Assert.assertEquals(1, sll.lastIndexOf(100));
        Assert.assertEquals(0, sll.lastIndexOf(101));
    }

    @Test
    public void peek() throws Exception {
        sll.push(99);
        Assert.assertEquals(99, (int) sll.peek());
    }

    @Test
    public void peekEmpty() throws Exception {
        Assert.assertEquals(null, sll.peek());
    }

    @Test
    public void pushAndPop() throws Exception {
        sll.push(99);
        sll.push(100);
        sll.push(101);
        Assert.assertEquals(101, (int) sll.pop());
        Assert.assertEquals(100, (int) sll.pop());
        Assert.assertEquals(99, (int) sll.pop());
    }

    @Test
    public void popEmptyList() throws Exception {
        Assert.assertNull(sll.pop());
    }

    @Test
    public void remove() throws Exception {
        sll.push(99);
        sll.push(100);
        sll.push(101);

        Assert.assertTrue(sll.remove(101));
        Assert.assertEquals(100, (int) sll.peek());

        Assert.assertTrue(sll.remove(100));
        Assert.assertEquals(99, (int) sll.peek());

        Assert.assertTrue(sll.remove(99));
        Assert.assertEquals(null, sll.peek());
    }

    @Test
    public void pushSize() throws Exception {
        Assert.assertEquals(0, sll.size);
        sll.push(99);
        Assert.assertEquals(1, sll.size);
        sll.push(100);
        Assert.assertEquals(2, sll.size);
    }

    @Test
    public void popSize() throws Exception {
        sll.push(99);
        sll.push(100);
        Assert.assertEquals(2, sll.size);
        sll.pop();
        Assert.assertEquals(1, sll.size);
    }

    @Test
    public void removeSize() throws Exception {
        sll.push(99);
        sll.push(100);
        Assert.assertEquals(2, sll.size);
        sll.remove(100);
        Assert.assertEquals(1, sll.size);
    }

    @Test
    public void setIndex() throws Exception {
        sll.push(99);
        sll.push(100);
        sll.push(101);

        sll.set(0, 49);
        sll.set(1, 50);
        sll.set(2, 51);

        Assert.assertEquals(0, sll.lastIndexOf(49));
        Assert.assertEquals(1, sll.lastIndexOf(50));
        Assert.assertEquals(2, sll.lastIndexOf(51));
    }

    @Test
    public void realLinkedList() throws Exception {

        // Here are some tests written against the Java LinkedList class