Stacks and Queues

Stacks and Queues are pretty similar data structures. The difference is in how they remove elements.

A Stack is LIFO – Last In First Out. Much like a stack of books: the last book you put on the pile is the first one you take off.

A Queue is FIFO – First In First Out. Like waiting in line at the supermarket: if you are the first to line up, you are the first to come out.

They are both linear data structures.
They can be implemented as Arrays or Linked Lists.

Here are a Stack and a Queue implemented as linked lists.

Stack.java

public class Stack {

    private class Node {
        private int data;
        private Node next;
        private Node (int data) {
            this.data = data;
        }
    }

    private Node top; // add and remove things here

    public boolean isEmpty() {
        return top == null;
    }

    public int peek() {
        return top.data;
    }

    public void push(int data) {
        // Create new node
        // Set its next to be the current top
        // Set top to be the new node

        Node node = new Node(data);
        node.next = top;
        top = node;
    }

    public int pop() {
        // Store the value you want to return
        // Point top at the current top.next
        // Return the value

        // Note: like peek(), this assumes the stack is not empty.
        // No extra null check is needed below: when we pop the last element,
        // top.next is null, so top simply becomes null again.
        int data = top.data;
        top = top.next;
        return data;
    }
}

Queue.java

public class Queue {

    private class Node {
        private int data;
        private Node next;
        private Node (int data) {
            this.data = data;
        }
    }

    private Node head; // remove things here
    private Node tail; // add things here

    public boolean isEmpty() {
        return head == null;
    }

    public int peek() {
        return head.data;
    }

    public void add(int data) {
        // Create a new node
        // Set the current tail.next to point to this new node
        // Then set the new node to be the new tail

        Node newTail = new Node(data);
        if (tail != null) {
            tail.next = newTail;
        }
        tail = newTail;

        // handle case of first element where head is null
        if (head == null) {
            head = tail;
        }
    }

    public int remove() {
        // Point the head at the current head next
        int data = head.data;
        head = head.next;

        // Handle queue now being empty
        if (head == null) {
            tail = null;
        }

        return data;
    }
}
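
To see the two removal orders side by side, here is a small demo that exercises the two classes above (the class name StackQueueDemo and its main method are just for illustration):

StackQueueDemo.java

public class StackQueueDemo {

    public static void main(String[] args) {
        Stack stack = new Stack();
        stack.push(1);
        stack.push(2);
        stack.push(3);
        System.out.println(stack.pop()); // 3 - last in, first out
        System.out.println(stack.pop()); // 2

        Queue queue = new Queue();
        queue.add(1);
        queue.add(2);
        queue.add(3);
        System.out.println(queue.remove()); // 1 - first in, first out
        System.out.println(queue.remove()); // 2
    }
}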

Binary Search Trees

Some notes from this excellent video by Gayle McDowell on trees.

What is a binary tree?

A binary tree is a data structure whereby each node has no more than two child nodes.

what-is-binary-tree.PNG

What is a binary search tree?

A binary search tree is a binary tree that maintains a specific ordering property: in any subtree, all the nodes to the left are less than the root node, which in turn is less than all the nodes to the right.

This ordering makes finding a node really fast because we have a pretty good idea of which way to go as we search the tree, going left or right and essentially halving our search area at every step.

what-is-binary-search-tree.PNG

Inserts

Insert works much like a find. We start at the top, do a comparison, and then head down into the appropriate child subtree, repeating the process until we get down to an empty spot or null element.

insert-1.PNG

Note the one problem with this approach is that if we get elements in a particular order we can get really unbalanced.

unbalanced.PNG

And this will make our finds and inserts slow.

There are algorithms that you can use to keep a binary tree balanced, but they are pretty complicated, and in an interview you can generally assume that you are going to have a balanced tree.

Traversing

There are three common ways we walk through a tree.

Preorder traversal: you visit the root of the tree first, then its left subtree, then its right subtree. Top to bottom, left to right.

Inorder traversal: left subtree first, then the parent, then the right subtree. So left, parent, right.

Postorder traversal: left, then right, then the root.

order-traversal.PNG

Typically with binary search trees we do an inorder traversal, because that prints the nodes out in sorted order.

inorder-preferred.PNG
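
As a rough sketch, the preorder and postorder walks differ from the inorder walk only in where the node's own data gets printed. These two methods are not part of the NodeBST class below (which only has printInOrder()), but they would slot in right beside it:

    // parent -> left child -> right child
    public void printPreOrder() {
        System.out.println("data = " + data);
        if (left != null) left.printPreOrder();
        if (right != null) right.printPreOrder();
    }

    // left child -> right child -> parent
    public void printPostOrder() {
        if (left != null) left.printPostOrder();
        if (right != null) right.printPostOrder();
        System.out.println("data = " + data);
    }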

Code for Binary Search Tree

NodeBST.java

public class NodeBST {

    NodeBST left, right;
    int data;

    public NodeBST(int data) {
        this.data = data;
    }

    public void insert(int value) {
        // look to the left and the right to see where we want to insert
        if (value <= data) {
            if (left == null) {
                left = new NodeBST(value);
            } else {
                // push down to child and ask it to handle : recursion
                left.insert(value);
            }
        } else {
            if (right == null) {
                right = new NodeBST(value);
            } else {
                right.insert(value);
            }
        }
    }

    public boolean contains(int value) {
        // if we are there, return true
        if (value == data) {
            return true;
        } else if (value < data) {
            // then it should be on the left
            // if there is no left node
            if (left == null) {
                return false;
            } else {
                return left.contains(value);
            }
        } else {
            if (right == null) {
                return false;
            } else {
                return right.contains(value);
            }
        }
    }

    // left child -> parent -> right child
    public void printInOrder() {
        if (left != null) {
            left.printInOrder();
        }

        System.out.println("data = " + data);

        if (right != null) {
            right.printInOrder();
        }
    }
}

NodeBSTTest.java

import junit.framework.Assert;
import org.junit.Before;
import org.junit.Test;

public class NodeBSTTest {

    private NodeBST node;

    @Before
    public void setUp() throws Exception {
        node = new NodeBST(10);
        node.insert(5);
        node.insert(15);
        node.insert(8);
    }

    @Test
    public void Contains() throws Exception {
        Assert.assertTrue(node.contains(5));
        Assert.assertTrue(node.contains(15));
        Assert.assertTrue(node.contains(8));
    }
    
    @Test
    public void PrintOrder() throws Exception {
        node.printInOrder();
    }
}

print.PNG

Binary Heaps

A binary heap is a heap data structure built on a binary tree; it is used in sorting and priority queue algorithms.

What is a binary tree?

Screen Shot 2017-11-10 at 9.05.53 AM.png

A binary tree is a data structure that stores its data in the shape of a tree. At the top you have a root node, and underneath that you have at most two other nodes: one on the left and one on the right.

Binary trees are useful for all sorts of things. But one area where they really shine is in searching. When you sort a binary tree in certain ways, you can find elements much faster than if you were walking an array or linked list.

What are heaps?

A heap is a binary tree that keeps its elements in a particular order. We call it a max heap if the largest number is at the top, and a min heap if the smallest number is at the top.

Screen Shot 2017-11-10 at 9.07.25 AM.png

What’s cool about max heaps is that their structure lends itself very nicely to quickly finding the maximum element, which is why priority queues are often implemented using binary heaps.

What is a binary heap?

As you’ve probably guessed, binary heaps combine binary trees with this heap ordering.

Screen Shot 2017-11-10 at 9.07.53 AM.png

In a max binary heap, every node has a value at least as large as the values of its children. There is no left/right orientation or distinction between the two children’s values.

Also, children further down the tree can have greater values than nodes higher up the tree on the other side. That doesn’t matter. The only thing that matters is that children are equal to or less than their parents. That’s it.

Which means every node is the root of its own sub-heap.

Heap representation

Because a heap always has this nice, regular shape, it is easy to store heap nodes in an array.

Screen Shot 2017-11-10 at 9.08.11 AM.png

Top to bottom, left to right. And because we know exactly where every element lies, we can get at any node's relatives directly from its index: for the node at index i, the left child lives at index 2i + 1, the right child at 2i + 2, and the parent at (i - 1) / 2 (the same formulas used in the code below).
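
As a concrete, made-up example: if the array holds the max heap { 10, 7, 9, 3, 5 }, the node at index 1 (value 7) has its left child at index 3 (value 3), its right child at index 4 (value 5), and its parent at index 0 (value 10).

HeapIndexDemo.java

public class HeapIndexDemo {

    public static void main(String[] args) {
        int[] heap = {10, 7, 9, 3, 5}; // stored top to bottom, left to right

        int i = 1; // the node with value 7
        System.out.println("left child  = " + heap[2 * i + 1]);   // 3
        System.out.println("right child = " + heap[2 * i + 2]);   // 5
        System.out.println("parent      = " + heap[(i - 1) / 2]); // 10
    }
}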

Finding the maximum

Finding the maximum is easy. It’s simply the first, or highest element at the top of our heap. So it is always the first element in our array.

Inserting

When inserting, we stick the new element at the end of our tree (always reading top to bottom, left to right), which means sticking it in the last element of our array. If it is smaller than its parent, we are done.

But what happens if our element is larger than its parent?

Screen Shot 2017-11-10 at 9.08.26 AM.png

For insertions like this we walk up the tree, swapping nodes until our heap constraint (each child being less than or equal to its parent) is satisfied.

Screen Shot 2017-11-10 at 9.08.44 AM.png

And if we insert a number equal to its parent, we just leave it where it is. It’s done.

Deletion

Say we want to delete the root of our heap (i.e. we pulled off its max value and now we need the heap to re-sort itself – or heapify).

Screen Shot 2017-11-10 at 9.08.56 AM.png

To delete the max element, we take the max node, swap it with the last leaf, and then delete that last leaf, which we already know is easy.

Screen Shot 2017-11-10 at 9.09.16 AM.png

Now we need to re-sort the heap (because the element now at the top isn’t right – it doesn’t represent the max).

We do this by comparing the top with each of its left/right children, and continuously swapping with the larger of the two.

Screen Shot 2017-11-10 at 9.09.46 AM.png

In this case the left child is larger, so we are going to swap the root to the left.

Screen Shot 2017-11-10 at 9.09.57 AM.png

Now we repeat the process with the left-hand subtree. We keep swapping the node in question with the largest of its children until we can’t swap anymore.

Screen Shot 2017-11-10 at 9.10.13 AM.png

At this point we say the heap is heapified. It is good to go for another extraction.

Cool things

Another cool thing about binary heaps is that this data structure can sort itself very efficiently – in place. No need to copy all the contents somewhere else and then copy them back. When we heapify, it all happens within the existing array, simply by swapping nodes. Very cool. Very efficient.

Code

Here is an example of a Max Heap implementation for ints in Java.

MaxIntHeap.java

import java.util.Arrays;

public class MaxIntHeap {
    private int capacity = 10;
    private int size = 0;

    int[] items = new int[capacity];

    private int leftChildIndex(int parentIndex) { return 2 * parentIndex + 1; }
    private int rightChildIndex(int parentIndex) { return 2 * parentIndex + 2; }
    private int parentIndex(int childIndex ) { return (childIndex - 1) / 2; }

    private boolean hasLeftChild(int index) { return leftChildIndex(index) < size; }
    private boolean hasRightChild(int index) { return rightChildIndex(index) < size; }
    private boolean hasParent(int index) { return parentIndex(index) >= 0; }

    private int leftChild(int index) { return items[leftChildIndex(index)]; }
    private int rightChild(int index) { return items[rightChildIndex(index)]; }
    private int parent(int index) { return items[parentIndex(index)]; }

    private void swap(int indexOne, int indexTwo) {
        int temp = items[indexOne];
        items[indexOne] = items[indexTwo];
        items[indexTwo] = temp;
    }

    private void ensureCapacity() {
        if (size == capacity) {
            items = Arrays.copyOf(items, capacity * 2);
            capacity *= 2;
        }
    }


    public int extractMax() {
        if (size == 0) throw new IllegalStateException();
        int item = items[0];        // grab the max
        items[0] = items[size - 1]; // copy the bottom to the top
        size--;
        heapifyDown();              // and now, because the top isn't right, we heapify down
        return item;
    }

    public void add(int item) {
        ensureCapacity();
        items[size] = item;          // put in last spot
        size++;
        heapifyUp();
    }

    public void heapifyUp() {
        int index = size - 1;       // start at the last element
        while (hasParent(index) && parent(index) < items[index]) {  // walk up as long as there is a parent and it is smaller than you
            swap(parentIndex(index), index);
            index = parentIndex(index); // walk upwards to the next node
        }
    }

    public void heapifyDown() {
        int index = 0;              // starting at the top
        while (hasLeftChild(index)) {  // as long as I have children. Note: only need to check left, because if there is no left there is no right

            // pick a direction, and get the index of the larger of the two children
            int largerChildIndex = leftChildIndex(index);
            if (hasRightChild(index) && rightChild(index) > leftChild(index)) {
                // go right instead (because we are a max heap, we follow the larger child)
                largerChildIndex = rightChildIndex(index);
            }

            // Now compare

            // if I am at least as large as both of my children... then everything is good. I am in place.
            if (items[index] >= items[largerChildIndex]) {
                break;
            } else { // we are still not in order
                swap(index, largerChildIndex);          // so swap with the larger child
            }

            index = largerChildIndex;               // then move down to that child
        }
    }

    public void print() {
        for (int i=0; i < size; i++) {
            System.out.println(i + "[" + items[i] + "]" );
        }
    }
}

MaxIntHeapTest.java

import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class MaxIntHeapTest {

    private MaxIntHeap maxHeap;

    @Before
    public void setUp() throws Exception {
        maxHeap = new MaxIntHeap();
        maxHeap.add(1);
        maxHeap.add(2);
        maxHeap.add(3);
        maxHeap.add(4);
        maxHeap.add(5);
        maxHeap.add(6);
    }

    @Test
    public void Insert() throws Exception {
        // Remember: The array walks top down / left to right
        Assert.assertEquals(6, maxHeap.items[0]);
        Assert.assertEquals(4, maxHeap.items[1]);
        Assert.assertEquals(5, maxHeap.items[2]);
        Assert.assertEquals(1, maxHeap.items[3]);
        Assert.assertEquals(3, maxHeap.items[4]);
        Assert.assertEquals(2, maxHeap.items[5]);
    }

    @Test
    public void ExtractMax() throws Exception {
        Assert.assertEquals(6, maxHeap.extractMax());
        Assert.assertEquals(5, maxHeap.extractMax());
        Assert.assertEquals(4, maxHeap.extractMax());
        Assert.assertEquals(3, maxHeap.extractMax());
        Assert.assertEquals(2, maxHeap.extractMax());
        Assert.assertEquals(1, maxHeap.extractMax());
    }
}

Links that help

https://cs.stackexchange.com/questions/27860/whats-the-difference-between-a-binary-search-tree-and-a-binary-heap
http://www.geeksforgeeks.org/binary-heap/

Priority Queues

A priority queue is like a regular queue, only it has one extra method:

ExtractMax()

This method returns the highest priority item in the queue, and it’s useful for things like sorting when used in conjunction with binary heaps.

A naive approach

Now a naive approach to implementing a priority queue would be to use an unsorted array and then walk the entire array, every time, looking for the highest priority element.

unsorted.PNG

Things improve if you sort the array: then ExtractMax() is easy, as it is simply the last element.

sorted-array.PNG

And of course you could do this with a sorted linked list too, though you would lose the ability to quickly find an element’s position, since you can’t do binary search on a linked list.

sorted-list.PNG

So sorted Arrays and LinkedLists are two ways you could implement a priority queue. But if you really want to go fast, what you really need is a binary heap.

binary-heap-fast.PNG
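
As a minimal sketch, a priority queue of ints can simply wrap the MaxIntHeap from the Binary Heaps post above (the wrapper class and method names here are just illustrative):

IntPriorityQueue.java

public class IntPriorityQueue {

    private final MaxIntHeap heap = new MaxIntHeap();

    public void insert(int priority) {
        heap.add(priority);
    }

    public int extractMax() {
        return heap.extractMax();
    }
}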

These notes were taken from the excellent Coursera data structures course.

Binary trees

Below are some notes I took from the excellent Data Structure course at coursera.

A binary tree is a data structure that is extremely handy for representing certain constructs in computer science.

For example, you can use a binary tree to represent a sentence.

sentence.PNG

A syntax tree for an expression.

syntax-tree.PNG

Or an abstract syntax tree for code.

code-tree.PNG

And when you order the nodes in your tree, you can halve the number of elements to consider at each step of a search. Which is why binary search trees are so popular for searching.

binary-search-tree.PNG

What is a binary tree?

A binary tree consists of a node, with one or two other nodes hanging off it. It’s called binary because each node can have at most two other nodes hanging off it. And usually, when you sort your tree, you put the lower values on the left branch of the node and the higher value ones on the right.

That’s what makes search with binary trees so quick. You effectively start in the middle, and at most only really need to search half the tree.

defn-binary-tree.PNG

Common operations

The amount of time it takes to search a tree is proportional to its height. That’s why many binary trees have a method that can tell you how high they are.

height.PNG

Another common operation is size (or how many nodes).

size.PNG
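
As a hedged sketch (written here against the NodeBST class from the binary search tree post above, since the course uses its own node type), both operations are small recursive methods:

    // Height: the number of nodes on the longest path from this node down to a leaf.
    public int height() {
        int leftHeight = (left == null) ? 0 : left.height();
        int rightHeight = (right == null) ? 0 : right.height();
        return 1 + Math.max(leftHeight, rightHeight);
    }

    // Size: this node plus everything below it.
    public int size() {
        int leftSize = (left == null) ? 0 : left.size();
        int rightSize = (right == null) ? 0 : right.size();
        return 1 + leftSize + rightSize;
    }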

One thing you’ll notice when you look at binary tree algorithms is their heavy use of recursion. Recursion is one of those things that can make your head hurt a bit at first, because from within a method, you continuously call yourself again and again. A dream within a dream.

But it is also what makes binary trees so elegant. With very little code you get a lot of amazing functionality. Which is why they are so beautiful to study and understand.

Walking trees

There are different ways you can walk a tree which fall into two general categories:

  • Depth first
    • InOrderTraversal
    • PreOrderTraversal
    • PostOrderTraversal
  • Breadth first
    • LevelTraversal

But I am not really sure yet when one would make more sense over the other. I suspect InOrderTraversal would be common.
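
For reference, here is a hedged sketch of the breadth-first (level-order) walk, again written against the NodeBST class, using a queue of nodes to visit the tree one level at a time (this helper class is hypothetical and assumes it lives in the same package as NodeBST so it can read its fields):

import java.util.ArrayDeque;
import java.util.Queue;

public class LevelOrder {

    public static void printLevelOrder(NodeBST root) {
        if (root == null) return;

        Queue<NodeBST> queue = new ArrayDeque<>();
        queue.add(root);

        while (!queue.isEmpty()) {
            NodeBST node = queue.remove();
            System.out.println("data = " + node.data);
            if (node.left != null) queue.add(node.left);
            if (node.right != null) queue.add(node.right);
        }
    }
}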

So anyway, those are some notes on binary trees.

Arrays

We use arrays all the time in programming. But did you ever wonder what goes on under the hood and how they work?

What is an array?

An array is a contiguous area of memory consisting of equal-size elements indexed by contiguous integers. Which is just a fancy way of saying that all of an array’s elements are stored together, right beside each other, in memory.

What is so great about them?

Arrays are great at random access. Meaning you can easily access any element at any time so long as you know the index of the element you want.

What are some of its weaknesses?

While most people don’t have to worry about this, arrays can be a little wasteful on space. If you create a large array and don’t use most of its elements, you are wasting space. That’s one downside.

The other is that arrays suffer from the same thing linked lists do: they aren’t great at adding or removing things in the middle.

common-array-times.PNG

Whenever you need to add or remove elements in the middle of an array you need to shuffle everything around to make sure you don’t have any gaps.

How do arrays resize themselves?

When you create an array, you usually need to give it a default size. For the Java ArrayList this size is 10.

Once you approach that limit, the array needs to make itself bigger. Usually by doubling itself in size.

resize.PNG

Once it doubles in size, it creates a new array, copies the old values over, and gets rid of the old one.

An example

Here are some common array operations:

Get(i) – returns the element at location i
Set(i, value) – sets element i to value
PushBack(value) – adds value to the end
Remove(i) – removes the element at location i
Size() – returns the number of elements

And here is an example of how an array works, minus all the error checking for array size and bounds.

Initialization

initialize.PNG
When you create an array, you often specify some default or initial capacity. You also want to track its size, along with the data contained in the array itself.

Get / Set

Getting and setting are the easiest operations to do on an array. All you are doing is accessing your internal data store and then returning (or setting) whatever element lies there. Normally you would do some bounds checking here too, but I have left that out for clarity.

get-set.PNG

PushBack

This is where things get slightly more interesting for an array. So long as your array hasn’t reached its limit, all you need to do is set your new value at the array index at position size and you are done.

pushback.PNG

But if you are at your limit, this is where we create a new array of double the size, copy the old data over, and double our capacity.

Remove

And remove is kind of neat. Here you are removing an element at a specific position, and to do that you need to jump into your array at the position of the element you want to remove, and then slide all the elements above it down one spot (while not forgetting to decrease your array size).

remove.PNG

So which is better? A linked list or an array?

Linked lists are pretty similar to arrays in that they both have similar operations, and both could be used internally for implementing Stacks and Queues.

Where linked lists win is resource management (they use less) and the fact that they don’t need to rebuild themselves from scratch every time an element in the middle is inserted or removed.

Arrays win when it comes to random access. They are just faster at accessing elements in the middle, while linked lists need to iterate through each node until they find the one they are looking for.

So that’s a quick rundown on arrays, and how they compare to linked lists.

Happy coding.

Full listing

Source

public class DynamicArray<E> {

    private Object[] data;
    private int size; // number of elements
    private int initialCapacity;

    public DynamicArray(int initialCapacity) {
        this.initialCapacity = initialCapacity;
        data = new Object[initialCapacity];
    }

    public void set(int i, E e) {
        data[i] = e;
        size++;
    }

    public E get(int i) {
        return (E)data[i];
    }

    public void pushBack(E e) {

        if (size == initialCapacity) {
            Object[] newData = new Object[initialCapacity * 2];
            for (int i = 0; i < initialCapacity; i++) {
                newData[i] = data[i];
            }
            data = newData;
            initialCapacity = initialCapacity * 2;
        }

        data[size] = e;
        size++;
    }

    public int size() {
        return size;
    }
    public int capacity() { return initialCapacity; }

    public void remove(int i) {
        for (int j = i; j < size - 1; j++) {
            data[j] = data[j + 1];
        }
        size--;
    }
}

Tests

import junit.framework.Assert;
import org.junit.Before;
import org.junit.Test;

public class DynamicArrayTests {

    private DynamicArray array;

    @Before
    public void setUp() throws Exception {
        array = new DynamicArray<String>(2);
    }

    @Test
    public void InitialState() throws Exception {
        Assert.assertEquals(0, array.size());
    }

    @Test
    public void GetAndSet() throws Exception {
        array.set(0, "a");
        Assert.assertEquals(1, array.size());
        Assert.assertEquals("a", array.get(0));
    }

    @Test
    public void PushBack() throws Exception {
        array.pushBack("a");
        array.pushBack("b");
        Assert.assertEquals(2, array.size());
        Assert.assertEquals("a", array.get(0));
        Assert.assertEquals("b", array.get(1));
    }

    @Test
    public void PushBeyondCurrentSize() throws Exception {
        array.pushBack("a");
        array.pushBack("b");
        array.pushBack("c");
        Assert.assertEquals(3, array.size());
        Assert.assertEquals(4, array.capacity());
        Assert.assertEquals("a", array.get(0));
        Assert.assertEquals("b", array.get(1));
        Assert.assertEquals("c", array.get(2));
    }

    @Test
    public void Remove() throws Exception {
        array.pushBack("a");
        array.pushBack("b");
        array.pushBack("c");

        array.remove(1);

        Assert.assertEquals("a", array.get(0));
        Assert.assertEquals("c", array.get(1));
    }
}

Links that help
http://ridiculousfish.com/blog/posts/array.html

Linked Lists

Leave a comment

Coming from an electrical engineering background, I have never really studied some of our most basic computer data structures in depth. In this episode I am visiting the linked list and sharing some notes on how it works.

What is a linked list?

A linked list is a data structure consisting of a group of nodes linked together in sequence via a series of pointers.

Screen Shot 2017-11-04 at 10.01.51 AM.png

Each node consists of a piece of data, along with a pointer pointing to the next piece of data in the list. That’s it!

It is one of the simplest, most basic data structures we’ve got, and it is used to build everything from common Lists to Stacks and Queues.

What are its strengths?

The primary advantage of the linked list over a conventional array is that the linked list can handle the insertion and removal of elements, without having to reorganize the entire data structure.

It can do this because the linked list is just a series of nodes and pointers (which means the data doesn’t have to sit in one contiguous block of memory), and when it inserts a new element, all it has to do is adjust the pointers before and after the spot where the new node is to be inserted.

Also, linked lists are very fast at adding, and accessing, elements at the top of the list: O(1).

What are its weaknesses?

The downside of the linked list is that it doesn’t allow random access. Meaning you can’t just read/write the nth element in the list. To access data in the middle of the list, you need to walk the list from beginning to end, looking for the element you are after.

Then, once you’ve found it, you can return it or remove it. But iterating through all the elements means that insertion, removal, and random access in the middle of the list take O(n) time. Meaning the cost depends on the number of elements (n) in your list.

Typical operations

Here is a list of the typical operations a linked list would perform. As you can see it looks very much like the implementation of a stack or queue.

Screen Shot 2017-11-04 at 10.01.56 AM.png

How does it work?

Here is a walk through of some simple linked list operations in Java.

Push

Here is our Node, which holds our data and a pointer to the next element in our list.

Screen Shot 2017-11-04 at 10.42.41 AM.png

And this little snippet of code is the entire basis for how linked lists work.

Screen Shot 2017-11-04 at 10.05.23 AM.png

This is the push operation. It basically adds a new element to the top of the list.

It takes the element of data you pass in e, creates a new Node with the data e along with a null representing the next element in the list.

And all the magic in how the linked list works is contained in these two little lines here.

Screen Shot 2017-11-04 at 10.12.21 AM.png
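
Those two lines, taken from the push() implementation in the full listing below, are:

    newNode.next = head;   // point the new node's next at the old head
    head = newNode;        // then make the new node the new head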

First we take the new node we just created, and set its next pointer to the current head. Which would be null if this is the first item on our list.

Then we take the head (which was null), and set it to be equal to our new node.

These two seemingly simple little lines still make my head hurt. I find them confusing because you are setting one thing in one line and then resetting it in the other.

But once you grok this, you basically get the linked list and all its magic. Because all we are doing with a push is taking the new node, setting its next to be the current head (which in effect pushes the current head one deeper into the stack), and then making the new node the top by setting it as the head. Phew!

Pop

Screen Shot 2017-11-04 at 10.24.29 AM.png

Popping is pretty easy. Here we check to see if we have an empty list by looking to see if head == null. And if it isn’t empty, we grab the next node, set it to be the new head, and decrement our size count before returning the item, or data, of the old head node.

Note: The API has us return the data, not the node. The Node is just an internal data structure we use to implement the list. It makes no sense to the outside world.

Remove

Here is where things get interesting. Here you are going to witness both an advantage of linked lists (the ability to remove items gracefully by merely changing a couple of pointers) and a disadvantage (having to iterate through the elements to find the one you want).

Screen Shot 2017-11-04 at 10.34.58 AM.png

A lot of linked list implementations start by doing checks on the head of the list first. Which is exactly what we do here.

Screen Shot 2017-11-04 at 10.36.17 AM.png

First we grab the head and check to see if it contains the element we want. If it does, we remove it by pointing head at the next element, decrementing our size, and returning true.

With the head handled, we now loop through the rest of the linked list until we find the match we are looking for.

Screen Shot 2017-11-04 at 10.39.01 AM.png

Then we basically do exactly what we did before with the head, only now we are doing it for some arbitrary node in our list.

We grab the node, re-point the previous node’s next to the node after it, and off we go.

This is both the advantage and the disadvantage of the linked list. With just a few lines of code, we pulled an element out of the list without having to remap, or shuffle, any other elements in our data structure. That is huge. And that is worth a moment of your time to appreciate.

But there is also the slight disadvantage, which is that to find our element, we needed to loop through the entire list to the very end (in the worst case). Alas such is life.

The rest

The rest of the implementation is basically variations on these themes of looping through the list, finding the element you are looking for, and adjusting pointers.

Summary

The linked list is a beautiful little data structure that powers a lot of the software you use every day under the hood. It doesn’t allow random access like an Array, but it’s a beautiful data structure for building Stacks and Queues.

To see what a professional looking LinkedList.java looks like, check out this implementation by Joshua Bloch in the JDK.

But to really understand it, it’s always best to build these things from scratch on your own. Here is my implementation below.

Learn more

https://www.coursera.org/learn/data-structures/lecture/kHhgK/singly-linked-lists
https://codereview.stackexchange.com/questions/82698/singly-linked-list-in-java

Source and test

SingleLinkedList

public class SingleLinkedList<E> {

    Node head;
    int size;

    public void push(E e) {
        Node newNode = new Node(e, null);

        newNode.next = head;
        head = newNode;

        size++;
    }

    public int lastIndexOf(E e) {
        // walk from the head until we find a match (also safe on an empty list)
        Node current = head;
        int index = 0;

        while (current != null) {
            if (current.item.equals(e)) return index;
            index++;
            current = current.next;
        }

        return -1;
    }

    public E peek() {
        Node<E> f = head;
        return (f == null) ? null : f.item;
    }

    public E pop() {
        // get the head
        // assign head to it's next
        // return head
        if (head == null) return null;

        Node<E> first = head;
        head = first.next;
        size--;

        return first.item;
    }

    public boolean remove(E e) {
        // walk the list from the beginning
        // remove and return true if found
        // return false

        // empty list
        if (head == null) return false;

        // head
        Node prev = null;
        Node current = head;

        if (current.item.equals(e)) {
            // make the next element the new head
            head = current.next;
            size--;
            return true;
        }

        // all others
        while (current.next != null) {
            prev = current;
            current = current.next;

            if (current.item.equals(e)) {
                // connect the previous node next to this nodes next
                // thereby bypassing this current node we want to remove
                prev.next = current.next;
                size--;
                return true;
            }
        }

        return false;
    }

    public void set(int index, E e) {

        Node newNode = new Node(e, null);

        // head
        if (index == 0) {
            newNode.next = head.next;
            head = newNode;
            return;
        }

        // all others
        // take the previous node
        // attach it's next to this new node
        // attach this new node to current next node

        Node prev = head;
        Node current = head.next;

        for (int counter = 1; counter <= index; counter++) {
            if (counter == index) {
                prev.next = newNode;
                newNode.next = current.next;
                return;
            } else {
                prev = current;
                current = current.next;
            }
        }

    }

    private static class Node<E> {
        E item;
        Node<E> next;

        Node(E element, Node<E> next) {
            this.item = element;
            this.next = next;
        }
    }
}



/*
 push() is neat because you are basically:

Creating a new node, assigning the old head to be its next link in the chain, and then making
the new node you just created the new head. So you are just bumping everything down one.

 1. Creating a new node.
 2. Assigning its next to the current head (which at first is null because the list is empty).
 3. And then assigning this newNode to the head. So it is at the front of the list.

 Then when you add the next node you repeat the process.
 1. Create the new node.
 2. Assign its next to the current head of the list (the last element you added).
 3. And then make this new node the head.

 So you are basically constantly making new nodes, assigning their next to be the current head.
 And then assigning them to be the new head (while their next points to the old head).
 And you just keep doing this.

*/

LinkedListTests

import junit.framework.Assert;
import org.junit.Before;
import org.junit.Test;

import java.util.LinkedList;

public class LinkedListTests {

    private SingleLinkedList sll;

    @Before
    public void setUp() throws Exception {
        sll = new SingleLinkedList<Integer>();
    }

    @Test
    public void InitalState() throws Exception {
        Assert.assertNull(sll.head);
        Assert.assertEquals(0, sll.size);
    }

    @Test
    public void lastIndexOf() throws Exception {
        sll.push(99);
        sll.push(100);
        sll.push(101);
        Assert.assertEquals(2, sll.lastIndexOf(99));
        Assert.assertEquals(1, sll.lastIndexOf(100));
        Assert.assertEquals(0, sll.lastIndexOf(101));
    }

    @Test
    public void peek() throws Exception {
        sll.push(99);
        Assert.assertEquals(99, sll.peek());
    }

    @Test
    public void peekEmpty() throws Exception {
        Assert.assertEquals(null, sll.peek());
    }

    @Test
    public void pushAndPop() throws Exception {
        sll.push(99);
        sll.push(100);
        sll.push(101);
        Assert.assertEquals(101, sll.pop());
        Assert.assertEquals(100, sll.pop());
        Assert.assertEquals(99, sll.pop());
    }

    @Test
    public void popEmptyList() throws Exception {
        Assert.assertNull(sll.pop());
    }

    @Test
    public void remove() throws Exception {
        sll.push(99);
        sll.push(100);
        sll.push(101);

        Assert.assertTrue(sll.remove(101));
        Assert.assertEquals(100, sll.peek());

        Assert.assertTrue(sll.remove(100));
        Assert.assertEquals(99, sll.peek());

        Assert.assertTrue(sll.remove(99));
        Assert.assertEquals(null, sll.peek());
    }

    @Test
    public void pushSize() throws Exception {
        Assert.assertEquals(0, sll.size);
        sll.push(99);
        Assert.assertEquals(1, sll.size);
        sll.push(100);
        Assert.assertEquals(2, sll.size);
    }

    @Test
    public void popSize() throws Exception {
        sll.push(99);
        sll.push(100);
        Assert.assertEquals(2, sll.size);
        sll.pop();
        Assert.assertEquals(1, sll.size);
    }

    @Test
    public void removeSize() throws Exception {
        sll.push(99);
        sll.push(100);
        Assert.assertEquals(2, sll.size);
        sll.remove(100);
        Assert.assertEquals(1, sll.size);
    }

    @Test
    public void setIndex() throws Exception {
        sll.push(99);
        sll.push(100);
        sll.push(101);

        sll.set(0, 49);
        sll.set(1, 50);
        sll.set(2, 51);

        Assert.assertEquals(0, sll.lastIndexOf(49));
        Assert.assertEquals(1, sll.lastIndexOf(50));
        Assert.assertEquals(2, sll.lastIndexOf(51));
    }

    @Test
    public void realLinkedList() throws Exception {
        
        // Here are some tests written against the Java LinkedList class
        
        LinkedList<Integer> linkedList = new LinkedList<Integer>();
        linkedList.push(99);
        linkedList.push(100);
        linkedList.push(101);
        Assert.assertEquals(2, linkedList.lastIndexOf(99));
        Assert.assertEquals(1, linkedList.lastIndexOf(100));
        Assert.assertEquals(0, linkedList.lastIndexOf(101));

        linkedList.peek();
    }
}
