
**Depth first search** and **breadth first search** are two simple algorithms to traverse a graph.

These algorithms can be used to find a path from the initial starting node to a specified goal node. They are also used for the simple enumeration of all nodes, for example with the purpose of printing them. Java's tree node class (`DefaultMutableTreeNode`) provides methods to get both breadth first and depth first enumerations.

Depth first search is relatively easy to implement using a recursive call, something like:

```
function visit(Node n)
    for each child of n
        visit(child)
```
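A minimal Java sketch of this recursion; the `Node` class here is a hypothetical stand-in for whatever tree type is actually used (the applet's own node type will differ):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical tree node used only for this sketch.
class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
    Node add(Node child) { children.add(child); return this; }
}

public class DfsRecursive {
    // Visits n itself first, then recurses into the children (preorder).
    static void visit(Node n, List<String> out) {
        out.add(n.name);
        for (Node child : n.children) visit(child, out);
    }

    public static void main(String[] args) {
        Node root = new Node("A")
                .add(new Node("B").add(new Node("D")))
                .add(new Node("C"));
        List<String> order = new ArrayList<>();
        visit(root, order);
        System.out.println(order);  // [A, B, D, C]
    }
}
```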

At any point, the stack of the recursive calls contains the full path from the current node to the root node. In languages where this stack is difficult to reach directly (like Java), the path can be additionally tracked using an extra parameter (see the source code of the applet). For formatted printing, sometimes only the depth of the recursion needs to be tracked to compute the correct left margin:

```
function visit(Node n, int depth)
    for each child of n
        visit(child, depth + 1)
```
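A sketch of the same recursion carrying only the depth, used here to indent the printout by two spaces per level (the `Node` class is again a hypothetical stand-in):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical tree node used only for this sketch.
class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
    Node add(Node child) { children.add(child); return this; }
}

public class DfsPrint {
    // Each level of recursion shifts the left margin by two spaces.
    static void visit(Node n, int depth, StringBuilder out) {
        out.append("  ".repeat(depth)).append(n.name).append('\n');
        for (Node child : n.children) visit(child, depth + 1, out);
    }

    public static void main(String[] args) {
        Node root = new Node("root")
                .add(new Node("a").add(new Node("a1")))
                .add(new Node("b"));
        StringBuilder out = new StringBuilder();
        visit(root, 0, out);
        System.out.print(out);
    }
}
```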

When we are deep in the stack of nested calls, it may be less trivial to terminate the search after the goal node is found. This can be done, for example, by throwing an exception that is caught at the top calling level. On the other hand, the implicitly maintained path becomes a needless overhead when it is not required as the output of the algorithm.
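One way to sketch this early termination in Java: a small unchecked exception carries the found path out of the nested calls and is caught at the top level. The `Found` class and the adjacency-map graph representation are illustrative assumptions:

```java
import java.util.*;

public class StopEarly {
    // Unchecked exception that unwinds the whole recursion at once.
    static class Found extends RuntimeException {
        final List<String> path;
        Found(List<String> path) { this.path = path; }
    }

    static void visit(String node, String goal, Map<String, List<String>> children,
                      Deque<String> path) {
        path.addLast(node);                  // the extra parameter tracks the path
        if (node.equals(goal)) throw new Found(new ArrayList<>(path));
        for (String child : children.getOrDefault(node, List.of()))
            visit(child, goal, children, path);
        path.removeLast();
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = Map.of(
                "A", List.of("B", "C"), "B", List.of("D"));
        try {
            visit("A", "D", tree, new ArrayDeque<>());
            System.out.println("not found");
        } catch (Found f) {
            System.out.println(f.path);      // [A, B, D]
        }
    }
}
```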

When traversing a tree, this approach does not need to remember the visited nodes - the nodes are visited only once anyway. However, the list of visited nodes is required when traversing an arbitrary graph, which may contain cycles. This algorithm visits nodes in *preorder* sequence (first the parent, then the children). See below for more about the node order.
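A sketch of the same search over an arbitrary graph, represented here as a plain adjacency map (an illustrative assumption); the visited set stops the recursion from looping on cycles:

```java
import java.util.*;

public class GraphDfs {
    static void visit(String node, Map<String, List<String>> adj,
                      Set<String> visited, List<String> order) {
        if (!visited.add(node)) return;      // already seen: skip
        order.add(node);
        for (String next : adj.getOrDefault(node, List.of()))
            visit(next, adj, visited, order);
    }

    public static void main(String[] args) {
        // A graph with a cycle: A -> B -> C -> A, plus B -> D.
        Map<String, List<String>> adj = Map.of(
                "A", List.of("B"),
                "B", List.of("C", "D"),
                "C", List.of("A"));
        List<String> order = new ArrayList<>();
        visit("A", adj, new HashSet<>(), order);
        System.out.println(order);           // [A, B, C, D]
    }
}
```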

An alternative, non-recursive implementation keeps the nodes waiting to be processed in an explicit collection:

- Enqueue the start node.
- While the queue is not empty:
  - Dequeue a node.
  - Visit it (print it, for example).
  - Enqueue all children/neighbours of the node.

If we use a first-in first-out (FIFO) queue, the algorithm works as a breadth first search. If we use a last-in first-out stack instead, the same actions turn into the depth first search algorithm. For an arbitrary graph, it may be required to maintain the list of visited nodes and not process them again.
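A Java sketch of this, where a single `ArrayDeque` serves as either a FIFO queue or a LIFO stack depending on which end nodes are taken from (the adjacency map is an illustrative assumption):

```java
import java.util.*;

public class QueueTraversal {
    static List<String> traverse(String start, Map<String, List<String>> children,
                                 boolean breadthFirst) {
        Deque<String> pending = new ArrayDeque<>(List.of(start));
        Set<String> seen = new HashSet<>(List.of(start));
        List<String> order = new ArrayList<>();
        while (!pending.isEmpty()) {
            // FIFO (removeFirst) gives BFS; LIFO (removeLast) gives DFS.
            String node = breadthFirst ? pending.removeFirst() : pending.removeLast();
            order.add(node);
            for (String child : children.getOrDefault(node, List.of()))
                if (seen.add(child)) pending.addLast(child);
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = Map.of(
                "A", List.of("B", "C"),
                "B", List.of("D", "E"));
        System.out.println(traverse("A", tree, true));   // BFS: [A, B, C, D, E]
        System.out.println(traverse("A", tree, false));  // DFS: [A, C, B, E, D]
    }
}
```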

Both depth first and breadth first search run in *O*(*b*^{*d*}) time, where *d* is the depth of the search and *b* is the branching factor (the typical number of children per node); expressed in terms of the graph size, this is *O*(|*V*| + |*E*|), where *V* is the number of vertices and *E* is the number of edges^{[1]}^{[2]}.

The natural sequence of visits (first the parent node, then the children nodes), produced by the recursive depth first search algorithm, is a **preorder** sequence. The opposite order (first the children nodes, then the parent node) is called a **postorder** sequence. The postorder sequence is used to represent mathematical expressions in reverse Polish notation. If the parent can take some logical position between its children (for instance left child, parent, right child in a binary tree), it is also possible to have an **inorder** traversal (children before the parent, then the parent, then children after the parent).
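A small illustration of the three orders on the expression tree for `a + b` (the `BinNode` class is a hypothetical stand-in for a binary tree node):

```java
// Hypothetical binary tree node used only for this sketch.
class BinNode {
    final String name;
    final BinNode left, right;
    BinNode(String name, BinNode left, BinNode right) {
        this.name = name; this.left = left; this.right = right;
    }
}

public class Orders {
    static String pre(BinNode n)  { return n == null ? "" : n.name + pre(n.left) + pre(n.right); }
    static String in(BinNode n)   { return n == null ? "" : in(n.left) + n.name + in(n.right); }
    static String post(BinNode n) { return n == null ? "" : post(n.left) + post(n.right) + n.name; }

    public static void main(String[] args) {
        // The expression (a + b) as a tree: '+' with children a and b.
        BinNode expr = new BinNode("+", new BinNode("a", null, null),
                                        new BinNode("b", null, null));
        System.out.println(pre(expr));   // +ab  (Polish notation)
        System.out.println(in(expr));    // a+b  (ordinary infix)
        System.out.println(post(expr));  // ab+  (reverse Polish notation)
    }
}
```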

If the target node is known (and only the path to it needs to be found), it is possible to run two searches at the same time: one from the start toward the goal and one from the goal toward the start. The solution is found when these two searches meet. Bidirectional search reduces the search time from roughly *O*(*b*^{*d*}) to *O*(*b*^{*d*/2}), which may be a significant saving. Bidirectional search may be more complex to implement. It can be used with the A* and Dijkstra algorithms as well.
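A sketch of bidirectional search for an unweighted, undirected graph: two breadth first frontiers expand alternately, one level at a time, and every newly generated node is checked against the distances recorded by the opposite side. The adjacency-map representation is an illustrative assumption:

```java
import java.util.*;

public class Bidirectional {
    // Returns the number of edges on a shortest path, or -1 if none exists.
    static int distance(String start, String goal, Map<String, List<String>> adj) {
        if (start.equals(goal)) return 0;
        Map<String, Integer> distA = new HashMap<>(Map.of(start, 0));
        Map<String, Integer> distB = new HashMap<>(Map.of(goal, 0));
        Deque<String> frontA = new ArrayDeque<>(List.of(start));
        Deque<String> frontB = new ArrayDeque<>(List.of(goal));
        boolean sideA = true;                       // alternate the two searches
        while (!frontA.isEmpty() && !frontB.isEmpty()) {
            Integer met = sideA ? expand(frontA, distA, distB, adj)
                                : expand(frontB, distB, distA, adj);
            sideA = !sideA;
            if (met != null) return met;
        }
        return -1;
    }

    // Expands one whole BFS level; if a generated node is already known to the
    // other side, reports the combined path length (minimal over the level).
    static Integer expand(Deque<String> frontier, Map<String, Integer> dist,
                          Map<String, Integer> other, Map<String, List<String>> adj) {
        Integer best = null;
        for (int i = frontier.size(); i > 0; i--) {
            String node = frontier.removeFirst();
            for (String next : adj.getOrDefault(node, List.of())) {
                if (other.containsKey(next)) {
                    int candidate = dist.get(node) + 1 + other.get(next);
                    if (best == null || candidate < best) best = candidate;
                } else if (dist.putIfAbsent(next, dist.get(node) + 1) == null) {
                    frontier.addLast(next);
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // An undirected chain A-B-C-D-E: the searches meet in the middle.
        Map<String, List<String>> adj = Map.of(
                "A", List.of("B"), "B", List.of("A", "C"),
                "C", List.of("B", "D"), "D", List.of("C", "E"),
                "E", List.of("D"));
        System.out.println(distance("A", "E", adj));  // 4
    }
}
```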

It may be possible to search more efficiently by stopping the path evaluation as soon as at least one finding proves that the path will be worse than a previously examined path. In the simplest case, if we search for the shortest path and already know some complete path, we do not need to continue a search branch after its current path becomes longer than the known alternative. Depending on how the graph is arranged, this optimization may allow leaving part of the graph unexplored without the loss of the best possible solution. We may further optimize by remembering the known shortest path to a given node; if we are at that node and our current path is longer, the continuation will not find a shorter path.
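A sketch of this pruning for a weighted graph stored as a hypothetical nested map of edge costs; a branch is abandoned as soon as its accumulated cost reaches the best complete path found so far:

```java
import java.util.*;

public class PrunedSearch {
    // Depth first search for the cheapest path cost from node to goal,
    // pruned against the best complete path cost seen so far.
    static int search(String node, String goal, int cost, int best,
                      Map<String, Map<String, Integer>> edges, Set<String> onPath) {
        if (cost >= best) return best;       // prune: cannot beat the known path
        if (node.equals(goal)) return cost;  // complete path found
        onPath.add(node);
        for (Map.Entry<String, Integer> e : edges.getOrDefault(node, Map.of()).entrySet())
            if (!onPath.contains(e.getKey()))
                best = search(e.getKey(), goal, cost + e.getValue(), best, edges, onPath);
        onPath.remove(node);
        return best;
    }

    public static void main(String[] args) {
        // Two routes from A to C: directly (cost 5) or via B (cost 1 + 1 = 2).
        Map<String, Map<String, Integer>> edges = Map.of(
                "A", Map.of("C", 5, "B", 1),
                "B", Map.of("C", 1));
        int best = search("A", "C", 0, Integer.MAX_VALUE, edges, new HashSet<>());
        System.out.println(best);            // 2
    }
}
```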

One of the possible optimizations is to run the depth first search only till a limited depth, and then gradually increase this limit after all possibilities at the current depth limit have been explored. In the "pure" implementation, this requires repeating the search from scratch after the depth limit has been increased. While such an algorithm may look suboptimal, usually the majority of nodes are deep in the graph, and revisiting the small number of nodes close to the root multiple times may not be very expensive.
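A sketch of this iterative deepening scheme: each pass is a fresh depth-limited search, and the function reports the depth at which the goal first appears (the adjacency map is an illustrative assumption):

```java
import java.util.*;

public class IterativeDeepening {
    // Depth-limited DFS: true if goal is reachable within 'limit' edges of node.
    static boolean depthLimited(String node, String goal, int limit,
                                Map<String, List<String>> children) {
        if (node.equals(goal)) return true;
        if (limit == 0) return false;
        for (String child : children.getOrDefault(node, List.of()))
            if (depthLimited(child, goal, limit - 1, children)) return true;
        return false;
    }

    // Re-runs the search from scratch with each deeper limit.
    static int search(String start, String goal, int maxDepth,
                      Map<String, List<String>> children) {
        for (int limit = 0; limit <= maxDepth; limit++)
            if (depthLimited(start, goal, limit, children)) return limit;
        return -1;  // not found within maxDepth
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = Map.of(
                "A", List.of("B", "C"),
                "B", List.of("D"),
                "C", List.of("E"));
        System.out.println(search("A", "E", 10, tree));  // 2
    }
}
```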

Limited depth search can ensure that at any moment all possibilities have already been checked up to some current depth. This allows breaking off the search after spending the allocated time resources while being sure that we have not missed at least the obvious shallow solutions. In contrast, the standard depth first search may get stuck somewhere deep in the tree while the solution is very close, on a branch that has not yet been explored. However, the worst case performance does not improve, and due to the multiple repetitions a simple limited depth search actually runs slower.

^{1} Depth first search page in Wikipedia
^{2} Breadth first search page in Wikipedia