Data Structures & Algorithms in Java – Graphs – Clone Graph

Problem:

Given a graph, clone it.

For example, consider a graph represented by the adjacency list below:

[ [2,4],[1,3],[2,4],[1,3] ]

You need to create a clone with the same adjacency list as above.

An adjacency list is a list that stores, for each node in a graph, the list of that node's neighbors.

So, in the above example:

The first node has the neighbors 2 and 4. (Each node's value is given by its position in the adjacency list.)

The second node has the neighbors 1 and 3.

The third node has the neighbors 2 and 4.

And the last node has the neighbors 1 and 3.

In total the graph has four nodes.

Visually, the graph forms a cycle: 1 – 2 – 3 – 4 – 1.

Each edge represents a connection between two nodes.

The connection is bidirectional: you can travel from node 1 to node 2 and also from node 2 to node 1, for example.
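If it helps to see this structure in code, here is a minimal sketch (not part of the LeetCode submission) that builds the four-node example graph and prints its adjacency list. It assumes the Node class given in the problem, whose definition appears with the solution code below.

import java.util.*;

public class BuildExampleGraph {
    public static void main(String[] args) {
        Node n1 = new Node(1), n2 = new Node(2), n3 = new Node(3), n4 = new Node(4);

        // Adjacency list [[2,4],[1,3],[2,4],[1,3]]: every edge is stored in both
        // directions because the graph is undirected.
        n1.neighbors.add(n2); n1.neighbors.add(n4);
        n2.neighbors.add(n1); n2.neighbors.add(n3);
        n3.neighbors.add(n2); n3.neighbors.add(n4);
        n4.neighbors.add(n1); n4.neighbors.add(n3);

        // Print each node's neighbors, reproducing the adjacency list above.
        for (Node n : new Node[]{n1, n2, n3, n4}) {
            List<Integer> vals = new ArrayList<>();
            for (Node nb : n.neighbors) vals.add(nb.val);
            System.out.println("Node " + n.val + " -> " + vals);  // e.g. Node 1 -> [2, 4]
        }
    }
}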

Try out the solution here:

https://leetcode.com/problems/clone-graph/

Solution:

There are two optimal solutions to this problem:

  • Depth First Search (DFS)
  • Breadth First Search (BFS)

In depth first search, you keep going deep from the first node: clone it, then visit its neighbors, then the neighbors' neighbors, and then their neighbors, until there are no more nodes to visit. Do this recursively, cloning each neighbor along the way and attaching the neighbor's clone to its parent's clone.

To keep track of which nodes have already been visited (and cloned), use a map data structure that maps each original node to its clone.

Here is the algorithm:

ALGORITHM:

STEP 1: Initialize a map outside the main function to keep track of visited nodes (it maps each original node to its clone).

STEP 2: If the input node is null, return null; this is a boundary condition.

STEP 3: If the node to be cloned is already present in the visited map, it has already been cloned, so just return its clone.

STEP 4: Create a clone for the current node and put it in the visited map.

STEP 5: For each neighbor of the current node, create its clone recursively and add that clone to the neighbor list of the current node's clone.

STEP 6: Return the current cloned node.

Here is the code:

Depth First Search:

/*
// Definition for a Node.
class Node {
    public int val;
    public List<Node> neighbors;
    public Node() {
        val = 0;
        neighbors = new ArrayList<Node>();
    }
    public Node(int _val) {
        val = _val;
        neighbors = new ArrayList<Node>();
    }
    public Node(int _val, ArrayList<Node> _neighbors) {
        val = _val;
        neighbors = _neighbors;
    }
}
*/

class Solution {

    // Map from each original node to its clone; also serves as the visited set.
    Map<Node,Node> visited = new HashMap<Node,Node>();

    public Node cloneGraph(Node node) {

        // Boundary condition: an empty graph clones to an empty graph.
        if(node == null) return null;

        // If this node has already been cloned, return the existing clone.
        if(visited.containsKey(node)){
            return visited.get(node);
        }

        // Clone the current node and register it before visiting its neighbors,
        // so that cycles do not cause infinite recursion.
        Node clone = new Node(node.val,new ArrayList<Node>());
        visited.put(node,clone);

        // Recursively clone each neighbor and attach it to this node's clone.
        for(Node n:node.neighbors){
            clone.neighbors.add(cloneGraph(n));
        }

        return clone;
    }
}
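As a quick sanity check (not part of the LeetCode submission), here is a minimal usage sketch. It assumes the commented-out Node class above is available, builds the example graph, clones it with the DFS solution, and verifies that the clone is made of new Node objects with the same values and neighbor structure.

import java.util.*;

public class CloneGraphDemo {
    public static void main(String[] args) {
        // Build the four-node example graph: edges 1-2, 2-3, 3-4 and 4-1.
        Node n1 = new Node(1), n2 = new Node(2), n3 = new Node(3), n4 = new Node(4);
        n1.neighbors.add(n2); n1.neighbors.add(n4);
        n2.neighbors.add(n1); n2.neighbors.add(n3);
        n3.neighbors.add(n2); n3.neighbors.add(n4);
        n4.neighbors.add(n1); n4.neighbors.add(n3);

        Node clone = new Solution().cloneGraph(n1);

        // The clone is a different object with the same value...
        System.out.println(clone != n1);          // prints true
        System.out.println(clone.val == n1.val);  // prints true

        // ...and its neighbor values mirror the original's neighbor values.
        for (int i = 0; i < n1.neighbors.size(); i++) {
            System.out.println(clone.neighbors.get(i).val == n1.neighbors.get(i).val); // prints true
        }
    }
}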

Breadth First Search:

In breadth first search, instead of cloning the neighbors recursively, you clone one node and its immediate neighbors at a time, and keep doing this until all the nodes are visited.

To do this, use a queue data structure.

Put the first node into the queue and clone it.

And then keep doing the following:

  • Remove the node at the front of the queue.
  • Find its neighbors.
  • For each neighbor that has not been visited (cloned) yet, clone it and add the original neighbor to the queue.
  • Add each neighbor's clone to the neighbor list of the removed node's clone.

At the end, return the clone of the first node.

Let's assume each node has only two neighbors.

So basically, you:

Clone a node.

Then clone its first neighbor and its second neighbor.

Then you take the first neighbor's neighbors and clone them.

Then you take the second neighbor's neighbors and clone them.

Then you take the first neighbor's neighbors' neighbors and clone them.

Then you take the second neighbor's neighbors' neighbors and clone them.

And so on until all the nodes are visited, at which point the queue becomes empty.

The cloning is done along the breadth of the graph, level by level.

This is different from depth first search, where you:

Clone a node.

Then follow one chain of neighbors through recursion (the node's neighbor, that neighbor's neighbor, and so on) until the end of the recursion stack is reached.

Then the recursion unwinds and you finish the previous node in the recursion stack, and so on.

That is, for the graph shown above:

You clone the first node, and the recursion then goes deep through the second and third nodes until it reaches the fourth node (the end of the recursion). The fourth node's clone is completed first, then the third's, then the second's, and finally the first's.

The cloning is done deep: a node's descendant nodes are fully cloned before the node itself is finished.
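To make the two orders concrete, here is a minimal sketch that only simulates the visiting order on the example graph (again assuming the Node class from the problem); it is separate from the actual cloning code shown above and below.

import java.util.*;

public class TraversalOrderDemo {

    // Recursive DFS: a node is finished only after all of its unvisited
    // neighbors are finished, so the deepest node is reported first.
    static void dfs(Node node, Set<Node> visited, List<Integer> finished) {
        visited.add(node);
        for (Node nb : node.neighbors) {
            if (!visited.contains(nb)) dfs(nb, visited, finished);
        }
        finished.add(node.val);
    }

    // BFS: nodes are reported in the order they are first reached,
    // level by level from the starting node.
    static List<Integer> bfs(Node start) {
        List<Integer> order = new ArrayList<>();
        Set<Node> visited = new HashSet<>();
        Queue<Node> queue = new LinkedList<>();
        visited.add(start);
        queue.add(start);
        while (!queue.isEmpty()) {
            Node n = queue.remove();
            order.add(n.val);
            for (Node nb : n.neighbors) {
                if (!visited.contains(nb)) {
                    visited.add(nb);
                    queue.add(nb);
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Build the four-node example graph: edges 1-2, 2-3, 3-4 and 4-1.
        Node n1 = new Node(1), n2 = new Node(2), n3 = new Node(3), n4 = new Node(4);
        n1.neighbors.add(n2); n1.neighbors.add(n4);
        n2.neighbors.add(n1); n2.neighbors.add(n3);
        n3.neighbors.add(n2); n3.neighbors.add(n4);
        n4.neighbors.add(n1); n4.neighbors.add(n3);

        List<Integer> dfsFinish = new ArrayList<>();
        dfs(n1, new HashSet<>(), dfsFinish);
        System.out.println("DFS finish order: " + dfsFinish); // prints [4, 3, 2, 1]
        System.out.println("BFS visit order:  " + bfs(n1));   // prints [1, 2, 4, 3]
    }
}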

Coming back to breadth first search, here is the algorithm:

ALGORITHM:

STEP 1: If the input node is null, return null; this is the base condition.

STEP 2: Create a map to keep track of visited nodes, clone the first node and put the (original, clone) pair in the map.

STEP 3: Create a queue to do breadth first search and put the first node in it.

STEP 4: While the queue is not empty, do the following:

STEP 4a: Remove the node at the front of the queue.

STEP 4b: For each of its neighbors that is not already in the visited map, clone it, put it in the visited map, and add the original neighbor to the queue.

STEP 4c: Add each cloned neighbor to the neighbor list of the removed node's clone.

STEP 5: Return the first cloned node.

Here is the code:

/*
// Definition for a Node.
class Node {
    public int val;
    public List<Node> neighbors;
    public Node() {
        val = 0;
        neighbors = new ArrayList<Node>();
    }
    public Node(int _val) {
        val = _val;
        neighbors = new ArrayList<Node>();
    }
    public Node(int _val, ArrayList<Node> _neighbors) {
        val = _val;
        neighbors = _neighbors;
    }
}
*/

class Solution {

    public Node cloneGraph(Node node) {

        // Base condition: an empty graph clones to an empty graph.
        if(node == null) return null;

        // Map from each original node to its clone; also serves as the visited set.
        Map<Node,Node> visited = new HashMap<Node,Node>();

        // Clone the starting node and register it.
        Node clone = new Node(node.val,new ArrayList<Node>());
        visited.put(node,clone);

        // Queue of original nodes whose neighbors still need to be processed.
        LinkedList<Node> queue = new LinkedList<Node>();
        queue.add(node);

        while(!queue.isEmpty()){

            // Remove the node at the front of the queue.
            Node n = queue.remove();

            for(Node neighbor:n.neighbors){

                // First time we see this neighbor: clone it and enqueue the original.
                if(!visited.containsKey(neighbor)){
                    queue.add(neighbor);
                    visited.put(neighbor, new Node(neighbor.val,new ArrayList<Node>()));
                }

                // Wire the neighbor's clone into the current node's clone.
                visited.get(n).neighbors.add(visited.get(neighbor));
            }
        }

        // Return the clone of the starting node.
        return clone;
    }
}

Time complexity:

The time complexity of both algorithms is O(n + m),

where n is the number of nodes

and m is the number of edges. (Processing each node's neighbors corresponds to processing the edges.)

We process each node once and each of its edges along the traversal.
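As a concrete check with the example graph above: n = 4 and m = 4 (the edges 1-2, 2-3, 3-4 and 4-1). Each node is cloned once and each edge is examined once from each endpoint, so either traversal performs roughly n + 2m = 4 + 8 = 12 constant-time node and neighbor steps.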

Space complexity:

The space complexity is O(n): depth first search uses the recursion stack and breadth first search uses a queue, each of which can hold up to n nodes in the worst case. Both solutions also use a visited hash map of size n.

That’s it!
