
Added functionality of parallel maximal independent set#145

Open
pnijhara wants to merge 19 commits into networkx:main from
pnijhara:parallel-mis

Conversation

@pnijhara

@pnijhara pnijhara commented Dec 13, 2025

What does this PR change?

This PR implements the parallel maximal independent set via Luby's random permutation mechanism (solving issue #144)

  • Add parallel implementation of maximal_independent_set algorithm
  • Implements Luby-style randomized parallel algorithm for speedup on large graphs (50K+ nodes)
  • Works best on large graphs (n>50,000)

What is your approach, and why did you choose it?

A standard parallel MIS algorithm assigns a random priority to every vertex once per round, then selects all locally maximal vertices at the same time:

while graph not empty:
    generate random priority for each vertex
    S = { u | priority(u) > priority(v) for all v in N(u) }
    add S to MIS
    remove S ∪ N(S) from graph

Why this is parallel:

  • Priority comparisons for all vertices are independent operations.
  • Set S can be selected in one parallel pass.
  • All vertices in S are guaranteed independent.
  • Removing S ∪ N(S) can also be done in parallel.

This matches the PRAM formulation and is widely used in CPU and GPU environments.
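As a concrete illustration, the loop above can be written in Python as a sequential simulation of the rounds (this is only a sketch of Luby's scheme, not the PR's actual joblib-based implementation):

```python
import random
import networkx as nx

def luby_mis(G, seed=None):
    """Sequentially simulate Luby-style rounds to build a maximal independent set."""
    rng = random.Random(seed)
    remaining = set(G)  # vertices still in play
    mis = set()
    while remaining:
        # each round: assign a fresh random priority to every surviving vertex
        priority = {u: rng.random() for u in remaining}
        # select vertices that beat all surviving neighbors (locally maximal);
        # in a real parallel run, each vertex does this check independently
        S = {
            u for u in remaining
            if all(priority[u] > priority[v] for v in G[u] if v in remaining)
        }
        mis |= S
        # removing S and its neighborhood is also independent per vertex
        remaining -= S
        remaining -= {v for u in S for v in G[u]}
    return mis
```

The result is a valid maximal independent set regardless of the random seed; only the particular set chosen varies between runs.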

Brief summary of changes (1–2 lines)

  • Added a file algorithms/mis.py
  • Added tests for algorithms/mis.py in algorithms/tests/test_mis.py
  • Found in local benchmark tests that for node counts below 50K the sequential code performs best; for larger graphs such as n = 100K and 1M, the parallel code performs best. This is likely due to joblib's parallelization overhead. I tested this in C++ with OpenMP; the same issue persists there, but with a lower node-count crossover.

@pnijhara pnijhara changed the title [DRAFT] Added functionality of parallel maximal independent set [WIP] Added functionality of parallel maximal independent set Dec 13, 2025
@pnijhara
Author

@dschult please review it.

@Schefflera-Arboricola Schefflera-Arboricola added the type: Enhancement New feature or request label Dec 13, 2025
Member

@dschult dschult left a comment


Thanks for this!

Some formatting suggestions before diving into the code/tests:

  • we typically don't put error messages on the "assert" command. The pytest report of the assert failure is sufficient.
  • we also typically don't include a comment line at the beginning of the test when the test function's name is sufficient. If the test involves something special or unexpected, then a couple-line docstring (or comment) is used. We aren't building docs for the tests.
  • instead of building a sequential version of the code inside nx-parallel, does it make sense to just "fall back" to the NetworkX version?

@pnijhara
Author

instead of building a sequential version of the code inside nx-parallel, does it make sense to just "fall back" to the NetworkX version?

I tried this approach; however, the tests were failing because of nx.maximal_independent_set. Because of this, I attempted to implement the sequential MIS approach myself.

@dschult
Member

dschult commented Dec 14, 2025

OK... I understand. That makes sense. But the NetworkX backend system has a way to flag functions so that they should not run under certain conditions. The backend provides a "should_run" function which is called by NetworkX before it tries to run the backend version of the function. If the should_run function returns True, the backend version is called. If that function returns a string or False, the backend function does not run (the string is used to describe why it should not run, and appears in the logging system for backends).

In nx-parallel we have a set of utility functions to help us set that up. They are currently used in centrality/harmonic.py and algorithms/dag.py. But neither checks the size of the graph. The way to use "should_run" is with a decorator provided in utils, like this example from harmonic_centrality:

@nxp._configure_if_nx_active(should_run=nxp.should_run_if_sparse(threshold=0.3))
def harmonic_centrality(

That should_run function checks the density of the graph against a threshold. There is another function provided in utils called nxp.should_run_if_large. So I think we will use that one. Something like:

@nxp._configure_if_nx_active(should_run=nxp.should_run_if_large(50000))

Unfortunately, while the function has been created, it currently has no way to accept a threshold size -- it is hardcoded to 200 nodes and not used anywhere yet. You can still use it here though -- it will ignore the input and fall back to 200 -- and you can get this code working with that cutoff.

If you are up for it, you could update the utility function should_run_if_large to use the input value in place of the currently hardcoded 200. Maybe call the argument nodes or nodes_threshold? You can use a default value of 200 nodes. Then this PR can use that function with argument 50000.
If you want, you could even make those changes in another PR and we can review/merge it pretty quickly.

If you'd prefer not to mess with that let me know and I'll fix it.
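A sketch of what that updated utility might look like, written as the factory the decorator usage above implies (the argument name `nodes_threshold` is only the proposal from this thread; the actual nx-parallel code may differ):

```python
def should_run_if_large(nodes_threshold=200):
    """Return a should_run predicate that approves the parallel backend
    only when the graph has at least `nodes_threshold` nodes.

    Hypothetical sketch based on this discussion, not the shipped utility.
    """
    def should_run(G, *args, **kwargs):
        if len(G) >= nodes_threshold:
            return True
        # returning a string tells the dispatcher why the backend was skipped
        return f"graph has {len(G)} nodes, below the {nodes_threshold}-node threshold"
    return should_run
```

With this, `should_run=should_run_if_large(50000)` would route graphs under 50,000 nodes back to the sequential NetworkX implementation.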

@pnijhara
Author

pnijhara commented Dec 14, 2025

I tried this as suggested, though I am not entirely sure I have done it correctly. Instead of duplicating NetworkX's sequential MIS code, we now fall back to their implementation for small graphs (< 50,000 nodes).

What I did:

  1. Dual-mode should_run_policies.py policy

Modified to work in two modes:

  • Direct function: should_run=nxp.should_run_if_large uses default 200 node threshold
  • Factory function: should_run=nxp.should_run_if_large(50000) returns wrapper with custom threshold

Also updated all policy signatures to accept **__ for keyword arguments passed by the backend dispatcher.

  2. Unwrapping NetworkX's dispatcher
    Used inspect.unwrap() to access the actual NetworkX implementation, bypassing multiple decorator layers. A direct import caused infinite recursion since NetworkX's maximal_independent_set is itself a dispatched function. This was hard to fix. I had to use LLMs, sorry :(

  3. Explicit fallback check

Added manual should_run check because NetworkX bypasses should_run when backend is explicitly specified (e.g., backend='parallel'). Also converted seed to Random object before fallback since the unwrapped NetworkX function expects Random, not int.
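A minimal sketch of the dual-mode dispatch described in point 1 (hypothetical and simplified; the actual should_run_policies.py handles more cases):

```python
def should_run_if_large(threshold_or_graph=200, *_, **__):
    """Dual-mode should_run policy (illustrative sketch).

    - Factory use: should_run_if_large(50000) returns a predicate bound
      to a custom threshold.
    - Direct use: should_run=should_run_if_large receives the graph
      itself and applies the default threshold of 200.
    """
    if isinstance(threshold_or_graph, int):
        threshold = threshold_or_graph

        def _should_run(G, *args, **kwargs):
            # True approves the backend; a string explains why it was skipped
            return len(G) >= threshold or f"graph has fewer than {threshold} nodes"

        return _should_run
    # Direct mode: the first positional argument is a graph-like object
    G = threshold_or_graph
    return len(G) >= 200 or "graph has fewer than 200 nodes"
```

The `isinstance` check is what lets one name serve both call patterns; a graph-like first argument means the dispatcher called the policy directly.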

@pnijhara pnijhara requested a review from dschult December 14, 2025 17:10
Member

@dschult dschult left a comment


I haven't worked through the actual code yet, but looked at the tests and the docs. Some of the tests may already be covered when we run the networkx tests with the environment variable set for nx-parallel. In that case, each networkx test runs with the nx-parallel code being called wherever it is supported. And we run that in the nx-parallel CI tests.

  • can you see whether any of the nx-parallel tests are already tested by the networkx tests -- and remove the duplicates?
  • I have a couple other comments/questions below.

I hope to look at the code itself soon.
:)

Comment on lines +9 to +10
from networkx.algorithms.mis import maximal_independent_set as _nx_mis_dispatcher
_nx_mis = inspect.unwrap(_nx_mis_dispatcher)
Member


I think you can use nx.maximal_independent_set._orig_func as the way to call the original networkx function. Can you check if that works? It is possible I haven't fully understood how that works. That way we don't have to use inspect.

@pnijhara
Author

can you see whether any of the nx-parallel tests are already tested by the networkx tests -- and remove the duplicates?

I checked all the tests of nx and nx-parallel and found most of the tests are already covered. For example, test_maximal_independent_set_basic is similar to test_random_graphs. The only test which I think is new and should be kept is test_maximal_independent_set_large_graph. This is the test that exercises the parallel execution. Let me know whether a single test seems viable.

Some other tests that I can think of are:

  1. Test chunking behavior - test different get_chunks parameter values

     def test_custom_chunking():
         "Test with custom chunk function"

  2. Test threshold boundary - verify fallback to sequential at the threshold (50,000 nodes) (this can be an overhead on a low-end machine)

     def test_fallback_below_threshold():
         "Graph with 49999 nodes should use sequential"

     def test_parallel_above_threshold():
         "Graph with 50001 nodes should use parallel"

  3. Test n_jobs behavior - verify parallel execution with different n_jobs (I am not inclined towards this)

     def test_with_different_njobs():
         "Test with n_jobs=1, 2, 4, etc."

Let me know if these add value. I can think of implementing some of them.

I think you can use nx.maximal_independent_set._orig_func as the way to call the original networkx function. Can you check if that works? It is possible I haven't fully understood how that works. That way we don't have to use inspect.

I fixed this one by

from networkx.algorithms.mis import maximal_independent_set as _nx_mis_dispatcher
_nx_mis = _nx_mis_dispatcher.orig_func

Comment on lines +17 to +18
# If nodes_threshold is a graph-like object, it's being used as a direct should_run
# function instead of a factory. Use default threshold.
Member


I would prefer the function signature here be: should_run_if_large(G, nodes_threshold=200, *_, **__)
So can you briefly explain where this might be used as a factory? The previous version doesn't seem to deal with that case. Of course, it didn't deal with keyword args either... ;}

Comment on lines +84 to +86
# Validate directed graph
if G.is_directed():
raise nx.NetworkXNotImplemented("Not implemented for directed graphs.")
Member


Does the networkx version work with directed graphs?
If the networkx version uses the not_implemented_for decorator, then I think that decorator runs before the backend is called. You can check whether this is ever called by changing the message (locally) and trying it to see which method gets shown.

Comment on lines +88 to +98
# Convert seed to Random object if needed (for fallback and parallel execution)
import random
if seed is not None:
if hasattr(seed, 'random'):
# It's already a RandomState/Random object
rng = seed
else:
# It's a seed value
rng = random.Random(seed)
else:
rng = random.Random()
Member


Similarly, doesn't the seed input get handled by the decorators in the networkx code before it ever gets to the backend? Check...

Comment on lines +100 to +105
# Check if we should run parallel version
# This is needed when backend is explicitly specified
should_run_result = maximal_independent_set.should_run(G, nodes, seed)
if should_run_result is not True:
# Fall back to NetworkX sequential (unwrapped version needs Random object)
return _nx_mis(G, nodes=nodes, seed=rng)
Member


This definitely gets called by the backend machinery. This code should never be reached.


# Parallel strategy: Run complete MIS algorithm on node chunks independently
# Then merge results by resolving conflicts
all_nodes = list(G.nodes())
Member


Suggested change
all_nodes = list(G.nodes())
all_nodes = list(G)

if nodes_set:
available = set(all_nodes) - nodes_set
for node in nodes_set:
available.discard(node)
Member


Don't these nodes already get removed two lines up?

Comment on lines +181 to +183
if nodes_set:
for node in nodes_set:
excluded.update(adj_dict[node])
Member


Didn't we do this already with available?

@pnijhara
Author

@dschult thanks for the detailed review. I made the suggested changes and also tried to defend myself in some cases :p
Please review it again.

@pnijhara pnijhara requested a review from dschult December 26, 2025 11:32
@pnijhara
Author

pnijhara commented Jan 7, 2026

@dschult
Can you please review this PR again?

@pnijhara pnijhara changed the title [WIP] Added functionality of parallel maximal independent set Added functionality of parallel maximal independent set Jan 7, 2026
Member

@dschult dschult left a comment


It looks like the style test is not passing.
I have two more import related comments.

I think the tests should focus on aspects of the nx-parallel implementation. All the NetworkX tests are run on this function -- so don't duplicate those. You can add tests to the networkx tests if you like. :) but not in this PR. :)

And I think you still haven't responded to the comment about code that is inside a "should_run" check. The should_run gets called by the networkx dispatcher so we should never get to the check here in the backend.

Comment on lines +8 to +10
# Import the actual NetworkX implementation
from networkx.algorithms.mis import maximal_independent_set as _nx_mis_dispatcher
_nx_mis = _nx_mis_dispatcher.orig_func
Member


I think this can be simpler -- not even an import. And since we only use it once, maybe we can just use the expression directly without the abbreviation.

# abbreviation for the actual NetworkX implementation
_nx_mis = nx.maximal_independent_set.orig_func

@@ -0,0 +1,189 @@
from typing import Any
Member


This isn't used.

@pnijhara
Author

pnijhara commented Mar 9, 2026

@dschult tests are failing because the parallel algorithm produces a valid MIS [4, 3, 2, 0, 1] but in a different order than the sequential [1, 0, 3, 2, 4]. This is expected behavior for parallel execution. The question is: should the parallel backend match sequential ordering exactly, or is correctness (independent + maximal) sufficient?

We can either accept the test failure as expected (parallel produces correct but different results) or make small graphs fall back to sequential, even when NETWORKX_TEST_BACKEND is set.
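If correctness is the criterion, a test helper can validate any returned MIS without comparing orderings. An illustrative sketch (not code from this PR):

```python
import networkx as nx

def is_valid_mis(G, candidate):
    """Return True if `candidate` is an independent set in G and is maximal,
    i.e. every node outside the set has a neighbor inside it."""
    s = set(candidate)
    independent = all(not G.has_edge(u, v) for u in s for v in s if u != v)
    maximal = all(u in s or any(v in s for v in G[u]) for u in G)
    return independent and maximal
```

With a check like this, any ordering of the same valid set -- sequential or parallel -- would pass.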

@pnijhara pnijhara requested a review from dschult March 10, 2026 07:05
@dschult
Member

dschult commented Mar 11, 2026

We should be testing correctness. So we probably need a small change to the NetworkX test. I think that test should be updated in NetworkX to accept any ordering -- probably by comparing the set of the output to an expected value set.

There may be a way for backends to avoid NetworkX tests that give different results, but I don't know it. @eriknw, is there a way to e.g. mark a networkx test as xfail when running it on a backend?

In this case though, I think a small PR to change line 15 of test_mis.py to:

    assert set(nx.maximal_independent_set(G, seed=1)) == {1, 0, 3, 2, 4}

Also, there is even talk in NetworkX about whether mis should return a list or a set... But it'd be good to have this changed.

@eriknw

eriknw commented Mar 11, 2026

Thanks for the ping!

I think that test should be updated in NetworkX to accept any ordering

Yeah, for easy changes, I favor tweaking tests in NetworkX. Moreover, if a backend adds new tests to cover gaps in NX tests, I prefer to upstream those to NetworkX so all backends benefit. One challenge with this approach, though, is that updated tests are part of the next NetworkX release, and these tests may still fail now.

There may be a way for backends to avoid NetworkX tests that give different results, but I don't know it. @eriknw, is there a way to e.g. mark a networkx test as xfail when running it on a backend?

on_start_tests can do this. It is minimally documented: https://networkx.org/documentation/stable/reference/backends.html#creating-a-custom-backend

That doc should also probably refer to https://docs.pytest.org/en/stable/reference/reference.html#std-hook-pytest_collection_modifyitems

You can see how nx-cugraph uses on_start_tests to skip tests here: https://github.com/rapidsai/nx-cugraph/blob/10426797772a3e99bd45cf3cfa96c7ae7184431c/nx_cugraph/interface.py#L33

@pnijhara
Author

pnijhara commented Mar 12, 2026

@dschult let me know what decision we are taking on this. Ideally, we should update the NetworkX tests; however, those will take time to get into a release. Three things we can do now:

  1. Keep the failing tests (not an ideal solution)
  2. I should send a PR on NetworkX to update the tests
  3. As suggested in the previous comment, I can edit the interface.py file for on_start_tests in the BackendInterface class. Something like this:
@staticmethod
def on_start_tests(items):
    """Modify pytest items after tests have been collected.
    
    This is called during pytest_collection_modifyitems phase.
    """
    try:
        import pytest
    except ModuleNotFoundError:
        return
    
    def key(testpath):
        """Parse testpath into (test_name, keywords) tuple."""
        filename, path = testpath.split(":")
        *names, testname = path.split(".")
        if names:
            [classname] = names
            return (testname, frozenset({classname, filename}))
        return (testname, frozenset({filename}))
    
    # Define tests that should be marked as xfail
    xfail = {
        key("test_mis.py:TestMaximalIndependentSet.test_random_order"):
            "Parallel MIS produces different valid results than sequential",
        key("test_mis.py:TestMaximalIndependentSet.test_seed_deterministic"):
            "Parallel MIS with chunks produces different order than sequential",
    }
    
    # Mark tests
    for item in items:
        kset = set(item.keywords)
        for test_name, reason in xfail.items():
            test_name_val, keywords = test_name
            if item.name == test_name_val and keywords.issubset(kset):
                item.add_marker(pytest.mark.xfail(reason=reason))

@dschult
Member

dschult commented Mar 12, 2026

Nice! Let's do numbers 2 and 3.
Or, maybe let's focus on 3 here, and hopefully the issue/pr in NetworkX to refactor the mis return value from a list to a set will fix the other issues.

So, work on 3. And if by the time we've finished that here, we still need to do 2, we can do that then.
Thanks!

Copilot AI review requested due to automatic review settings March 12, 2026 21:21

Copilot AI left a comment


Pull request overview

This PR adds a parallel backend implementation of maximal_independent_set to nx_parallel, wires it into the backend interface/exports, and extends the should_run_if_large policy to support being used as a threshold factory (e.g., should_run_if_large(50000)).

Changes:

  • Added nx_parallel.algorithms.mis.maximal_independent_set and registered it in the backend interface and package exports.
  • Updated should_run_if_large to support both direct graph checks and “factory usage” (threshold-first) call patterns.
  • Added MIS-focused tests and updated chunking-related smoke tests to account for the new algorithm.

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
nx_parallel/utils/should_run_policies.py Extends should_run_if_large API to support threshold factory usage and broadens should_run signatures.
nx_parallel/algorithms/mis.py Introduces the new parallel MIS implementation and should_run policy thresholding.
nx_parallel/interface.py Registers maximal_independent_set in the backend algorithm list and adds a test-time xfail hook.
nx_parallel/algorithms/__init__.py Exports MIS algorithm(s) via from .mis import *.
nx_parallel/algorithms/tests/test_mis.py Adds unit tests for should_run, chunking, and seeded determinism.
nx_parallel/tests/test_get_chunks.py Excludes maximal_independent_set from the generic get_chunks equivalence smoke test.
_nx_parallel/__init__.py Adds get_info() metadata/docs entry for maximal_independent_set.


@pnijhara
Author

@dschult, I just implemented and tested 3. Looks like it is working fine. If this is good to go, I can open a new issue at NetworkX for solving 2.

PS: Please ignore the reviews by Copilot. I didn't know it could review PRs. I got it today with my education pack. Seems like too much to me.

@pnijhara
Author

@dschult, can you review and merge this? I think the tests are passing. We can revert these tests once we get the new release of NetworkX.


Labels

type: Enhancement New feature or request

Development

Successfully merging this pull request may close these issues.

Add parallel implementation of Combinatorial Algorithms

5 participants