Cluster without linker #2412
Conversation
```python
def cluster_pairwise_predictions_at_threshold(
    nodes: AcceptableInputTableType,
```
This should probably eventually allow the input to also be a SplinkDataFrame, but I think that's for a wider PR which allows all public-API functions to accept SplinkDataFrames.
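One way to support both plain tables and SplinkDataFrames is a small normalisation helper at the top of each public-API function. A minimal sketch, assuming a SplinkDataFrame-like wrapper exposes `as_pandas_dataframe()` (the helper and class names here are illustrative, not Splink's actual internals):

```python
# Hypothetical sketch: accept either a concrete table or a wrapped
# SplinkDataFrame-like object, and hand back a plain table either way.

def as_concrete_table(table):
    """Unwrap a SplinkDataFrame-like object; pass plain tables through."""
    if hasattr(table, "as_pandas_dataframe"):
        return table.as_pandas_dataframe()
    return table


class FakeSplinkDataFrame:
    """Stand-in wrapper used only to demonstrate the dispatch."""

    def __init__(self, rows):
        self._rows = rows

    def as_pandas_dataframe(self):
        return self._rows


rows = [{"unique_id": 1}, {"unique_id": 2}]
assert as_concrete_table(rows) is rows
assert as_concrete_table(FakeSplinkDataFrame(rows)) == rows
```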
The match probability = 1 hack is no longer required due to this refactor.
🙌 Thanks for pushing this out! This will be extremely helpful for using Splink where the data is periodically fed live into DuckDB.
```diff
@@ -1,18 +1,15 @@
 ---
 tags:
   - API
-  - Clustering
+  - clustering
```
The old file is now linker_clustering.md, to distinguish it from the 'plain' (no-linker) clustering method.
```diff
@@ -2,54 +2,41 @@
 import pytest

 from tests.cc_testing_utils import (
```
I've switched all tests over to use the plain (no-linker) clustering functions.
```python
    return pd.DataFrame(rows)


def check_df_equality(df1, df2, skip_dtypes=False):
```
Syntax like `assert (cc_df.values == nx_df.values).all()` is sufficient, so this doesn't need to be a function.
Another testing script:

```python
import duckdb
import networkx as nx
import pandas as pd

import splink.comparison_library as cl
from splink import DuckDBAPI, Linker, SettingsCreator, block_on


def generate_random_graph(graph_size, seed=47):
    if graph_size < 10:
        density = 1 / graph_size
    else:
        density = 2 / graph_size
    # print(f"Graph size: {graph_size}, Density: {density}")
    graph = nx.fast_gnp_random_graph(graph_size, density, seed=seed, directed=False)
    return graph


def nodes_and_edges_from_graph(G):
    edges = nx.to_pandas_edgelist(G)
    edges.columns = ["unique_id_l", "unique_id_r"]
    nodes = pd.DataFrame({"unique_id": list(G.nodes)})
    return nodes, edges


g = generate_random_graph(10000)
nodes, edges = nodes_and_edges_from_graph(g)

G = nx.from_pandas_edgelist(edges, "unique_id_l", "unique_id_r")

# Ensure all nodes from the original graph are in G
for node in nodes["unique_id"]:
    if node not in G:
        G.add_node(node)

connected_components = list(nx.connected_components(G))

# Create a dictionary mapping node to cluster
node_to_cluster = {}
for cluster_id, component in enumerate(connected_components):
    for node in component:
        node_to_cluster[node] = cluster_id

# Create the final DataFrame
nodes_with_clusters = nodes.copy()
nodes_with_clusters["cluster"] = nodes_with_clusters["unique_id"].map(node_to_cluster)

db_api = DuckDBAPI(":default:")

blocking_rules = [
    block_on("cluster"),
]

settings = SettingsCreator(
    link_type="dedupe_only",
    probability_two_random_records_match=0.5,
    blocking_rules_to_generate_predictions=blocking_rules,
    comparisons=[
        cl.ExactMatch("cluster").configure(
            m_probabilities=[0.99, 0.01], u_probabilities=[0.01, 0.99]
        )
    ],
    retain_intermediate_calculation_columns=True,
)

linker = Linker(nodes_with_clusters, settings, db_api=db_api)
linker.visualisations.match_weights_chart()
df_predict = linker.inference.predict()

res = linker.clustering.cluster_pairwise_predictions_at_threshold(
    df_predict=df_predict, threshold_match_probability=0.5
)
res_duck = res.as_duckdbpyrelation()

sql = """
SELECT
    COUNT(DISTINCT cluster_id) AS number_of_clusters,
    AVG(cluster_size) AS average_cluster_size
FROM (
    SELECT
        cluster_id,
        COUNT(*) AS cluster_size
    FROM res_duck
    GROUP BY cluster_id
)
"""
duckdb.sql(sql)
```
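Because the blocking rule is `block_on("cluster")`, the predicted clusters should reproduce the ground-truth components exactly, up to relabelling. A label-independent way to assert that, sketched with only the standard library (the `clustering` dicts map node to cluster label; this is a generic check, not part of Splink):

```python
# Compare two clusterings as partitions: they are equivalent iff they
# group the same nodes together, regardless of which labels are used.

def partitions_equal(clustering_a, clustering_b):
    if set(clustering_a) != set(clustering_b):
        return False  # different node sets cannot be the same partition
    groups_a, groups_b = {}, {}
    for node, label in clustering_a.items():
        groups_a.setdefault(label, set()).add(node)
    for node, label in clustering_b.items():
        groups_b.setdefault(label, set()).add(node)
    # Compare the sets of groups, ignoring the labels themselves
    return set(map(frozenset, groups_a.values())) == set(
        map(frozenset, groups_b.values())
    )


a = {1: "x", 2: "x", 3: "y"}
b = {1: 0, 2: 0, 3: 1}  # same grouping, different labels
c = {1: 0, 2: 1, 3: 1}  # different grouping
assert partitions_equal(a, b)
assert not partitions_equal(a, c)
```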
This changed in #2412, but I've only just rebased, so it now affects this branch.
We've heard from several people who want to cluster without a linker: for instance, if you are combining predictions from multiple models and want to cluster the combined results (e.g. #2358).
This PR allows the clustering algorithm to be used without needing a linker, similar to the exploratory analysis functions.
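Conceptually, clustering pairwise predictions at a threshold means keeping only edges whose match probability meets the threshold and then taking connected components over the nodes. A minimal stdlib sketch of that idea using union-find (illustrative only; the function name and tuple layout are not Splink's API, and the real implementation runs as SQL on the backend):

```python
# Sketch: threshold clustering via union-find (path halving).

def cluster_at_threshold(node_ids, scored_edges, threshold):
    """scored_edges: iterable of (id_l, id_r, match_probability) tuples."""
    parent = {n: n for n in node_ids}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    # Merge only the edges that survive the probability threshold
    for id_l, id_r, p in scored_edges:
        if p >= threshold:
            root_l, root_r = find(id_l), find(id_r)
            if root_l != root_r:
                parent[root_l] = root_r

    # Label each cluster by the smallest node id it contains
    members_by_root = {}
    for n in node_ids:
        members_by_root.setdefault(find(n), []).append(n)
    return {
        n: min(members)
        for members in members_by_root.values()
        for n in members
    }


edges = [(1, 2, 0.9), (2, 3, 0.4), (4, 5, 0.95)]
clusters = cluster_at_threshold([1, 2, 3, 4, 5], edges, threshold=0.5)
# 1 and 2 merge; the 0.4 edge is dropped, so 3 stays alone; 4 and 5 merge
assert clusters == {1: 1, 2: 1, 3: 3, 4: 4, 5: 4}
```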
Example without linker
Example
Also works for deterministic linking
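For deterministic linking there are no match probabilities to threshold: the edges are simply the pairs produced by the deterministic rules, and clustering reduces to plain connected components. A hedged stdlib sketch, where "same postcode" stands in for a deterministic match rule (the field names and data are invented for illustration):

```python
# Deterministic linking sketch: records linked by an exact-match rule
# form edges directly; clustering is then connected components, with
# no probability threshold involved.
from itertools import combinations

records = [
    {"unique_id": 1, "postcode": "AB1"},
    {"unique_id": 2, "postcode": "AB1"},
    {"unique_id": 3, "postcode": "CD2"},
    {"unique_id": 4, "postcode": "CD2"},
    {"unique_id": 5, "postcode": "EF3"},
]

# Group records by the deterministic key, then emit all pairs per group
by_key = {}
for r in records:
    by_key.setdefault(r["postcode"], []).append(r["unique_id"])
edges = [pair for ids in by_key.values() for pair in combinations(ids, 2)]
assert edges == [(1, 2), (3, 4)]

# Connected components over the edges (iterative DFS, no threshold)
adj = {r["unique_id"]: set() for r in records}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

clusters, seen = {}, set()
for start in adj:
    if start in seen:
        continue
    stack, component = [start], []
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        component.append(n)
        stack.extend(adj[n] - seen)
    for n in component:
        clusters[n] = min(component)  # label cluster by its smallest id

assert clusters == {1: 1, 2: 1, 3: 3, 4: 3, 5: 5}
```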