Large query handling for SQLGraph#281
Conversation
Codecov Report: ❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #281      +/-   ##
==========================================
+ Coverage   87.62%   87.69%   +0.06%
==========================================
  Files          57       57
  Lines        4865     4924      +59
  Branches      858      864       +6
==========================================
+ Hits         4263     4318      +55
- Misses        380      384       +4
  Partials      222      222
```
Hi @yfukai, thanks for the PR. Did you notice some performance improvement? Or was this something you could not compute before? I'm not super familiar with scratch tables, and the PR looks clean, so I asked an LLM to help with the review. It had some good comments. See below. The one thing worth fixing before merging:

```python
occurrences = 1 + (not self._include_targets) + (not self._include_sources)
id_set = _SqlIdSet(self._graph, node_ids, occurrences=occurrences)
```

On the cycle: Smaller things: The
The comment in On the tests: the two new tests assert
Approve with minor fixes — the
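(Aside, for readers puzzling over the `occurrences` arithmetic in the reviewer's snippet: Python's `bool` is an `int` subclass, so each `not flag` contributes 0 or 1 to the sum. The flag values below are hypothetical, purely to illustrate the counting.)

```python
# Hypothetical flag values; each False flag adds one more occurrence
# of the ID list in the generated query.
include_targets = False
include_sources = True

occurrences = 1 + (not include_targets) + (not include_sources)
assert occurrences == 2  # 1 + 1 + 0, since bool is an int subclass
```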
Thanks @JoOkuma! Sorry for the lack of context; yeah, I got the same error as #249 when I did
That makes sense @yfukai
This pull request introduces a robust mechanism for handling large lists of IDs in SQL queries, preventing SQL variable overflow errors by dynamically switching between inline `IN` clauses and temporary scratch tables. It adds a new `_SqlIdSet` helper class to encapsulate this logic, updates all relevant filtering and degree-calculation code paths to use it, and provides comprehensive tests to ensure correctness, especially around edge cases where the number of IDs approaches backend-imposed limits.

Key changes include:
Core SQL handling improvements:
- A new `_SqlIdSet` class that automatically decides whether to use an inline `IN` clause or a temporary scratch table, based on the number of IDs and the number of times they are used in a query, preventing SQL variable overflow (`OperationalError: too many SQL variables`). Scratch tables are created and cleaned up as needed, with automatic resource management via `weakref.finalize`.
- `SQLGraph.filter`, `overlaps`, and `_get_degree` now use `_SqlIdSet`, ensuring consistent handling of large ID sets across all query paths.

Testing and validation:
- New tests in `test_subgraph.py` that create graphs with enough nodes to trigger the scratch-table code path, verify correct filtering and degree calculations, and ensure the cutoff logic accounts for the number of ID occurrences per query. This includes edge cases near the cutoff boundary.

Internal utilities and cleanup:
- Cleanup of `_SqlIdSet` instances (via `weakref.finalize`), with error handling and logging for robustness during interpreter shutdown or unexpected errors.

These changes make the SQL backend resilient to large filter operations and improve maintainability by centralizing the handling of SQL variable limits.
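A minimal, self-contained sketch of the idea described above, using `sqlite3` directly. All names, the cutoff value, and the `clause()` API are assumptions for illustration, not the PR's actual code:

```python
import sqlite3
import weakref
from itertools import count

# Assumed conservative bound-variable limit (older SQLite builds default to 999).
MAX_SQL_VARIABLES = 999
_scratch_counter = count()


def _drop_table(conn, name):
    # Best-effort cleanup; may run during interpreter shutdown.
    try:
        conn.execute(f"DROP TABLE IF EXISTS {name}")
    except Exception:
        pass


class SqlIdSetSketch:
    """Hypothetical stand-in for the PR's _SqlIdSet helper."""

    def __init__(self, conn, ids, occurrences=1):
        self._conn = conn
        self._ids = list(ids)
        # Each occurrence of the ID list binds len(ids) variables, so the
        # inline path is safe only while the total stays under the limit.
        self._use_scratch = len(self._ids) * occurrences > MAX_SQL_VARIABLES
        if self._use_scratch:
            self._table = f"_scratch_ids_{next(_scratch_counter)}"
            conn.execute(f"CREATE TEMP TABLE {self._table} (id INTEGER PRIMARY KEY)")
            conn.executemany(
                f"INSERT INTO {self._table} (id) VALUES (?)",
                [(i,) for i in self._ids],
            )
            # The finalizer must not hold a reference to self, or it never fires.
            self._finalizer = weakref.finalize(self, _drop_table, conn, self._table)

    def clause(self):
        """SQL fragment and parameters for filtering on membership in this set."""
        if self._use_scratch:
            return f"IN (SELECT id FROM {self._table})", []
        placeholders = ",".join("?" * len(self._ids))
        return f"IN ({placeholders})", self._ids
```

Callers build `WHERE node_id {frag}` from `frag, params = id_set.clause()` and pass `params` through to `execute`; small sets stay on the fast inline path while large sets transparently route through the scratch table.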