
feat: add NFT token name search to global search bar #76

Open
pthmas wants to merge 3 commits into main from issue-71

Conversation

Collaborator

@pthmas pthmas commented Apr 24, 2026

Summary

Extends the search API to query nft_tokens.name via ILIKE, returning up to 5 matching tokens as nft-typed results. Adds a GIN trigram index migration for fast name searches. The frontend already supports the nft result type end-to-end.

Summary by CodeRabbit

  • New Features

    • NFT token search: discover NFT tokens by name alongside collections and ERC‑20 tokens.
  • Bug Fixes

    • Text-search fallback now activates only for queries of three or more characters.
    • Wildcard query inputs are safely escaped to avoid unintended matches.
  • Chores

    • Added a trigram index to improve NFT token name search performance.

@pthmas pthmas self-assigned this Apr 24, 2026

coderabbitai Bot commented Apr 24, 2026

📝 Walkthrough

Walkthrough

Adds token-level NFT search: new NftTokenResult and SearchResult::Nft variant, parallelizes searches to include NFT tokens, applies ILIKE escaping via like_escape, and adds a trigram GIN index on nft_tokens.name. Text-search fallback now triggers only when query length >= 3.

Changes

  • Search Handler — backend/crates/atlas-server/src/api/handlers/search.rs: Adds NftTokenResult and SearchResult::Nft (serde tag "nft"); includes NFT tokens in the parallel search flow; adds search_nft_tokens (ILIKE with like_escape, ordered by last_transfer_block, limit 5); changes the text-search fallback to require query.len() >= 3.
  • Database Migration — backend/migrations/20260424000001_nft_tokens_name_trgm.sql: Ensures the pg_trgm extension and creates a trigram GIN index on nft_tokens.name for faster pattern searches (restricted to non-null names).
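The migration file itself isn't quoted in this thread; based on the summary above and the index name mentioned in the review, it plausibly reads as follows (a reconstructed sketch, not the verbatim file):

```sql
-- backend/migrations/20260424000001_nft_tokens_name_trgm.sql (sketch)
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Partial trigram index: accelerates ILIKE '%...%' lookups and skips NULL names.
CREATE INDEX IF NOT EXISTS idx_nft_tokens_name_trgm
    ON nft_tokens USING gin (name gin_trgm_ops)
    WHERE name IS NOT NULL;
```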

Sequence Diagram

sequenceDiagram
    participant Client
    participant SearchHandler as Search Handler
    participant CollDB as Collections DB
    participant TokenDB as NFT Tokens DB
    participant ERC20DB as ERC-20 DB

    Client->>SearchHandler: POST /search?query=xyz
    SearchHandler->>SearchHandler: Validate query length ≥ 3
    par Parallel Searches
        SearchHandler->>CollDB: Search collections by name (ILIKE with escape)
        SearchHandler->>TokenDB: Search nft_tokens by name (ILIKE with escape, trigram index)
        SearchHandler->>ERC20DB: Search ERC-20 tokens (ILIKE with escape)
    and
        CollDB-->>SearchHandler: Collection results
        TokenDB-->>SearchHandler: Token results (up to 5)
        ERC20DB-->>SearchHandler: ERC-20 results
    end
    SearchHandler->>SearchHandler: Aggregate results into SearchResult variants
    SearchHandler-->>Client: Combined SearchResult array (includes `nft` items)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • tac0turtle

Poem

🐰 I nibble through indexes, swift and spry,
Tokens now spotted with a twinkling eye,
Trigrams hum, ILIKE kept neat,
Five bright finds hop into the fleet,
Hop—search is faster, tidy, and spry.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check — Passed: check skipped because CodeRabbit’s high-level summary is enabled.
  • Title check — Passed: the title accurately describes the main change, adding NFT token name search to the global search bar, which is the primary objective across both modified files.
  • Linked Issues check — Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — Passed: check skipped because no linked issues were found for this pull request.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
backend/crates/atlas-server/src/api/handlers/search.rs (1)

210-226: Missing unit test coverage for the new search_nft_tokens path.

No #[cfg(test)] mod tests block was added for the new logic (struct shape, enum serde tag "nft", and helper behavior). Even a serialization smoke test that asserts SearchResult::Nft(...) serializes with "type":"nft" and the expected field set would guard against frontend-contract regressions, since the PR description relies on the existing frontend already consuming this shape.

As per coding guidelines: "Add unit tests for new logic in a #[cfg(test)] mod tests block in the same file".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/api/handlers/search.rs` around lines 210 -
226, Add a unit test module (#[cfg(test)] mod tests) in this file that covers
the new search_nft_tokens path and serialization shape: create a small
Vec<NftTokenResult> (or construct via the search_nft_tokens helper), wrap it
into the SearchResult::Nft variant, serialize it with serde_json, and assert the
output contains the expected "type":"nft" tag and the relevant fields
(contract_address, token_id, name, image_url); ensure the test uses the same
struct/enum names (NftTokenResult, SearchResult::Nft) so future changes to the
frontend contract are caught.
backend/migrations/20260424000001_nft_tokens_name_trgm.sql (1)

1-2: LGTM — appropriate trigram index for the new ILIKE search path.

gin_trgm_ops is the right choice for the ILIKE '%query%' pattern used in search_nft_tokens, and the partial predicate (WHERE name IS NOT NULL) keeps the index tight since name is nullable in the schema.

One operational note: CREATE INDEX takes an ACCESS EXCLUSIVE lock on writes to nft_tokens for the duration of the build. If this table is already sizable in production, consider running with CREATE INDEX CONCURRENTLY in a separate out-of-transaction migration to avoid blocking the indexer. If your migration runner wraps each file in a transaction, CONCURRENTLY won’t work here — in that case, it’s fine to leave as-is.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/migrations/20260424000001_nft_tokens_name_trgm.sql` around lines 1 -
2, The migration creates idx_nft_tokens_name_trgm on nft_tokens with
gin_trgm_ops which is correct for search_nft_tokens, but building the index can
take an ACCESS EXCLUSIVE lock; if nft_tokens is large in production, create the
index with CREATE INDEX CONCURRENTLY instead and run that migration outside of a
transaction (or as a separate non-transactional migration file) so the
CONCURRENTLY option is permitted; if your migration runner wraps files in a
transaction and cannot run non-transactional files, leave the current migration
as-is.
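If nft_tokens is large, the concurrent build the comment suggests could be split into a separate migration file, roughly like this sketch (valid only if the migration runner executes the file outside a transaction, as CONCURRENTLY requires):

```sql
-- Hypothetical standalone, non-transactional migration (sketch)
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_nft_tokens_name_trgm
    ON nft_tokens USING gin (name gin_trgm_ops)
    WHERE name IS NOT NULL;
```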
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/crates/atlas-server/src/api/handlers/search.rs`:
- Around line 210-226: The ILIKE pattern currently uses format!("%{}%", query)
which lets user-supplied '%' and '_' act as wildcards and degrade trigram index
effectiveness; create a shared helper (e.g., escape_like_pattern) and use it
from search_nft_tokens, search_nft_collections, and search_erc20_tokens to
escape '%' and '_' in the input by prefixing them with the Postgres default
escape '\\', then wrap the escaped string with '%' on both sides before binding
to the ILIKE parameter so the query uses a literal match with surrounding
wildcards.
- Around line 104-111: The text-search gate currently uses query.len() >= 2
which allows 2-char queries and defeats the pg_trgm index; change the
conditional that guards the token/collection/ERC-20 parallel searches (the block
that calls search_nft_collections, search_nft_tokens, and search_erc20_tokens
where results.is_empty() && query.len() >= 2) to require a minimum length of 3
(query.len() >= 3) so short 2-character queries skip the expensive trigram-based
text searches.
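As a sketch of the escaping described in the first prompt above — the names escape_like_pattern and contains_pattern follow the prompt's suggestion and are hypothetical, not necessarily what landed in the PR:

```rust
// Sketch of the shared LIKE-escaping helper the review prompt describes.
fn escape_like_pattern(input: &str) -> String {
    // Escape the backslash first so characters escaped in the earlier pass
    // are not re-escaped by the later passes.
    input
        .replace('\\', "\\\\")
        .replace('%', "\\%")
        .replace('_', "\\_")
}

// Wrap the escaped text with '%' so ILIKE performs a literal substring match.
fn contains_pattern(query: &str) -> String {
    format!("%{}%", escape_like_pattern(query))
}

fn main() {
    assert_eq!(escape_like_pattern("abc"), "abc");
    assert_eq!(escape_like_pattern("50%_off"), "50\\%\\_off");
    assert_eq!(contains_pattern("a_b"), "%a\\_b%");
    println!("ok");
}
```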


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f6902436-63a7-40da-a09f-d3d4d7d71525

📥 Commits

Reviewing files that changed from the base of the PR and between 3b11f28 and e24e616.

📒 Files selected for processing (2)
  • backend/crates/atlas-server/src/api/handlers/search.rs
  • backend/migrations/20260424000001_nft_tokens_name_trgm.sql

Comment thread backend/crates/atlas-server/src/api/handlers/search.rs Outdated
Comment thread backend/crates/atlas-server/src/api/handlers/search.rs
Collaborator Author

pthmas commented Apr 24, 2026

@coderabbitai review


coderabbitai Bot commented Apr 24, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
backend/crates/atlas-server/src/api/handlers/search.rs (2)

193-198: Add unit tests for like_escape and consider extracting a pattern builder.

like_escape is a pure, easily-testable function and no #[cfg(test)] mod tests block exists in this file. A few cases are worth locking down so a later refactor doesn't silently regress wildcard safety: plain input, input containing %/_, input containing \, and combinations (e.g. \_foo%). The escape order matters — backslash must be escaped first (as it already is) or %\% would get re-escaped to \\% on the subsequent pass.

Also, format!("%{}%", like_escape(query)) is now repeated in three call sites; a small helper would DRY it up.

♻️ Proposed refactor
 fn like_escape(input: &str) -> String {
     input
         .replace('\\', "\\\\")
         .replace('%', "\\%")
         .replace('_', "\\_")
 }
+
+fn ilike_contains_pattern(query: &str) -> String {
+    format!("%{}%", like_escape(query))
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn like_escape_passes_through_plain_text() {
+        assert_eq!(like_escape("abc"), "abc");
+    }
+
+    #[test]
+    fn like_escape_escapes_wildcards() {
+        assert_eq!(like_escape("a%b_c"), "a\\%b\\_c");
+    }
+
+    #[test]
+    fn like_escape_escapes_backslash_first() {
+        // Must not double-escape: `\_` input becomes `\\\_`, not `\\\\_`.
+        assert_eq!(like_escape("\\_"), "\\\\\\_");
+    }
+
+    #[test]
+    fn ilike_contains_pattern_wraps_escaped_value() {
+        assert_eq!(ilike_contains_pattern("50%"), "%50\\%%");
+    }
+}

As per coding guidelines: "Add unit tests for new logic in a #[cfg(test)] mod tests block in the same file".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/api/handlers/search.rs` around lines 193 -
198, Add a #[cfg(test)] mod tests block in this file and add unit tests for
like_escape that cover plain strings, strings with '%', '_' and '\\', and mixed
cases (e.g. "\\_foo%") to lock down escape order; also extract the repeated
format!("%{}%", like_escape(query)) into a small helper (e.g. like_pattern or
build_like_pattern) and update the three call sites to use that helper, keeping
like_escape unchanged but ensuring tests assert exact returned strings for all
cases.

218-234: Consider returning collection name/symbol, and drop the unreachable NULLS LAST.

Two observations on search_nft_tokens:

  1. NftTokenResult exposes only contract_address for the parent collection. In a global search dropdown, a hit like "CoolApe #123" without the collection name/symbol forces the frontend to do a follow-up lookup (or render just an address), which is a worse UX than the sibling NftCollection results that carry name/symbol. A small JOIN nft_contracts would let the token result include collection name/symbol in a single round-trip.

  2. Per backend/migrations/20240101000001_initial.sql (lines 113–128), nft_tokens.last_transfer_block is BIGINT NOT NULL, so NULLS LAST on that column is a no-op. Minor, but it's misleading to readers who may assume the column is nullable.

♻️ Proposed shape (illustrative)
 #[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
 pub struct NftTokenResult {
     pub contract_address: String,
     pub token_id: String,
     pub name: Option<String>,
     pub image_url: Option<String>,
+    pub collection_name: Option<String>,
+    pub collection_symbol: Option<String>,
 }
@@
-    sqlx::query_as(
-        "SELECT contract_address, token_id::text AS token_id, name, image_url
-         FROM nft_tokens
-         WHERE name ILIKE $1
-         ORDER BY last_transfer_block DESC NULLS LAST
-         LIMIT 5",
-    )
+    sqlx::query_as(
+        "SELECT t.contract_address,
+                t.token_id::text AS token_id,
+                t.name,
+                t.image_url,
+                c.name   AS collection_name,
+                c.symbol AS collection_symbol
+         FROM nft_tokens t
+         JOIN nft_contracts c ON c.address = t.contract_address
+         WHERE t.name ILIKE $1
+         ORDER BY t.last_transfer_block DESC
+         LIMIT 5",
+    )

Please confirm the frontend's nft result renderer actually has the collection context it needs from just contract_address, or whether it expects name/symbol alongside (similar to the nft_collection payload).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/api/handlers/search.rs` around lines 218 -
234, search_nft_tokens currently only returns contract_address and uses a
misleading "NULLS LAST" on a non-nullable column; update the SQL in
search_nft_tokens to JOIN nft_contracts to include collection name and symbol in
the SELECT, remove "NULLS LAST" from the ORDER BY, and update the NftTokenResult
type (and any places that construct/consume it) to include the new
collection_name and collection_symbol fields so the handler returns those values
in one round-trip.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 91dcc91f-e7c6-4dc5-87d9-df3e489141ab

📥 Commits

Reviewing files that changed from the base of the PR and between e24e616 and 44a1bef.

📒 Files selected for processing (1)
  • backend/crates/atlas-server/src/api/handlers/search.rs
