
Conversation


@zinic zinic commented Aug 13, 2025

Summary by CodeRabbit

  • New Features

    • Added unified database API with pluggable drivers and a driver registry.
    • Introduced Neo4j backend with schema management, result mapping, and session/transaction support.
    • Added PostgreSQL batch operations for nodes/relationships and refreshed PG schema/query layers.
    • Delivered a new fluent Cypher-like query builder and revamped traversal using the new API.
    • Provided a v1 compatibility layer to run existing integrations on the new stack.
  • Bug Fixes

    • Corrected string escaping in SQL, adopted RFC3339Nano time formatting, added deterministic ordering/limits, stricter NOT NULL constraints, and clearer error messages.
  • Refactor

    • Consolidated schema/types under database package, updated imports, and removed legacy PG/Neo4j driver implementations.


coderabbitai bot commented Aug 13, 2025

Walkthrough

Introduces a new database abstraction (Instance/Driver/Result), migrates Neo4j and PostgreSQL implementations under database/*, adds a v1 compatibility layer, replaces legacy drivers, overhauls query building to a new fluent API, updates traversal to use the new database interfaces, adjusts tests, and refactors schema/types and formatting utilities.

Changes

  • Cypher literal handling (cypher/models/cypher/model.go): Adjusted NewLiteral/NewStringLiteral to directly construct quoted string literals; internal logic simplified without API changes.
  • PostgreSQL value formatting (cypher/models/pgsql/format/format.go): Added time handling (RFC3339Nano, timestamptz cast), string quote-escaping, and []string via text array formatting.
  • PGSQL tests update (cypher/models/pgsql/test/testcase.go, cypher/models/pgsql/test/translation_test.go, cypher/models/pgsql/test/validation_integration_test.go, cypher/models/pgsql/visualization/visualizer_test.go): Switched from the pg driver to database.Instance; Explain via Session/Driver; updated imports and schema construction; adjusted config to ConnectionString.
  • Core database abstraction (database/driver.go, database/schema.go, database/tooling.go): Added Option/Result/Driver/Instance interfaces and an errorResult wrapper; new schema model (Index/Constraint/Graph/Schema); moved the tooling package.
  • Neo4j v2 driver and database (database/neo4j/*): Implemented Neo4j database.Instance and Driver (Exec/Explain/Profile/Create*); schema manager to reconcile indexes/constraints; driver registration (neo4j_v2), config constants, and minimal tests.
  • PostgreSQL v2 driver and infra (database/pg/*: config.go, database.go, driver.go, batch.go, manager.go, mapper.go, model/, query/, statements.go, types.go, util.go, pg.go, query/sql/*): Added a v2-compatible Instance (session/transaction/raw), a Driver (translate/Exec/Explain/Profile/Create*), batch ops, schema/graph model, manager, context-aware queries and SQL assets; statement return shapes updated to id-only; improved errors; registry integration with pool/conn-string.
  • Registry and opening APIs (registry.go, registry_integration_test.go): Introduced a driver registry (Register/Open/OpenV1), dawgs.Config, and an integration test using the PostgreSQL backend.
  • v1 compatibility layer (database/v1compat/*): Added v1-style Database/Transaction/Batch interfaces, result adapters, query builders, traversal utilities, wrappers bridging to the v2 Instance/Driver, error helpers, type aliases, and a size-aware IndexedSlice; updated ops imports.
  • Traversal refactor, v2 (traversal/traversal.go): Replaced the graph.Transaction-based Driver with database-based Logic; new Instance wrapper with a memory limit; criteria are now cypher.SyntaxNode.
  • Graph package adjustments (graph/graph.go, graph/properties.go, graph/schema.go, graph/error.go): Removed legacy Database/Transaction/Batch and the Start/End constants; removed graph schema types (moved under database); enhanced Properties.Merge; deleted a no-op file.
  • Legacy driver removal, Neo4j (drivers/neo4j/*): Removed the entire legacy Neo4j driver, batching, query builder, transaction, result adapter, and tests.
  • Legacy driver removal, PostgreSQL (drivers/pg/*): Removed the legacy PostgreSQL driver, batching, node/relationship queries, result and tooling, and the graph query layer.
  • Query API overhaul (query/query.go, query/util.go, query/query_test.go): Added a new fluent query builder (identifiers, continuations, build/update/projection), identifier extraction utilities, and tests validating the emitted Cypher.
  • Removed legacy query builder (query/neo4j/*, query/model.go): Deleted the former Neo4j-specific query builder, rewriter, and public model helpers.
  • Graph cache changes (graphcache/cache.go, database/v1compat/traversal/*): Removed ID-based fetchers from the cache; added v1compat traversal collectors and shallow fetch by IDs under v1compat.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Tester
  participant Dawgs as dawgs.Open
  participant Registry
  participant DB as database.Instance
  participant Sess as DB.Session
  participant Drv as database.Driver
  participant Neo4j as neo4j/pg backend

  Tester->>Dawgs: Open(ctx, "neo4j_v2"/"pg", Config)
  Dawgs->>Registry: constructor lookup
  Registry-->>Dawgs: DriverConstructor
  Dawgs->>DB: new Instance(Config)
  DB-->>Tester: Instance

  Tester->>DB: Session(ctx, logic, OptionReadOnly)
  DB->>Sess: acquire session/connection
  Sess->>Drv: provide Driver
  Tester->>Drv: Explain(ctx, RegularQuery, params)
  Drv->>Neo4j: run "explain <cypher>"
  Neo4j-->>Drv: Result
  Drv-->>Tester: Result
  Tester->>Result: Close(ctx)
  Result-->>Tester: ok
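
The flow above, as a minimal Go sketch. This is a hedged illustration only: dawgs.Open, dawgs.Config, Instance.Session, and database.OptionReadOnly are taken from the diagram and the registry/driver snippets quoted later in this review; the connection string and the empty session body are placeholders, and exact signatures may differ.

package main

import (
	"context"

	"github.com/specterops/dawgs"
	"github.com/specterops/dawgs/database"
)

func openAndQuery(ctx context.Context) error {
	// Open a database.Instance through the driver registry ("neo4j_v2" or "pg").
	db, err := dawgs.Open(ctx, "neo4j_v2", dawgs.Config{
		ConnectionString: "neo4j://user:password@localhost:7687",
	})
	if err != nil {
		return err
	}

	// Run logic against a read-only session; the Driver is only valid inside the callback.
	return db.Session(ctx, func(ctx context.Context, driver database.Driver) error {
		// driver.Exec / Explain / Profile / CreateNode / CreateRelationship live here.
		return nil
	}, database.OptionReadOnly)
}
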
sequenceDiagram
  autonumber
  actor App
  participant V1 as v1compat.Database
  participant V2 as database.Instance
  participant Wrap as v1 wrapper
  participant D as database.Driver

  App->>V1: WriteTransaction(ctx, func(tx){...})
  V1->>V2: Session(ctx, delegate, OptionReadWrite)
  V2-->>Wrap: provide Driver to wrapper
  Wrap->>D: CreateNode / Exec / CreateRelationship
  D-->>Wrap: IDs / Result
  Wrap-->>V1: delegate completion

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~180+ minutes

Possibly related PRs

  • feat (v2): v2 #4 — Also modifies Cypher AST and walker patterns; likely adjacent to the literal/model and builder refactors in this PR.

Poem

A rabbit taps keys with a twinkle and grin,
New tunnels to Neo4j, Postgres within.
Old burrows retired, fresh paths now align,
Queries hop smoothly in orderly time.
V1 meets V2 with a soft bunny wink—
Carrots of schemas in perfect sync. 🥕✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 7.45%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately and concisely summarizes the primary intent of the large changeset by indicating the migration to DAWGS Version 2, reflecting the major architectural overhaul and new API surfaces introduced.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch v2

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 36

🔭 Outside diff range comments (9)
database/v1compat/switch.go (2)

37-39: Fix data race on inSwitch by guarding writes with the same mutex used for reads

inSwitch is read under ctxLock (Line 89) but written without any synchronization (Lines 62, 65). This is a classic data race. Guard writes with ctxLock to align with newInternalContext’s read path.

Apply this diff to protect inSwitch:

 func (s *DatabaseSwitch) Switch(db Database) {
-	s.inSwitch = true
+	s.ctxLock.Lock()
+	s.inSwitch = true
+	s.ctxLock.Unlock()
 
 	defer func() {
-		s.inSwitch = false
+		s.ctxLock.Lock()
+		s.inSwitch = false
+		s.ctxLock.Unlock()
 	}()
 
 	s.cancelInternalContexts()

Also applies to: 61-67, 84-91
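
If holding ctxLock around the flag feels heavy, an alternative is to make the flag itself atomic. A minimal, self-contained sketch of that pattern follows; switchState, beginSwitch, and busy are illustrative names rather than the package's API, and this assumes no other invariant ties inSwitch to ctxLock.

package main

import (
	"fmt"
	"sync/atomic"
)

type switchState struct {
	inSwitch atomic.Bool
}

// beginSwitch flips the flag and hands back a func that clears it, mirroring
// the Store(true) / deferred Store(false) shape of Switch.
func (s *switchState) beginSwitch() func() {
	s.inSwitch.Store(true)
	return func() { s.inSwitch.Store(false) }
}

func (s *switchState) busy() bool {
	return s.inSwitch.Load()
}

func main() {
	var state switchState

	done := state.beginSwitch()
	fmt.Println("in switch:", state.busy()) // true
	done()
	fmt.Println("in switch:", state.busy()) // false
}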


54-59: Consistently pass internalCtx to downstream DB calls (several methods currently use ctx)

ReadTransaction/WriteTransaction/BatchOperation/Run correctly pass internalCtx, but:

  • SetDefaultGraph (Lines 54-59) neither gates on inSwitch nor uses internalCtx.
  • AssertSchema (Line 166), Close (Line 192), FetchKinds (Line 205), RefreshKinds (Line 218) create internalCtx but then pass ctx to the underlying DB, defeating cancellation and switch-guarding.

Apply this diff to align all calls with the internalCtx contract:

 func (s *DatabaseSwitch) SetDefaultGraph(ctx context.Context, graphSchema Graph) error {
-	s.currentDBLock.RLock()
-	defer s.currentDBLock.RUnlock()
-
-	return s.currentDB.SetDefaultGraph(ctx, graphSchema)
+	if internalCtx, err := s.newInternalContext(ctx); err != nil {
+		return err
+	} else {
+		defer s.retireInternalContext(internalCtx)
+
+		s.currentDBLock.RLock()
+		defer s.currentDBLock.RUnlock()
+
+		return s.currentDB.SetDefaultGraph(internalCtx, graphSchema)
+	}
 }
 
 func (s *DatabaseSwitch) AssertSchema(ctx context.Context, dbSchema Schema) error {
 	if internalCtx, err := s.newInternalContext(ctx); err != nil {
 		return err
 	} else {
 		defer s.retireInternalContext(internalCtx)
 
 		s.currentDBLock.RLock()
 		defer s.currentDBLock.RUnlock()
 
-		return s.currentDB.AssertSchema(ctx, dbSchema)
+		return s.currentDB.AssertSchema(internalCtx, dbSchema)
 	}
 }
 
 func (s *DatabaseSwitch) Close(ctx context.Context) error {
 	if internalCtx, err := s.newInternalContext(ctx); err != nil {
 		return err
 	} else {
 		defer s.retireInternalContext(internalCtx)
 
 		s.currentDBLock.RLock()
 		defer s.currentDBLock.RUnlock()
 
-		return s.currentDB.Close(ctx)
+		return s.currentDB.Close(internalCtx)
 	}
 }
 
 func (s *DatabaseSwitch) FetchKinds(ctx context.Context) (Kinds, error) {
 	if internalCtx, err := s.newInternalContext(ctx); err != nil {
 		return nil, err
 	} else {
 		defer s.retireInternalContext(internalCtx)
 
 		s.currentDBLock.RLock()
 		defer s.currentDBLock.RUnlock()
 
-		return s.currentDB.FetchKinds(ctx)
+		return s.currentDB.FetchKinds(internalCtx)
 	}
 }
 
 func (s *DatabaseSwitch) RefreshKinds(ctx context.Context) error {
 	if internalCtx, err := s.newInternalContext(ctx); err != nil {
 		return err
 	} else {
 		defer s.retireInternalContext(internalCtx)
 
 		s.currentDBLock.RLock()
 		defer s.currentDBLock.RUnlock()
 
-		return s.currentDB.RefreshKinds(ctx)
+		return s.currentDB.RefreshKinds(internalCtx)
 	}
 }

Also applies to: 157-168, 183-194, 196-207, 209-220

database/pg/mapper.go (1)

39-45: Don’t return success when type assertion failed in []any case.

If any element fails the int16 assertion, validType is set to false, but we still attempt MapKindIDs and may return true. Guard the mapping on validType to avoid masking type errors.

Apply this diff:

-    // Guard to prevent unnecessary thrashing of critical sections if there are no kind IDs to resolve
-    if len(kindIDs) > 0 {
+    // Guard to prevent unnecessary thrashing and avoid masking invalid types
+    if validType && len(kindIDs) > 0 {
         // Ignoring the error here is intentional. Failure to map the kinds here does not imply a fatal error.
         if mappedKinds, err := kindMapper.MapKindIDs(ctx, kindIDs); err == nil {
             return mappedKinds, true
         }
     }
database/v1compat/ops/ops.go (3)

150-171: Bug: taking address of loop variable when adding nodes yields duplicate pointers

You’re storing the address of a loop-local variable in the set, causing all entries to point at the same node. Allocate a new node per iteration and scan into it.

Apply this diff:

-        for (limit <= 0 || nodes.Len() < limit) && result.Next() {
-            var node graph.Node
-
-            if err := result.Scan(&node); err != nil {
-                return nil, err
-            }
-
-            nodes.Add(&node)
-        }
+        for (limit <= 0 || nodes.Len() < limit) && result.Next() {
+            node := &graph.Node{}
+
+            if err := result.Scan(node); err != nil {
+                return nil, err
+            }
+
+            nodes.Add(node)
+        }

38-70: Results resource handling and potential nil node dereference

  • The closure passed to Query neither closes the results nor returns results.Error(); errors can be silently dropped, and resources may leak.
  • nodes[nodeID] can be nil if the NodeSet doesn’t contain the key; dereferencing to set Properties will panic.

Recommend closing results, returning results.Error(), and guarding node presence.

 return tx.Nodes().Filter(
     query.InIDs(query.NodeID(), nodes.IDs()...),
 ).Query(func(results graph.Result) error {
     var (
         nodeID graph.ID
         mapper = results.Mapper()
     )
+    defer results.Close()

     for results.Next() {
         var (
             nodeProperties = map[string]any{}
             values         = results.Values()
         )

         // Map the node ID first
         if !mapper.Map(values[0], &nodeID) {
             return fmt.Errorf("expected node ID as first value but received %T", values[0])
         }

         // Map requested properties next by matching the name index
         for idx := 0; idx < len(propertyNames); idx++ {
             nodeProperties[propertyNames[idx]] = values[1+idx]
         }

         // Update the node in the node set
-        nodes[nodeID].Properties = graph.AsProperties(nodeProperties)
+        if node := nodes[nodeID]; node != nil {
+            node.Properties = graph.AsProperties(nodeProperties)
+        } else {
+            return fmt.Errorf("node id %v not present in node set", nodeID)
+        }
     }

-    return nil
+    return results.Error()
 }, query.Returning(
     returningCriteria...,
 ))

176-227: Path assembly across rows likely incorrect; currentPath accumulates over all rows

currentPath is never reset per row and is only appended once at the end, so multiple rows can be flattened into one path. Reset per row and append at the end of each iteration.

 func FetchPathSetByQuery(tx graph.Transaction, query string) (graph.PathSet, error) {
     var (
-        currentPath graph.Path
         pathSet     graph.PathSet
     )
@@
     } else {
         defer result.Close()

-        for result.Next() {
+        for result.Next() {
+            // Reset per row
+            currentPath := graph.Path{}
             var (
                 relationship = &graph.Relationship{}
                 node         = &graph.Node{}
                 path         = &graph.Path{}
                 mapper       = result.Mapper()
             )
@@
             if tx.GraphQueryMemoryLimit() > 0 {
                 var (
-                    currentPathSize = size.OfSlice(currentPath.Edges) + size.OfSlice(currentPath.Nodes)
+                    currentPathSize = size.OfSlice(currentPath.Edges) + size.OfSlice(currentPath.Nodes)
                     pathSetSize     = size.Of(pathSet)
                 )
 
                 if currentPathSize > tx.GraphQueryMemoryLimit() || pathSetSize > tx.GraphQueryMemoryLimit() {
                     return pathSet, fmt.Errorf("%s - Limit: %.2f MB", "query required more memory than allowed", tx.GraphQueryMemoryLimit().Mebibytes())
                 }
             }
+
+            // Append the assembled path for this row if non-empty
+            if len(currentPath.Nodes) > 0 || len(currentPath.Edges) > 0 {
+                pathSet = append(pathSet, currentPath)
+            }
         }
 
-        // If there were elements added to the current path ensure that it's added to the path set before returning
-        if len(currentPath.Nodes) > 0 || len(currentPath.Edges) > 0 {
-            pathSet = append(pathSet, currentPath)
-        }
-
         return pathSet, result.Error()
     }
 }
database/v1compat/query/builder.go (1)

222-271: Unreachable type cases in Apply; clean up or widen parameter type

With Apply(criteria ...cypher.SyntaxNode), cases for []cypher.SyntaxNode and []*cypher.UpdatingClause are unreachable, since a slice type won’t satisfy cypher.SyntaxNode. This is misleading and dead code.

Apply this cleanup:

-func (s *Builder) Apply(criteria ...cypher.SyntaxNode) {
+func (s *Builder) Apply(criteria ...cypher.SyntaxNode) {
     for _, nextCriteria := range criteria {
         switch typedCriteria := nextCriteria.(type) {
-        case []cypher.SyntaxNode:
-            s.Apply(typedCriteria...)
-
         case *cypher.Where:
             firstReadingClause := GetFirstReadingClause(s.regularQuery)
@@
-        case []*cypher.UpdatingClause:
-            for _, updatingClause := range typedCriteria {
-                s.Apply(updatingClause)
-            }
-
         case *cypher.UpdatingClause:
             s.regularQuery.SingleQuery.SinglePartQuery.AddUpdatingClause(cypher.Copy(typedCriteria))

Alternative: if you need to support callers passing slices, change the signature to Apply(criteria ...any) and handle both cypher.SyntaxNode and the slice variants explicitly. I can draft that if preferred.
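
As a rough illustration of that alternative, the slice variants could be flattened up front and fed to the existing Apply. flattenCriteria below is a hypothetical helper, not part of the package, and the cypher import path is assumed from the repository layout:

package query

import (
	"fmt"

	"github.com/specterops/dawgs/cypher/models/cypher"
)

// flattenCriteria normalizes the slice variants into a flat []cypher.SyntaxNode
// that Apply(criteria ...cypher.SyntaxNode) can consume unchanged.
func flattenCriteria(criteria ...any) ([]cypher.SyntaxNode, error) {
	var flattened []cypher.SyntaxNode

	for _, next := range criteria {
		switch typed := next.(type) {
		case []cypher.SyntaxNode:
			flattened = append(flattened, typed...)
		case []*cypher.UpdatingClause:
			for _, clause := range typed {
				flattened = append(flattened, clause)
			}
		case cypher.SyntaxNode:
			flattened = append(flattened, typed)
		default:
			return nil, fmt.Errorf("unsupported criteria type %T", next)
		}
	}

	return flattened, nil
}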

database/pg/manager.go (1)

84-97: Fix duplicate assignment in fetch method.

Line 91 has a redundant assignment s.kindsByID[kind] = kindID that duplicates the assignment from line 88.

Remove the duplicate assignment:

 	} else {
 		s.kindsByID = kinds
 
 		for kind, kindID := range s.kindsByID {
-			s.kindsByID[kind] = kindID
 			s.kindIDsByKind[kindID] = kind
 		}
 	}
traversal/traversal.go (1)

141-209: Fix incorrect query parameter for inbound direction.

In the inbound case (lines 159-163), the query incorrectly uses query.Start().ID().Equals(segment.Node.ID) when it should use query.End().ID().Equals(segment.Node.ID) to match relationships ending at the current node.

Apply this fix:

 			case graph.DirectionInbound:
-				queryBuilder.Where(append(allCriteria, query.Start().ID().Equals(segment.Node.ID))...).Return(
+				queryBuilder.Where(append(allCriteria, query.End().ID().Equals(segment.Node.ID))...).Return(
 					query.Relationship(),
-					query.End(),
+					query.Start(),
 				)
🧹 Nitpick comments (54)
database/tooling.go (2)

5-11: Nit: make the atomic flag type explicit at declaration.

Minor readability tweak: explicitly type the var to uint32 rather than relying on constant-typed inference.

Apply:

- var isQueryAnalysisEnabled = queryAnalysisDisabled
+ var isQueryAnalysisEnabled uint32 = queryAnalysisDisabled

5-8: Optional: align sentinel values with conventional semantics.

Currently enabled=0 and disabled=1. While correct, it’s counterintuitive. Consider flipping (enabled=1, disabled=0) to reduce cognitive overhead; no functional difference if applied consistently.

database/pg/query/sql/schema_up.sql (1)

356-366: Temporary table consistency: consider making all frontiers consistent or document why only next_front is NOT NULL.

Only next_front.satisfied is NOT NULL; forward_front/backward_front remain nullable. If this asymmetry is intentional (e.g., using NULL as “unknown” within the current frontier but requiring concrete bool when moving to the next frontier), add a brief comment near table definitions clarifying the invariant. Otherwise, consider aligning constraints across tables.

Also applies to: 367-377

database/pg/types.go (2)

216-261: Avoid slice accumulation when reusing pathComposite; pre-size for perf.

FromMap appends to s.Nodes and s.Edges without clearing. If s is reused, elements will accumulate. Also, pre-sizing avoids reallocs.

Suggested refactor:

 func (s *pathComposite) FromMap(compositeMap map[string]any) error {
-    if rawNodes, hasNodes := compositeMap["nodes"]; hasNodes {
+    // Reset slices to avoid accumulation on reuse
+    s.Nodes = s.Nodes[:0]
+    s.Edges = s.Edges[:0]
+
+    if rawNodes, hasNodes := compositeMap["nodes"]; hasNodes {
         if typedRawNodes, typeOK := rawNodes.([]any); !typeOK {
             return fmt.Errorf("expected nodes to be []any but got %T", rawNodes)
         } else {
-            for _, rawNode := range typedRawNodes {
+            // Pre-size to reduce allocations
+            s.Nodes = make([]nodeComposite, 0, len(typedRawNodes))
+            for _, rawNode := range typedRawNodes {
                 switch typedNode := rawNode.(type) {
                 case map[string]any:
                     var node nodeComposite
                     if err := node.FromMap(typedNode); err != nil {
                         return err
                     }
                     s.Nodes = append(s.Nodes, node)
                 default:
                     return fmt.Errorf("unexpected type for raw node: %T", rawNode)
                 }
             }
         }
     }
 
     if rawEdges, hasEdges := compositeMap["edges"]; hasEdges {
         if typedRawEdges, typeOK := rawEdges.([]any); !typeOK {
             return fmt.Errorf("expected edges to be []any but got %T", rawEdges)
         } else {
-            for _, rawEdge := range typedRawEdges {
+            s.Edges = make([]edgeComposite, 0, len(typedRawEdges))
+            for _, rawEdge := range typedRawEdges {
                 switch typedNode := rawEdge.(type) {
                 case map[string]any:
                     var edge edgeComposite
                     if err := edge.FromMap(typedNode); err != nil {
                         return err
                     }
                     s.Edges = append(s.Edges, edge)
                 default:
                     return fmt.Errorf("unexpected type for raw edge: %T", rawEdge)
                 }
             }
         }
     }
 
     return nil
 }

18-35: Minor: castSlice error message is good; consider supporting []T inputs.

If upstream sometimes passes a concrete []map[string]any or []int16 rather than []any, the assert to []any will fail. Optional: expand castSlice to accept concrete []T via reflection or by handling common cases. Not required if inputs are normalized.
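
A minimal sketch of the reflection route; toAnySlice is an illustrative name, and the existing per-element casting would then run on the normalized []any:

package pg

import "reflect"

// toAnySlice widens castSlice's input handling: it normalizes any concrete
// slice (e.g. []int16 or []map[string]any) into []any.
func toAnySlice(value any) ([]any, bool) {
	if typed, isAnySlice := value.([]any); isAnySlice {
		return typed, true
	}

	reflected := reflect.ValueOf(value)
	if reflected.Kind() != reflect.Slice {
		return nil, false
	}

	normalized := make([]any, reflected.Len())
	for idx := 0; idx < reflected.Len(); idx++ {
		normalized[idx] = reflected.Index(idx).Interface()
	}

	return normalized, true
}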

database/pg/config.go (1)

5-7: Add brief doc on lifecycle/ownership of Pool.

Consider documenting whether the driver owns closing the pool or expects the caller to manage it. A single-line comment helps avoid double-close or leaks.

database/v1compat/errors.go (1)

20-23: Consider consistent error message formatting.

The error message format is good, but consider using a structured format that makes it easier to parse programmatically if needed.

-	return fmt.Errorf("driver error: %w - query: %s", driverErr, query)
+	return fmt.Errorf("driver error: %w (query: %s)", driverErr, query)
database/driver.go (2)

13-16: Consider using iota for Option constants.

Using iota would make the constants more maintainable and idiomatic.

 const (
-	OptionReadOnly  Option = 0
-	OptionReadWrite Option = 1
+	OptionReadOnly Option = iota
+	OptionReadWrite
 )

42-42: Consider adding documentation for QueryLogic.

The callback pattern is good, but would benefit from documentation explaining its purpose and usage patterns.

+// QueryLogic represents a function that performs database operations within a session or transaction context.
+// The function receives a configured Driver instance and should return an error if any operation fails.
 type QueryLogic func(ctx context.Context, driver Driver) error
database/pg/batch.go (3)

27-41: Consider using strings.Builder for better performance.

The current implementation is correct, but strings.Builder is more efficient than bytes.Buffer for string concatenation operations.

 type Int2ArrayEncoder struct {
-	buffer *bytes.Buffer
+	builder *strings.Builder
 }

 func NewInt2ArrayEncoder() Int2ArrayEncoder {
 	return Int2ArrayEncoder{
-		buffer: &bytes.Buffer{},
+		builder: &strings.Builder{},
 	}
 }

 func (s *Int2ArrayEncoder) Encode(values []int16) string {
-	s.buffer.Reset()
-	s.buffer.WriteRune('{')
+	s.builder.Reset()
+	s.builder.WriteRune('{')

 	for idx, value := range values {
 		if idx > 0 {
-			s.buffer.WriteRune(',')
+			s.builder.WriteRune(',')
 		}

-		s.buffer.WriteString(strconv.Itoa(int(value)))
+		s.builder.WriteString(strconv.Itoa(int(value)))
 	}

-	s.buffer.WriteRune('}')
-	return s.buffer.String()
+	s.builder.WriteRune('}')
+	return s.builder.String()
 }

65-81: Improve error context in Append method.

The error messages could be more descriptive to help with debugging.

 	if mappedKindIDs, err := schemaManager.AssertKinds(ctx, update.Node.Kinds); err != nil {
-		return fmt.Errorf("unable to map kinds %w", err)
+		return fmt.Errorf("unable to map node kinds %v: %w", update.Node.Kinds, err)
 	} else {
 		s.KindIDSlices = append(s.KindIDSlices, kindIDEncoder.Encode(mappedKindIDs))
 	}

 	if propertiesJSONB, err := pgsql.PropertiesToJSONB(update.Node.Properties); err != nil {
-		return err
+		return fmt.Errorf("failed to encode node properties to JSONB: %w", err)
 	} else {
 		s.Properties = append(s.Properties, propertiesJSONB)
 	}

200-232: Consider using a struct for the composite key instead of string concatenation.

The current string concatenation approach for deduplication keys works but could be more type-safe and potentially more efficient.

+type relationshipKey struct {
+	startID graph.ID
+	endID   graph.ID
+	kind    graph.Kind
+}
+
 type relationshipCreateBatchBuilder struct {
-	keyToEdgeID             map[string]uint64
+	keyToEdgeID             map[relationshipKey]uint64
 	relationshipUpdateBatch *relationshipCreateBatch
 	edgePropertiesIndex     map[uint64]int
 	edgePropertiesBatch     []*graph.Properties
 }

 func newRelationshipCreateBatchBuilder(size int) *relationshipCreateBatchBuilder {
 	return &relationshipCreateBatchBuilder{
-		keyToEdgeID:             map[string]uint64{},
+		keyToEdgeID:             map[relationshipKey]uint64{},
 		relationshipUpdateBatch: newRelationshipCreateBatch(size),
 		edgePropertiesIndex:     map[uint64]int{},
 	}
 }

 func (s *relationshipCreateBatchBuilder) Add(ctx context.Context, kindMapper KindMapper, edge *graph.Relationship) error {
-	keyBuilder := strings.Builder{}
-
-	keyBuilder.WriteString(edge.StartID.String())
-	keyBuilder.WriteString(edge.EndID.String())
-	keyBuilder.WriteString(edge.Kind.String())
-
-	key := keyBuilder.String()
+	key := relationshipKey{
+		startID: edge.StartID,
+		endID:   edge.EndID,
+		kind:    edge.Kind,
+	}

 	if existingPropertiesIdx, hasExisting := s.keyToEdgeID[key]; hasExisting {
database/neo4j/database_integration_test.go (1)

1-1: Gate the integration test behind a build tag and add a stub to avoid an empty test file

This file currently contains only a package declaration. If you intend to add integration tests later, consider gating it with an "integration" build tag and adding a skipped smoke test to document intent and prevent accidental inclusion in default test runs.

Apply this diff:

-package neo4j_test
+//go:build integration
+// +build integration
+
+package neo4j_test
+
+import "testing"
+
+func TestNeo4j_Integration_Smoke(t *testing.T) {
+	t.Skip("TODO: implement integration test using database/neo4j New(...) with a live Neo4j instance")
+}
database/v1compat/switch.go (1)

99-106: Clean up activeContexts entries when canceling to avoid stale entries during a switch

cancelInternalContexts currently cancels but does not remove entries from the map. While deferred calls in the callers will eventually retire these contexts, proactively clearing the map reduces retained references during a switch.

Apply this diff:

 func (s *DatabaseSwitch) cancelInternalContexts() {
 	s.ctxLock.Lock()
 	defer s.ctxLock.Unlock()
 
-	for _, doneFunc := range s.activeContexts {
-		doneFunc()
-	}
+	for ctx, doneFunc := range s.activeContexts {
+		doneFunc()
+		delete(s.activeContexts, ctx)
+	}
 }
database/v1compat/ops/traversal.go (2)

154-156: Rename “descendents” → “descendants” for clarity and consistency.

Minor spelling cleanup; also rename local “nextDescendent” to “nextDescendant”.

Apply this diff:

-        if descendents, err := nextTraversal(tx, next, plan.Direction, plan.BranchQuery, requireTraversalOrder, plan.ExpansionFilter); err != nil {
+        if descendants, err := nextTraversal(tx, next, plan.Direction, plan.BranchQuery, requireTraversalOrder, plan.ExpansionFilter); err != nil {
@@
-                for idx := 0; idx < len(descendents); idx++ {
-                    if nextDescendent := descendents[idx]; plan.DescentFilter(ctx, nextDescendent) {
-                        stack = append(stack, nextDescendent)
+                for idx := 0; idx < len(descendants); idx++ {
+                    if nextDescendant := descendants[idx]; plan.DescentFilter(ctx, nextDescendant) {
+                        stack = append(stack, nextDescendant)
                     }
                 }
             } else {
                 // No filter means descend into all potential paths
-                stack = append(stack, descendents...)
+                stack = append(stack, descendants...)
             }
@@
-            // If this node does not have descendents then it's a path terminal
+            // If this node does not have descendants then it's a path terminal

Also applies to: 168-176, 178-178


137-143: Inline ctx initialization to reduce churn.

Small readability improvement.

-    ctx = &TraversalContext{
-        LimitSkipTracker: LimitSkipTracker{
-            Limit: plan.Limit,
-            Skip:  plan.Skip,
-        },
-    }
+    ctx = &TraversalContext{LimitSkipTracker: LimitSkipTracker{Limit: plan.Limit, Skip: plan.Skip}}
database/pg/model/format.go (1)

27-47: LGTM; consider sanitization and identifier length constraints

The type shift to database.{Index,Constraint} is consistent with the schema changes. As a follow-up, consider normalizing/sanitizing Field and truncating/hashing to comply with PostgreSQL’s 63-byte identifier limit to avoid name collisions in long labels.
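
A hedged sketch of one way to respect that limit: keep a readable prefix and append a short hash of the full name so distinct long labels cannot collide. safeIdentifier and pgMaxIdentifierLen are illustrative names, not part of the package:

package model

import (
	"crypto/sha256"
	"encoding/hex"
)

const pgMaxIdentifierLen = 63 // PostgreSQL truncates identifiers at 63 bytes

// safeIdentifier truncates long index/constraint names while keeping them
// unique by appending a short hash of the original name.
func safeIdentifier(name string) string {
	if len(name) <= pgMaxIdentifierLen {
		return name
	}

	sum := sha256.Sum256([]byte(name))
	suffix := "_" + hex.EncodeToString(sum[:])[:8]

	return name[:pgMaxIdentifierLen-len(suffix)] + suffix
}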

database/neo4j/mapper.go (2)

12-29: Temporal coverage: ensure all Neo4j temporal types are handled

You handle Time, LocalTime, Date, LocalDateTime. Verify whether the driver also exposes Zoned/Offset variants (e.g., DateTime with timezone) that should be mapped. If present, extend asTime to cover them to avoid silent mapping failures.

Would you like me to extend asTime with the additional dbtype temporal variants after confirming the driver’s set?


31-56: Unify dbtype vs neo4j aliases for clarity

newNode/newRelationship/newPath accept neo4j.Node/Relationship/Path, but elsewhere you use the dbtype equivalents. Since the neo4j types are aliases for the dbtype types, consider standardizing on dbtype in signatures and removing the neo4j import to reduce cognitive overhead.

-import (
-    "time"
-
-    "github.com/neo4j/neo4j-go-driver/v5/neo4j"
-
-    "github.com/neo4j/neo4j-go-driver/v5/neo4j/dbtype"
-    "github.com/specterops/dawgs/graph"
-)
+import (
+    "time"
+    "github.com/neo4j/neo4j-go-driver/v5/neo4j/dbtype"
+    "github.com/specterops/dawgs/graph"
+)
@@
-func newNode(internalNode neo4j.Node) *graph.Node {
+func newNode(internalNode dbtype.Node) *graph.Node {
@@
-func newRelationship(internalRelationship neo4j.Relationship) *graph.Relationship {
+func newRelationship(internalRelationship dbtype.Relationship) *graph.Relationship {
@@
-func newPath(internalPath neo4j.Path) graph.Path {
+func newPath(internalPath dbtype.Path) graph.Path {
registry_integration_test.go (3)

11-11: Clean up commented code or document its purpose.

The commented import suggests incomplete migration or future work. Consider removing it or adding a TODO comment explaining its purpose.

-	//pg_v2 "github.com/specterops/dawgs/drivers/pg/v2"

28-31: Remove or extract commented PostgreSQL configuration to a separate test.

The commented code block suggests alternative driver testing. Consider removing it or creating a separate test function for PostgreSQL v2 driver testing.

-	//graphDB, err := v2.Open(ctx, pg_v2.DriverName, v2.Config{
-	//	GraphQueryMemoryLimit: size.Gibibyte * 1,
-	//	ConnectionString:      "postgresql://postgres:postgres@localhost:5432/bhe",
-	//})

67-79: Remove or document the commented Cypher query example.

This commented block appears to demonstrate an alternative query pattern. Consider moving it to documentation or removing it to keep the test focused.

-	//require.NoError(t, graphDB.Transaction(ctx, func(ctx context.Context, driver v2.Driver) error {
-	//	builder := v2.Query().Create(
-	//		v2.Node().NodePattern(graph.Kinds{graph.StringKind("A")}, cypher.NewParameter("props", map[string]any{
-	//			"name": "1234",
-	//		})),
-	//	)
-	//
-	//	if preparedQuery, err := builder.Build(); err != nil {
-	//		return err
-	//	} else {
-	//		return driver.CypherQuery(ctx, preparedQuery.Query, preparedQuery.Parameters).Close(ctx)
-	//	}
-	//}))
database/neo4j/neo4j.go (2)

26-42: Add connection validation and improve error messages.

Consider validating the connection after creating the driver and providing more context in error messages.

 func newNeo4jDB(ctx context.Context, cfg dawgs.Config) (database.Instance, error) {
 	if connectionURL, err := url.Parse(cfg.ConnectionString); err != nil {
-		return nil, err
+		return nil, fmt.Errorf("failed to parse connection string: %w", err)
 	} else if connectionURL.Scheme != Neo4jConnectionScheme {
 		return nil, fmt.Errorf("expected connection URL scheme %s for Neo4J but got %s", Neo4jConnectionScheme, connectionURL.Scheme)
 	} else if password, isSet := connectionURL.User.Password(); !isSet {
 		return nil, fmt.Errorf("no password provided in connection URL")
 	} else {
 		boltURL := fmt.Sprintf("bolt://%s:%s", connectionURL.Hostname(), connectionURL.Port())
 
 		if internalDriver, err := neo4j.NewDriverWithContext(boltURL, neo4j.BasicAuth(connectionURL.User.Username(), password, "")); err != nil {
 			return nil, fmt.Errorf("unable to connect to Neo4J: %w", err)
 		} else {
+			// Verify the connection is valid
+			if err := internalDriver.VerifyConnectivity(ctx); err != nil {
+				internalDriver.Close(ctx)
+				return nil, fmt.Errorf("failed to verify Neo4J connectivity: %w", err)
+			}
 			return New(internalDriver, cfg), nil
 		}
 	}
 }

34-34: Consider making the bolt URL scheme configurable.

The hardcoded "bolt://" scheme might limit flexibility for different Neo4j deployment configurations (e.g., bolt+s, neo4j+s).

You could derive the bolt scheme from the input connection scheme or make it configurable to support secure connections:

-		boltURL := fmt.Sprintf("bolt://%s:%s", connectionURL.Hostname(), connectionURL.Port())
+		// Map neo4j schemes to bolt schemes
+		boltScheme := "bolt"
+		if connectionURL.Scheme == "neo4j+s" || connectionURL.Scheme == "neo4j+ssc" {
+			boltScheme = "bolt+s"
+		}
+		boltURL := fmt.Sprintf("%s://%s:%s", boltScheme, connectionURL.Hostname(), connectionURL.Port())
registry.go (1)

12-14: Consider enriching ErrDriverMissing context.

Using a sentinel is fine, but wrapping it with driverName where appropriate improves debuggability while preserving errors.Is semantics.

database/pg/query/format.go (1)

14-34: IndexType migration looks correct and aligns with new schema model.

Switching to database.IndexType and mapping to pg index kinds is consistent. Returning "NOT SUPPORTED" for unsupported types adheres to the team’s preference to fail at execution time with full query context.

Consider adding a short comment documenting that "NOT SUPPORTED" is intentional to defer error handling to the SQL execution layer (see team learning).

database/pg/driver.go (2)

102-105: Prefer context.Background over context.TODO in Mapper.

Mapper() is a public API surface; TODO is intended for temporary placeholders. Background is more appropriate here.

 func (s *dawgsDriver) Mapper() graph.ValueMapper {
-	return newValueMapper(context.TODO(), s.schemaManager)
+	return newValueMapper(context.Background(), s.schemaManager)
 }

188-191: Avoid mutating shared driver state in WithGraph; return a scoped copy.

Mutating s.targetGraph on the receiver makes concurrent use ambiguous and error-prone. Prefer returning a shallow copy with the target graph set.

I can propose a small, safe copy pattern here if you want to make this driver trivially concurrency-safe with respect to graph targeting.
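
For reference, the copy pattern could look roughly like the sketch below. This is an assumption-laden sketch: it presumes dawgsDriver is safe to copy by value, that only targetGraph changes per scope, and that WithGraph's real signature matches; adjust to the actual types.

// WithGraph returns a shallow copy of the driver scoped to the target graph,
// leaving the receiver untouched for concurrent callers. (Sketch only.)
func (s *dawgsDriver) WithGraph(targetGraph database.Graph) database.Driver {
	scoped := *s
	scoped.targetGraph = targetGraph

	return &scoped
}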

query/util.go (2)

26-36: Early guard against mixing node/relationship identifiers is good; consider hinting which identifiers conflicted.

The error is generic. Including which identifiers were seen would improve debuggability.

If helpful, I can sketch a version that surfaces the specific conflicting identifiers.
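
For example, the error could enumerate both sides. mixedIdentifierError below is a hypothetical helper sketch, not the package's API:

package query

import (
	"fmt"
	"sort"
	"strings"
)

// mixedIdentifierError names the offending identifiers instead of returning a
// generic "mixed node and relationship identifiers" error.
func mixedIdentifierError(nodeIdentifiers, relationshipIdentifiers map[string]struct{}) error {
	format := func(set map[string]struct{}) string {
		names := make([]string, 0, len(set))

		for name := range set {
			names = append(names, name)
		}

		sort.Strings(names)
		return strings.Join(names, ", ")
	}

	return fmt.Errorf("query mixes node identifiers (%s) with relationship identifiers (%s)",
		format(nodeIdentifiers), format(relationshipIdentifiers))
}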


122-131: Remove unused state in identifierExtractor.

inDelete, inUpdate, inCreate, inWhere are declared but unused, adding noise.

 type identifierExtractor struct {
 	walk.Visitor[cypher.SyntaxNode]
 
 	seen *identifierSet
-
-	inDelete bool
-	inUpdate bool
-	inCreate bool
-	inWhere  bool
 }
database/pg/database.go (1)

86-90: Consider improving transaction rollback error handling

The deferred rollback will always attempt to execute, even after a successful commit. While Neo4j and most databases handle this gracefully by returning a "no transaction" error that's logged at debug level, this pattern could be improved.

Consider tracking whether the transaction was successfully committed to avoid unnecessary rollback attempts:

 		} else {
+			committed := false
 			defer func() {
-				if err := transaction.Rollback(ctx); err != nil {
-					slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
+				if !committed {
+					if err := transaction.Rollback(ctx); err != nil {
+						slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
+					}
 				}
 			}()
 
 			if err := driverLogic(ctx, newInternalDriver(transaction, s.schemaManager)); err != nil {
 				return err
 			}
 
-			return transaction.Commit(ctx)
+			if err := transaction.Commit(ctx); err != nil {
+				return err
+			}
+			committed = true
+			return nil
 		}
database/v1compat/node.go (1)

118-141: Consider returning result.Error() consistently in First() method

The First() method returns nil for the error on line 133, but other similar methods in the codebase return result.Error() to capture any errors that occurred during iteration.

For consistency with the Count() method and to capture any iteration errors:

-			return &node, nil
+			return &node, result.Error()
database/neo4j/driver.go (2)

38-50: Consider adding nil check for result before accessing it.

While the error check prevents execution when s.result is nil, adding an explicit nil check would make the code more defensive.

 func (s *sessionResult) HasNext(ctx context.Context) bool {
 	if s.err != nil {
 		return false
 	}
+	if s.result == nil {
+		s.err = fmt.Errorf("result is nil")
+		return false
+	}
 
 	hasNext := s.result.NextRecord(ctx, &s.nextRecord)

176-182: Consider extracting common logic in Explain and Profile methods.

Both methods share identical error handling and formatting logic with only a prefix difference.

Extract the common logic to reduce duplication:

+func (s *dawgsDriver) execWithPrefix(ctx context.Context, prefix string, query *cypher.RegularQuery, parameters map[string]any) database.Result {
+	if cypherQuery, err := format.RegularQuery(query, false); err != nil {
+		return database.NewErrorResult(err)
+	} else {
+		return s.exec(ctx, prefix+cypherQuery, parameters)
+	}
+}
+
 func (s *dawgsDriver) Explain(ctx context.Context, query *cypher.RegularQuery, parameters map[string]any) database.Result {
-	if cypherQuery, err := format.RegularQuery(query, false); err != nil {
-		return database.NewErrorResult(err)
-	} else {
-		return s.exec(ctx, "explain "+cypherQuery, parameters)
-	}
+	return s.execWithPrefix(ctx, "explain ", query, parameters)
 }
 
 func (s *dawgsDriver) Profile(ctx context.Context, query *cypher.RegularQuery, parameters map[string]any) database.Result {
-	if cypherQuery, err := format.RegularQuery(query, false); err != nil {
-		return database.NewErrorResult(err)
-	} else {
-		return s.exec(ctx, "profile "+cypherQuery, parameters)
-	}
+	return s.execWithPrefix(ctx, "profile ", query, parameters)
 }
database/neo4j/schema.go (2)

151-190: Consider combining index removal and addition into a single transaction.

Currently, index removals and additions are performed in separate transactions. For better atomicity, consider combining them into a single transaction to prevent partial schema updates.

 func (s *SchemaManager) assertIndexes(ctx context.Context, indexesToRemove []string, indexesToAdd map[string]neo4jIndex) error {
-	if err := s.transaction(ctx, func(transaction neo4j.ExplicitTransaction) error {
+	return s.transaction(ctx, func(transaction neo4j.ExplicitTransaction) error {
+		// Remove indexes first
 		for _, indexToRemove := range indexesToRemove {
 			slog.InfoContext(ctx, fmt.Sprintf("Removing index %s", indexToRemove))
 
 			if result, err := transaction.Run(ctx, strings.Replace(dropPropertyIndexStatement, "$name", indexToRemove, 1), nil); err != nil {
 				return err
 			} else if _, err := result.Consume(ctx); err != nil {
 				return err
 			} else if err := result.Err(); err != nil {
 				return err
 			}
 		}
 
-		return nil
-	}); err != nil {
-		return err
-	}
-
-	return s.transaction(ctx, func(transaction neo4j.ExplicitTransaction) error {
+		// Add new indexes
 		for indexName, indexToAdd := range indexesToAdd {
 			slog.InfoContext(ctx, fmt.Sprintf("Adding index %s to labels %s on properties %s using %s", indexName, indexToAdd.kind.String(), indexToAdd.Field, indexTypeProvider(indexToAdd.Type)))
 
 			if result, err := transaction.Run(ctx, createPropertyIndexStatement, map[string]interface{}{
 				"name":       indexName,
 				"labels":     []string{indexToAdd.kind.String()},
 				"properties": []string{indexToAdd.Field},
 				"provider":   indexTypeProvider(indexToAdd.Type),
 			}); err != nil {
 				return err
 			} else if _, err := result.Consume(ctx); err != nil {
 				return err
 			} else if err := result.Err(); err != nil {
 				return err
 			}
 		}
 
 		return nil
 	})
 }

249-250: Consider documenting why empty labels are skipped.

The comment mentions "neo4j 4.4+ which creates a weird index by default" but doesn't explain what this index is or why it's problematic.

-			// Need this for neo4j 4.4+ which creates a weird index by default
+			// Skip the default token lookup index created by Neo4j 4.4+ which has no labels
+			// This index is automatically managed by Neo4j and should not be modified
 			if len(labels) == 0 {
 				continue
 			}
database/v1compat/types.go (1)

129-148: Size tracking in Put method could lead to incorrect totals.

When updating an existing value, the old value's size is subtracted before being overwritten. However, if the value was modified elsewhere between insertion and update, the size calculation might be incorrect.

Consider storing the size with each value to ensure accurate tracking:

 type IndexedSlice[K comparable, V any] struct {
 	index  map[K]int
 	values []V
+	sizes  []size.Size
 	size   size.Size
 }
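
A hedged sketch of how Put could keep the proposed per-entry sizes consistent; it assumes size.Of reports a stored value's size, as it is used elsewhere in this package:

func (s *IndexedSlice[K, V]) Put(key K, value V) {
	valueSize := size.Of(value)

	if idx, exists := s.index[key]; exists {
		// Replace in place: retire the old entry's recorded size before overwriting.
		s.size -= s.sizes[idx]
		s.values[idx] = value
		s.sizes[idx] = valueSize
	} else {
		s.index[key] = len(s.values)
		s.values = append(s.values, value)
		s.sizes = append(s.sizes, valueSize)
	}

	s.size += valueSize
}
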
database/v1compat/wrapper.go (3)

26-64: Consider adding support for additional slice types.

The downcastPropertyFields function handles common primitive slice types but misses some potentially useful ones like []byte (for binary data) and []interface{} (for mixed-type arrays).

Consider adding these cases:

 func downcastPropertyFields(props *graph.Properties) {
 	for key, value := range props.MapOrEmpty() {
 		switch typedValue := value.(type) {
+		case []byte:
+			// Keep []byte as-is since it's commonly used for binary data
+			// and may have special handling downstream
+			continue
 		case []string:
 			props.Map[key] = typedSliceToAnySlice(typedValue)

91-99: Add validation for required v1 compatibility methods.

The type assertion panics if the provided instance doesn't implement BackwardCompatibleInstance, but it would be helpful to validate that all required methods are properly implemented at construction time.

Consider adding a validation step:

 func V1Wrapper(v2DB database.Instance) Database {
 	if v1CompatibleInstanceRef, typeOK := v2DB.(BackwardCompatibleInstance); !typeOK {
 		panic(fmt.Sprintf("type %T is not a v1CompatibleInstance", v2DB))
 	} else {
+		// Validate that required methods are implemented
+		if v1CompatibleInstanceRef.RefreshKinds == nil || v1CompatibleInstanceRef.Raw == nil {
+			panic(fmt.Sprintf("type %T does not fully implement BackwardCompatibleInstance", v2DB))
+		}
 		return &v1Wrapper{
 			v2DB: v1CompatibleInstanceRef,
 		}
 	}
 }

255-258: Handle error from CreateNode properly.

The CreateNode method for batch operations ignores the returned node ID, which might be needed for subsequent operations.

Consider preserving the created node ID if needed:

 func (s *driverBatchWrapper) CreateNode(node *graph.Node) error {
-	_, err := s.driver.CreateNode(s.ctx, node)
+	nodeID, err := s.driver.CreateNode(s.ctx, node)
+	if err == nil && node != nil {
+		node.ID = nodeID
+	}
 	return err
 }
database/pg/query/query.go (6)

26-33: Consider validating args parameter.

The exec method appends the args map to queryArgs when non-nil and non-empty, but doesn't validate if the args are already of type pgx.NamedArgs.

Consider adding validation:

 func (s Query) exec(ctx context.Context, statement string, args map[string]any, queryArgs ...any) error {
 	if args != nil && len(args) > 0 {
+		// Wrap in pgx.NamedArgs if not already wrapped
 		queryArgs = append(queryArgs, pgx.NamedArgs(args))
 	}
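
If the goal is to avoid double-wrapping, a hedged sketch of the check could look like this; appendNamedArgs is an illustrative helper, and since pgx.NamedArgs is a map type in pgx/v5 the assertion is cheap:

package query

import "github.com/jackc/pgx/v5"

// appendNamedArgs only wraps args when the caller has not already supplied a
// pgx.NamedArgs value among the query arguments.
func appendNamedArgs(queryArgs []any, args map[string]any) []any {
	if len(args) == 0 {
		return queryArgs
	}

	for _, existing := range queryArgs {
		if _, alreadyNamed := existing.(pgx.NamedArgs); alreadyNamed {
			return queryArgs
		}
	}

	return append(queryArgs, pgx.NamedArgs(args))
}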

43-76: Improve error message clarity for regex mismatches.

The error message on line 53 could be more descriptive about what pattern was expected vs. what was found.

Enhance the error message:

-				return graphPartition, fmt.Errorf("regex mis-match on schema definition: %s", tableIndexDefinition)
+				return graphPartition, fmt.Errorf("unexpected index format in schema definition (expected property or column index): %s", tableIndexDefinition)

121-148: Improve error handling consistency.

The method returns pgx.ErrNoRows when no rows are found (line 136), but this might not be the most descriptive error for the caller.

Consider wrapping the error with more context:

-			return model.Graph{}, pgx.ErrNoRows
+			return model.Graph{}, fmt.Errorf("graph %q not found: %w", name, pgx.ErrNoRows)

227-241: Remove unnecessary nil return.

Line 232 checks the error from exec only to return nil afterwards; the method can simply return the result of exec directly.

Simplify the method:

 func (s Query) CreateSchema(ctx context.Context) error {
-	if err := s.exec(ctx, sqlSchemaUp, nil); err != nil {
-		return err
-	}
-
-	return nil
+	return s.exec(ctx, sqlSchemaUp, nil)
 }

Apply the same simplification to DropSchema:

 func (s Query) DropSchema(ctx context.Context) error {
-	if err := s.exec(ctx, sqlSchemaDown, nil); err != nil {
-		return err
-	}
-
-	return nil
+	return s.exec(ctx, sqlSchemaDown, nil)
 }

282-306: Handle empty result case explicitly.

The method converts the definition to lowercase (line 301) but doesn't validate if the definition is empty.

Add validation:

 		for result.Next() {
 			if err := result.Scan(&nextDefinition); err != nil {
 				return nil, err
 			}
+			if nextDefinition == "" {
+				continue
+			}
 
 			definitions = append(definitions, strings.ToLower(nextDefinition))
 		}

374-457: Complex logic could benefit from helper methods.

The AssertGraph method is quite long (83 lines) with deeply nested logic for managing indexes and constraints. Consider extracting the index/constraint comparison logic into helper methods.

Extract helper methods for clarity:

+func (s Query) compareNodeIndexes(present, required model.GraphPartition, changeSet *model.IndexChangeSet) {
+	// Extract lines 384-413 logic here
+}
+
+func (s Query) compareEdgeIndexes(present, required model.GraphPartition, changeSet *model.IndexChangeSet) {
+	// Extract lines 418-447 logic here
+}
+
 func (s Query) AssertGraph(ctx context.Context, schema database.Graph, definition model.Graph) (model.Graph, error) {
 	// ... initialization code ...
 	
 	if presentNodePartition, err := s.describeGraphPartition(ctx, definition.Partitions.Node.Name); err != nil {
 		return model.Graph{}, err
 	} else {
-		// Lines 384-413
+		s.compareNodeIndexes(presentNodePartition, requiredNodePartition, &indexChangeSet)
 	}
database/pg/manager.go (1)

251-286: Consider adding error context in assertKinds.

The error from defineKinds is returned directly without additional context about which kinds failed to be defined.

Add context to the error:

 		if err := s.transaction(ctx, func(transaction pgx.Tx) error {
 			// Previously calls like this required - pgx.QueryExecModeSimpleProtocol while that seems to no longer be
 			// the case, this comment has been left here in case the issue reappears
 			return s.defineKinds(ctx, transaction, missingKinds)
 		}); err != nil {
-			return nil, err
+			return nil, fmt.Errorf("failed to define missing kinds %v: %w", missingKinds.Strings(), err)
 		}
traversal/traversal.go (1)

261-271: Add validation for Instance fields.

The Instance struct has a memoryLimit field but no validation to ensure it's a reasonable value (not negative).

Add validation in the constructor:

 func New(db database.Instance, numParallelWorkers int) Instance {
+	if numParallelWorkers <= 0 {
+		numParallelWorkers = 1
+	}
 	return Instance{
 		db:                 db,
 		numParallelWorkers: numParallelWorkers,
 	}
 }
database/v1compat/query.go (1)

42-77: Add context to resultWrapper struct.

The resultWrapper stores a context but never validates if it's nil or cancelled before using it.

Consider adding validation:

 func (s resultWrapper) Next() bool {
+	if s.ctx == nil {
+		return false
+	}
 	return s.result.HasNext(s.ctx)
 }

Apply similar checks to other methods that use the context.

database/v1compat/traversal/traversal.go (1)

509-509: Consider making debug logs conditional

The debug logging statements in the FilteredSkipLimit function could impact performance when processing large datasets. Consider making these logs conditional based on a debug flag or log level check.

-				if skip == 0 || shouldCollect() {
+				collected := skip == 0 || shouldCollect()
+				if collected {
 					// If we should collect this result, check to see if we're already at a limit for the number of results
 					if limit > 0 && atLimit() {
-						slog.Debug(fmt.Sprintf("At collection limit, rejecting path: %s", graph.FormatPathSegment(next)))
+						if slog.Default().Enabled(nil, slog.LevelDebug) {
+							slog.Debug(fmt.Sprintf("At collection limit, rejecting path: %s", graph.FormatPathSegment(next)))
+						}
 						return false
 					}
 
-					slog.Debug(fmt.Sprintf("Collected path: %s", graph.FormatPathSegment(next)))
+					if slog.Default().Enabled(nil, slog.LevelDebug) {
+						slog.Debug(fmt.Sprintf("Collected path: %s", graph.FormatPathSegment(next)))
+					}
 					visitorFilter(next)
 				} else {
-					slog.Debug(fmt.Sprintf("Skipping path visit: %s", graph.FormatPathSegment(next)))
+					if slog.Default().Enabled(nil, slog.LevelDebug) {
+						slog.Debug(fmt.Sprintf("Skipping path visit: %s", graph.FormatPathSegment(next)))
+					}
 				}
database/v1compat/database.go (1)

90-90: Consider documenting the expected return type for GraphQueryMemoryLimit

The GraphQueryMemoryLimit() method returns size.Size but there's no documentation about what value should be returned when no limit is set (e.g., 0, negative value, or a specific constant).

Add documentation to clarify the expected behavior:

+	// GraphQueryMemoryLimit returns the maximum memory that can be consumed by a graph query.
+	// Returns 0 if no limit is set.
 	GraphQueryMemoryLimit() size.Size
query/query.go (2)

415-416: Fix typo in variable name

There's a typo in the variable name allShorestPathsQuery - it should be allShortestPathsQuery.

-	allShorestPathsQuery bool
+	allShortestPathsQuery bool

Also update the setter method:

 func (s *builder) WithAllShortestPaths() QueryBuilder {
-	s.allShorestPathsQuery = true
+	s.allShortestPathsQuery = true
 	return s
 }

And update the usage in line 761:

-		if err := prepareRelationshipPattern(match, seenIdentifiers, relationshipKinds, s.shortestPathQuery, s.allShorestPathsQuery); err != nil {
+		if err := prepareRelationshipPattern(match, seenIdentifiers, relationshipKinds, s.shortestPathQuery, s.allShortestPathsQuery); err != nil {

505-509: Return early for better readability

The early return pattern would improve readability here.

 func (s *builder) buildCreates(singlePartQuery *cypher.SinglePartQuery) error {
-	// Early exit to hide this part of the business logic while handling queries with no create statements
 	if len(s.creates) == 0 {
 		return nil
 	}
database/v1compat/query/model.go (1)

335-343: Simplify InIDs function implementation

The type switch with generics seems unnecessarily complex here. The function could be simplified.

 func InIDs[T *cypher.FunctionInvocation | *cypher.Variable](reference T, ids ...graph.ID) *cypher.Comparison {
-	switch any(reference).(type) {
-	case *cypher.FunctionInvocation:
+	// Check if reference is already a FunctionInvocation (like id())
+	if _, isFuncInvocation := any(reference).(*cypher.FunctionInvocation); isFuncInvocation {
 		return cypher.NewComparison(reference, cypher.OperatorIn, Parameter(ids))
-
-	default:
-		return cypher.NewComparison(Identity(any(reference).(*cypher.Variable)), cypher.OperatorIn, Parameter(ids))
 	}
+	// Otherwise, it's a Variable and we need to wrap it with Identity
+	return cypher.NewComparison(Identity(any(reference).(*cypher.Variable)), cypher.OperatorIn, Parameter(ids))
 }

Comment on lines +3 to +10
import (
	"context"
	"errors"

	"github.com/specterops/dawgs/database"
	"github.com/specterops/dawgs/database/v1compat"
	"github.com/specterops/dawgs/util/size"
)

🛠️ Refactor suggestion

Add required imports for concurrency-safe registry and better error messages.

To support mutex locking and formatted error wrapping in the registry, add sync and fmt to imports.

 import (
 	"context"
 	"errors"
+	"fmt"
+	"sync"
 
 	"github.com/specterops/dawgs/database"
 	"github.com/specterops/dawgs/database/v1compat"
 	"github.com/specterops/dawgs/util/size"
 )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (before → after)

import (
	"context"
	"errors"

	"github.com/specterops/dawgs/database"
	"github.com/specterops/dawgs/database/v1compat"
	"github.com/specterops/dawgs/util/size"
)

import (
	"context"
	"errors"
	"fmt"
	"sync"

	"github.com/specterops/dawgs/database"
	"github.com/specterops/dawgs/database/v1compat"
	"github.com/specterops/dawgs/util/size"
)
🤖 Prompt for AI Agents
In registry.go around lines 3 to 10, the import block is missing packages
required for a concurrency-safe registry and improved error messages; add "sync"
(for mutex locking) and "fmt" (for formatted error wrapping) to the import list
so subsequent code can use sync.Mutex/RWMutex and fmt.Errorf for wrapped errors.

Comment on lines 18 to 25
var availableDrivers = map[string]DriverConstructor{}

func Register(driverName string, constructor DriverConstructor) {
	availableDrivers[driverName] = constructor
}

🛠️ Refactor suggestion

Make the registry concurrency-safe and prevent silent overwrites.

availableDrivers is a plain map with no synchronization or duplicate checks. Register and Open may race in init-heavy code; duplicates silently overwrite.

Apply this diff to add locking and avoid accidental double-registration:

-var availableDrivers = map[string]DriverConstructor{}
+var (
+	availableDrivers = map[string]DriverConstructor{}
+	registryMu       sync.RWMutex
+)
 
 func Register(driverName string, constructor DriverConstructor) {
-	availableDrivers[driverName] = constructor
+	registryMu.Lock()
+	defer registryMu.Unlock()
+
+	if _, exists := availableDrivers[driverName]; exists {
+		panic(fmt.Sprintf("driver %q already registered", driverName))
+	}
+	availableDrivers[driverName] = constructor
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In registry.go around lines 18 to 22, the global availableDrivers map is not
concurrency-safe and allows silent overwrites; add a package-level sync.RWMutex
(e.g., driversMu) and use driversMu.Lock()/Unlock() in Register to protect
writes, check if the driverName already exists and abort (panic or log.Fatal) to
prevent double-registration, and use driversMu.RLock()/RUnlock() in Open (and
any other readers) to safely read from availableDrivers concurrently.

Comment on lines +33 to +45
func Open(ctx context.Context, driverName string, config Config) (database.Instance, error) {
if driverConstructor, hasDriver := availableDrivers[driverName]; !hasDriver {
return nil, ErrDriverMissing
} else {
return driverConstructor(ctx, config)
}
}

🛠️ Refactor suggestion

Wrap missing-driver errors with context and guard lookup with RLock.

This preserves the sentinel (errors.Is(err, ErrDriverMissing)) while adding the driver name and prevents data races.

 func Open(ctx context.Context, driverName string, config Config) (database.Instance, error) {
-	if driverConstructor, hasDriver := availableDrivers[driverName]; !hasDriver {
-		return nil, ErrDriverMissing
-	} else {
-		return driverConstructor(ctx, config)
-	}
+	registryMu.RLock()
+	driverConstructor, hasDriver := availableDrivers[driverName]
+	registryMu.RUnlock()
+
+	if !hasDriver {
+		return nil, fmt.Errorf("%w: %s", ErrDriverMissing, driverName)
+	}
+
+	return driverConstructor(ctx, config)
 }
📝 Committable suggestion


Suggested change
func Open(ctx context.Context, driverName string, config Config) (database.Instance, error) {
if driverConstructor, hasDriver := availableDrivers[driverName]; !hasDriver {
return nil, ErrDriverMissing
} else {
return driverConstructor(ctx, config)
}
}
func Open(ctx context.Context, driverName string, config Config) (database.Instance, error) {
registryMu.RLock()
driverConstructor, hasDriver := availableDrivers[driverName]
registryMu.RUnlock()
if !hasDriver {
return nil, fmt.Errorf("%w: %s", ErrDriverMissing, driverName)
}
return driverConstructor(ctx, config)
}
🤖 Prompt for AI Agents
In registry.go around lines 33 to 39, the lookup of availableDrivers must be
protected by a read lock and missing-driver errors should be wrapped with the
driver name while preserving the sentinel. Use the registry RWMutex RLock()
before checking availableDrivers[driverName], capture the constructor into a
local variable, RUnlock() immediately, then call the constructor outside the
lock; if the driver is missing return fmt.Errorf("driver %s: %w", driverName,
ErrDriverMissing) so errors.Is still works.

@zinic zinic force-pushed the v2 branch 4 times, most recently from ebcef82 to 90a694e on August 14, 2025 01:26

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

♻️ Duplicate comments (13)
cypher/models/pgsql/format/format.go (2)

74-76: Good fix: preserve sub-second precision for time values

Switching to RFC3339Nano and casting to timestamptz avoids precision loss and is accepted by Postgres input parsing.


122-125: Escape single quotes in strings; consider a tiny helper to centralize escaping

You correctly fixed the injection/broken-SQL risk by doubling single quotes. To avoid repeating this logic (and to keep it discoverable for future callers), consider extracting it to a helper.

Apply within the selected lines:

-        // Double single quotes per SQL string literal rules
-        builder.Write("'", strings.ReplaceAll(typedValue, "'", "''"), "'")
+        // Double single quotes per SQL string literal rules
+        builder.Write("'", escapeSQLString(typedValue), "'")

Add once (outside this hunk):

// escapeSQLString doubles single quotes per SQL string literal rules.
// standard_conforming_strings is assumed ON (default).
func escapeSQLString(s string) string {
    return strings.ReplaceAll(s, "'", "''")
}
traversal/traversal.go (2)

210-236: Fetch direction reuse across expansion steps is correct here

Retaining a single fetchDirection (reverse of the current expansion direction) when stepping into the next expansion is consistent with the intended traversal semantics for this codepath.


152-156: Compile-time type mismatch: fetchFunc expects single criteria but callers pass a slice

currentExpansion.criteria and nextExpansion.criteria are []cypher.SyntaxNode, but fetchFunc takes cypher.SyntaxNode. This won’t compile and also blocks passing multiple criteria.

Apply this diff to accept multiple criteria and preserve existing call sites:

-        fetchFunc = func(criteria cypher.SyntaxNode, direction graph.Direction) error {
-            var (
-                queryBuilder = query.New()
-                allCriteria  = []cypher.SyntaxNode{criteria}
-            )
+        fetchFunc = func(criteria []cypher.SyntaxNode, direction graph.Direction) error {
+            queryBuilder := query.New()
+            allCriteria := criteria
database/pg/batch.go (2)

361-369: No-op early return on empty delete batch

Avoid executing a delete with an empty input.

 func (s *dawgsDriver) DeleteNodes(ctx context.Context, batch []graph.ID) error {
+    if len(batch) == 0 {
+        return nil
+    }
     if result, err := s.internalConn.Query(ctx, deleteNodeWithIDStatement, batch); err != nil {
         return err
     } else {
         result.Close()
     }

     return nil
 }

371-379: No-op early return on empty delete batch

Apply the same guard to relationship deletes.

 func (s *dawgsDriver) DeleteRelationships(ctx context.Context, batch []graph.ID) error {
+    if len(batch) == 0 {
+        return nil
+    }
     if result, err := s.internalConn.Query(ctx, deleteEdgeWithIDStatement, batch); err != nil {
         return err
     } else {
         result.Close()
     }

     return nil
 }
database/pg/pg.go (1)

85-96: Support both value and pointer forms of DriverConfig.Config

Handle pointer-based configs to avoid silent fallthrough when users pass *Config.

-    dawgs.Register(DriverName, func(ctx context.Context, cfg dawgs.Config) (database.Instance, error) {
-        if cfg.DriverConfig != nil {
-            if pgConfig, typeOK := cfg.DriverConfig.(Config); typeOK && pgConfig.Pool != nil {
-                return New(pgConfig.Pool, cfg), nil
-            }
-        }
+    dawgs.Register(DriverName, func(ctx context.Context, cfg dawgs.Config) (database.Instance, error) {
+        if cfg.DriverConfig != nil {
+            switch c := cfg.DriverConfig.(type) {
+            case Config:
+                if c.Pool != nil {
+                    return New(c.Pool, cfg), nil
+                }
+            case *Config:
+                if c != nil && c.Pool != nil {
+                    return New(c.Pool, cfg), nil
+                }
+            }
+        }
registry.go (2)

3-10: Implement the concurrency-safe registry with proper error handling.

Based on the previous review comments, the registry still lacks synchronization for concurrent access and doesn't prevent silent overwrites. This could lead to race conditions and unexpected behavior when drivers are registered from multiple goroutines.

Add the missing imports and implement the suggested concurrency protection:

 import (
 	"context"
 	"errors"
+	"fmt"
+	"sync"
 
 	"github.com/specterops/dawgs/database"
 	"github.com/specterops/dawgs/database/v1compat"
 	"github.com/specterops/dawgs/util/size"
 )

And protect the registry map with a mutex:

-var availableDrivers = map[string]DriverConstructor{}
+var (
+	availableDrivers = map[string]DriverConstructor{}
+	registryMu       sync.RWMutex
+)
 
 func Register(driverName string, constructor DriverConstructor) {
-	availableDrivers[driverName] = constructor
+	registryMu.Lock()
+	defer registryMu.Unlock()
+
+	if _, exists := availableDrivers[driverName]; exists {
+		panic(fmt.Sprintf("driver %q already registered", driverName))
+	}
+	availableDrivers[driverName] = constructor
 }

33-39: Add read lock protection and wrap errors with context.

The Open function accesses the shared availableDrivers map without synchronization, which could lead to data races. Also, the error should include the driver name for better debugging.

 func Open(ctx context.Context, driverName string, config Config) (database.Instance, error) {
-	if driverConstructor, hasDriver := availableDrivers[driverName]; !hasDriver {
-		return nil, ErrDriverMissing
-	} else {
-		return driverConstructor(ctx, config)
-	}
+	registryMu.RLock()
+	driverConstructor, hasDriver := availableDrivers[driverName]
+	registryMu.RUnlock()
+
+	if !hasDriver {
+		return nil, fmt.Errorf("%w: %s", ErrDriverMissing, driverName)
+	}
+
+	return driverConstructor(ctx, config)
 }
database/v1compat/wrapper.go (3)

84-90: Fix data race on s.schema; lock around all reads/writes and avoid holding lock across v2 calls

SetDefaultGraph and AssertSchema mutate the shared s.schema without synchronization. Guard with a mutex and avoid holding the lock while invoking s.v2DB.AssertSchema to prevent lock contention/deadlocks. This was raised previously and still applies.

@@
 type v1Wrapper struct {
     schema         *database.Schema
     v2DB           BackwardCompatibleInstance
     writeFlushSize int
     batchWriteSize int
+    mu             sync.RWMutex
 }
@@
 func (s *v1Wrapper) AssertSchema(ctx context.Context, dbSchema database.Schema) error {
-    s.schema = &dbSchema
-    return s.v2DB.AssertSchema(ctx, dbSchema)
+    s.mu.Lock()
+    s.schema = &dbSchema
+    localCopy := *s.schema
+    s.mu.Unlock()
+    return s.v2DB.AssertSchema(ctx, localCopy)
 }
@@
 func (s *v1Wrapper) SetDefaultGraph(ctx context.Context, graphSchema database.Graph) error {
-    if s.schema != nil {
-        s.schema.GraphSchemas[graphSchema.Name] = graphSchema
-        s.schema.DefaultGraphName = graphSchema.Name
-    } else {
-        s.schema = &database.Schema{
-            GraphSchemas: map[string]database.Graph{
-                graphSchema.Name: graphSchema,
-            },
-            DefaultGraphName: graphSchema.Name,
-        }
-    }
-
-    return s.v2DB.AssertSchema(ctx, *s.schema)
+    s.mu.Lock()
+    if s.schema != nil {
+        s.schema.GraphSchemas[graphSchema.Name] = graphSchema
+        s.schema.DefaultGraphName = graphSchema.Name
+    } else {
+        s.schema = &database.Schema{
+            GraphSchemas: map[string]database.Graph{
+                graphSchema.Name: graphSchema,
+            },
+            DefaultGraphName: graphSchema.Name,
+        }
+    }
+    localCopy := *s.schema
+    s.mu.Unlock()
+    return s.v2DB.AssertSchema(ctx, localCopy)
 }

Add the missing import:

 import (
     "context"
     "fmt"
     "time"
+    "sync"
@@
 )

Also applies to: 132-151


348-375: UpdateNode query is missing MATCH; WHERE alone is insufficient to scope updates

Without an explicit MATCH, the update may fail or affect unintended targets depending on the query builder semantics. Add a MATCH on the node ID before applying updates.

-func (s driverTransactionWrapper) UpdateNode(node *graph.Node) error {
-    updateQuery := query.New().Where(query.Node().ID().Equals(node.ID))
+func (s driverTransactionWrapper) UpdateNode(node *graph.Node) error {
+    updateQuery := query.New()
+    updateQuery.Match(query.Node().Where(query.Node().ID().Equals(node.ID)))

399-418: UpdateRelationship is missing MATCH on relationship ID

Add a MATCH to scope the update to the relationship by ID.

-func (s driverTransactionWrapper) UpdateRelationship(relationship *graph.Relationship) error {
-    updateQuery := query.New().Where(query.Relationship().ID().Equals(relationship.ID))
+func (s driverTransactionWrapper) UpdateRelationship(relationship *graph.Relationship) error {
+    updateQuery := query.New()
+    updateQuery.Match(query.Relationship().Where(query.Relationship().ID().Equals(relationship.ID)))
database/v1compat/query/model.go (1)

12-22: Critical: Fix potential panic in type assertions.

The convertCriteria function performs unchecked type assertions that could panic if the conversion fails, creating runtime instability.

Apply this fix to make the function safer:

-func convertCriteria[T any](criteria ...cypher.SyntaxNode) []T {
+func convertCriteria[T any](criteria ...cypher.SyntaxNode) ([]T, error) {
 	var (
 		converted = make([]T, len(criteria))
 	)

 	for idx, nextCriteria := range criteria {
-		converted[idx] = nextCriteria.(T)
+		if typedCriteria, ok := nextCriteria.(T); ok {
+			converted[idx] = typedCriteria
+		} else {
+			return nil, fmt.Errorf("failed to convert criteria at index %d: expected %T, got %T", idx, *new(T), nextCriteria)
+		}
 	}

-	return converted
+	return converted, nil
 }

You'll also need to update all callers to handle the error return value.

🧹 Nitpick comments (15)
database/pg/mapper.go (2)

20-27: Broaden numeric coercion for kind IDs; accept common integer types, not just int16

Postgres drivers frequently surface SMALLINT/INT as int, int32 or int64 when scanned into []any. Restricting to int16 will wrongly mark otherwise valid inputs as invalid. Coercing to int16 with bounds checks improves robustness without changing semantics.

Apply this diff:

-        for idx, untypedElement := range typedValue {
-            if typedElement, typeOK := untypedElement.(int16); typeOK {
-                kindIDs[idx] = typedElement
-            } else {
-                // Type assertion failed, mark as invalid type
-                validType = false
-                break
-            }
-        }
+    kindLoop:
+        for idx, untypedElement := range typedValue {
+            switch vv := untypedElement.(type) {
+            case int16:
+                kindIDs[idx] = vv
+            case int:
+                if vv < -32768 || vv > 32767 {
+                    validType = false
+                    break kindLoop
+                }
+                kindIDs[idx] = int16(vv)
+            case int32:
+                if vv < -32768 || vv > 32767 {
+                    validType = false
+                    break kindLoop
+                }
+                kindIDs[idx] = int16(vv)
+            case int64:
+                if vv < -32768 || vv > 32767 {
+                    validType = false
+                    break kindLoop
+                }
+                kindIDs[idx] = int16(vv)
+            default:
+                // Unsupported element type
+                validType = false
+                break kindLoop
+            }
+        }

Note the labeled break: a plain break inside the type switch would only exit the switch, not the loop, so the coercion would keep running after the value was already marked invalid.

11-11: Nit: comment grammar

The sentence is incomplete (“can be mapped from …”). Suggest tightening the grammar.

-        // The default assumption is that the untyped value contains a type that can be mapped from
+        // The default assumption is that the untyped value is of a type that can be mapped to graph Kinds
cypher/models/pgsql/format/format.go (1)

749-749: No-op write call can be removed

builder.Write() with no arguments is a no-op and likely a leftover.

Apply:

-        builder.Write()
database/pg/batch.go (1)

308-311: Minor consistency: prefer constructor for Int2ArrayEncoder

Use the provided constructor for readability and consistency.

-        kindIDEncoder = Int2ArrayEncoder{
-            buffer: &bytes.Buffer{},
-        }
+        kindIDEncoder = NewInt2ArrayEncoder()
query/query.go (1)

399-401: Rename typo: allShorestPathsQuery -> allShortestPathsQuery

Spelling nit for clarity; keeps API intent obvious.

-    allShorestPathsQuery bool
+    allShortestPathsQuery bool
 func (s *builder) WithAllShortestPaths() QueryBuilder {
-    s.allShorestPathsQuery = true
+    s.allShortestPathsQuery = true
     return s
 }
-            if err := prepareRelationshipPattern(match, seenIdentifiers, relationshipKinds, s.shortestPathQuery, s.allShorestPathsQuery); err != nil {
+            if err := prepareRelationshipPattern(match, seenIdentifiers, relationshipKinds, s.shortestPathQuery, s.allShortestPathsQuery); err != nil {
                 return nil, err
             }

Also applies to: 415-416, 763-764

database/neo4j/database.go (1)

80-110: Consider handling rollback errors more explicitly.

The transaction method properly manages the transaction lifecycle with proper cleanup. The rollback in the defer block ensures cleanup even on panic, which is good. However, the rollback error is only logged at debug level. Consider whether critical rollback failures should be handled differently.

The current implementation is solid, but you might want to consider whether rollback failures should be elevated beyond debug logging in certain scenarios, especially if they could indicate data integrity issues.

database/neo4j/schema.go (3)

151-190: Inconsistent index vs. constraint application flow; consider unifying or add IF EXISTS/IF NOT EXISTS guards

Indexes are dropped and created in two separate transactions, while constraints are handled in a single transaction. This inconsistency may leave the system in a transient “no index” state if the second phase fails. Consider:

  • Unifying flows (either single transaction for both or consistently two-phase for both), and/or
  • Using IF EXISTS/IF NOT EXISTS variants (if available in target Neo4j version) to be more resilient to concurrent DDL.

Please confirm expected Neo4j 4.4/5.x behavior and whether single-transaction drop+create on same name is safe for your target versions.
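If the guarded variants are available on the target versions, the DDL text could be built along these lines (a sketch; the statement shapes assume Neo4j 4.4+/5.x Cypher, and indexName/label/property are placeholders):

createIndex := fmt.Sprintf("CREATE INDEX %s IF NOT EXISTS FOR (n:%s) ON (n.%s)", indexName, label, property)
dropIndex := fmt.Sprintf("DROP INDEX %s IF EXISTS", indexName)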


172-179: Prefer structured logging over string formatting

Use structured attributes for slog to improve observability and avoid fmt.Sprintf allocations.

Apply this change:

-            slog.InfoContext(ctx, fmt.Sprintf("Adding index %s to labels %s on properties %s using %s", indexName, indexToAdd.kind.String(), indexToAdd.Field, indexTypeProvider(indexToAdd.Type)))
+            slog.InfoContext(ctx, "Adding index",
+                slog.String("name", indexName),
+                slog.String("label", indexToAdd.kind.String()),
+                slog.String("property", indexToAdd.Field),
+                slog.String("provider", indexTypeProvider(indexToAdd.Type)),
+            )

247-254: Hard-failing on composite indexes hurts portability; warn and skip instead

Returning an error when encountering a composite index/constraint will cause AssertSchema to fail on databases with pre-existing composite types. Prefer warning and skipping to keep schema assertion resilient.

-                    if len(labels) > 1 || len(properties) > 1 {
-                        return fmt.Errorf("composite index types are currently not supported")
-                    }
+                    if len(labels) > 1 || len(properties) > 1 {
+                        slog.WarnContext(ctx, "Composite index/constraint encountered; skipping",
+                            slog.String("name", name),
+                            slog.Int("label_count", len(labels)),
+                            slog.Int("property_count", len(properties)),
+                        )
+                        continue
+                    }
database/pg/database.go (1)

86-90: Reduce noisy rollback logging: ignore pgx.ErrTxClosed

Align rollback error logging with manager.go by ignoring ErrTxClosed to prevent noise after successful commit.

-            defer func() {
-                if err := transaction.Rollback(ctx); err != nil {
-                    slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
-                }
-            }()
+            defer func() {
+                if err := transaction.Rollback(ctx); err != nil && !errors.Is(err, pgx.ErrTxClosed) {
+                    slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
+                }
+            }()

Add import:

 import (
     "context"
+    "errors"
     "fmt"
     "log/slog"
database/pg/manager.go (1)

90-93: Redundant map assignment in fetch

Inside the loop over s.kindsByID, assigning s.kindsByID[kind] = kindID is redundant. Remove the no-op write.

-        for kind, kindID := range s.kindsByID {
-            s.kindsByID[kind] = kindID
-            s.kindIDsByKind[kindID] = kind
-        }
+        for kind, kindID := range s.kindsByID {
+            s.kindIDsByKind[kindID] = kind
+        }
database/v1compat/database.go (2)

12-16: Fix minor typos in comments (double space in “multiple graphs”)

Cosmetic, but worth tightening up.

-    // multiple  graphs the resulting transaction will target the default graph instead and this call becomes a no-op.
+    // multiple graphs the resulting transaction will target the default graph instead and this call becomes a no-op.

Also applies to: 55-61


110-153: Confirm SetWriteFlushSize/SetBatchWriteSize are honored by wrapper implementations

The interface exposes these configuration methods. In the current v1 wrapper, SetBatchWriteSize wasn’t applied to the batch wrapper (fixed above), and SetWriteFlushSize appears unused. Please confirm intended behavior and wire it through or remove if obsolete.

database/v1compat/query/model.go (2)

335-343: Improve generic type constraint safety.

The InIDs function uses a generic constraint but then performs type assertions that could be simplified and made safer.

Consider this cleaner approach:

-func InIDs[T *cypher.FunctionInvocation | *cypher.Variable](reference T, ids ...graph.ID) *cypher.Comparison {
-	switch any(reference).(type) {
-	case *cypher.FunctionInvocation:
-		return cypher.NewComparison(reference, cypher.OperatorIn, Parameter(ids))
-
-	default:
-		return cypher.NewComparison(Identity(any(reference).(*cypher.Variable)), cypher.OperatorIn, Parameter(ids))
-	}
+func InIDs[T *cypher.FunctionInvocation | *cypher.Variable](reference T, ids ...graph.ID) *cypher.Comparison {
+	switch ref := any(reference).(type) {
+	case *cypher.FunctionInvocation:
+		return cypher.NewComparison(ref, cypher.OperatorIn, Parameter(ids))
+	case *cypher.Variable:
+		return cypher.NewComparison(Identity(ref), cypher.OperatorIn, Parameter(ids))
+	default:
+		// This should never happen due to type constraints, but provides safety
+		panic(fmt.Sprintf("unexpected type %T", reference))
+	}
 }

24-26: Consider if this utility function adds value.

The Update function appears to be a simple pass-through that doesn't add functionality.

If this function is intended for future enhancement or API consistency, consider adding a comment explaining its purpose. Otherwise, it might be unnecessary.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3fe35e5 and 90a694e.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (92)
  • cypher/models/cypher/model.go (1 hunks)
  • cypher/models/pgsql/format/format.go (3 hunks)
  • cypher/models/pgsql/test/testcase.go (3 hunks)
  • cypher/models/pgsql/test/translation_test.go (1 hunks)
  • cypher/models/pgsql/test/validation_integration_test.go (3 hunks)
  • cypher/models/pgsql/visualization/visualizer_test.go (1 hunks)
  • database/driver.go (1 hunks)
  • database/neo4j/database.go (1 hunks)
  • database/neo4j/database_integration_test.go (1 hunks)
  • database/neo4j/driver.go (1 hunks)
  • database/neo4j/mapper.go (3 hunks)
  • database/neo4j/neo4j.go (1 hunks)
  • database/neo4j/schema.go (1 hunks)
  • database/pg/batch.go (1 hunks)
  • database/pg/config.go (1 hunks)
  • database/pg/database.go (1 hunks)
  • database/pg/driver.go (1 hunks)
  • database/pg/manager.go (10 hunks)
  • database/pg/mapper.go (3 hunks)
  • database/pg/model/format.go (3 hunks)
  • database/pg/model/model.go (1 hunks)
  • database/pg/pg.go (2 hunks)
  • database/pg/query/format.go (2 hunks)
  • database/pg/query/query.go (1 hunks)
  • database/pg/query/sql.go (1 hunks)
  • database/pg/query/sql/select_graph_by_name.sql (1 hunks)
  • database/pg/query/sql/select_graphs.sql (1 hunks)
  • database/pg/statements.go (1 hunks)
  • database/pg/types.go (2 hunks)
  • database/pg/util.go (1 hunks)
  • database/schema.go (1 hunks)
  • database/tooling.go (1 hunks)
  • database/v1compat/database.go (1 hunks)
  • database/v1compat/errors.go (1 hunks)
  • database/v1compat/graph.go (1 hunks)
  • database/v1compat/node.go (1 hunks)
  • database/v1compat/ops/ops.go (1 hunks)
  • database/v1compat/ops/parallel.go (1 hunks)
  • database/v1compat/ops/traversal.go (1 hunks)
  • database/v1compat/query.go (7 hunks)
  • database/v1compat/query/builder.go (3 hunks)
  • database/v1compat/query/model.go (1 hunks)
  • database/v1compat/query/sort.go (1 hunks)
  • database/v1compat/relationship.go (1 hunks)
  • database/v1compat/result.go (4 hunks)
  • database/v1compat/switch.go (1 hunks)
  • database/v1compat/traversal/collection.go (2 hunks)
  • database/v1compat/traversal/query.go (1 hunks)
  • database/v1compat/traversal/traversal.go (1 hunks)
  • database/v1compat/traversal/traversal_test.go (1 hunks)
  • database/v1compat/types.go (1 hunks)
  • database/v1compat/wrapper.go (1 hunks)
  • drivers/neo4j/batch.go (0 hunks)
  • drivers/neo4j/const.go (0 hunks)
  • drivers/neo4j/cypher.go (0 hunks)
  • drivers/neo4j/cypher_internal_test.go (0 hunks)
  • drivers/neo4j/driver.go (0 hunks)
  • drivers/neo4j/index.go (0 hunks)
  • drivers/neo4j/neo4j.go (0 hunks)
  • drivers/neo4j/node.go (0 hunks)
  • drivers/neo4j/relationship.go (0 hunks)
  • drivers/neo4j/result.go (0 hunks)
  • drivers/neo4j/result_internal_test.go (0 hunks)
  • drivers/neo4j/transaction.go (0 hunks)
  • drivers/neo4j/wrapper.go (0 hunks)
  • drivers/pg/batch.go (0 hunks)
  • drivers/pg/driver.go (0 hunks)
  • drivers/pg/model/model.go (0 hunks)
  • drivers/pg/node.go (0 hunks)
  • drivers/pg/query.go (0 hunks)
  • drivers/pg/query/query.go (0 hunks)
  • drivers/pg/relationship.go (0 hunks)
  • drivers/pg/result.go (0 hunks)
  • drivers/pg/tooling.go (0 hunks)
  • drivers/pg/transaction.go (0 hunks)
  • drivers/pg/util.go (0 hunks)
  • go.mod (1 hunks)
  • graph/error.go (0 hunks)
  • graph/graph.go (0 hunks)
  • graph/properties.go (1 hunks)
  • graph/schema.go (0 hunks)
  • graphcache/cache.go (0 hunks)
  • query/model.go (0 hunks)
  • query/neo4j/neo4j.go (0 hunks)
  • query/neo4j/neo4j_test.go (0 hunks)
  • query/neo4j/rewrite.go (0 hunks)
  • query/query.go (1 hunks)
  • query/query_test.go (1 hunks)
  • query/util.go (1 hunks)
  • registry.go (2 hunks)
  • registry_integration_test.go (1 hunks)
  • traversal/traversal.go (10 hunks)
💤 Files with no reviewable changes (32)
  • drivers/neo4j/cypher.go
  • drivers/neo4j/cypher_internal_test.go
  • drivers/pg/query.go
  • drivers/pg/tooling.go
  • drivers/neo4j/transaction.go
  • drivers/pg/driver.go
  • graph/error.go
  • drivers/pg/model/model.go
  • drivers/pg/batch.go
  • drivers/neo4j/const.go
  • drivers/pg/util.go
  • drivers/neo4j/result.go
  • drivers/neo4j/wrapper.go
  • query/neo4j/neo4j.go
  • drivers/pg/relationship.go
  • drivers/pg/node.go
  • drivers/pg/result.go
  • graph/schema.go
  • drivers/pg/transaction.go
  • query/neo4j/rewrite.go
  • query/neo4j/neo4j_test.go
  • drivers/neo4j/neo4j.go
  • drivers/neo4j/result_internal_test.go
  • drivers/neo4j/index.go
  • drivers/neo4j/driver.go
  • drivers/neo4j/relationship.go
  • graphcache/cache.go
  • drivers/neo4j/batch.go
  • drivers/pg/query/query.go
  • graph/graph.go
  • drivers/neo4j/node.go
  • query/model.go
✅ Files skipped from review due to trivial changes (1)
  • database/pg/util.go
🚧 Files skipped from review as they are similar to previous changes (39)
  • cypher/models/pgsql/test/translation_test.go
  • database/v1compat/ops/parallel.go
  • database/neo4j/database_integration_test.go
  • database/v1compat/switch.go
  • database/v1compat/traversal/traversal_test.go
  • database/tooling.go
  • database/pg/config.go
  • database/pg/query/sql/select_graph_by_name.sql
  • database/pg/types.go
  • database/pg/query/sql.go
  • database/v1compat/ops/traversal.go
  • database/pg/model/format.go
  • database/pg/query/sql/select_graphs.sql
  • database/pg/statements.go
  • cypher/models/pgsql/visualization/visualizer_test.go
  • database/v1compat/ops/ops.go
  • database/v1compat/errors.go
  • cypher/models/cypher/model.go
  • registry_integration_test.go
  • database/v1compat/traversal/traversal.go
  • database/neo4j/neo4j.go
  • database/neo4j/mapper.go
  • query/query_test.go
  • database/pg/driver.go
  • query/util.go
  • database/driver.go
  • database/v1compat/traversal/query.go
  • database/neo4j/driver.go
  • database/v1compat/node.go
  • cypher/models/pgsql/test/testcase.go
  • database/schema.go
  • database/v1compat/result.go
  • database/v1compat/query.go
  • database/v1compat/query/sort.go
  • database/v1compat/relationship.go
  • database/pg/query/query.go
  • database/v1compat/query/builder.go
  • database/v1compat/types.go
  • graph/properties.go
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-06-27T15:24:24.735Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/pg/v2/driver.go:82-85
Timestamp: 2025-06-27T15:24:24.735Z
Learning: The pgx.Rows.Close() method in the jackc/pgx library (including v5) does not return an error - it has the signature Close() with no return value. This is different from the standard database/sql library where Rows.Close() does return an error. Do not suggest changing patterns like "s.rows.Close(); return nil" to "return s.rows.Close()" for pgx.Rows.

Applied to files:

  • database/pg/batch.go
📚 Learning: 2025-06-27T15:24:24.735Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/pg/v2/driver.go:82-85
Timestamp: 2025-06-27T15:24:24.735Z
Learning: The pgx.Rows.Close() method in the jackc/pgx/v5 library does not return an error - it has the signature Close() with no return value. Do not suggest changing patterns like "s.rows.Close(); return nil" to "return s.rows.Close()" for pgx.Rows.

Applied to files:

  • database/pg/batch.go
📚 Learning: 2025-06-27T15:56:17.950Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/neo4j/v2/schema.go:231-231
Timestamp: 2025-06-27T15:56:17.950Z
Learning: In the Neo4j driver v2 schema.go file, cleanup operations in defer statements like result.Consume(ctx) are intentionally treated as best-effort and don't require explicit error checking. This is a conscious design pattern used consistently throughout the codebase for cleanup operations.

Applied to files:

  • database/pg/batch.go
  • database/neo4j/schema.go
📚 Learning: 2025-06-27T15:32:33.244Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/neo4j/v2/schema.go:136-140
Timestamp: 2025-06-27T15:32:33.244Z
Learning: The Neo4j Go driver v5 may not have the same transaction rollback error handling patterns as suggested. Check the specific v5 documentation before recommending `errors.Is(err, neo4j.ErrTransactionTerminated)` or similar patterns.

Applied to files:

  • database/v1compat/wrapper.go
📚 Learning: 2025-08-14T01:22:17.813Z
Learnt from: zinic
PR: SpecterOps/DAWGS#10
File: database/v1compat/traversal/traversal.go:227-227
Timestamp: 2025-08-14T01:22:17.813Z
Learning: In database/v1compat/traversal/traversal.go, the fetchDirection variable should remain consistent during expansion. The fetchDirection represents the query perspective for retrieving relationships and must be the same for both current and next expansion steps to maintain proper traversal semantics.

Applied to files:

  • database/v1compat/traversal/collection.go
  • traversal/traversal.go
📚 Learning: 2025-06-27T15:34:42.128Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/pg/v2/query/format.go:14-23
Timestamp: 2025-06-27T15:34:42.128Z
Learning: In the PostgreSQL v2 driver codebase, the team prefers to let invalid index type conversions (like "NOT SUPPORTED") fail at the SQL query execution level rather than returning errors from conversion functions like `postgresIndexType`. This allows error handling to occur with full query context rather than at the individual type conversion level.

Applied to files:

  • database/pg/query/format.go
🧬 Code Graph Analysis (17)
database/pg/model/model.go (1)
database/pg/model/format.go (2)
  • IndexName (27-36)
  • ConstraintName (38-47)
cypher/models/pgsql/test/validation_integration_test.go (3)
registry.go (1)
  • Open (33-39)
database/schema.go (1)
  • NewSchema (52-63)
database/v1compat/graph.go (2)
  • Kinds (11-11)
  • StringKind (35-35)
database/neo4j/database.go (3)
database/neo4j/neo4j.go (2)
  • DefaultNeo4jTransactionTimeout (15-15)
  • DefaultConcurrentConnections (20-20)
database/driver.go (3)
  • Option (11-11)
  • OptionReadOnly (14-14)
  • QueryLogic (42-42)
database/v1compat/database.go (1)
  • Transaction (57-91)
database/v1compat/database.go (4)
database/v1compat/query.go (2)
  • NodeQuery (118-160)
  • RelationshipQuery (164-217)
database/v1compat/query/model.go (1)
  • Size (503-505)
cypher/frontend/context.go (1)
  • Context (17-21)
database/v1compat/graph.go (1)
  • Kinds (11-11)
query/query.go (5)
cypher/models/cypher/model.go (24)
  • Comparison (1073-1076)
  • NewPartialComparison (1131-1136)
  • KindMatcher (744-747)
  • NewComparison (1078-1086)
  • NewSimpleFunctionInvocation (1037-1042)
  • NewStringLiteral (785-790)
  • NewNegation (930-934)
  • NewStringListLiteral (883-891)
  • Remove (546-548)
  • Set (594-596)
  • SetItem (614-618)
  • RemoveItem (566-569)
  • FunctionInvocation (1028-1035)
  • RemoveKindsByMatcher (571-575)
  • NewKindMatcher (749-754)
  • Skip (1496-1498)
  • NewUpdatingClause (465-469)
  • NewProjection (1386-1390)
  • ProjectionItem (1169-1172)
  • NewProjectionItemWithExpr (1178-1182)
  • NewRegularQueryWithSingleQuery (142-150)
  • Match (411-415)
  • NewWhere (1333-1335)
  • NewReadingClause (207-209)
database/v1compat/graph.go (2)
  • Kinds (11-11)
  • String (9-9)
database/v1compat/query/model.go (6)
  • Count (148-153)
  • Property (141-146)
  • Equals (267-269)
  • GreaterThan (271-273)
  • LessThan (283-285)
  • SetProperties (80-92)
cypher/models/pgsql/translate/model.go (1)
  • Pattern (213-215)
cypher/models/pgsql/model.go (1)
  • Join (566-569)
database/v1compat/graph.go (1)
database/schema.go (2)
  • IndexTypeBTree (9-9)
  • IndexTypeTextSearch (10-10)
database/v1compat/wrapper.go (7)
cypher/frontend/context.go (2)
  • Context (17-21)
  • NewContext (23-33)
database/v1compat/database.go (6)
  • Database (111-153)
  • TransactionDelegate (96-96)
  • TransactionOption (108-108)
  • BatchDelegate (99-99)
  • Batch (12-53)
  • Transaction (57-91)
database/driver.go (1)
  • OptionReadOnly (14-14)
database/v1compat/graph.go (1)
  • Kinds (11-11)
database/v1compat/query.go (2)
  • NodeQuery (118-160)
  • RelationshipQuery (164-217)
database/v1compat/query/identifiers.go (2)
  • StartID (48-50)
  • EndID (56-58)
database/v1compat/query/model.go (1)
  • Size (503-505)
database/pg/mapper.go (1)
graph/mapper.go (1)
  • ValueMapper (358-360)
database/neo4j/schema.go (2)
database/v1compat/graph.go (2)
  • String (9-9)
  • StringKind (35-35)
database/schema.go (4)
  • IndexType (5-5)
  • IndexTypeBTree (9-9)
  • IndexTypeTextSearch (10-10)
  • IndexTypeUnsupported (8-8)
database/pg/pg.go (1)
registry.go (1)
  • Register (20-22)
database/v1compat/traversal/collection.go (2)
database/v1compat/database.go (2)
  • Database (111-153)
  • Transaction (57-91)
database/v1compat/ops/ops.go (1)
  • FetchNodeProperties (30-70)
database/pg/manager.go (2)
database/v1compat/graph.go (2)
  • String (9-9)
  • Kinds (11-11)
database/pg/query/query.go (1)
  • On (20-24)
database/pg/query/format.go (1)
database/schema.go (4)
  • IndexType (5-5)
  • IndexTypeBTree (9-9)
  • IndexTypeTextSearch (10-10)
  • IndexTypeUnsupported (8-8)
traversal/traversal.go (3)
cypher/frontend/context.go (1)
  • Context (17-21)
database/v1compat/query/model.go (2)
  • Equals (267-269)
  • Size (503-505)
cypher/models/cypher/model.go (1)
  • Return (1411-1413)
database/v1compat/query/model.go (6)
cypher/models/cypher/model.go (18)
  • UpdatingClause (459-463)
  • NewUpdatingClause (465-469)
  • Set (594-596)
  • SetItem (614-618)
  • Remove (546-548)
  • RemoveItem (566-569)
  • KindMatcher (744-747)
  • PropertyLookup (1201-1204)
  • NewKindMatcher (749-754)
  • FunctionInvocation (1028-1035)
  • WithErrors (112-118)
  • Skip (1496-1498)
  • PatternPredicate (1516-1518)
  • NewPatternPredicate (1520-1522)
  • NewVariableWithSymbol (1153-1157)
  • NewWhere (1333-1335)
  • Return (1411-1413)
  • Match (411-415)
database/v1compat/graph.go (1)
  • Kinds (11-11)
cypher/models/pgsql/operators.go (8)
  • OperatorEquals (70-70)
  • OperatorGreaterThan (72-72)
  • OperatorGreaterThanOrEqualTo (73-73)
  • OperatorLessThan (74-74)
  • OperatorLessThanOrEqualTo (75-75)
  • OperatorIsNot (91-91)
  • OperatorIn (89-89)
  • OperatorIs (90-90)
database/v1compat/query/identifiers.go (5)
  • NodeSymbol (22-22)
  • EdgeSymbol (23-23)
  • EdgeStartSymbol (24-24)
  • EdgeEndSymbol (25-25)
  • Identity (13-18)
cypher/models/pgsql/pgtypes.go (1)
  • Null (64-64)
cypher/models/pgsql/translate/model.go (1)
  • Pattern (213-215)
database/pg/database.go (6)
cypher/frontend/context.go (1)
  • Context (17-21)
database/driver.go (3)
  • Option (11-11)
  • OptionReadOnly (14-14)
  • QueryLogic (42-42)
database/v1compat/query/model.go (1)
  • Size (503-505)
database/v1compat/wrapper.go (1)
  • BackwardCompatibleInstance (66-71)
database/v1compat/database.go (1)
  • Transaction (57-91)
database/v1compat/graph.go (2)
  • String (9-9)
  • Kinds (11-11)
registry.go (2)
database/v1compat/database.go (1)
  • Database (111-153)
database/v1compat/wrapper.go (1)
  • V1Wrapper (91-99)
🔇 Additional comments (41)
database/pg/mapper.go (1)

106-108: API change: NewValueMapper → newValueMapper — repo scan shows no internal breakage; confirm external consumers

Short summary: I searched the repo and found no usages of the exported NewValueMapper(...) symbol. Internal code now uses the unexported newValueMapper and the driver exposes Mapper() accessors.

Files/locations to note:

  • database/pg/mapper.go:106 — now defines func newValueMapper(ctx context.Context, kindMapper KindMapper) graph.ValueMapper
  • database/pg/driver.go:94 and 136 — call newValueMapper(...) (Mapper() returns/uses it)
  • database/neo4j/mapper.go:113 and database/neo4j/driver.go:16,101 — local newValueMapper() and uses
  • database/v1compat/query.go:56,92 — Mapper() implementations
  • graph/mapper.go:362 — exported func NewValueMapper(...) (graph package) remains

Action required: manually verify there are no external consumers (other repositories/packages) calling database/pg.NewValueMapper. If external users exist, restore an exported constructor or add a compatibility wrapper and document the breaking change.
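If external callers do surface, a thin shim could preserve the old entry point until they migrate (a sketch, assuming the unexported constructor keeps the signature noted above):

// NewValueMapper is a hypothetical compatibility wrapper around the now-unexported constructor.
func NewValueMapper(ctx context.Context, kindMapper KindMapper) graph.ValueMapper {
    return newValueMapper(ctx, kindMapper)
}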

cypher/models/pgsql/test/validation_integration_test.go (3)

12-16: Migration to the new database schema/driver APIs looks correct

Importing database for NewSchema and pg for DriverName aligns with the v2 abstractions and registry.Open flow. No issues spotted here.


43-53: Schema wiring via database.NewSchema is correct

Graph.Name matches the default graph name, and Nodes/Edges use graph.Kinds with StringKind as expected. This matches the database.Schema API.


73-73: No action required — AssertLive already targets v2 database.Instance

AssertLive is defined as:

  • cypher/models/pgsql/test/testcase.go:180 — func (s *TranslationTestCase) AssertLive(ctx context.Context, t *testing.T, db database.Instance)

And it's invoked here:

  • cypher/models/pgsql/test/validation_integration_test.go:73 — testCase.AssertLive(testCtx, t, connection)
cypher/models/pgsql/format/format.go (3)

7-7: Import looks correct and justified

time is used in the new time.Time branch below.


126-127: []string support via formatSlice(TextArray) is a solid extension

Delegating each element back through formatValue ensures proper quoting/escaping.


74-76: Action: Verify/Update tests that assert on timestamptz SQL literals (RFC3339Nano now used)

The SQL formatter now emits timestamps with fractional seconds (time.RFC3339Nano). Tests that snapshot or assert on exact SQL strings with timestamptz literals may need updates.

Files to check:

  • cypher/models/pgsql/format/format.go:75 — source of change:
    case time.Time:
    builder.Write("'", typedValue.Format(time.RFC3339Nano), "'::timestamp with time zone")
  • cypher/models/pgsql/test/translation_cases/nodes.sql — contains timestamptz comparisons and a literal '2019-06-01T18:40:32.142+0100' (lines ~105,108,214,217)
  • cypher/models/pgsql/test/translation_cases/multipart.sql — contains now()::timestamp with time zone (line ~33)
  • cypher/models/pgsql/pgtypes.go:97 — DataType = "timestamp with time zone" (type name used in SQL)
  • graph/properties.go:182 and graph/mapper.go:134 — parsing uses time.RFC3339Nano (consistency check)

Suggested action: update affected SQL snapshots/expectations to include fractional seconds (or normalize timestamps in tests before asserting).
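For reference, a sketch of the literal shape the formatter now emits (values illustrative; it relies only on time.RFC3339Nano and the timestamptz cast shown above):

ts := time.Date(2019, 6, 1, 18, 40, 32, 142_000_000, time.FixedZone("", 3600))
literal := fmt.Sprintf("'%s'::timestamp with time zone", ts.Format(time.RFC3339Nano))
// literal == "'2019-06-01T18:40:32.142+01:00'::timestamp with time zone"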

go.mod (1)

32-32: Keep golang.org/x/net v0.41.0 as an indirect dependency

Verified: go mod tidy added the indirect entry and go mod verify passed. go mod why shows the module is pulled in transitively (via cuelang.org/go's net package → golang.org/x/net/idna) from the test package github.com/specterops/dawgs/cypher/models/pgsql/test — there are no direct imports in the repo.

  • Evidence:
    • go.mod: line 32 contains the indirect entry
    • go mod why -m golang.org/x/net chain: github.com/specterops/dawgs/cypher/models/pgsql/test → cuelang.org/go/pkg/net → golang.org/x/net/idna
    • go.sum contains v0.41.0 entries

Snippet:

	golang.org/x/net v0.41.0 // indirect
traversal/traversal.go (2)

22-24: Good API surfacing: Logic type alias aligns traversal with database.Driver

The Logic signature cleanly exposes the new db driver flow for traversal execution. Looks coherent with the rest of the PR.


217-218: Fix fetchFunc invocations to match the updated signature

With fetchFunc now accepting []cypher.SyntaxNode, the call sites should pass the slice directly (they already do). This diff just aligns the param type change to avoid confusion:

-            if err := fetchFunc(currentExpansion.criteria, fetchDirection); err != nil {
+            if err := fetchFunc(currentExpansion.criteria, fetchDirection); err != nil {
-                if err := fetchFunc(nextExpansion.criteria, fetchDirection); err != nil {
+                if err := fetchFunc(nextExpansion.criteria, fetchDirection); err != nil {

Note: No behavioral change here; this is to confirm consistency with the signature fix above.

Also applies to: 235-236

registry.go (1)

41-47: LGTM! Correct error propagation implemented.

The function now properly returns the actual error from Open instead of masking it with ErrDriverMissing, which aligns with the previous review feedback.

database/pg/query/format.go (3)

14-23: LGTM! Consistent approach to invalid index types.

The function correctly maps database index types to PostgreSQL equivalents and returns "NOT SUPPORTED" for unrecognized types, allowing the error to be caught at SQL execution time with full query context, as per team preference.


48-53: LGTM! Proper constraint creation formatting.

The function correctly formats PostgreSQL unique index creation statements using the appropriate index type.


55-68: LGTM! Proper handling of text search indexes.

The function correctly differentiates between text search indexes (which require the gin_trgm_ops operator) and other index types, properly formatting the SQL for each case.

database/v1compat/traversal/collection.go (4)

3-9: LGTM! Imports correctly switched to v1compat packages.

The imports have been properly updated to use the v1compat packages, maintaining consistent layering throughout the compatibility layer.


11-21: LGTM! NodeCollector properly updated for v1compat types.

The NodeCollector struct and constructor have been correctly updated to use v1compat types while maintaining thread-safety with the mutex.


23-31: LGTM! Methods correctly updated to use v1compat interfaces.

The Collect and PopulateProperties methods have been properly updated to work with v1compat types and interfaces, maintaining the expected behavior.


40-62: LGTM! PathCollector properly migrated to v1compat.

The PathCollector struct and its methods have been correctly updated to use v1compat types while maintaining thread-safety and proper functionality.

database/neo4j/database.go (4)

17-33: LGTM! Well-structured instance initialization.

The constructor properly initializes all required components including the concurrency limiter, schema manager, and configuration parameters. The use of constants for defaults is a good practice.


39-59: LGTM! Proper session acquisition with concurrency control.

The method correctly manages concurrency limits and properly configures session access modes based on the provided options.


61-78: LGTM! Robust session lifecycle management.

The Session method properly manages the limiter slot release and session closure in defer blocks, ensuring resources are cleaned up even if errors occur.


112-159: LGTM! Comprehensive kind discovery implementation.

The FetchKinds method properly queries both node labels and relationship types, with appropriate error handling and type checking for the results.

database/pg/model/model.go (4)

7-25: LGTM! Well-structured change tracking type.

The IndexChangeSet type provides a clear separation between items to remove and items to add, with proper initialization in the constructor.


27-39: LGTM! Clean partition representation.

The GraphPartition type and constructor provide a clean abstraction for managing indexes and constraints with proper map initialization.


41-57: LGTM! Efficient schema-based initialization.

The NewGraphPartitionFromSchema constructor efficiently pre-allocates maps with the correct capacity and properly uses the IndexName and ConstraintName helper functions to generate consistent keys.


59-68: LGTM! Clear graph model hierarchy.

The GraphPartitions and Graph types provide a clean hierarchical structure for representing graph schemas with proper separation between node and edge partitions.

database/neo4j/schema.go (2)

120-149: Transaction lifecycle and best-effort cleanup look solid

Session close and rollback are handled via defers with debug-only logging on failures, aligning with the codebase’s “best-effort cleanup” pattern.


223-231: Best-effort result consumption is appropriate here

Deferring result.Consume(ctx) for cleanup while returning result.Err() is consistent with best-effort cleanup patterns used elsewhere and avoids resource leaks when exiting early.

database/v1compat/wrapper.go (1)

202-251: Threshold checks fixed to >=; good

All buffer flush thresholds now use >=, ensuring flush when limits are reached.

database/pg/database.go (2)

18-33: Access mode handling via options looks correct

Read-only vs read-write mapping to pgx.TxOptions is implemented as expected.


45-54: Confirm pgx NamedArgs usage in Raw paths

Raw() uses pgx.NamedArgs, which requires queries to use named placeholders. Please verify that all Raw callers format queries accordingly; otherwise, prefer positional args or a translator to rewrite named placeholders.
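For reference, pgx/v5 named arguments expect @name placeholders in the SQL text; a minimal sketch of what Raw callers would need (pool, table, and column names are illustrative):

rows, err := pool.Query(ctx,
    "SELECT id FROM node WHERE graph_id = @graph_id",
    pgx.NamedArgs{"graph_id": 1},
)
if err != nil {
    return err
}
defer rows.Close()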

database/pg/manager.go (3)

58-82: Transaction helper is clean and robust

Good use of connection acquisition, transaction management, and rollback suppression for pgx.ErrTxClosed with debug-level logs.


288-294: DefaultGraph accessor under read lock looks correct

The read-locked access pattern and return signature are appropriate.


330-363: Schema assertion sequence looks correct

Validates default graph, creates schema, refreshes kinds, and asserts kind definitions per-graph within a single transaction. Good balance of correctness and efficiency.

database/v1compat/graph.go (5)

1-8: LGTM! Clean v1 compatibility layer setup.

The package structure and imports are well-organized for the v1 compatibility layer. The imports from the new v2 packages (database, graph, cypher) are correctly established to provide the underlying functionality.


9-31: Comprehensive type aliasing for v1 compatibility.

The type aliases effectively bridge v1 and v2 APIs by re-exporting core graph types. The selection covers all essential graph entities (nodes, relationships, paths, properties, etc.) needed for v1 compatibility.


33-64: Well-structured function and constant re-exports.

The re-export pattern maintains v1 API surface while delegating to v2 implementations. The inclusion of error variables ensures consistent error handling across the compatibility layer.


66-80: Database type aliases and constants correctly mapped.

The database-related type aliases and constants are properly mapped to their v2 counterparts. The index type constants correctly reference the underlying database.IndexType* values as confirmed in the relevant code snippets.


82-111: Generic property conversion function with proper type handling.

The AsProperties function provides a clean abstraction for converting different property map types to the unified Properties structure. The type switch correctly handles all supported input types and the helper function symbolMapToStringMap properly converts String keys to standard strings.

database/v1compat/query/model.go (2)

194-217: Well-designed type-safe KindsOf function.

The function properly handles different variable types and provides appropriate error handling using cypher.WithErrors. The logic correctly distinguishes between node and relationship references.


535-591: Complex but well-structured query building logic.

The SinglePartQuery function handles multiple expression types appropriately and provides proper error handling. The logic for assembling different query components (WHERE, RETURN, LIMIT, etc.) is comprehensive and well-organized.

Comment on lines +35 to 38
if connection, err := dawgs.Open(context.TODO(), pg.DriverName, dawgs.Config{
GraphQueryMemoryLimit: size.Gibibyte,
Pool: pgxPool,
ConnectionString: pgConnectionStr,
}); err != nil {

💡 Verification agent

🧩 Analysis chain

Pass testCtx to dawgs.Open instead of context.TODO()

Use the test’s cancellable context so open/handshakes honor deadlines/cancellation hooks, consistent with the rest of the test.

Apply this diff:

-    if connection, err := dawgs.Open(context.TODO(), pg.DriverName, dawgs.Config{
+    if connection, err := dawgs.Open(testCtx, pg.DriverName, dawgs.Config{
         GraphQueryMemoryLimit: size.Gibibyte,
         ConnectionString:      pgConnectionStr,
     }); err != nil {

Run this to double-check config fields and driver symbol existence:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Config struct (expect: ConnectionString, GraphQueryMemoryLimit):"
rg -n --no-heading $'^type\\s+Config\\s+struct\\s*\\{' -A 30

echo
echo "pg driver name symbol (expect: DriverName):"
rg -n --no-heading $'DriverName' --glob '!**/vendor/**'

Length of output: 2303


Use testCtx when opening the DB connection (replace context.TODO())

Use the test’s cancellable context so open/handshakes honor deadlines/cancellation hooks. Verified: dawgs.Open accepts a context and dawgs.Config contains GraphQueryMemoryLimit and ConnectionString; pg.DriverName exists.

  • File: cypher/models/pgsql/test/validation_integration_test.go (around lines 35–38)

Apply this diff:

-    if connection, err := dawgs.Open(context.TODO(), pg.DriverName, dawgs.Config{
+    if connection, err := dawgs.Open(testCtx, pg.DriverName, dawgs.Config{
         GraphQueryMemoryLimit: size.Gibibyte,
         ConnectionString:      pgConnectionStr,
     }); err != nil {
📝 Committable suggestion


Suggested change
if connection, err := dawgs.Open(context.TODO(), pg.DriverName, dawgs.Config{
GraphQueryMemoryLimit: size.Gibibyte,
Pool: pgxPool,
ConnectionString: pgConnectionStr,
}); err != nil {
if connection, err := dawgs.Open(testCtx, pg.DriverName, dawgs.Config{
GraphQueryMemoryLimit: size.Gibibyte,
ConnectionString: pgConnectionStr,
}); err != nil {
🤖 Prompt for AI Agents
In cypher/models/pgsql/test/validation_integration_test.go around lines 35 to
38, replace the call to context.TODO() passed into dawgs.Open with the test's
cancellable context (testCtx) so the open/handshake respects test
cancellation/deadlines; if testCtx is not already defined in this test, create
it with context.WithTimeout or context.WithCancel before use and defer its
cancel to ensure cleanup.

Comment on lines +203 to +229
keyBuilder.WriteString(edge.StartID.String())
keyBuilder.WriteString(edge.EndID.String())
keyBuilder.WriteString(edge.Kind.String())

key := keyBuilder.String()

if existingPropertiesIdx, hasExisting := s.keyToEdgeID[key]; hasExisting {
s.edgePropertiesBatch[existingPropertiesIdx].Merge(edge.Properties)
} else {
var (
startID = edge.StartID.Uint64()
edgeID = edge.ID.Uint64()
endID = edge.EndID.Uint64()
edgeProperties = edge.Properties.Clone()
)

if edgeKindID, err := kindMapper.MapKind(ctx, edge.Kind); err != nil {
return err
} else {
s.relationshipUpdateBatch.Add(startID, endID, edgeKindID)
}

s.keyToEdgeID[key] = edgeID

s.edgePropertiesBatch = append(s.edgePropertiesBatch, edgeProperties)
s.edgePropertiesIndex[edgeID] = len(s.edgePropertiesBatch) - 1
}

⚠️ Potential issue

Deduplication bug: incorrect keying and property-batch indexing can corrupt merges

Two issues:

  • Key construction lacks delimiters, causing potential collisions (e.g., IDs 1, 23, 4 and 12, 3, 4 both concatenate to the same key "1234").
  • existingPropertiesIdx is an edge ID, but it’s used as a slice index; should look up the index via edgePropertiesIndex.

Apply this diff:

-    keyBuilder := strings.Builder{}
-
-    keyBuilder.WriteString(edge.StartID.String())
-    keyBuilder.WriteString(edge.EndID.String())
-    keyBuilder.WriteString(edge.Kind.String())
+    keyBuilder := strings.Builder{}
+    keyBuilder.Grow(len(edge.StartID.String()) + len(edge.EndID.String()) + len(edge.Kind.String()) + 2)
+    keyBuilder.WriteString(edge.StartID.String())
+    keyBuilder.WriteByte('|')
+    keyBuilder.WriteString(edge.EndID.String())
+    keyBuilder.WriteByte('|')
+    keyBuilder.WriteString(edge.Kind.String())
@@
-    if existingPropertiesIdx, hasExisting := s.keyToEdgeID[key]; hasExisting {
-        s.edgePropertiesBatch[existingPropertiesIdx].Merge(edge.Properties)
+    if existingEdgeID, hasExisting := s.keyToEdgeID[key]; hasExisting {
+        if idx, ok := s.edgePropertiesIndex[existingEdgeID]; ok {
+            s.edgePropertiesBatch[idx].Merge(edge.Properties)
+        } else {
+            // Fallback: should not happen if maps are kept in sync; create new entry
+            var (
+                startID        = edge.StartID.Uint64()
+                endID          = edge.EndID.Uint64()
+                edgeProperties = edge.Properties.Clone()
+            )
+            if edgeKindID, err := kindMapper.MapKind(ctx, edge.Kind); err != nil {
+                return err
+            } else {
+                s.relationshipUpdateBatch.Add(startID, endID, edgeKindID)
+            }
+            s.keyToEdgeID[key] = existingEdgeID
+            s.edgePropertiesBatch = append(s.edgePropertiesBatch, edgeProperties)
+            s.edgePropertiesIndex[existingEdgeID] = len(s.edgePropertiesBatch) - 1
+        }
     } else {
         var (
             startID        = edge.StartID.Uint64()
             edgeID         = edge.ID.Uint64()
             endID          = edge.EndID.Uint64()
             edgeProperties = edge.Properties.Clone()
         )
@@
-        s.edgePropertiesBatch = append(s.edgePropertiesBatch, edgeProperties)
-        s.edgePropertiesIndex[edgeID] = len(s.edgePropertiesBatch) - 1
+        s.edgePropertiesBatch = append(s.edgePropertiesBatch, edgeProperties)
+        s.edgePropertiesIndex[edgeID] = len(s.edgePropertiesBatch) - 1
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Original:

	keyBuilder.WriteString(edge.StartID.String())
	keyBuilder.WriteString(edge.EndID.String())
	keyBuilder.WriteString(edge.Kind.String())

	key := keyBuilder.String()

	if existingPropertiesIdx, hasExisting := s.keyToEdgeID[key]; hasExisting {
		s.edgePropertiesBatch[existingPropertiesIdx].Merge(edge.Properties)
	} else {
		var (
			startID        = edge.StartID.Uint64()
			edgeID         = edge.ID.Uint64()
			endID          = edge.EndID.Uint64()
			edgeProperties = edge.Properties.Clone()
		)

		if edgeKindID, err := kindMapper.MapKind(ctx, edge.Kind); err != nil {
			return err
		} else {
			s.relationshipUpdateBatch.Add(startID, endID, edgeKindID)
		}

		s.keyToEdgeID[key] = edgeID

		s.edgePropertiesBatch = append(s.edgePropertiesBatch, edgeProperties)
		s.edgePropertiesIndex[edgeID] = len(s.edgePropertiesBatch) - 1
	}

Suggested:

	keyBuilder := strings.Builder{}
	keyBuilder.Grow(len(edge.StartID.String()) + len(edge.EndID.String()) + len(edge.Kind.String()) + 2)
	keyBuilder.WriteString(edge.StartID.String())
	keyBuilder.WriteByte('|')
	keyBuilder.WriteString(edge.EndID.String())
	keyBuilder.WriteByte('|')
	keyBuilder.WriteString(edge.Kind.String())

	key := keyBuilder.String()

	if existingEdgeID, hasExisting := s.keyToEdgeID[key]; hasExisting {
		if idx, ok := s.edgePropertiesIndex[existingEdgeID]; ok {
			s.edgePropertiesBatch[idx].Merge(edge.Properties)
		} else {
			// Fallback: should not happen if maps are kept in sync; create new entry
			var (
				startID        = edge.StartID.Uint64()
				endID          = edge.EndID.Uint64()
				edgeProperties = edge.Properties.Clone()
			)

			if edgeKindID, err := kindMapper.MapKind(ctx, edge.Kind); err != nil {
				return err
			} else {
				s.relationshipUpdateBatch.Add(startID, endID, edgeKindID)
			}

			s.keyToEdgeID[key] = existingEdgeID
			s.edgePropertiesBatch = append(s.edgePropertiesBatch, edgeProperties)
			s.edgePropertiesIndex[existingEdgeID] = len(s.edgePropertiesBatch) - 1
		}
	} else {
		var (
			startID        = edge.StartID.Uint64()
			edgeID         = edge.ID.Uint64()
			endID          = edge.EndID.Uint64()
			edgeProperties = edge.Properties.Clone()
		)

		if edgeKindID, err := kindMapper.MapKind(ctx, edge.Kind); err != nil {
			return err
		} else {
			s.relationshipUpdateBatch.Add(startID, endID, edgeKindID)
		}

		s.keyToEdgeID[key] = edgeID
		s.edgePropertiesBatch = append(s.edgePropertiesBatch, edgeProperties)
		s.edgePropertiesIndex[edgeID] = len(s.edgePropertiesBatch) - 1
	}
🤖 Prompt for AI Agents
In database/pg/batch.go around lines 203 to 229, the deduplication key and
property-batch indexing are incorrect: add a clear delimiter between StartID,
EndID and Kind when building the key (e.g., include "|" or ":" between each
value) to avoid collisions, and when an existing entry is found, treat the
stored value as an edge ID and look up the actual slice index via
s.edgePropertiesIndex[existingEdgeID] before calling Merge (do not use the
stored value directly as a slice index); when inserting a new entry, continue to
set s.keyToEdgeID[key] = edgeID and s.edgePropertiesIndex[edgeID] =
len(s.edgePropertiesBatch)-1 after appending the cloned properties.

Comment on lines 20 to 27
		for idx, untypedElement := range typedValue {
			if typedElement, typeOK := untypedElement.(int16); typeOK {
				kindIDs[idx] = typedElement
			} else {
				// Type assertion failed, mark as invalid type
				validType = false
				break
			}

⚠️ Potential issue

Fix: invalid element in []any still triggers mapping; gate mapping by validType and avoid partial/zero-filled kindIDs

When a non-int16 element is encountered in the []any case, you set validType = false and break the loop. However, mapping is still attempted because len(kindIDs) > 0, potentially passing zero-valued entries to MapKindIDs. If the mapper tolerates 0 or returns nil error, this will incorrectly return (mappedKinds, true) despite an invalid element being present.

Prevent mapping on invalid input and avoid leaking partial/zero-initialized IDs.

Apply this diff:

@@
-        for idx, untypedElement := range typedValue {
-            if typedElement, typeOK := untypedElement.(int16); typeOK {
-                kindIDs[idx] = typedElement
-            } else {
-                // Type assertion failed, mark as invalid type
-                validType = false
-                break
-            }
-        }
+        for idx, untypedElement := range typedValue {
+            if typedElement, typeOK := untypedElement.(int16); typeOK {
+                kindIDs[idx] = typedElement
+            } else {
+                // Type assertion failed, mark as invalid type
+                validType = false
+                // Ensure we don't attempt mapping with a partially filled/zeroed slice
+                kindIDs = nil
+                break
+            }
+        }
@@
-    // Guard to prevent unnecessary thrashing of critical sections if there are no kind IDs to resolve
-    if len(kindIDs) > 0 {
+    // Guard to prevent unnecessary thrashing of critical sections if there are no kind IDs to resolve
+    if validType && len(kindIDs) > 0 {
         // Ignoring the error here is intentional. Failure to map the kinds here does not imply a fatal error.
         if mappedKinds, err := kindMapper.MapKindIDs(ctx, kindIDs); err == nil {
             return mappedKinds, true
         }
     }

Also applies to: 38-44

🤖 Prompt for AI Agents
In database/pg/mapper.go around lines 20-27 (and similarly lines 38-44), the
loop that converts []any to []int16 sets validType = false and breaks on a bad
element but mapping still proceeds because len(kindIDs) > 0; change control flow
so that after the conversion loop you only call MapKindIDs when validType is
true (and kindIDs is non-empty), and if validType is false return (nil, false)
immediately to avoid passing partially filled or zero-initialized IDs to the
mapper; apply the same guard to the second occurrence at lines 38-44.

Comment on lines +141 to +146
func Property(qualifier cypher.SyntaxNode, name string) *cypher.PropertyLookup {
	return &cypher.PropertyLookup{
		Atom:   qualifier.(*cypher.Variable),
		Symbol: name,
	}
}

⚠️ Potential issue

Unsafe type assertion without validation.

The Property function performs an unchecked type assertion on Line 143 that could panic if qualifier is not a *cypher.Variable.

Apply this fix to add type safety:

 func Property(qualifier cypher.SyntaxNode, name string) *cypher.PropertyLookup {
+	variable, ok := qualifier.(*cypher.Variable)
+	if !ok {
+		return cypher.WithErrors(&cypher.PropertyLookup{}, fmt.Errorf("qualifier must be a Variable, got %T", qualifier))
+	}
 	return &cypher.PropertyLookup{
-		Atom:   qualifier.(*cypher.Variable),
+		Atom:   variable,
 		Symbol: name,
 	}
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In database/v1compat/query/model.go around lines 141 to 146, the function
currently does an unchecked type assertion on qualifier which can panic; change
it to perform a safe type assertion (v, ok := qualifier.(*cypher.Variable)) and
if not ok handle the case gracefully (e.g., return nil) instead of asserting,
then construct and return the PropertyLookup using v when ok. Ensure the
function signature and callers tolerate a nil return or adjust them accordingly.

Comment on lines +382 to +397
func Delete(leaves ...cypher.SyntaxNode) *cypher.UpdatingClause {
	deleteClause := &cypher.Delete{
		Detach: true,
	}

	for _, leaf := range leaves {
		switch leaf.(*cypher.Variable).Symbol {
		case EdgeSymbol, EdgeStartSymbol, EdgeEndSymbol:
			deleteClause.Detach = false
		}

		deleteClause.Expressions = append(deleteClause.Expressions, leaf)
	}

	return cypher.NewUpdatingClause(deleteClause)
}

⚠️ Potential issue

Potential panic in Delete function.

The function assumes all leaves are *cypher.Variable without type checking, which could cause a panic on Line 388.

Apply this fix to add type safety:

 func Delete(leaves ...cypher.SyntaxNode) *cypher.UpdatingClause {
 	deleteClause := &cypher.Delete{
 		Detach: true,
 	}

 	for _, leaf := range leaves {
-		switch leaf.(*cypher.Variable).Symbol {
+		variable, ok := leaf.(*cypher.Variable)
+		if !ok {
+			return cypher.WithErrors(cypher.NewUpdatingClause(deleteClause), fmt.Errorf("delete target must be a Variable, got %T", leaf))
+		}
+		
+		switch variable.Symbol {
 		case EdgeSymbol, EdgeStartSymbol, EdgeEndSymbol:
 			deleteClause.Detach = false
 		}

 		deleteClause.Expressions = append(deleteClause.Expressions, leaf)
 	}

 	return cypher.NewUpdatingClause(deleteClause)
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In database/v1compat/query/model.go around lines 382 to 397, the loop in Delete
assumes each leaf is a *cypher.Variable and directly dereferences it, which can
panic; change the loop to perform a safe type assertion or type switch: if the
leaf is a *cypher.Variable inspect v.Symbol and set deleteClause.Detach = false
for EdgeSymbol, EdgeStartSymbol, EdgeEndSymbol, otherwise do not access Symbol;
in all cases append the original leaf to deleteClause.Expressions so
non-variable nodes are handled safely without panics.

Comment on lines +117 to +130
func (s *v1Wrapper) BatchOperation(ctx context.Context, batchDelegate BatchDelegate) error {
	return s.v2DB.Session(ctx, func(ctx context.Context, driver database.Driver) error {
		var (
			batchWrapper = wrapDriverToBatch(ctx, driver)
			delegateErr  = batchDelegate(batchWrapper)
		)

		if delegateErr != nil {
			return delegateErr
		}

		return batchWrapper.tryFlush(0)
	})
}

💡 Verification agent

🧩 Analysis chain

BatchOperation ignores configured batchWriteSize; propagate it to the batch wrapper

SetBatchWriteSize(..) on the wrapper is not used. Wire it into the batch wrapper so the user’s configuration takes effect.

 func (s *v1Wrapper) BatchOperation(ctx context.Context, batchDelegate BatchDelegate) error {
     return s.v2DB.Session(ctx, func(ctx context.Context, driver database.Driver) error {
         var (
             batchWrapper = wrapDriverToBatch(ctx, driver)
+            // Propagate configured batch write size if set
+            _ = func() {
+                if s.batchWriteSize > 0 {
+                    batchWrapper.batchWriteSize = s.batchWriteSize
+                }
+            }()
             delegateErr  = batchDelegate(batchWrapper)
         )

Also, SetWriteFlushSize is currently unused in this wrapper; confirm whether it should be applied anywhere (e.g., transaction-scoped writers) or remove it from the interface if obsolete.


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Ripgrep: key symbols =="
rg -n --hidden -S -e "type v1Wrapper" -e "v1Wrapper" -e "BatchOperation(" -e "wrapDriverToBatch" -e "batchWriteSize" -e "SetBatchWriteSize" -e "SetWriteFlushSize" -e "tryFlush" -e "BatchDelegate" || true

echo
echo "== Show database/v1compat/wrapper.go (if present) =="
if [ -f database/v1compat/wrapper.go ]; then
  sed -n '1,240p' database/v1compat/wrapper.go || true
else
  echo "database/v1compat/wrapper.go not found"
fi

echo
echo "== Find implementation(s) of wrapDriverToBatch =="
rg -n --hidden -S "wrapDriverToBatch" || true
rg -n --hidden -S "func .*wrapDriverToBatch" || true

echo
echo "== Show files that define batch wrapper types or tryFlush =="
rg -n --hidden -S -e "type .*Batch" -e "tryFlush" -e "batchWriteSize" || true

echo
echo "== Show any methods named SetBatchWriteSize or SetWriteFlushSize =="
rg -n --hidden -S -e "SetBatchWriteSize" -e "SetWriteFlushSize" || true

echo
echo "== Grep for batch-wrapper-looking identifiers (writer, flush, batch) =="
rg -n --hidden -S -e "batch" -e "flush" -e "writer" | sed -n '1,200p' || true

Length of output: 30288


Propagate configured batchWriteSize into driver batch wrapper

SetBatchWriteSize on v1Wrapper is never applied: BatchOperation creates a driverBatchWrapper that defaults to 2000 and never receives s.batchWriteSize. Apply the configured value when constructing the wrapper so flush thresholds inside the batch honor the caller's setting.

  • Files to update:
    • database/v1compat/wrapper.go — v1Wrapper.BatchOperation (set batchWrapper.batchWriteSize from s.batchWriteSize)
  • Note: SetWriteFlushSize on v1Wrapper is currently unused in this file; confirm whether it should be applied (e.g., to transaction-scoped writers) or removed from the interface.

Suggested patch:

 func (s *v1Wrapper) BatchOperation(ctx context.Context, batchDelegate BatchDelegate) error {
     return s.v2DB.Session(ctx, func(ctx context.Context, driver database.Driver) error {
-        var (
-            batchWrapper = wrapDriverToBatch(ctx, driver)
-            delegateErr  = batchDelegate(batchWrapper)
-        )
+        var (
+            batchWrapper = wrapDriverToBatch(ctx, driver)
+        )
+
+        // Propagate configured batch write size if set
+        if s.batchWriteSize > 0 {
+            batchWrapper.batchWriteSize = s.batchWriteSize
+        }
+
+        delegateErr := batchDelegate(batchWrapper)
 
         if delegateErr != nil {
             return delegateErr
         }
 
         return batchWrapper.tryFlush(0)
     })
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Original:

func (s *v1Wrapper) BatchOperation(ctx context.Context, batchDelegate BatchDelegate) error {
	return s.v2DB.Session(ctx, func(ctx context.Context, driver database.Driver) error {
		var (
			batchWrapper = wrapDriverToBatch(ctx, driver)
			delegateErr  = batchDelegate(batchWrapper)
		)

		if delegateErr != nil {
			return delegateErr
		}

		return batchWrapper.tryFlush(0)
	})
}

Suggested:

func (s *v1Wrapper) BatchOperation(ctx context.Context, batchDelegate BatchDelegate) error {
	return s.v2DB.Session(ctx, func(ctx context.Context, driver database.Driver) error {
		var (
			batchWrapper = wrapDriverToBatch(ctx, driver)
		)

		// Propagate configured batch write size if set
		if s.batchWriteSize > 0 {
			batchWrapper.batchWriteSize = s.batchWriteSize
		}

		delegateErr := batchDelegate(batchWrapper)

		if delegateErr != nil {
			return delegateErr
		}

		return batchWrapper.tryFlush(0)
	})
}
🤖 Prompt for AI Agents
In database/v1compat/wrapper.go around lines 117 to 130, BatchOperation
constructs a driver batch wrapper that currently uses the wrapper's default
batch size instead of the configured s.batchWriteSize; set the created
batchWrapper's batchWriteSize field to s.batchWriteSize before returning so
flush thresholds respect the caller's configuration, and also evaluate whether
SetWriteFlushSize should be applied similarly to transaction-scoped writers or
removed from the interface if unused.

Comment on lines +467 to +494
func (s *builder) Delete(deleteItems ...any) QueryBuilder {
	for _, nextDelete := range deleteItems {
		switch typedNextUpdate := nextDelete.(type) {
		case QualifiedExpression:
			qualifier := typedNextUpdate.qualifier()

			switch qualifier {
			case Identifiers.node, Identifiers.start, Identifiers.end:
				s.detachDelete = true
			}

			s.deleteItems = append(s.deleteItems, qualifier)

		case *cypher.Variable:
			switch typedNextUpdate.Symbol {
			case Identifiers.node, Identifiers.start, Identifiers.end:
				s.detachDelete = true
			}

			s.deleteItems = append(s.deleteItems, typedNextUpdate)

		default:
			s.trackError(fmt.Errorf("unknown delete type: %T", nextDelete))
		}
	}

	return s
}

⚠️ Potential issue

Bug: Comparing a cypher.Expression to string identifiers in Delete()

In the QualifiedExpression branch, you’re switching on a cypher.Expression and comparing it to string symbols (Identifiers.node/start/end). Those comparisons can never match (and risk a compile-time type mismatch), so s.detachDelete is never set in this branch. Assert the qualifier to *cypher.Variable and compare its Symbol instead.

Apply this diff:

     case QualifiedExpression:
-        qualifier := typedNextUpdate.qualifier()
-
-        switch qualifier {
-        case Identifiers.node, Identifiers.start, Identifiers.end:
-            s.detachDelete = true
-        }
-
-        s.deleteItems = append(s.deleteItems, qualifier)
+        qualifier := typedNextUpdate.qualifier()
+        if v, ok := qualifier.(*cypher.Variable); ok {
+            switch v.Symbol {
+            case Identifiers.node, Identifiers.start, Identifiers.end:
+                s.detachDelete = true
+            }
+        }
+        s.deleteItems = append(s.deleteItems, qualifier)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Original:

func (s *builder) Delete(deleteItems ...any) QueryBuilder {
	for _, nextDelete := range deleteItems {
		switch typedNextUpdate := nextDelete.(type) {
		case QualifiedExpression:
			qualifier := typedNextUpdate.qualifier()

			switch qualifier {
			case Identifiers.node, Identifiers.start, Identifiers.end:
				s.detachDelete = true
			}

			s.deleteItems = append(s.deleteItems, qualifier)

		case *cypher.Variable:
			switch typedNextUpdate.Symbol {
			case Identifiers.node, Identifiers.start, Identifiers.end:
				s.detachDelete = true
			}

			s.deleteItems = append(s.deleteItems, typedNextUpdate)

		default:
			s.trackError(fmt.Errorf("unknown delete type: %T", nextDelete))
		}
	}

	return s
}

Suggested:

func (s *builder) Delete(deleteItems ...any) QueryBuilder {
	for _, nextDelete := range deleteItems {
		switch typedNextUpdate := nextDelete.(type) {
		case QualifiedExpression:
			qualifier := typedNextUpdate.qualifier()

			if v, ok := qualifier.(*cypher.Variable); ok {
				switch v.Symbol {
				case Identifiers.node, Identifiers.start, Identifiers.end:
					s.detachDelete = true
				}
			}

			s.deleteItems = append(s.deleteItems, qualifier)

		case *cypher.Variable:
			switch typedNextUpdate.Symbol {
			case Identifiers.node, Identifiers.start, Identifiers.end:
				s.detachDelete = true
			}

			s.deleteItems = append(s.deleteItems, typedNextUpdate)

		default:
			s.trackError(fmt.Errorf("unknown delete type: %T", nextDelete))
		}
	}

	return s
}
🤖 Prompt for AI Agents
In query/query.go around lines 467 to 494, the QualifiedExpression branch is
switching on a cypher.Expression and comparing it to string Identifiers
(node/start/end), which never matches; change the handling to assert the
QualifiedExpression to *cypher.Variable, use its Symbol for comparison (like the
*cypher.Variable case), set s.detachDelete when Symbol matches
Identifiers.node/start/end, and append the actual *cypher.Variable to
s.deleteItems; if the assertion fails, call s.trackError with an appropriate
unknown-type error.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🔭 Outside diff range comments (7)
database/v1compat/switch.go (4)

166-167: Context parameter mismatch in AssertSchema

The method creates an internalCtx but passes the original ctx to the underlying database's AssertSchema method instead of the managed internal context.

-		return s.currentDB.AssertSchema(ctx, dbSchema)
+		return s.currentDB.AssertSchema(internalCtx, dbSchema)

192-193: Context parameter mismatch in Close

Similar to the AssertSchema issue, the method creates an internalCtx but passes the original ctx to the underlying database's Close method.

-		return s.currentDB.Close(ctx)
+		return s.currentDB.Close(internalCtx)

205-206: Context parameter mismatch in FetchKinds

The method creates an internalCtx but passes the original ctx to the underlying database's FetchKinds method.

-		return s.currentDB.FetchKinds(ctx)
+		return s.currentDB.FetchKinds(internalCtx)

218-219: Context parameter mismatch in RefreshKinds

The method creates an internalCtx but passes the original ctx to the underlying database's RefreshKinds method.

-		return s.currentDB.RefreshKinds(ctx)
+		return s.currentDB.RefreshKinds(internalCtx)
database/v1compat/query/sort.go (1)

19-31: Handle unknown directions and nil SortCriteria defensively

If Direction is unset or unexpected, the item is dropped silently. Also, a nil SortCriteria may end up appended. Prefer an explicit default or skip.

Apply this diff:

 func (s SortItems) FormatCypherOrder() *cypher.Order {
-	var orderCriteria []cypher.SyntaxNode
+	var orderCriteria []cypher.SyntaxNode

 	for _, sortItem := range s {
+		// Skip nil criteria
+		if sortItem.SortCriteria == nil {
+			continue
+		}
 		switch sortItem.Direction {
 		case SortDirectionAscending:
 			orderCriteria = append(orderCriteria, Order(sortItem.SortCriteria, Ascending()))
 		case SortDirectionDescending:
 			orderCriteria = append(orderCriteria, Order(sortItem.SortCriteria, Descending()))
+		default:
+			// Default to ascending when unspecified/unknown
+			orderCriteria = append(orderCriteria, Order(sortItem.SortCriteria, Ascending()))
 		}
 	}
-	return OrderBy(orderCriteria...)
+	if len(orderCriteria) == 0 {
+		return nil
+	}
+	return OrderBy(orderCriteria...)
 }
cypher/models/pgsql/test/testcase.go (1)

180-206: Ensure both Explain and Close errors are surfaced; prefer defer for Close

Currently, Close errors overshadow Explain errors, and if both fail only Close is returned. Capture both and join; also defer Close for clarity.

Apply this diff:

 func (s *TranslationTestCase) AssertLive(ctx context.Context, t *testing.T, db database.Instance) {
@@
-        require.NoError(t, db.Session(ctx, func(ctx context.Context, driver database.Driver) error {
-            result := driver.Explain(ctx, regularQuery, s.CypherParams)
-
-            if err := result.Close(ctx); err != nil {
-                return err
-            }
-
-            return result.Error()
-        }))
+        require.NoError(t, db.Session(ctx, func(ctx context.Context, driver database.Driver) error {
+            result := driver.Explain(ctx, regularQuery, s.CypherParams)
+            defer func() { _ = result.Close(ctx) }()
+
+            explainErr := result.Error()
+            closeErr := result.Close(ctx)
+
+            if explainErr != nil && closeErr != nil {
+                // Go 1.20+; if earlier, return explainErr or wrap manually
+                return errors.Join(explainErr, closeErr)
+            }
+            if explainErr != nil {
+                return explainErr
+            }
+            return closeErr
+        }))

Note: add the following import if not already present:

import "errors"
database/v1compat/result.go (1)

56-81: Consider adding nil check for marshaller function.

The NewResultIterator function should validate that the marshaller parameter is not nil to prevent runtime panics.

 func NewResultIterator[T any](ctx context.Context, result database.Result, marshaller ResultMarshaller[T]) Cursor[T] {
+	if marshaller == nil {
+		panic("marshaller cannot be nil")
+	}
 	var (
 		cursorCtx, cancelFunc = context.WithCancel(ctx)
 		resultIterator        = &ResultIterator[T]{
♻️ Duplicate comments (13)
traversal/traversal.go (1)

151-207: Inconsistent criteria handling in fetchFunc

The fetchFunc accepts a single criteria parameter but treats it as if it could be multiple by initializing allCriteria as a slice. However, this slice always contains exactly one element (the passed criteria), and the actual expansion criteria from currentExpansion.criteria are never incorporated into the query.

The function should either:

  1. Accept multiple criteria and combine them properly, or
  2. Combine the single criteria with currentExpansion.criteria before building the query

Here's a fix that properly combines the criteria:

-		fetchFunc = func(criteria cypher.SyntaxNode, direction graph.Direction) error {
+		fetchFunc = func(expansionCriteria []cypher.SyntaxNode, direction graph.Direction) error {
 			var (
 				queryBuilder = query.New()
-				allCriteria  = []cypher.SyntaxNode{criteria}
+				allCriteria  = make([]cypher.SyntaxNode, 0, len(expansionCriteria)+1)
 			)
 
+			// Add expansion criteria if present
+			allCriteria = append(allCriteria, expansionCriteria...)
+
 			switch direction {
 			case graph.DirectionInbound:
-				queryBuilder.Where(append(allCriteria, query.Start().ID().Equals(segment.Node.ID))...).Return(
+				allCriteria = append(allCriteria, query.Start().ID().Equals(segment.Node.ID))
+				queryBuilder.Where(allCriteria...).Return(
 					query.Relationship(),
 					query.End(),
 				)
 
 			case graph.DirectionOutbound:
-				queryBuilder.Where(append(allCriteria, query.End().ID().Equals(segment.Node.ID))...).Return(
+				allCriteria = append(allCriteria, query.End().ID().Equals(segment.Node.ID))
+				queryBuilder.Where(allCriteria...).Return(
 					query.Relationship(),
 					query.Start(),
 				)

And update the call sites:

-			if err := fetchFunc(currentExpansion.criteria, fetchDirection); err != nil {
+			if err := fetchFunc(currentExpansion.criteria, fetchDirection); err != nil {
-				if err := fetchFunc(nextExpansion.criteria, fetchDirection); err != nil {
+				if err := fetchFunc(nextExpansion.criteria, fetchDirection); err != nil {
registry.go (2)

7-9: Import fmt and sync to enable error wrapping and a concurrency-safe registry

You’re using a global map for driver constructors; adding fmt and sync imports will support wrapped errors and a RWMutex for safe concurrent access.

 import (
 	"context"
 	"errors"
+	"fmt"
+	"sync"
 
 	"github.com/specterops/dawgs/database"
 	"github.com/specterops/dawgs/database/v1compat"
 	"github.com/specterops/dawgs/util/size"
 )

Additionally apply the following changes outside this hunk to guard the registry and avoid double-registration:

// At package scope:
var (
	availableDrivers = map[string]DriverConstructor{}
	registryMu       sync.RWMutex
)

func Register(driverName string, constructor DriverConstructor) {
	registryMu.Lock()
	defer registryMu.Unlock()

	if _, exists := availableDrivers[driverName]; exists {
		panic(fmt.Sprintf("driver %q already registered", driverName))
	}
	availableDrivers[driverName] = constructor
}

33-39: Guard registry lookup with RLock and wrap missing-driver error with context

Avoid races on the global map and provide the driver name when missing while preserving the sentinel with %w.

-func Open(ctx context.Context, driverName string, config Config) (database.Instance, error) {
-	if driverConstructor, hasDriver := availableDrivers[driverName]; !hasDriver {
-		return nil, ErrDriverMissing
-	} else {
-		return driverConstructor(ctx, config)
-	}
-}
+func Open(ctx context.Context, driverName string, config Config) (database.Instance, error) {
+	registryMu.RLock()
+	driverConstructor, hasDriver := availableDrivers[driverName]
+	registryMu.RUnlock()
+
+	if !hasDriver {
+		return nil, fmt.Errorf("%w: %s", ErrDriverMissing, driverName)
+	}
+
+	return driverConstructor(ctx, config)
+}
database/pg/database.go (1)

119-125: Typo from earlier review is corrected (“unable”); helper reads cleanly

Message now reads correctly and the type assertion keeps the API honest.

database/v1compat/wrapper.go (4)

132-151: Fix potential race condition in SetDefaultGraph.

The SetDefaultGraph method modifies the shared s.schema field without any synchronization, which could lead to race conditions if multiple goroutines call this method concurrently.
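
A minimal sketch of the intended guard; the field names, lock, and simplified signature below are illustrative, not the wrapper's actual layout:

type v1Wrapper struct {
	schemaMu sync.Mutex
	schema   database.Schema
	// ... remaining fields elided
}

func (s *v1Wrapper) SetDefaultGraph(defaultGraphName string) {
	s.schemaMu.Lock()
	defer s.schemaMu.Unlock()

	// Mutate the shared schema only while holding the lock.
	s.schema.DefaultGraphName = defaultGraphName
}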


202-252: Fix inconsistent buffer flushing logic.

The tryFlush method uses a >= comparison when checking buffer sizes against the threshold; confirm this matches the intended flush point (flushing once the limit is reached, rather than only after it is exceeded) and keep the check consistent across the buffered write paths.


348-375: Fix missing MATCH clause in UpdateNode query.

The UpdateNode method builds an update query but doesn't include a MATCH clause to identify the node to update. This will likely fail or update the wrong nodes.


399-418: Fix missing MATCH clause in UpdateRelationship query.

The UpdateRelationship method has critical issues: it doesn't properly match the relationship to update (missing MATCH clause).

database/pg/batch.go (3)

252-262: Inconsistent error handling for result.Close().

There's inconsistent error handling for result.Close() across methods. Line 252 uses defer result.Close() while lines 297, 334, 356, 366, 376 call it directly without checking the error.

Based on the learnings, if cleanup operations are intentionally best-effort and don't require error checking, this should be documented or made consistent throughout the file.


361-369: Add validation for empty batch operations.

The delete operations don't validate if the batch is empty before executing the query.


371-379: Add validation for empty batch operations.

The delete operations don't validate if the batch is empty before executing the query.
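
A sketch of the suggested guard; the type and field names here are hypothetical stand-ins for the actual batch buffers:

type batch struct {
	nodeDeletionBuffer []uint64
}

func (s *batch) flushNodeDeletions(ctx context.Context) error {
	// Skip the round trip entirely when nothing was queued for deletion.
	if len(s.nodeDeletionBuffer) == 0 {
		return nil
	}

	// ... execute the delete statement with the buffered IDs ...
	return nil
}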

database/v1compat/traversal/traversal.go (1)

213-213: Potential logic error in line 213

The condition tag.depth > 0 && currentExpansion.minDepth == 0 || tag.depth >= currentExpansion.minDepth appears to have incorrect operator precedence. The && operator has higher precedence than ||, which may lead to unexpected behavior.
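
For reference, Go binds && tighter than ||, so the condition is parsed as the first grouping below; the second shows an alternative reading that should be confirmed against the intended traversal semantics (names reuse the fields from the condition above):

// How the compiler parses the condition as written:
asWritten := (tag.depth > 0 && currentExpansion.minDepth == 0) || tag.depth >= currentExpansion.minDepth

// Alternative grouping, if the intent is "past the start node AND (no minimum configured OR minimum reached)":
alternative := tag.depth > 0 && (currentExpansion.minDepth == 0 || tag.depth >= currentExpansion.minDepth)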

database/v1compat/query/model.go (1)

12-22: Potential panic risk in generic type assertion

The convertCriteria function performs unchecked type assertions which could panic if the conversion fails.
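
One way to surface the failure instead of panicking; a sketch only, since convertCriteria's actual signature isn't shown here (requires the fmt import):

func convertCriteria[T any](criteria ...any) ([]T, error) {
	converted := make([]T, 0, len(criteria))

	for _, next := range criteria {
		if typed, ok := next.(T); ok {
			converted = append(converted, typed)
		} else {
			return nil, fmt.Errorf("expected criteria convertible to %T, got %T", *new(T), next)
		}
	}

	return converted, nil
}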

🧹 Nitpick comments (20)
query/query.go (1)

758-769: Pattern preparation logic

The pattern preparation distinguishes between node and relationship patterns, with special support for shortest path queries. The error message when neither pattern type is identified could be more specific.

-			return nil, fmt.Errorf("query has no node and relationship query identifiers specified")
+			return nil, fmt.Errorf("query must specify either node identifiers (%s, %s, %s) or relationship identifiers (%s, %s, %s)", 
+				Identifiers.node, Identifiers.start, Identifiers.end,
+				Identifiers.path, Identifiers.relationship, Identifiers.end)
database/pg/types.go (2)

216-237: Clearer type error for nodes: good; consider preallocating and resetting Nodes to avoid residual state

The new error message is helpful. Minor improvement: defensively reset and preallocate s.Nodes before the loop to prevent accidental accumulation if FromMap is called more than once on the same instance and to reduce allocations.

Apply this diff:

 func (s *pathComposite) FromMap(compositeMap map[string]any) error {
   if rawNodes, hasNodes := compositeMap["nodes"]; hasNodes {
     if typedRawNodes, typeOK := rawNodes.([]any); !typeOK {
       return fmt.Errorf("expected nodes to be []any but got %T", rawNodes)
     } else {
+      // Reset and preallocate to avoid residual state and reduce allocations
+      s.Nodes = make([]nodeComposite, 0, len(typedRawNodes))
       for _, rawNode := range typedRawNodes {
         switch typedNode := rawNode.(type) {
         case map[string]any:
           var node nodeComposite

           if err := node.FromMap(typedNode); err != nil {
             return err
           }

           s.Nodes = append(s.Nodes, node)

         default:
           return fmt.Errorf("unexpected type for raw node: %T", rawNode)
         }
       }
     }
   }

239-259: Clearer type error for edges: good; mirror the preallocation and fix a tiny naming nit

Same preallocation advice for s.Edges. Also, the variable name typedNode inside the edges loop is a minor naming mismatch; typedEdge would better reflect the content.

Apply this diff:

   if rawEdges, hasEdges := compositeMap["edges"]; hasEdges {
     if typedRawEdges, typeOK := rawEdges.([]any); !typeOK {
       return fmt.Errorf("expected edges to be []any but got %T", rawEdges)
     } else {
+      // Reset and preallocate to avoid residual state and reduce allocations
+      s.Edges = make([]edgeComposite, 0, len(typedRawEdges))
       for _, rawEdge := range typedRawEdges {
-        switch typedNode := rawEdge.(type) {
+        switch typedEdge := rawEdge.(type) {
         case map[string]any:
           var edge edgeComposite

-          if err := edge.FromMap(typedNode); err != nil {
+          if err := edge.FromMap(typedEdge); err != nil {
             return err
           }

           s.Edges = append(s.Edges, edge)

         default:
           return fmt.Errorf("unexpected type for raw edge: %T", rawEdge)
         }
       }
     }
   }
cypher/models/pgsql/format/format.go (1)

123-128: Escaping strings and supporting []string arrays look good

Doubling single quotes avoids broken SQL/injection. Delegating []string through formatSlice(pgsql.TextArray) is consistent.

Minor nit: consider centralizing the escape logic to a small helper for reuse and readability (useful if other code paths emit strings in the future):

// outside selected lines, near imports
func escapeSQLString(s string) string {
    return strings.ReplaceAll(s, "'", "''")
}

Then:

case string:
    builder.Write("'", escapeSQLString(typedValue), "'")
registry.go (1)

28-30: DriverConfig as any: clarify expectations and typing across drivers

Using any makes this flexible but weakly typed. Please document the expected type per driver and consider defining typed config structs (e.g., Neo4jConfig, PGConfig) that callers pass via this field. This catches misconfiguration earlier: call sites construct a concrete config, and each driver can type-assert to the type it expects.
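
A sketch of that shape; Neo4jConfig and its field are invented for illustration and are not the driver's actual configuration type:

// Hypothetical typed config a caller would place in Config.DriverConfig.
type Neo4jConfig struct {
	MaxConnectionPoolSize int
}

// Inside the driver constructor, assert to the expected type and fail fast otherwise.
func parseDriverConfig(raw any) (Neo4jConfig, error) {
	if cfg, ok := raw.(Neo4jConfig); ok {
		return cfg, nil
	}

	return Neo4jConfig{}, fmt.Errorf("expected Neo4jConfig for DriverConfig, got %T", raw)
}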

database/schema.go (2)

37-45: Document Graph fields for maintainability

Add short comments for NodeConstraints/EdgeConstraints and NodeIndexes/EdgeIndexes clarifying their semantics (e.g., uniqueness vs. property existence, indexing purpose) and whether the driver is expected to enforce them strictly or on a best-effort basis.
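
For example, doc comments along these lines (the wording is a suggestion and the field types shown are assumed, not quoted from the source):

type Graph struct {
	// NodeConstraints and EdgeConstraints declare property constraints
	// (e.g., uniqueness or required properties) that the driver should
	// enforce where the backend supports it, and apply best-effort otherwise.
	NodeConstraints []Constraint
	EdgeConstraints []Constraint

	// NodeIndexes and EdgeIndexes declare secondary indexes the driver
	// should create to accelerate property lookups.
	NodeIndexes []Index
	EdgeIndexes []Index

	// ... remaining fields elided
}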


52-63: Consider validating default graph presence and duplicate names

NewSchema doesn’t guard that DefaultGraphName exists in GraphSchemas or that names are unique. You can preserve the current constructor and add a Validate() helper for callers, or return (Schema, error) from NewSchema.

Example Validate helper (outside this hunk):

func (s Schema) Validate() error {
	if s.DefaultGraphName == "" {
		return fmt.Errorf("default graph name is empty")
	}
	if _, ok := s.GraphSchemas[s.DefaultGraphName]; !ok {
		return fmt.Errorf("default graph %q not found", s.DefaultGraphName)
	}
	// Names are keys; if GraphSchemas is built elsewhere consider re-checking.
	return nil
}
database/v1compat/traversal/collection.go (6)

11-14: Prefer a value mutex over a pointer to avoid nil risks and extra allocation

Using sync.Mutex by value is idiomatic and avoids potential nil deref if constructed incorrectly.

 type NodeCollector struct {
-	Nodes v1compat.NodeSet
-	lock  *sync.Mutex
+	Nodes v1compat.NodeSet
+	lock  sync.Mutex
 }

16-21: Constructor can omit explicit mutex initialization when using value mutex

If you adopt a value mutex, you don’t need to allocate it.

 func NewNodeCollector() *NodeCollector {
 	return &NodeCollector{
-		Nodes: v1compat.NewNodeSet(),
-		lock:  &sync.Mutex{},
+		Nodes: v1compat.NewNodeSet(),
 	}
 }

27-31: Provide a variant that reuses an existing transaction to avoid nested reads

Current implementation always opens a new read transaction. When called inside an existing transaction, this adds overhead and may complicate semantics. Consider adding a WithTx variant.

Example method outside this hunk:

func (s *NodeCollector) PopulatePropertiesWithTx(_ context.Context, tx v1compat.Transaction, propertyNames ...string) error {
	return ops.FetchNodeProperties(tx, s.Nodes, propertyNames)
}

If you want, I can update call sites in traversal code to use WithTx where a tx is already in scope.


40-43: Mirror mutex change for PathCollector (value mutex instead of pointer)

Stay consistent with NodeCollector to reduce allocations and nil risks.

 type PathCollector struct {
-	Paths v1compat.PathSet
-	lock  *sync.Mutex
+	Paths v1compat.PathSet
+	lock  sync.Mutex
 }

45-49: Constructor can elide mutex init with value mutex

Same rationale as NodeCollector.

 func NewPathCollector() *PathCollector {
 	return &PathCollector{
-		lock: &sync.Mutex{},
 	}
 }

51-55: Add a WithTx variant to reuse existing transactions

Match NodeCollector suggestion to avoid nested transactions when a tx is already available.

Example method outside this hunk:

func (s *PathCollector) PopulateNodePropertiesWithTx(_ context.Context, tx v1compat.Transaction, propertyNames ...string) error {
	return ops.FetchNodeProperties(tx, s.Paths.AllNodes(), propertyNames)
}
database/pg/model/model.go (1)

41-57: Consider pre-allocating map capacity for better performance.

The function already pre-allocates map capacity on lines 44-45, which is good. As a minor readability tweak, the capacity expressions could be lifted into named variables, as in the diff below.

 func NewGraphPartitionFromSchema(name string, indexes []database.Index, constraints []database.Constraint) GraphPartition {
+	indexCapacity := len(indexes)
+	constraintCapacity := len(constraints)
+	
 	graphPartition := GraphPartition{
 		Name:        name,
-		Indexes:     make(map[string]database.Index, len(indexes)),
-		Constraints: make(map[string]database.Constraint, len(constraints)),
+		Indexes:     make(map[string]database.Index, indexCapacity),
+		Constraints: make(map[string]database.Constraint, constraintCapacity),
 	}
database/neo4j/schema.go (1)

151-190: Consider extracting drop statement execution to reduce duplication.

Both assertIndexes and assertConstraints have similar patterns for executing drop statements. Consider extracting this to a helper function.

+func (s *SchemaManager) executeDropStatement(ctx context.Context, transaction neo4j.ExplicitTransaction, statement string, name string) error {
+	if result, err := transaction.Run(ctx, strings.Replace(statement, "$name", name, 1), nil); err != nil {
+		return err
+	} else if _, err := result.Consume(ctx); err != nil {
+		return err
+	} else {
+		return result.Err()
+	}
+}

 func (s *SchemaManager) assertIndexes(ctx context.Context, indexesToRemove []string, indexesToAdd map[string]neo4jIndex) error {
 	if err := s.transaction(ctx, func(transaction neo4j.ExplicitTransaction) error {
 		for _, indexToRemove := range indexesToRemove {
 			slog.InfoContext(ctx, fmt.Sprintf("Removing index %s", indexToRemove))
-
-			if result, err := transaction.Run(ctx, strings.Replace(dropPropertyIndexStatement, "$name", indexToRemove, 1), nil); err != nil {
-				return err
-			} else if _, err := result.Consume(ctx); err != nil {
-				return err
-			} else if err := result.Err(); err != nil {
+			if err := s.executeDropStatement(ctx, transaction, dropPropertyIndexStatement, indexToRemove); err != nil {
 				return err
 			}
 		}
database/pg/manager.go (1)

330-363: Consider extracting kind mapping logic to reduce duplication.

The kind mapping logic is duplicated for nodes and edges (lines 348-352 and 354-358). Consider extracting this to a helper method.

+func (s *SchemaManager) ensureKindsDefined(ctx context.Context, tx pgx.Tx, kinds graph.Kinds) error {
+	if _, missingKinds := s.mapKinds(kinds); len(missingKinds) > 0 {
+		return s.defineKinds(ctx, tx, missingKinds)
+	}
+	return nil
+}

 func (s *SchemaManager) assertSchema(ctx context.Context, schema database.Schema) error {
 	// ... existing code ...
 	return s.transaction(ctx, func(transaction pgx.Tx) error {
 		// ... existing code ...
 		for _, graphSchema := range schema.GraphSchemas {
-			if _, missingKinds := s.mapKinds(graphSchema.Nodes); len(missingKinds) > 0 {
-				if err := s.defineKinds(ctx, transaction, missingKinds); err != nil {
-					return err
-				}
-			}
-
-			if _, missingKinds := s.mapKinds(graphSchema.Edges); len(missingKinds) > 0 {
-				if err := s.defineKinds(ctx, transaction, missingKinds); err != nil {
-					return err
-				}
-			}
+			if err := s.ensureKindsDefined(ctx, transaction, graphSchema.Nodes); err != nil {
+				return err
+			}
+			if err := s.ensureKindsDefined(ctx, transaction, graphSchema.Edges); err != nil {
+				return err
+			}
 		}
database/v1compat/wrapper.go (1)

29-64: Consider using reflection to reduce code duplication.

The function has many repetitive case statements. Consider using reflection for a more maintainable solution.

 func downcastPropertyFields(props *Properties) {
 	for key, value := range props.MapOrEmpty() {
-		switch typedValue := value.(type) {
-		case []string:
-			props.Map[key] = typedSliceToAnySlice(typedValue)
-		case []time.Time:
-			props.Map[key] = typedSliceToAnySlice(typedValue)
-		// ... many more cases ...
-		}
+		// Use reflection to check if it's a slice of a concrete type
+		v := reflect.ValueOf(value)
+		if v.Kind() == reflect.Slice && v.Type().Elem().Kind() != reflect.Interface {
+			// Convert typed slice to []any
+			anySlice := make([]any, v.Len())
+			for i := 0; i < v.Len(); i++ {
+				anySlice[i] = v.Index(i).Interface()
+			}
+			props.Map[key] = anySlice
+		}
 	}
 }
database/pg/batch.go (1)

27-41: Consider using a strings.Builder for better performance.

The Int2ArrayEncoder uses bytes.Buffer for string building, but strings.Builder would be more appropriate and performant for string construction.

Apply this diff to use strings.Builder:

-import (
-	"bytes"
-	"context"
-	"fmt"
-	"strconv"
-	"strings"
-
-	"github.com/jackc/pgtype"
-	"github.com/specterops/dawgs/cypher/models/pgsql"
-	"github.com/specterops/dawgs/database/pg/model"
-	"github.com/specterops/dawgs/database/pg/query"
-	"github.com/specterops/dawgs/graph"
-)
+import (
+	"context"
+	"fmt"
+	"strconv"
+	"strings"
+
+	"github.com/jackc/pgtype"
+	"github.com/specterops/dawgs/cypher/models/pgsql"
+	"github.com/specterops/dawgs/database/pg/model"
+	"github.com/specterops/dawgs/database/pg/query"
+	"github.com/specterops/dawgs/graph"
+)

 type Int2ArrayEncoder struct {
-	buffer *bytes.Buffer
+	builder *strings.Builder
 }

 func NewInt2ArrayEncoder() Int2ArrayEncoder {
 	return Int2ArrayEncoder{
-		buffer: &bytes.Buffer{},
+		builder: &strings.Builder{},
 	}
 }

 func (s *Int2ArrayEncoder) Encode(values []int16) string {
-	s.buffer.Reset()
-	s.buffer.WriteRune('{')
+	s.builder.Reset()
+	s.builder.WriteRune('{')

 	for idx, value := range values {
 		if idx > 0 {
-			s.buffer.WriteRune(',')
+			s.builder.WriteRune(',')
 		}

-		s.buffer.WriteString(strconv.Itoa(int(value)))
+		s.builder.WriteString(strconv.Itoa(int(value)))
 	}

-	s.buffer.WriteRune('}')
-	return s.buffer.String()
+	s.builder.WriteRune('}')
+	return s.builder.String()
 }
database/v1compat/database.go (1)

34-35: Consider documenting the upsert behavior for CreateRelationship.

The TODO comment indicates that the existing batch logic expects an upsert on conflicts with (start_id, end_id, kind), but states this is incorrect and should be refactored. This behavior discrepancy should be clearly documented to avoid confusion.

Consider adding more detailed documentation about the current behavior and the intended future behavior:

-	// TODO: Existing batch logic expects this to perform an upsert on conficts with (start_id, end_id, kind). This is incorrect and should be refactored
+	// TODO: Existing batch logic expects this to perform an upsert on conflicts with (start_id, end_id, kind).
+	// This is incorrect behavior - CreateRelationship should fail if the relationship already exists.
+	// Future refactoring should separate create (fail on conflict) from upsert (merge on conflict) operations.
 	CreateRelationship(relationship *graph.Relationship) error
database/v1compat/query/model.go (1)

335-343: Complex type switch with interface conversion.

The InIDs function uses a type switch on any(reference) followed by another type assertion. This could be simplified for better readability.

Consider simplifying the type handling:

 func InIDs[T *cypher.FunctionInvocation | *cypher.Variable](reference T, ids ...graph.ID) *cypher.Comparison {
-	switch any(reference).(type) {
+	switch ref := any(reference).(type) {
 	case *cypher.FunctionInvocation:
-		return cypher.NewComparison(reference, cypher.OperatorIn, Parameter(ids))
-
-	default:
-		return cypher.NewComparison(Identity(any(reference).(*cypher.Variable)), cypher.OperatorIn, Parameter(ids))
+		return cypher.NewComparison(ref, cypher.OperatorIn, Parameter(ids))
+	case *cypher.Variable:
+		return cypher.NewComparison(Identity(ref), cypher.OperatorIn, Parameter(ids))
+	default:
+		// This should never happen due to type constraints
+		panic(fmt.Sprintf("unexpected type in InIDs: %T", reference))
 	}
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3fe35e5 and 90a694e.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (92)
  • cypher/models/cypher/model.go (1 hunks)
  • cypher/models/pgsql/format/format.go (3 hunks)
  • cypher/models/pgsql/test/testcase.go (3 hunks)
  • cypher/models/pgsql/test/translation_test.go (1 hunks)
  • cypher/models/pgsql/test/validation_integration_test.go (3 hunks)
  • cypher/models/pgsql/visualization/visualizer_test.go (1 hunks)
  • database/driver.go (1 hunks)
  • database/neo4j/database.go (1 hunks)
  • database/neo4j/database_integration_test.go (1 hunks)
  • database/neo4j/driver.go (1 hunks)
  • database/neo4j/mapper.go (3 hunks)
  • database/neo4j/neo4j.go (1 hunks)
  • database/neo4j/schema.go (1 hunks)
  • database/pg/batch.go (1 hunks)
  • database/pg/config.go (1 hunks)
  • database/pg/database.go (1 hunks)
  • database/pg/driver.go (1 hunks)
  • database/pg/manager.go (10 hunks)
  • database/pg/mapper.go (3 hunks)
  • database/pg/model/format.go (3 hunks)
  • database/pg/model/model.go (1 hunks)
  • database/pg/pg.go (2 hunks)
  • database/pg/query/format.go (2 hunks)
  • database/pg/query/query.go (1 hunks)
  • database/pg/query/sql.go (1 hunks)
  • database/pg/query/sql/select_graph_by_name.sql (1 hunks)
  • database/pg/query/sql/select_graphs.sql (1 hunks)
  • database/pg/statements.go (1 hunks)
  • database/pg/types.go (2 hunks)
  • database/pg/util.go (1 hunks)
  • database/schema.go (1 hunks)
  • database/tooling.go (1 hunks)
  • database/v1compat/database.go (1 hunks)
  • database/v1compat/errors.go (1 hunks)
  • database/v1compat/graph.go (1 hunks)
  • database/v1compat/node.go (1 hunks)
  • database/v1compat/ops/ops.go (1 hunks)
  • database/v1compat/ops/parallel.go (1 hunks)
  • database/v1compat/ops/traversal.go (1 hunks)
  • database/v1compat/query.go (7 hunks)
  • database/v1compat/query/builder.go (3 hunks)
  • database/v1compat/query/model.go (1 hunks)
  • database/v1compat/query/sort.go (1 hunks)
  • database/v1compat/relationship.go (1 hunks)
  • database/v1compat/result.go (4 hunks)
  • database/v1compat/switch.go (1 hunks)
  • database/v1compat/traversal/collection.go (2 hunks)
  • database/v1compat/traversal/query.go (1 hunks)
  • database/v1compat/traversal/traversal.go (1 hunks)
  • database/v1compat/traversal/traversal_test.go (1 hunks)
  • database/v1compat/types.go (1 hunks)
  • database/v1compat/wrapper.go (1 hunks)
  • drivers/neo4j/batch.go (0 hunks)
  • drivers/neo4j/const.go (0 hunks)
  • drivers/neo4j/cypher.go (0 hunks)
  • drivers/neo4j/cypher_internal_test.go (0 hunks)
  • drivers/neo4j/driver.go (0 hunks)
  • drivers/neo4j/index.go (0 hunks)
  • drivers/neo4j/neo4j.go (0 hunks)
  • drivers/neo4j/node.go (0 hunks)
  • drivers/neo4j/relationship.go (0 hunks)
  • drivers/neo4j/result.go (0 hunks)
  • drivers/neo4j/result_internal_test.go (0 hunks)
  • drivers/neo4j/transaction.go (0 hunks)
  • drivers/neo4j/wrapper.go (0 hunks)
  • drivers/pg/batch.go (0 hunks)
  • drivers/pg/driver.go (0 hunks)
  • drivers/pg/model/model.go (0 hunks)
  • drivers/pg/node.go (0 hunks)
  • drivers/pg/query.go (0 hunks)
  • drivers/pg/query/query.go (0 hunks)
  • drivers/pg/relationship.go (0 hunks)
  • drivers/pg/result.go (0 hunks)
  • drivers/pg/tooling.go (0 hunks)
  • drivers/pg/transaction.go (0 hunks)
  • drivers/pg/util.go (0 hunks)
  • go.mod (1 hunks)
  • graph/error.go (0 hunks)
  • graph/graph.go (0 hunks)
  • graph/properties.go (1 hunks)
  • graph/schema.go (0 hunks)
  • graphcache/cache.go (0 hunks)
  • query/model.go (0 hunks)
  • query/neo4j/neo4j.go (0 hunks)
  • query/neo4j/neo4j_test.go (0 hunks)
  • query/neo4j/rewrite.go (0 hunks)
  • query/query.go (1 hunks)
  • query/query_test.go (1 hunks)
  • query/util.go (1 hunks)
  • registry.go (2 hunks)
  • registry_integration_test.go (1 hunks)
  • traversal/traversal.go (10 hunks)
💤 Files with no reviewable changes (32)
  • graph/error.go
  • drivers/neo4j/driver.go
  • drivers/neo4j/wrapper.go
  • drivers/neo4j/const.go
  • graph/graph.go
  • query/neo4j/rewrite.go
  • drivers/pg/node.go
  • drivers/pg/query/query.go
  • drivers/pg/model/model.go
  • drivers/pg/result.go
  • drivers/neo4j/cypher.go
  • graph/schema.go
  • drivers/pg/driver.go
  • drivers/pg/util.go
  • drivers/neo4j/transaction.go
  • drivers/pg/relationship.go
  • drivers/neo4j/result_internal_test.go
  • query/neo4j/neo4j_test.go
  • drivers/neo4j/neo4j.go
  • drivers/neo4j/cypher_internal_test.go
  • drivers/neo4j/result.go
  • query/neo4j/neo4j.go
  • drivers/neo4j/node.go
  • drivers/pg/tooling.go
  • drivers/pg/batch.go
  • drivers/pg/query.go
  • drivers/neo4j/relationship.go
  • drivers/neo4j/batch.go
  • graphcache/cache.go
  • drivers/neo4j/index.go
  • drivers/pg/transaction.go
  • query/model.go
✅ Files skipped from review due to trivial changes (1)
  • cypher/models/pgsql/test/translation_test.go
🚧 Files skipped from review as they are similar to previous changes (34)
  • database/neo4j/database_integration_test.go
  • database/pg/util.go
  • database/pg/statements.go
  • cypher/models/pgsql/visualization/visualizer_test.go
  • database/pg/query/sql.go
  • database/pg/config.go
  • database/pg/query/sql/select_graph_by_name.sql
  • database/tooling.go
  • database/v1compat/errors.go
  • cypher/models/cypher/model.go
  • query/util.go
  • database/v1compat/traversal/traversal_test.go
  • database/pg/query/query.go
  • database/pg/query/sql/select_graphs.sql
  • query/query_test.go
  • database/v1compat/ops/traversal.go
  • database/pg/pg.go
  • database/v1compat/node.go
  • database/v1compat/ops/ops.go
  • database/v1compat/ops/parallel.go
  • database/pg/model/format.go
  • registry_integration_test.go
  • database/v1compat/traversal/query.go
  • database/v1compat/query/builder.go
  • database/pg/query/format.go
  • database/neo4j/database.go
  • database/v1compat/types.go
  • database/neo4j/mapper.go
  • cypher/models/pgsql/test/validation_integration_test.go
  • database/driver.go
  • database/v1compat/graph.go
  • database/neo4j/driver.go
  • database/v1compat/relationship.go
  • database/pg/driver.go
🧰 Additional context used
🧠 Learnings (5)
📚 Learning: 2025-06-27T15:32:33.244Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/neo4j/v2/schema.go:136-140
Timestamp: 2025-06-27T15:32:33.244Z
Learning: The Neo4j Go driver v5 may not have the same transaction rollback error handling patterns as suggested. Check the specific v5 documentation before recommending `errors.Is(err, neo4j.ErrTransactionTerminated)` or similar patterns.

Applied to files:

  • database/neo4j/neo4j.go
  • database/v1compat/wrapper.go
📚 Learning: 2025-06-27T15:56:17.950Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/neo4j/v2/schema.go:231-231
Timestamp: 2025-06-27T15:56:17.950Z
Learning: In the Neo4j driver v2 schema.go file, cleanup operations in defer statements like result.Consume(ctx) are intentionally treated as best-effort and don't require explicit error checking. This is a conscious design pattern used consistently throughout the codebase for cleanup operations.

Applied to files:

  • database/neo4j/schema.go
  • database/pg/batch.go
📚 Learning: 2025-06-27T15:24:24.735Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/pg/v2/driver.go:82-85
Timestamp: 2025-06-27T15:24:24.735Z
Learning: The pgx.Rows.Close() method in the jackc/pgx library (including v5) does not return an error - it has the signature Close() with no return value. This is different from the standard database/sql library where Rows.Close() does return an error. Do not suggest changing patterns like "s.rows.Close(); return nil" to "return s.rows.Close()" for pgx.Rows.

Applied to files:

  • database/pg/batch.go
📚 Learning: 2025-06-27T15:24:24.735Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/pg/v2/driver.go:82-85
Timestamp: 2025-06-27T15:24:24.735Z
Learning: The pgx.Rows.Close() method in the jackc/pgx/v5 library does not return an error - it has the signature Close() with no return value. Do not suggest changing patterns like "s.rows.Close(); return nil" to "return s.rows.Close()" for pgx.Rows.

Applied to files:

  • database/pg/batch.go
📚 Learning: 2025-08-14T01:22:17.813Z
Learnt from: zinic
PR: SpecterOps/DAWGS#10
File: database/v1compat/traversal/traversal.go:227-227
Timestamp: 2025-08-14T01:22:17.813Z
Learning: In database/v1compat/traversal/traversal.go, the fetchDirection variable should remain consistent during expansion. The fetchDirection represents the query perspective for retrieving relationships and must be the same for both current and next expansion steps to maintain proper traversal semantics.

Applied to files:

  • traversal/traversal.go
  • database/v1compat/traversal/collection.go
  • database/v1compat/traversal/traversal.go
🧬 Code Graph Analysis (19)
database/pg/model/model.go (1)
database/pg/model/format.go (2)
  • IndexName (27-36)
  • ConstraintName (38-47)
cypher/models/pgsql/format/format.go (1)
cypher/models/pgsql/pgtypes.go (1)
  • TextArray (89-89)
database/pg/database.go (5)
database/driver.go (3)
  • Option (11-11)
  • OptionReadOnly (14-14)
  • QueryLogic (42-42)
database/v1compat/query/model.go (1)
  • Size (503-505)
database/v1compat/wrapper.go (1)
  • BackwardCompatibleInstance (66-71)
database/v1compat/database.go (1)
  • Transaction (57-91)
database/v1compat/graph.go (2)
  • String (9-9)
  • Kinds (11-11)
database/neo4j/neo4j.go (1)
registry.go (1)
  • Register (20-22)
registry.go (2)
database/v1compat/database.go (1)
  • Database (111-153)
database/v1compat/wrapper.go (1)
  • V1Wrapper (91-99)
database/v1compat/database.go (2)
database/v1compat/query.go (2)
  • NodeQuery (118-160)
  • RelationshipQuery (164-217)
database/v1compat/graph.go (1)
  • Kinds (11-11)
database/v1compat/wrapper.go (9)
cypher/frontend/context.go (2)
  • Context (17-21)
  • NewContext (23-33)
database/v1compat/ops/ops.go (2)
  • DeleteNodes (72-76)
  • DeleteRelationships (84-86)
database/v1compat/database.go (6)
  • Database (111-153)
  • TransactionDelegate (96-96)
  • TransactionOption (108-108)
  • BatchDelegate (99-99)
  • Batch (12-53)
  • Transaction (57-91)
database/driver.go (1)
  • OptionReadOnly (14-14)
database/v1compat/graph.go (1)
  • Kinds (11-11)
database/v1compat/query.go (2)
  • NodeQuery (118-160)
  • RelationshipQuery (164-217)
database/v1compat/query/identifiers.go (2)
  • StartID (48-50)
  • EndID (56-58)
database/v1compat/query/model.go (3)
  • Size (503-505)
  • Equals (267-269)
  • SetProperties (80-92)
cypher/models/cypher/model.go (1)
  • Remove (546-548)
database/neo4j/schema.go (2)
database/v1compat/graph.go (2)
  • String (9-9)
  • StringKind (35-35)
database/schema.go (4)
  • IndexType (5-5)
  • IndexTypeBTree (9-9)
  • IndexTypeTextSearch (10-10)
  • IndexTypeUnsupported (8-8)
cypher/models/pgsql/test/testcase.go (1)
cypher/frontend/context.go (1)
  • Context (17-21)
database/pg/batch.go (6)
database/v1compat/graph.go (2)
  • String (9-9)
  • Kinds (11-11)
cypher/models/pgsql/pgtypes.go (1)
  • JSONB (77-77)
cypher/frontend/context.go (1)
  • Context (17-21)
database/pg/query/format.go (6)
  • NodeUpdateBatch (137-140)
  • RelationshipUpdateBatch (223-227)
  • FormatNodeUpsert (106-115)
  • ValidateNodeUpdateByBatch (195-205)
  • ValidateRelationshipUpdateByBatch (290-300)
  • FormatRelationshipPartitionUpsert (117-124)
database/v1compat/query/identifiers.go (2)
  • StartID (48-50)
  • EndID (56-58)
database/v1compat/ops/ops.go (2)
  • DeleteNodes (72-76)
  • DeleteRelationships (84-86)
database/v1compat/result.go (3)
database/v1compat/graph.go (1)
  • Kinds (11-11)
database/v1compat/query/identifiers.go (2)
  • StartID (48-50)
  • EndID (56-58)
cypher/frontend/context.go (1)
  • Context (17-21)
database/pg/mapper.go (2)
cypher/frontend/context.go (1)
  • Context (17-21)
graph/mapper.go (1)
  • ValueMapper (358-360)
database/v1compat/query.go (3)
graph/mapper.go (1)
  • ValueMapper (358-360)
database/v1compat/graph.go (1)
  • Criteria (26-26)
database/v1compat/result.go (2)
  • DirectionalResult (28-32)
  • RelationshipTripleResult (17-21)
database/schema.go (1)
database/v1compat/graph.go (2)
  • String (9-9)
  • Kinds (11-11)
traversal/traversal.go (3)
cypher/frontend/context.go (1)
  • Context (17-21)
database/v1compat/query/model.go (1)
  • Equals (267-269)
cypher/models/cypher/model.go (1)
  • Return (1411-1413)
database/v1compat/traversal/collection.go (2)
database/v1compat/database.go (2)
  • Database (111-153)
  • Transaction (57-91)
database/v1compat/ops/ops.go (1)
  • FetchNodeProperties (30-70)
database/v1compat/traversal/traversal.go (10)
cypher/frontend/context.go (1)
  • Context (17-21)
database/v1compat/graph.go (2)
  • Criteria (26-26)
  • Kinds (11-11)
database/v1compat/query/model.go (4)
  • Equals (267-269)
  • Returning (477-501)
  • KindsOf (194-217)
  • Ascending (374-376)
database/v1compat/query/identifiers.go (4)
  • StartID (48-50)
  • EndID (56-58)
  • RelationshipID (40-42)
  • Identity (13-18)
database/v1compat/result.go (1)
  • DirectionalResult (28-32)
database/v1compat/ops/traversal.go (1)
  • ErrGraphQueryMemoryLimit (42-42)
database/v1compat/query.go (1)
  • RelationshipQuery (164-217)
cardinality/roaring64.go (1)
  • NewBitmap64 (23-27)
database/v1compat/traversal/query.go (1)
  • ShallowFetchNodesByID (16-45)
database/v1compat/types.go (1)
  • DuplexToGraphIDs (159-169)
database/pg/manager.go (2)
database/v1compat/graph.go (2)
  • String (9-9)
  • Kinds (11-11)
database/pg/query/query.go (1)
  • On (20-24)
database/v1compat/query/model.go (6)
cypher/models/cypher/model.go (24)
  • UpdatingClause (459-463)
  • NewUpdatingClause (465-469)
  • Set (594-596)
  • SetItem (614-618)
  • Remove (546-548)
  • RemoveItem (566-569)
  • KindMatcher (744-747)
  • PropertyLookup (1201-1204)
  • NewKindMatcher (749-754)
  • FunctionInvocation (1028-1035)
  • NewDisjunction (988-994)
  • WithErrors (112-118)
  • Skip (1496-1498)
  • Comparison (1073-1076)
  • NewComparison (1078-1086)
  • NewSimpleFunctionInvocation (1037-1042)
  • PatternPredicate (1516-1518)
  • NewPatternPredicate (1520-1522)
  • NewVariableWithSymbol (1153-1157)
  • NewWhere (1333-1335)
  • Return (1411-1413)
  • ProjectionItem (1169-1172)
  • Negation (926-928)
  • Match (411-415)
database/v1compat/graph.go (1)
  • Kinds (11-11)
cypher/models/pgsql/operators.go (9)
  • OperatorAssignment (94-94)
  • OperatorEquals (70-70)
  • OperatorGreaterThan (72-72)
  • OperatorGreaterThanOrEqualTo (73-73)
  • OperatorLessThan (74-74)
  • OperatorLessThanOrEqualTo (75-75)
  • OperatorIsNot (91-91)
  • OperatorIn (89-89)
  • OperatorIs (90-90)
database/v1compat/query/identifiers.go (5)
  • NodeSymbol (22-22)
  • EdgeSymbol (23-23)
  • EdgeStartSymbol (24-24)
  • EdgeEndSymbol (25-25)
  • Identity (13-18)
cypher/models/pgsql/pgtypes.go (1)
  • Null (64-64)
cypher/models/pgsql/translate/model.go (1)
  • Pattern (213-215)
🔇 Additional comments (45)
go.mod (1)

32-32: Addition of golang.org/x/net as an indirect dependency

The addition of golang.org/x/net v0.41.0 as an indirect dependency aligns with the new database abstraction layer introduced in this PR. This dependency likely provides network-related utilities needed by the database drivers.

database/v1compat/switch.go (1)

1-1: Package declaration aligned with v1 compatibility layer

The package has been correctly renamed from graph to v1compat, consistent with the v1 compatibility layer structure across the codebase.

graph/properties.go (1)

233-262: Improved Merge implementation with pre-allocation

The updated Merge method now properly initializes nil maps with appropriate capacity hints based on the source maps, which improves memory efficiency and reduces allocations during map growth. The logic correctly handles the interaction between Modified and Deleted states.
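
For reference, a minimal sketch of the pre-allocation pattern under review, assuming illustrative field names rather than the actual graph.Properties layout; it also folds in the nil-argument guard suggested later in this review:

package propsketch

// Properties is a stand-in with only the maps relevant to Merge.
type Properties struct {
	Map      map[string]any
	Modified map[string]any
	Deleted  map[string]struct{}
}

// Merge lazily allocates nil destination maps with capacity hints sized to the
// source, so the copy does not trigger repeated map growth.
func (s *Properties) Merge(other *Properties) {
	if other == nil {
		return
	}

	if s.Map == nil {
		s.Map = make(map[string]any, len(other.Map))
	}

	for key, value := range other.Map {
		s.Map[key] = value
	}

	if len(other.Modified) > 0 {
		if s.Modified == nil {
			s.Modified = make(map[string]any, len(other.Modified))
		}

		for key, value := range other.Modified {
			s.Modified[key] = value

			// A key modified by the other side is no longer considered deleted.
			delete(s.Deleted, key)
		}
	}
}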

query/query.go (3)

61-86: Special handling for relationship kinds in Not function

The Not function contains special logic to handle relationship kind matching differently by using the EdgeTypeFunction instead of standard negation. This is a clever optimization for query performance.


685-688: PreparedQuery with empty Parameters map

The PreparedQuery struct is well-defined with both Query and Parameters fields. The Parameters map is initialized as empty in the Build method, which is appropriate as parameters will be populated at query execution time.


474-478: Detach delete logic for node identifiers

The implementation correctly identifies node-type variables (node, start, end) and sets the detachDelete flag appropriately. This ensures that relationships are properly removed when deleting nodes.

database/v1compat/query/sort.go (1)

12-15: SortCriteria change verified — no callers remain using graph.Criteria

I searched the repo and inspected the sort-related code; construction sites and helpers now use cypher.SyntaxNode and there are no remaining callers that pass graph.Criteria for sort items.

Checked locations:

  • database/v1compat/query/sort.go — SortItem.SortCriteria is cypher.SyntaxNode (FormatCypherOrder).
  • database/v1compat/query/model.go — Order / OrderBy produce *cypher.SortItem from cypher.SyntaxNode.
  • database/v1compat/query/builder.go — NewBuilderWithCriteria(criteria ...cypher.SyntaxNode) / Apply use cypher.SyntaxNode.
  • query/query.go — buildProjectionOrder consumes *cypher.SortItem; s.sortItems handling is compatible.

Remaining graph.Criteria occurrences are unrelated (ops/traversal, parallel, etc.) and do not construct SortItem instances.

cypher/models/pgsql/format/format.go (1)

7-8: Use of RFC3339Nano for timestamptz literals is correct

Good fix; this preserves sub-second precision and casts correctly to timestamptz.

Also applies to: 74-76
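
A small sketch of the literal formatting this fix implies; the helper name is hypothetical, only the time.RFC3339Nano format and the timestamptz cast come from the change under review:

package formatsketch

import (
	"fmt"
	"time"
)

// formatTimestampLiteral renders a time value with RFC3339Nano so sub-second
// precision survives, then casts the quoted literal to timestamptz.
func formatTimestampLiteral(value time.Time) string {
	return fmt.Sprintf("'%s'::timestamptz", value.Format(time.RFC3339Nano))
}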

cypher/models/pgsql/test/testcase.go (1)

22-23: Good move to the new database abstraction

Importing and using database.Instance aligns tests with the v2 driver model.

database/pg/mapper.go (2)

20-27: Correctly fail-fast on invalid element types in []any → []int16 mapping

Setting validType=false on a failed assertion prevents partially-populated kind IDs from being treated as valid. This avoids silent, inconsistent mappings.
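
A sketch of the fail-fast behavior, with a hypothetical helper name:

package mappersketch

// mapKindIDs converts an untyped slice into int16 kind IDs. A single element
// of the wrong type invalidates the whole result instead of leaving partially
// populated zero values behind.
func mapKindIDs(values []any) ([]int16, bool) {
	kindIDs := make([]int16, len(values))

	for idx, untyped := range values {
		typed, typeOK := untyped.(int16)
		if !typeOK {
			return nil, false
		}

		kindIDs[idx] = typed
	}

	return kindIDs, true
}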


106-108: No internal references to exported NewValueMapper remain — please verify external consumers

I searched the repository: there are no usages of an exported database/pg.NewValueMapper. The repo uses the unexported newValueMapper and the graph package's NewValueMapper.

  • Occurrences of NewValueMapper (graph package):

    • graph/mapper.go:362 — func NewValueMapper(...)
    • database/neo4j/mapper.go:114 — return graph.NewValueMapper(mapValue)
    • database/pg/mapper.go:106–108 — func newValueMapper(...) { return graph.NewValueMapper(...) } (this file was changed to an unexported function)
  • Call sites of newValueMapper (internal):

    • database/pg/driver.go:94, 136
    • database/pg/mapper.go:106 (definition)
    • database/neo4j/mapper.go:113 (definition)
    • database/neo4j/driver.go:16
    • database/neo4j/schema.go:243

Conclusion: Internal repo references have been updated and no internal breakage is detected. However, I cannot verify external repositories/packages that may import database/pg.NewValueMapper — please confirm consumers outside this repo have been updated before merging.

registry.go (2)

16-16: DriverConstructor signature update looks good

Aligns the registry with the new database.Instance return type.


41-47: OpenV1 correctly propagates constructor errors and wraps v2 as v1

Good fix to avoid masking constructor failures. v1 wrapper usage is appropriate.

database/neo4j/neo4j.go (2)

14-24: Reasonable Neo4j defaults and exported identifiers

Defaults and DriverName are sensible and align with the registry integration plan.


44-48: Driver registration via init is wired correctly

Registration uses the exported DriverName and constructor shape expected by the registry.

database/schema.go (1)

13-27: IndexType String() mapping is clear and robust

Covers supported values and provides “invalid” for out-of-range cases.

database/pg/database.go (2)

18-33: beginTx honors OptionReadOnly; default RW is sensible

Clean and minimal. This mirrors the OptionReadOnly semantics.
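
A sketch of how a read-only option can map onto pgx transaction options; the Option type and function shape are assumptions, while the pgx/v5 calls are real:

package pgtxoptsketch

import (
	"context"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

type Option int

const OptionReadOnly Option = iota

// beginTx defaults to a read-write transaction and downgrades the access mode
// when OptionReadOnly is supplied.
func beginTx(ctx context.Context, conn *pgxpool.Conn, options []Option) (pgx.Tx, error) {
	txOptions := pgx.TxOptions{AccessMode: pgx.ReadWrite}

	for _, option := range options {
		if option == OptionReadOnly {
			txOptions.AccessMode = pgx.ReadOnly
		}
	}

	return conn.BeginTx(ctx, txOptions)
}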


45-54: NamedArgs usage OK — no change required

Quick repo scan shows Raw is only called via the v1compat wrapper and the PG driver uses named parameters for translated SQL, so pgx.NamedArgs(parameters) is the correct call here.

Relevant locations inspected:

  • database/pg/database.go — Raw(ctx, query, parameters) uses pgx.NamedArgs(parameters)
  • database/v1compat/wrapper.go — Run(ctx, query, parameters) delegates to v2DB.Raw(...)
  • database/pg/driver.go — translated queries use @named placeholders and s.queryArgs(...) appends pgx.NamedArgs(parameters) before Query/Exec
  • database/pg/statements.go & database/pg/batch.go — some SQL constants contain positional $1… placeholders, but the code also builds positional []any argument slices (e.g., NodeUpsertParameters.Format) and uses those where appropriate (separate code paths from Raw)

Conclusion: Raw correctly expects named placeholders (@name) and the codebase uses named args on the translated/Raw path. No changes required.
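
For reference, a sketch of the named-placeholder pairing described above; the SQL text and table name are illustrative, not the repo's actual statements:

package rawsketch

import (
	"context"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

// rawNamedQuery shows the Raw-style path: the statement uses @name
// placeholders and pgx.NamedArgs rewrites them before execution.
func rawNamedQuery(ctx context.Context, pool *pgxpool.Pool, id int64) (pgx.Rows, error) {
	const statement = `select id, kind_ids, properties from node where id = @id`

	return pool.Query(ctx, statement, pgx.NamedArgs{"id": id})
}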

database/v1compat/traversal/collection.go (4)

7-9: Imports correctly target v1compat layers

Good alignment with the v1 compatibility surface.


23-25: Collect delegates to Add; interface change to v1compat.PathSegment is consistent

Looks good.


33-38: Thread-safe Add with minimal critical section

This is straightforward and correct.
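
A sketch of the collector pattern under discussion, using stand-in types; it also shows the append-over-nil-slice behavior noted in the next comment:

package collectsketch

import "sync"

type PathSegment struct{}

// SegmentCollector keeps the critical section limited to the slice append.
type SegmentCollector struct {
	lock     sync.Mutex
	segments []*PathSegment
}

// Add is safe for concurrent use; appending to a nil slice allocates it.
func (s *SegmentCollector) Add(segment *PathSegment) {
	s.lock.Lock()
	defer s.lock.Unlock()

	s.segments = append(s.segments, segment)
}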


57-62: Append semantics over a nil slice are fine; lock usage is correct

No issues here.

database/pg/model/model.go (4)

7-16: LGTM! Well-structured change tracking design.

The IndexChangeSet struct provides a clean separation between removal operations (tracked as string slices) and additions (tracked as maps). This design facilitates efficient change tracking and application during schema migrations.


18-25: Good initialization practice.

The constructor properly initializes all maps to prevent nil map panics. This defensive programming approach ensures the struct is always in a valid state.
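
A sketch of the constructor pattern being praised, with illustrative field and value types rather than the actual change-set definitions:

package modelsketch

// IndexChangeSet tracks removals as name slices and additions as maps keyed by
// name for O(1) lookups during schema migration.
type IndexChangeSet struct {
	IndexRemovals       []string
	ConstraintRemovals  []string
	IndexAdditions      map[string]string
	ConstraintAdditions map[string]string
}

// NewIndexChangeSet pre-creates every map so callers can assign into the
// change set immediately without risking a nil-map panic.
func NewIndexChangeSet() IndexChangeSet {
	return IndexChangeSet{
		IndexAdditions:      map[string]string{},
		ConstraintAdditions: map[string]string{},
	}
}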


27-31: LGTM! Consistent map-based design for efficient lookups.

Using maps for both Indexes and Constraints enables O(1) lookups by name, which is appropriate for schema management operations.


59-68: LGTM! Clear graph model hierarchy.

The GraphPartitions and Graph structs establish a clear hierarchical model for representing database graphs with their associated schema elements. The separation between node and edge partitions is well-designed.

database/v1compat/result.go (4)

1-10: LGTM! Clean package migration to v1compat.

The package rename from graph to v1compat and the addition of necessary imports properly establishes this as a compatibility layer between v1 and v2 interfaces.


12-32: LGTM! Proper type alignment with graph package.

The result types correctly reference the graph package types, maintaining clear separation between the compatibility layer and the core graph types.


83-97: LGTM! Proper context-aware iteration.

The iteration logic correctly uses HasNext(ctx) for context-aware checking and properly handles errors from the marshaller. Passing the context to channels.Submit ensures sends do not block indefinitely once the context is cancelled.


111-114: Good context propagation in cleanup.

The Close method correctly propagates context to the underlying result's Close method, ensuring proper cleanup with context awareness.

database/neo4j/schema.go (4)

16-24: LGTM! Well-defined constants for Neo4j providers and statements.

The constants provide clear separation between BTree and Lucene index providers, along with properly parameterized Cypher statements for schema operations.


88-97: LGTM! Robust provider type parsing.

The function properly maps provider strings to database index types with a sensible default for unknown providers.
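
A sketch of the provider mapping, reusing the IndexType constants referenced earlier in this review; the provider strings are examples and the default branch carries the fallback behavior:

package providersketch

type IndexType int

const (
	IndexTypeUnsupported IndexType = iota
	IndexTypeBTree
	IndexTypeTextSearch
)

// parseProviderType maps a Neo4j index provider string onto an IndexType,
// defaulting to IndexTypeUnsupported for anything it does not recognize.
func parseProviderType(provider string) IndexType {
	switch provider {
	case "native-btree-1.0", "range-1.0":
		return IndexTypeBTree
	case "fulltext-1.0":
		return IndexTypeTextSearch
	default:
		return IndexTypeUnsupported
	}
}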


120-149: LGTM! Proper transaction management with best-effort cleanup.

The transaction helper correctly manages session lifecycle and transaction rollback. The deferred rollback is appropriately treated as best-effort cleanup, consistent with the learned pattern for Neo4j cleanup operations.
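
A sketch of the session/transaction helper shape with best-effort rollback, using the neo4j-go-driver v5 API; the helper signature is an assumption:

package neo4jtxsketch

import (
	"context"

	"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)

// withTransaction opens a session, begins an explicit transaction, runs the
// supplied logic, and commits on success. The deferred Rollback is best-effort
// cleanup; its error is intentionally discarded, since rolling back an
// already-committed transaction is a harmless no-op failure.
func withTransaction(ctx context.Context, driver neo4j.DriverWithContext, logic func(tx neo4j.ExplicitTransaction) error) error {
	session := driver.NewSession(ctx, neo4j.SessionConfig{})
	defer session.Close(ctx)

	transaction, err := session.BeginTransaction(ctx)
	if err != nil {
		return err
	}
	defer transaction.Rollback(ctx)

	if err := logic(transaction); err != nil {
		return err
	}

	return transaction.Commit(ctx)
}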


285-334: LGTM! Comprehensive schema reconciliation logic.

The AssertSchema method properly computes the differences between present and required schemas, handling both additions and removals. The type change detection (lines 313-316, 322-325) is particularly important for ensuring schema consistency.

database/pg/manager.go (4)

58-82: LGTM! Improved transaction management with proper error handling.

The new transaction helper method provides cleaner transaction management with proper rollback handling and context propagation. The deferred rollback with error checking for pgx.ErrTxClosed prevents spurious error logs.
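
A sketch of the rollback filtering described here; the helper shape and names are assumptions, the pgx.ErrTxClosed check is the real mechanism:

package pgtxsketch

import (
	"context"
	"errors"
	"log/slog"

	"github.com/jackc/pgx/v5"
)

// inTransaction always defers a rollback but filters pgx.ErrTxClosed so a
// successful commit does not produce a spurious debug log line.
func inTransaction(ctx context.Context, transaction pgx.Tx, logic func(pgx.Tx) error) error {
	defer func() {
		if err := transaction.Rollback(ctx); err != nil && !errors.Is(err, pgx.ErrTxClosed) {
			slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
		}
	}()

	if err := logic(transaction); err != nil {
		return err
	}

	return transaction.Commit(ctx)
}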


99-110: Good defensive programming with map copy.

The method correctly returns a copy of the internal map to prevent external mutations, which is essential for maintaining thread safety.


288-293: LGTM! Thread-safe DefaultGraph accessor.

The method properly protects shared state access with read locks and returns both the graph and existence flag, following Go's idiomatic pattern.
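
A combined sketch of the two accessor patterns above (copy-on-read and the guarded DefaultGraph lookup); type and field names are illustrative:

package managersketch

import "sync"

type Graph struct {
	Name string
}

type SchemaManager struct {
	lock         sync.RWMutex
	graphs       map[string]Graph
	defaultGraph *Graph
}

// Graphs returns a copy of the internal map so callers cannot mutate the
// manager's state after the read lock is released.
func (s *SchemaManager) Graphs() map[string]Graph {
	s.lock.RLock()
	defer s.lock.RUnlock()

	copied := make(map[string]Graph, len(s.graphs))
	for name, graph := range s.graphs {
		copied[name] = graph
	}

	return copied
}

// DefaultGraph follows the comma-ok idiom: the boolean reports whether a
// default graph has been set.
func (s *SchemaManager) DefaultGraph() (Graph, bool) {
	s.lock.RLock()
	defer s.lock.RUnlock()

	if s.defaultGraph == nil {
		return Graph{}, false
	}

	return *s.defaultGraph, true
}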


112-122: No action needed — go.mod requires Go 1.23.0, so clear() is supported

go.mod lists "go 1.23.0" (>= 1.21), so the use of clear() is safe and no change is required.
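
For context, a tiny example of the clear builtin in question (available since Go 1.21):

package main

import "fmt"

func main() {
	kindsByName := map[string]int16{"User": 1, "Group": 2}

	// clear removes every entry but keeps the map allocated, so it can be
	// refilled without another make call.
	clear(kindsByName)

	fmt.Println(len(kindsByName)) // prints 0
}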

database/v1compat/wrapper.go (1)

15-24: LGTM! Clean generic implementation for slice conversion.

The generic function properly handles type conversion to []any which is necessary for v1 compatibility.
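
A sketch of the generic conversion being reviewed; the helper name is illustrative:

package main

import "fmt"

// toAnySlice copies a typed slice into a []any so v1-style interfaces that
// expect untyped values can consume it.
func toAnySlice[T any](values []T) []any {
	converted := make([]any, len(values))

	for idx, value := range values {
		converted[idx] = value
	}

	return converted
}

func main() {
	fmt.Println(toAnySlice([]int64{1, 2, 3}))
}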

database/v1compat/query.go (2)

72-78: Verify context handling in the resultWrapper implementation.

The wrapResult function correctly sets the context field on the resultWrapper. This addresses the potential nil pointer issue that was raised in previous reviews.


48-70: Good implementation of the result wrapper pattern.

The resultWrapper correctly delegates all methods to the underlying database.Result while providing the context for operations that need it. The implementation properly handles the mapping and closing operations.

database/v1compat/database.go (1)

1-154: Comprehensive v1 compatibility API surface.

The v1 compatibility layer provides a well-structured interface hierarchy with Database, Transaction, and Batch interfaces. The design properly separates concerns and provides appropriate abstractions for different operation types.

database/v1compat/traversal/traversal.go (2)

272-361: Well-structured BFS traversal implementation.

The BreadthFirst method provides a robust multi-worker traversal implementation with proper error handling, context cancellation, and memory limit checking. The use of channels for work distribution and atomic counters for descent tracking is appropriate.


536-594: Efficient lightweight driver implementation.

The LightweightDriver function provides an optimized traversal driver that minimizes network traffic by fetching only IDs and Kind information. The integration with the graph cache and proper handling of terminal segments is well done.

database/v1compat/query/model.go (1)

535-591: Well-structured query building logic.

The SinglePartQuery function provides comprehensive handling of various Cypher clauses with proper error handling for invalid types. The switch statement correctly routes each expression type to the appropriate part of the query structure.

Comment on lines +27 to +33
if connectionURL, err := url.Parse(cfg.ConnectionString); err != nil {
return nil, err
} else if connectionURL.Scheme != Neo4jConnectionScheme {
return nil, fmt.Errorf("expected connection URL scheme %s for Neo4J but got %s", Neo4jConnectionScheme, connectionURL.Scheme)
} else if password, isSet := connectionURL.User.Password(); !isSet {
return nil, fmt.Errorf("no password provided in connection URL")
} else {

⚠️ Potential issue

Broaden scheme support, validate credentials, and improve error text

Only accepting the “neo4j” scheme will surprise users; the official driver supports both neo4j and bolt schemes (and secure variants). Also validate the username and provide clearer error messages.

-	if connectionURL, err := url.Parse(cfg.ConnectionString); err != nil {
-		return nil, err
-	} else if connectionURL.Scheme != Neo4jConnectionScheme {
-		return nil, fmt.Errorf("expected connection URL scheme %s for Neo4J but got %s", Neo4jConnectionScheme, connectionURL.Scheme)
-	} else if password, isSet := connectionURL.User.Password(); !isSet {
-		return nil, fmt.Errorf("no password provided in connection URL")
+	if connectionURL, err := url.Parse(cfg.ConnectionString); err != nil {
+		return nil, err
+	} else if scheme := connectionURL.Scheme; scheme != Neo4jConnectionScheme && scheme != "bolt" {
+		return nil, fmt.Errorf("unexpected connection URL scheme %q (expected %q or %q)", scheme, Neo4jConnectionScheme, "bolt")
+	} else if connectionURL.User.Username() == "" {
+		return nil, fmt.Errorf("no username provided in connection URL")
+	} else if password, isSet := connectionURL.User.Password(); !isSet || password == "" {
+		return nil, fmt.Errorf("no password provided in connection URL")
 	} else {
-		boltURL := fmt.Sprintf("bolt://%s:%s", connectionURL.Hostname(), connectionURL.Port())
+		boltURL := fmt.Sprintf("bolt://%s", connectionURL.Host)
+		if connectionURL.Port() == "" {
+			boltURL += ":7687"
+		}

Note: consider accepting secure schemes (neo4j+s/neo4j+ssc/bolt+s/bolt+ssc) in a follow-up to enable TLS by default.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if connectionURL, err := url.Parse(cfg.ConnectionString); err != nil {
return nil, err
} else if connectionURL.Scheme != Neo4jConnectionScheme {
return nil, fmt.Errorf("expected connection URL scheme %s for Neo4J but got %s", Neo4jConnectionScheme, connectionURL.Scheme)
} else if password, isSet := connectionURL.User.Password(); !isSet {
return nil, fmt.Errorf("no password provided in connection URL")
} else {
if connectionURL, err := url.Parse(cfg.ConnectionString); err != nil {
return nil, err
} else if scheme := connectionURL.Scheme; scheme != Neo4jConnectionScheme && scheme != "bolt" {
return nil, fmt.Errorf("unexpected connection URL scheme %q (expected %q or %q)", scheme, Neo4jConnectionScheme, "bolt")
} else if connectionURL.User.Username() == "" {
return nil, fmt.Errorf("no username provided in connection URL")
} else if password, isSet := connectionURL.User.Password(); !isSet || password == "" {
return nil, fmt.Errorf("no password provided in connection URL")
} else {
boltURL := fmt.Sprintf("bolt://%s", connectionURL.Host)
if connectionURL.Port() == "" {
boltURL += ":7687"
}
🤖 Prompt for AI Agents
In database/neo4j/neo4j.go around lines 27 to 33, the code only accepts the
"neo4j" URL scheme and only checks for a password, which is too strict and
produces unclear errors; update validation to accept both "neo4j" and "bolt"
schemes (and prepare to allow secure variants later), explicitly verify that a
username is present and that a password is present, and replace generic errors
with clearer messages that mention which part is missing or which schemes are
accepted (e.g., "unsupported scheme X, expected neo4j or bolt" and "missing
username in connection URL" / "missing password in connection URL").

Comment on lines +36 to +41
if internalDriver, err := neo4j.NewDriverWithContext(boltURL, neo4j.BasicAuth(connectionURL.User.Username(), password, "")); err != nil {
return nil, fmt.Errorf("unable to connect to Neo4J: %w", err)
} else {
return New(internalDriver, cfg), nil
}
}

🛠️ Refactor suggestion

Verify connectivity and close the driver on failure

NewDriverWithContext does not validate connectivity; add VerifyConnectivity(ctx) and close the driver if it fails.

-		if internalDriver, err := neo4j.NewDriverWithContext(boltURL, neo4j.BasicAuth(connectionURL.User.Username(), password, "")); err != nil {
-			return nil, fmt.Errorf("unable to connect to Neo4J: %w", err)
-		} else {
-			return New(internalDriver, cfg), nil
-		}
+		if internalDriver, err := neo4j.NewDriverWithContext(boltURL, neo4j.BasicAuth(connectionURL.User.Username(), password, "")); err != nil {
+			return nil, fmt.Errorf("unable to connect to Neo4j: %w", err)
+		} else {
+			if err := internalDriver.VerifyConnectivity(ctx); err != nil {
+				_ = internalDriver.Close(ctx)
+				return nil, fmt.Errorf("unable to verify neo4j connectivity: %w", err)
+			}
+			return New(internalDriver, cfg), nil
+		}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if internalDriver, err := neo4j.NewDriverWithContext(boltURL, neo4j.BasicAuth(connectionURL.User.Username(), password, "")); err != nil {
return nil, fmt.Errorf("unable to connect to Neo4J: %w", err)
} else {
return New(internalDriver, cfg), nil
}
}
if internalDriver, err := neo4j.NewDriverWithContext(boltURL, neo4j.BasicAuth(connectionURL.User.Username(), password, "")); err != nil {
return nil, fmt.Errorf("unable to connect to Neo4j: %w", err)
} else {
if err := internalDriver.VerifyConnectivity(ctx); err != nil {
_ = internalDriver.Close(ctx)
return nil, fmt.Errorf("unable to verify neo4j connectivity: %w", err)
}
return New(internalDriver, cfg), nil
}
🤖 Prompt for AI Agents
In database/neo4j/neo4j.go around lines 36 to 41, after creating the driver with
neo4j.NewDriverWithContext you must call driver.VerifyConnectivity(ctx) to
validate the connection and, if VerifyConnectivity returns an error, close the
driver (driver.Close(ctx) or driver.Close(context.Background())) before
returning the error; update the code to create the driver in a variable, call
VerifyConnectivity with the same ctx, on error call Close to release resources
and then return a wrapped error, and only return New(driver, cfg) after
successful verification.

Comment on lines +223 to +283
func (s *SchemaManager) fetchPresentSchema(ctx context.Context) (neo4jSchema, error) {
var (
presentSchema = newNeo4jSchema()
err = s.transaction(ctx, func(transaction neo4j.ExplicitTransaction) error {
if result, err := transaction.Run(ctx, "call db.indexes() yield name, uniqueness, provider, labelsOrTypes, properties;", nil); err != nil {
return err
} else {
defer result.Consume(ctx)

var (
name string
uniqueness string
provider string
labels []string
properties []string
)

for result.Next(ctx) {
values := result.Record().Values

if !newValueMapper().MapAll(values, []any{&name, &uniqueness, &provider, &labels, &properties}) {
return errors.New("failed to map present schema")
}

// Need this for neo4j 4.4+ which creates a weird index by default
if len(labels) == 0 {
continue
}

if len(labels) > 1 || len(properties) > 1 {
return fmt.Errorf("composite index types are currently not supported")
}

if uniqueness == "UNIQUE" {
presentSchema.Constraints[name] = neo4jConstraint{
Constraint: database.Constraint{
Name: name,
Field: properties[0],
Type: parseProviderType(provider),
},
kind: graph.StringKind(labels[0]),
}
} else {
presentSchema.Indexes[name] = neo4jIndex{
Index: database.Index{
Name: name,
Field: properties[0],
Type: parseProviderType(provider),
},
kind: graph.StringKind(labels[0]),
}
}
}

return result.Err()
}
})
)

return presentSchema, err
}

💡 Verification agent

🧩 Analysis chain

Verify composite index support requirements.

The function explicitly rejects composite indexes on line 252. Ensure this limitation is documented and that there's a plan to support them if needed in the future.


🏁 Script executed:

#!/bin/bash
# Check if composite indexes are used anywhere in the codebase
rg -A 5 "composite.*index" --type go

Length of output: 403


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "Search for 'composite' across repo:"
rg -n --hidden --no-ignore-vcs "composite" || true
echo
echo "Search for 'composite index' exact:"
rg -n --hidden --no-ignore-vcs "composite index|composite indexes|composite-index" || true
echo
echo "Show surrounding context in database/neo4j/schema.go (lines around the check):"
nl -ba database/neo4j/schema.go | sed -n '220,290p' || true
echo
echo "Search for 'db.indexes()' usages:"
rg -n --hidden --no-ignore-vcs "db.indexes\\(\\)" || true

Length of output: 137237


Action: document or add support for composite Neo4j indexes (currently rejected)

The code explicitly rejects composite indexes at database/neo4j/schema.go (search result: database/neo4j/schema.go:253). No other code paths were found that handle composite Neo4j indexes.

Files/locations to address:

  • database/neo4j/schema.go — around line 253:
    if len(labels) > 1 || len(properties) > 1 {
        return fmt.Errorf("composite index types are currently not supported")
    }
    

Requested action:

  • Document this limitation (repo README / DB schema docs / migration notes) and add a TODO or tracking issue to implement composite-index support, or
  • Implement composite index handling in schema discovery and schema management (and add tests) if composite indexes must be supported.

Comment on lines +307 to +310
kindIDSlices = make([]string, numCreates)
kindIDEncoder = Int2ArrayEncoder{
buffer: &bytes.Buffer{},
}

🛠️ Refactor suggestion

Inconsistent encoder initialization pattern.

The CreateNodes method creates an Int2ArrayEncoder with direct field initialization instead of using the NewInt2ArrayEncoder() constructor, which is inconsistent with the pattern used elsewhere in the file.

Apply this diff to use the consistent constructor pattern:

-		kindIDEncoder = Int2ArrayEncoder{
-			buffer: &bytes.Buffer{},
-		}
+		kindIDEncoder = NewInt2ArrayEncoder()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
kindIDSlices = make([]string, numCreates)
kindIDEncoder = Int2ArrayEncoder{
buffer: &bytes.Buffer{},
}
kindIDSlices = make([]string, numCreates)
kindIDEncoder = NewInt2ArrayEncoder()
🤖 Prompt for AI Agents
In database/pg/batch.go around lines 307 to 310, the Int2ArrayEncoder is
instantiated directly via struct literal instead of using the existing
NewInt2ArrayEncoder() constructor; replace the direct initialization with a call
to NewInt2ArrayEncoder() so the encoder is created using the consistent
constructor pattern used elsewhere in the file, ensuring any internal setup
performed by NewInt2ArrayEncoder() is applied.

Comment on lines +77 to +98
func (s *instance) Transaction(ctx context.Context, driverLogic database.QueryLogic, options ...database.Option) error {
if acquiredConn, err := s.pool.Acquire(ctx); err != nil {
return err
} else {
defer acquiredConn.Release()

if transaction, err := beginTx(ctx, acquiredConn, options); err != nil {
return err
} else {
defer func() {
if err := transaction.Rollback(ctx); err != nil {
slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
}
}()

if err := driverLogic(ctx, newInternalDriver(transaction, s.schemaManager)); err != nil {
return err
}

return transaction.Commit(ctx)
}
}

🛠️ Refactor suggestion

Avoid logging rollback failures after a successful commit; conditionally rollback

The defer-rollback pattern logs a debug error even after commit (tx already closed). Use a committed flag to only rollback on error paths.

-		if transaction, err := beginTx(ctx, acquiredConn, options); err != nil {
+		if transaction, err := beginTx(ctx, acquiredConn, options); err != nil {
 			return err
 		} else {
-			defer func() {
-				if err := transaction.Rollback(ctx); err != nil {
-					slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
-				}
-			}()
-
-			if err := driverLogic(ctx, newInternalDriver(transaction, s.schemaManager)); err != nil {
-				return err
-			}
-
-			return transaction.Commit(ctx)
+			committed := false
+			defer func() {
+				if !committed {
+					if err := transaction.Rollback(ctx); err != nil {
+						slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
+					}
+				}
+			}()
+
+			if err := driverLogic(ctx, newInternalDriver(transaction, s.schemaManager)); err != nil {
+				return err
+			}
+
+			if err := transaction.Commit(ctx); err != nil {
+				return err
+			}
+			committed = true
+			return nil
 		}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func (s *instance) Transaction(ctx context.Context, driverLogic database.QueryLogic, options ...database.Option) error {
if acquiredConn, err := s.pool.Acquire(ctx); err != nil {
return err
} else {
defer acquiredConn.Release()
if transaction, err := beginTx(ctx, acquiredConn, options); err != nil {
return err
} else {
defer func() {
if err := transaction.Rollback(ctx); err != nil {
slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
}
}()
if err := driverLogic(ctx, newInternalDriver(transaction, s.schemaManager)); err != nil {
return err
}
return transaction.Commit(ctx)
}
}
func (s *instance) Transaction(ctx context.Context, driverLogic database.QueryLogic, options ...database.Option) error {
if acquiredConn, err := s.pool.Acquire(ctx); err != nil {
return err
} else {
defer acquiredConn.Release()
if transaction, err := beginTx(ctx, acquiredConn, options); err != nil {
return err
} else {
committed := false
defer func() {
if !committed {
if err := transaction.Rollback(ctx); err != nil {
slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
}
}
}()
if err := driverLogic(ctx, newInternalDriver(transaction, s.schemaManager)); err != nil {
return err
}
if err := transaction.Commit(ctx); err != nil {
return err
}
committed = true
return nil
}
}
}
🤖 Prompt for AI Agents
In database/pg/database.go around lines 77 to 98, the deferred rollback always
runs and logs even after a successful Commit; add a committed boolean flag,
change the defer to only attempt and log a rollback when committed is false,
then perform Commit and set committed = true only after Commit returns nil so
the defer won't run after successful commit.

Comment on lines +91 to +99
func V1Wrapper(v2DB database.Instance) Database {
if v1CompatibleInstanceRef, typeOK := v2DB.(BackwardCompatibleInstance); !typeOK {
panic(fmt.Sprintf("type %T is not a v1CompatibleInstance", v2DB))
} else {
return &v1Wrapper{
v2DB: v1CompatibleInstanceRef,
}
}
}

⚠️ Potential issue

Consider returning an error instead of panicking.

Panicking on type assertion failure could crash the entire application. Consider returning an error for more graceful error handling.

-func V1Wrapper(v2DB database.Instance) Database {
+func V1Wrapper(v2DB database.Instance) (Database, error) {
 	if v1CompatibleInstanceRef, typeOK := v2DB.(BackwardCompatibleInstance); !typeOK {
-		panic(fmt.Sprintf("type %T is not a v1CompatibleInstance", v2DB))
+		return nil, fmt.Errorf("type %T is not a v1CompatibleInstance", v2DB)
 	} else {
 		return &v1Wrapper{
 			v2DB: v1CompatibleInstanceRef,
-		}
+		}, nil
 	}
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func V1Wrapper(v2DB database.Instance) Database {
if v1CompatibleInstanceRef, typeOK := v2DB.(BackwardCompatibleInstance); !typeOK {
panic(fmt.Sprintf("type %T is not a v1CompatibleInstance", v2DB))
} else {
return &v1Wrapper{
v2DB: v1CompatibleInstanceRef,
}
}
}
func V1Wrapper(v2DB database.Instance) (Database, error) {
if v1CompatibleInstanceRef, typeOK := v2DB.(BackwardCompatibleInstance); !typeOK {
return nil, fmt.Errorf("type %T is not a v1CompatibleInstance", v2DB)
} else {
return &v1Wrapper{
v2DB: v1CompatibleInstanceRef,
}, nil
}
}
🤖 Prompt for AI Agents
In database/v1compat/wrapper.go around lines 91-99, the V1Wrapper function
panics when the v2DB does not implement BackwardCompatibleInstance; change the
function to return (Database, error) instead of panicking: perform the type
assertion, and if it fails return a wrapped error (e.g., fmt.Errorf("v2 instance
of type %T is not BackwardCompatibleInstance", v2DB)), otherwise return the
v1Wrapper and nil; update all callers to handle the error accordingly.

Comment on lines +190 to +200
func wrapDriverToBatch(ctx context.Context, driver database.Driver) *driverBatchWrapper {
if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
panic(fmt.Sprintf("type %T is not a v1CompatibleDriver", driver))
} else {
return &driverBatchWrapper{
ctx: ctx,
driver: v1CompatibleDriverRef,
batchWriteSize: 2000,
}
}
}

⚠️ Potential issue

Consider returning an error instead of panicking.

Similar to the V1Wrapper function, consider returning an error for better error handling.

-func wrapDriverToBatch(ctx context.Context, driver database.Driver) *driverBatchWrapper {
+func wrapDriverToBatch(ctx context.Context, driver database.Driver) (*driverBatchWrapper, error) {
 	if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
-		panic(fmt.Sprintf("type %T is not a v1CompatibleDriver", driver))
+		return nil, fmt.Errorf("type %T is not a v1CompatibleDriver", driver)
 	} else {
 		return &driverBatchWrapper{
 			ctx:            ctx,
 			driver:         v1CompatibleDriverRef,
 			batchWriteSize: 2000,
-		}
+		}, nil
 	}
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func wrapDriverToBatch(ctx context.Context, driver database.Driver) *driverBatchWrapper {
if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
panic(fmt.Sprintf("type %T is not a v1CompatibleDriver", driver))
} else {
return &driverBatchWrapper{
ctx: ctx,
driver: v1CompatibleDriverRef,
batchWriteSize: 2000,
}
}
}
func wrapDriverToBatch(ctx context.Context, driver database.Driver) (*driverBatchWrapper, error) {
if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
return nil, fmt.Errorf("type %T is not a v1CompatibleDriver", driver)
} else {
return &driverBatchWrapper{
ctx: ctx,
driver: v1CompatibleDriverRef,
batchWriteSize: 2000,
}, nil
}
}
🤖 Prompt for AI Agents
In database/v1compat/wrapper.go around lines 190 to 200, the function
wrapDriverToBatch currently panics when the provided driver isn't a
BackwardCompatibleDriver; change its signature to return (*driverBatchWrapper,
error), perform the type assertion and if it fails return a descriptive error
(not a panic), and if it succeeds return the constructed &driverBatchWrapper
with nil error (preserve ctx and batchWriteSize). Update all callers to handle
the returned error accordingly, mirroring the error-handling style used by
V1Wrapper.

Comment on lines +319 to +328
func wrapDriverToTransaction(ctx context.Context, driver database.Driver) Transaction {
if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
panic(fmt.Sprintf("type %T is not a v1CompatibleDriver", driver))
} else {
return &driverTransactionWrapper{
ctx: ctx,
driver: v1CompatibleDriverRef,
}
}
}

⚠️ Potential issue

Consider returning an error instead of panicking.

Consistent with previous suggestions, return an error for better error handling.

-func wrapDriverToTransaction(ctx context.Context, driver database.Driver) Transaction {
+func wrapDriverToTransaction(ctx context.Context, driver database.Driver) (Transaction, error) {
 	if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
-		panic(fmt.Sprintf("type %T is not a v1CompatibleDriver", driver))
+		return nil, fmt.Errorf("type %T is not a v1CompatibleDriver", driver)
 	} else {
 		return &driverTransactionWrapper{
 			ctx:    ctx,
 			driver: v1CompatibleDriverRef,
-		}
+		}, nil
 	}
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func wrapDriverToTransaction(ctx context.Context, driver database.Driver) Transaction {
if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
panic(fmt.Sprintf("type %T is not a v1CompatibleDriver", driver))
} else {
return &driverTransactionWrapper{
ctx: ctx,
driver: v1CompatibleDriverRef,
}
}
}
func wrapDriverToTransaction(ctx context.Context, driver database.Driver) (Transaction, error) {
if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
return nil, fmt.Errorf("type %T is not a v1CompatibleDriver", driver)
} else {
return &driverTransactionWrapper{
ctx: ctx,
driver: v1CompatibleDriverRef,
}, nil
}
}
🤖 Prompt for AI Agents
In database/v1compat/wrapper.go around lines 319 to 328, the function currently
panics when the provided driver is not a BackwardCompatibleDriver; change it to
return an error instead: update the signature to return (Transaction, error),
test the type assertion and if it fails return (nil, fmt.Errorf("type %T is not
a BackwardCompatibleDriver", driver)), otherwise return the constructed
*driverTransactionWrapper and nil error; then update all call sites to handle
the error (propagate or handle appropriately).

Comment on lines +186 to 189

if err := result.Scan(&nextNode, &nextRelationship); err != nil {
return err
}

🛠️ Refactor suggestion

Scan argument order does not match the RETURN clause

The query returns the relationship before the node, so the destinations passed to Scan must be listed in the same order; as written, the values are unmarshalled into the wrong variables.

-					if err := result.Scan(&nextNode, &nextRelationship); err != nil {
+					if err := result.Scan(&nextRelationship, &nextNode); err != nil {
 						return err
 					}

Note: Also verify the order of parameters in the Scan call matches the order in the Return clause (relationship first, then node).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if err := result.Scan(&nextNode, &nextRelationship); err != nil {
return err
}
if err := result.Scan(&nextRelationship, &nextNode); err != nil {
return err
}
🤖 Prompt for AI Agents
In traversal/traversal.go around lines 186 to 189, the call to
result.Scan(&nextNode, &nextRelationship) does not check the returned error
immediately and may also have the arguments in the wrong order relative to the
Cypher RETURN clause; change the code to call result.Scan(...) and immediately
if err != nil return err, before any processing of the segment, and verify/swap
the Scan parameter order so it matches the RETURN order (relationship first,
then node) i.e. Scan(&nextRelationship, &nextNode) if the Cypher returns
relationship before node.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
graph/properties.go (1)

237-266: Guard against nil other in Properties.Merge
Add at the top of func (s *Properties) Merge(other *Properties):

if other == nil {
    return
}
traversal/traversal.go (1)

141-149: Guard against empty pattern expansions to avoid panic

Accessing s.expansions[tag.patternIdx] panics when no expansions were configured.

 func (s *pattern) Driver(ctx context.Context, dbDriver database.Driver, segment *graph.PathSegment) ([]*graph.PathSegment, error) {
     var (
         nextSegments []*graph.PathSegment
@@
-       tag              = popSegmentPatternTag(segment)
-       currentExpansion = s.expansions[tag.patternIdx]
+       tag = popSegmentPatternTag(segment)
     )
+
+    if len(s.expansions) == 0 {
+        if s.delegate != nil {
+            if err := s.delegate(segment); err != nil {
+                return nil, err
+            }
+        }
+        return nil, nil
+    }
+
+    currentExpansion := s.expansions[tag.patternIdx]
♻️ Duplicate comments (24)
database/v1compat/query/model.go (3)

12-22: Avoid convertCriteria panics on unexpected node types

convertCriteria still uses unchecked assertions (nextCriteria.(T)), so any caller that passes a non‑matching syntax node will panic the server. Please add the comma‑ok check and propagate an error (e.g., ([]T, error)) so every call site can fail fast instead of crashing.


141-145: Harden Property() against non-variable qualifiers

Directly asserting qualifier.(*cypher.Variable) will panic whenever a non-variable sneaks through (and callers in this file accept general SyntaxNodes). Guard the assertion and surface a descriptive error instead.

 func Property(qualifier cypher.SyntaxNode, name string) *cypher.PropertyLookup {
-	return &cypher.PropertyLookup{
-		Atom:   qualifier.(*cypher.Variable),
-		Symbol: name,
-	}
+	variable, ok := qualifier.(*cypher.Variable)
+	if !ok {
+		return cypher.WithErrors(&cypher.PropertyLookup{}, fmt.Errorf("qualifier must be *cypher.Variable, got %T", qualifier))
+	}
+
+	return &cypher.PropertyLookup{
+		Atom:   variable,
+		Symbol: name,
+	}
 }

382-394: Prevent Delete() from crashing on non-variable leaves

leaf.(*cypher.Variable) will panic when the slice contains anything else (callers accept arbitrary SyntaxNodes). Capture the type assertion result and handle the error gracefully before dereferencing.

 func Delete(leaves ...cypher.SyntaxNode) *cypher.UpdatingClause {
 	deleteClause := &cypher.Delete{
 		Detach: true,
 	}
 
 	for _, leaf := range leaves {
-		switch leaf.(*cypher.Variable).Symbol {
+		variable, ok := leaf.(*cypher.Variable)
+		if !ok {
+			return cypher.WithErrors(cypher.NewUpdatingClause(deleteClause), fmt.Errorf("delete target must be *cypher.Variable, got %T", leaf))
+		}
+
+		switch variable.Symbol {
 		case EdgeSymbol, EdgeStartSymbol, EdgeEndSymbol:
 			deleteClause.Detach = false
 		}
 
 		deleteClause.Expressions = append(deleteClause.Expressions, leaf)
database/pg/pg.go (1)

86-90: Support both value and pointer types for DriverConfig.

The type assertion only handles Config by value. If a pointer *Config is provided, it will silently fall through to the connection string path, potentially causing misconfiguration.

Apply this diff to handle both forms:

-		if cfg.DriverConfig != nil {
-			if pgConfig, typeOK := cfg.DriverConfig.(Config); typeOK && pgConfig.Pool != nil {
-				return New(pgConfig.Pool, cfg), nil
-			}
-		}
+		if cfg.DriverConfig != nil {
+			switch c := cfg.DriverConfig.(type) {
+			case Config:
+				if c.Pool != nil {
+					return New(c.Pool, cfg), nil
+				}
+			case *Config:
+				if c != nil && c.Pool != nil {
+					return New(c.Pool, cfg), nil
+				}
+			}
+		}

Based on learnings

cypher/models/pgsql/test/validation_integration_test.go (1)

35-38: Use testCtx instead of context.TODO() when opening the connection.

Ensures open/handshake respects test cancellation/deadlines.

-    if connection, err := dawgs.Open(context.TODO(), pg.DriverName, dawgs.Config{
+    if connection, err := dawgs.Open(testCtx, pg.DriverName, dawgs.Config{
         GraphQueryMemoryLimit: size.Gibibyte,
         ConnectionString:      pgConnectionStr,
     }); err != nil {
database/pg/mapper.go (1)

20-27: Avoid mapping with partially filled/zeroed kind IDs on invalid []any elements.

If any element in []any isn’t int16, validType=false but len(kindIDs)>0, so mapping still runs with zero values. Gate on validType and clear kindIDs when a bad element is found.

@@
-        for idx, untypedElement := range typedValue {
-            if typedElement, typeOK := untypedElement.(int16); typeOK {
-                kindIDs[idx] = typedElement
-            } else {
-                // Type assertion failed, mark as invalid type
-                validType = false
-                break
-            }
-        }
+        for idx, untypedElement := range typedValue {
+            if typedElement, typeOK := untypedElement.(int16); typeOK {
+                kindIDs[idx] = typedElement
+            } else {
+                // Type assertion failed, mark as invalid type and prevent mapping
+                validType = false
+                kindIDs = nil
+                break
+            }
+        }
@@
-    // Guard to prevent unnecessary thrashing of critical sections if there are no kind IDs to resolve
-    if len(kindIDs) > 0 {
+    // Guard to prevent unnecessary thrashing when input type is invalid or there are no kind IDs
+    if validType && len(kindIDs) > 0 {
         // Ignoring the error here is intentional. Failure to map the kinds here does not imply a fatal error.
         if mappedKinds, err := kindMapper.MapKindIDs(ctx, kindIDs); err == nil {
             return mappedKinds, true
         }
     }

Optional: consider accepting other integer widths (e.g., int32/int64) and narrowing to int16 safely.

Also applies to: 38-44

registry.go (3)

3-10: Add sync and fmt imports to support concurrency-safe registry and better errors.

Needed for RWMutex and error wrapping.

 import (
     "context"
     "errors"
+    "fmt"
+    "sync"
 
     "github.com/specterops/dawgs/database"
     "github.com/specterops/dawgs/database/v1compat"
     "github.com/specterops/dawgs/util/size"
 )

20-25: Protect the drivers registry with RWMutex and prevent silent overwrites.

Avoid data races and duplicate registrations.

-var availableDrivers = map[string]DriverConstructor{}
+var (
+    availableDrivers = map[string]DriverConstructor{}
+    registryMu       sync.RWMutex
+)
 
 // Register registers a dawgs driver under the given driverName
 func Register(driverName string, constructor DriverConstructor) {
-    availableDrivers[driverName] = constructor
+    registryMu.Lock()
+    defer registryMu.Unlock()
+
+    if _, exists := availableDrivers[driverName]; exists {
+        panic(fmt.Sprintf("driver %q already registered", driverName))
+    }
+    availableDrivers[driverName] = constructor
 }

39-45: Guard lookups with RLock and wrap missing-driver error with driver name.

Improves thread-safety and diagnostics while preserving the sentinel.

 func Open(ctx context.Context, driverName string, config Config) (database.Instance, error) {
-    if driverConstructor, hasDriver := availableDrivers[driverName]; !hasDriver {
-        return nil, ErrDriverMissing
-    } else {
-        return driverConstructor(ctx, config)
-    }
+    registryMu.RLock()
+    driverConstructor, hasDriver := availableDrivers[driverName]
+    registryMu.RUnlock()
+
+    if !hasDriver {
+        return nil, fmt.Errorf("%w: %s", ErrDriverMissing, driverName)
+    }
+    return driverConstructor(ctx, config)
 }
database/pg/database.go (2)

77-99: Avoid logging rollback failures after a successful commit; conditionally rollback.

The defer-rollback pattern logs a debug error even after commit (tx already closed). Use a committed flag to only rollback on error paths.


119-125: Fix typo in error message.

The error message contains a typo: "uanble" should be "unable".

database/neo4j/neo4j.go (2)

27-33: Broaden scheme support, validate credentials, and improve error text.

Only accepting the "neo4j" scheme will surprise users; the official driver supports both neo4j and bolt schemes (and secure variants). Also validate username and provide clearer error messages.


36-41: Verify connectivity and close the driver on failure.

NewDriverWithContext does not validate connectivity; add VerifyConnectivity(ctx) and close the driver if it fails.

database/v1compat/relationship.go (1)

277-282: Fix incorrect ID assignment in FetchKinds.

The ID field is incorrectly set to endID instead of edgeID, resulting in the wrong identifier being returned for the relationship.

Apply this diff to fix the assignment:

 			return RelationshipKindsResult{
 				RelationshipTripleResult: RelationshipTripleResult{
 					StartID: startID,
 					EndID:   endID,
-					ID:      endID,
+					ID:      edgeID,
 				},
 				Kind: edgeKind,
 			}, err
database/neo4j/driver.go (1)

105-129: CREATE now returns IDs and EndID compare fixed — LGTM

RETURN clauses added and HasNext checked before Scan; EndID comparison corrected. Matches earlier feedback.

Also applies to: 132-162

traversal/traversal.go (2)

181-189: Scan order must match RETURN: relationship first, then node

RETURN returns (Relationship, End/Start), but Scan maps (node, relationship). Swap arguments.

-                   if err := result.Scan(&nextNode, &nextRelationship); err != nil {
+                   if err := result.Scan(&nextRelationship, &nextNode); err != nil {
                        return err
                    }

Also applies to: 160-169, 166-169


152-156: Type mismatch: fetchFunc takes single criteria but call sites pass a slice

This won’t compile. Make fetchFunc accept []cypher.SyntaxNode and pass slices as-is. Also avoid appending a nil criterion.

-       fetchFunc = func(criteria cypher.SyntaxNode, direction graph.Direction) error {
-           var (
-               queryBuilder = query.New()
-               allCriteria  = []cypher.SyntaxNode{criteria}
-           )
+       fetchFunc = func(criteria []cypher.SyntaxNode, direction graph.Direction) error {
+           queryBuilder := query.New()
+           allCriteria := make([]cypher.SyntaxNode, 0, len(criteria)+1)
+           allCriteria = append(allCriteria, criteria...)
            switch direction {
                // ...
            }
        }
@@
-           if err := fetchFunc(currentExpansion.criteria, fetchDirection); err != nil {
+           if err := fetchFunc(currentExpansion.criteria, fetchDirection); err != nil {
                return nil, err
            }
@@
-               if err := fetchFunc(nextExpansion.criteria, fetchDirection); err != nil {
+               if err := fetchFunc(nextExpansion.criteria, fetchDirection); err != nil {
                    return nil, err
                }

Also applies to: 217-218, 235-236

query/query.go (1)

467-494: Bug: Type mismatch in Delete() QualifiedExpression case (still present)

Lines 473-475 compare a cypher.Expression directly to string identifiers (Identifiers.node/start/end), which will never match. The correct approach (shown in lines 481-483 for the *cypher.Variable case) is to assert to *cypher.Variable and compare the Symbol field.

Apply this diff to fix the type mismatch:

     case QualifiedExpression:
         qualifier := typedNextUpdate.qualifier()
-
-        switch qualifier {
-        case Identifiers.node, Identifiers.start, Identifiers.end:
-            s.detachDelete = true
+        
+        if v, ok := qualifier.(*cypher.Variable); ok {
+            switch v.Symbol {
+            case Identifiers.node, Identifiers.start, Identifiers.end:
+                s.detachDelete = true
+            }
         }
-
         s.deleteItems = append(s.deleteItems, qualifier)
database/v1compat/wrapper.go (6)

117-130: Avoid panic in wrapDriverToBatch; propagate batchWriteSize to honor configuration

Return an error instead of panicking when the driver isn’t BackwardCompatibleDriver, and apply s.batchWriteSize to the batch wrapper so thresholds take effect.

@@
-func (s *v1Wrapper) BatchOperation(ctx context.Context, batchDelegate BatchDelegate) error {
-	return s.v2DB.Session(ctx, func(ctx context.Context, driver database.Driver) error {
-		var (
-			batchWrapper = wrapDriverToBatch(ctx, driver)
-			delegateErr  = batchDelegate(batchWrapper)
-		)
+func (s *v1Wrapper) BatchOperation(ctx context.Context, batchDelegate BatchDelegate) error {
+	return s.v2DB.Session(ctx, func(ctx context.Context, driver database.Driver) error {
+		batchWrapper, err := wrapDriverToBatch(ctx, driver)
+		if err != nil {
+			return err
+		}
+
+		if s.batchWriteSize > 0 {
+			batchWrapper.batchWriteSize = s.batchWriteSize
+		}
+
+		delegateErr := batchDelegate(batchWrapper)
@@
-		return batchWrapper.tryFlush(0)
+		return batchWrapper.tryFlush(0)
 	})
 }
@@
-func wrapDriverToBatch(ctx context.Context, driver database.Driver) *driverBatchWrapper {
-	if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
-		panic(fmt.Sprintf("type %T is not a v1CompatibleDriver", driver))
-	} else {
-		return &driverBatchWrapper{
-			ctx:            ctx,
-			driver:         v1CompatibleDriverRef,
-			batchWriteSize: 2000,
-		}
-	}
+func wrapDriverToBatch(ctx context.Context, driver database.Driver) (*driverBatchWrapper, error) {
+	if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
+		return nil, fmt.Errorf("type %T is not a v1CompatibleDriver", driver)
+	}
+	return &driverBatchWrapper{
+		ctx:            ctx,
+		driver:         v1CompatibleDriverRef,
+		batchWriteSize: 2000,
+	}, nil
 }

Also applies to: 190-200


105-115: Avoid panic in wrapDriverToTransaction; plumb errors to callers

Return an error instead of panicking, and handle it in Read/WriteTransaction.

@@
 func (s *v1Wrapper) ReadTransaction(ctx context.Context, txDelegate TransactionDelegate, options ...TransactionOption) error {
 	return s.v2DB.Session(ctx, func(ctx context.Context, driver database.Driver) error {
-		return txDelegate(wrapDriverToTransaction(ctx, driver))
+		tx, err := wrapDriverToTransaction(ctx, driver)
+		if err != nil {
+			return err
+		}
+		return txDelegate(tx)
 	}, database.OptionReadOnly)
 }
 
 func (s *v1Wrapper) WriteTransaction(ctx context.Context, txDelegate TransactionDelegate, options ...TransactionOption) error {
 	return s.v2DB.Session(ctx, func(ctx context.Context, driver database.Driver) error {
-		return txDelegate(wrapDriverToTransaction(ctx, driver))
+		tx, err := wrapDriverToTransaction(ctx, driver)
+		if err != nil {
+			return err
+		}
+		return txDelegate(tx)
 	})
 }
@@
-func wrapDriverToTransaction(ctx context.Context, driver database.Driver) Transaction {
-	if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
-		panic(fmt.Sprintf("type %T is not a v1CompatibleDriver", driver))
-	} else {
-		return &driverTransactionWrapper{
-			ctx:    ctx,
-			driver: v1CompatibleDriverRef,
-		}
-	}
+func wrapDriverToTransaction(ctx context.Context, driver database.Driver) (Transaction, error) {
+	if v1CompatibleDriverRef, typeOK := driver.(BackwardCompatibleDriver); !typeOK {
+		return nil, fmt.Errorf("type %T is not a v1CompatibleDriver", driver)
+	}
+	return &driverTransactionWrapper{
+		ctx:    ctx,
+		driver: v1CompatibleDriverRef,
+	}, nil
 }

Also applies to: 319-328


91-99: Avoid panic in V1Wrapper; return error

Prefer returning (Database, error) and let callers handle unsupported types gracefully.

-func V1Wrapper(v2DB database.Instance) Database {
+func V1Wrapper(v2DB database.Instance) (Database, error) {
 	if v1CompatibleInstanceRef, typeOK := v2DB.(BackwardCompatibleInstance); !typeOK {
-		panic(fmt.Sprintf("type %T is not a v1CompatibleInstance", v2DB))
+		return nil, fmt.Errorf("type %T is not a v1CompatibleInstance", v2DB)
 	} else {
-		return &v1Wrapper{
+		return &v1Wrapper{
 			v2DB: v1CompatibleInstanceRef,
-		}
+		}, nil
 	}
 }

84-89: Guard schema mutations with a mutex; avoid holding lock during AssertSchema

Concurrent calls to AssertSchema/SetDefaultGraph mutate s.schema without synchronization. Add an RWMutex and lock around reads/writes; copy out the schema and call v2DB.AssertSchema after unlocking.

@@
 package v1compat
 
 import (
 	"context"
 	"fmt"
 	"time"
+	"sync"
@@
 type v1Wrapper struct {
 	schema         *database.Schema
 	v2DB           BackwardCompatibleInstance
 	writeFlushSize int
 	batchWriteSize int
+	mu             sync.RWMutex
 }
@@
 func (s *v1Wrapper) AssertSchema(ctx context.Context, dbSchema database.Schema) error {
-	s.schema = &dbSchema
-	return s.v2DB.AssertSchema(ctx, dbSchema)
+	s.mu.Lock()
+	s.schema = &dbSchema
+	local := dbSchema
+	s.mu.Unlock()
+	return s.v2DB.AssertSchema(ctx, local)
 }
@@
 func (s *v1Wrapper) SetDefaultGraph(ctx context.Context, graphSchema database.Graph) error {
-	if s.schema != nil {
-		s.schema.GraphSchemas[graphSchema.Name] = graphSchema
-		s.schema.DefaultGraphName = graphSchema.Name
-	} else {
-		s.schema = &database.Schema{
-			GraphSchemas: map[string]database.Graph{
-				graphSchema.Name: graphSchema,
-			},
-			DefaultGraphName: graphSchema.Name,
-		}
-	}
-
-	return s.v2DB.AssertSchema(ctx, *s.schema)
+	s.mu.Lock()
+	if s.schema != nil {
+		s.schema.GraphSchemas[graphSchema.Name] = graphSchema
+		s.schema.DefaultGraphName = graphSchema.Name
+	} else {
+		s.schema = &database.Schema{
+			GraphSchemas:     map[string]database.Graph{graphSchema.Name: graphSchema},
+			DefaultGraphName: graphSchema.Name,
+		}
+	}
+	local := *s.schema
+	s.mu.Unlock()
+	return s.v2DB.AssertSchema(ctx, local)
 }

Also applies to: 132-151, 3-13


348-375: Add MATCH clause before UPDATE for node updates

The update builder lacks a MATCH to scope the update to the target node.

 func (s driverTransactionWrapper) UpdateNode(node *graph.Node) error {
-	updateQuery := query.New().Where(query.Node().ID().Equals(node.ID))
+	updateQuery := query.New()
+	updateQuery.Match(query.Node().Where(query.Node().ID().Equals(node.ID)))
 
 	if len(node.AddedKinds) > 0 {
 		updateQuery.Update(query.Node().Kinds().Add(node.AddedKinds))
 	}

399-418: Add MATCH clause before UPDATE for relationship updates

Similarly, scope the update to the specific relationship ID.

 func (s driverTransactionWrapper) UpdateRelationship(relationship *graph.Relationship) error {
-	updateQuery := query.New().Where(query.Relationship().ID().Equals(relationship.ID))
+	updateQuery := query.New()
+	updateQuery.Match(query.Relationship().Where(query.Relationship().ID().Equals(relationship.ID)))
 
 	if modifiedProperties := relationship.Properties.ModifiedProperties(); len(modifiedProperties) > 0 {
 		updateQuery.Update(query.Relationship().SetProperties(modifiedProperties))
 	}
🧹 Nitpick comments (10)
registry_integration_test.go (1)

22-25: Externalize PG connection string (avoid hardcoded creds).

Read from an env var with a sane local default. Improves security and portability.

@@
-import (
+import (
 	"context"
 	"testing"
 
 	"github.com/specterops/dawgs"
 	"github.com/specterops/dawgs/database/pg"
+	"os"
@@
-	graphDB, err := dawgs.Open(ctx, pg.DriverName, dawgs.Config{
+	connStr := os.Getenv("PG_CONNECTION_STRING")
+	if connStr == "" {
+		connStr = "postgresql://postgres:postgres@localhost:5432/bhe"
+	}
+
+	graphDB, err := dawgs.Open(ctx, pg.DriverName, dawgs.Config{
 		GraphQueryMemoryLimit: size.Gibibyte * 1,
-		ConnectionString:      "postgresql://postgres:postgres@localhost:5432/bhe",
+		ConnectionString:      connStr,
 	})
query/util.go (1)

127-130: Consider removing or documenting unused context flags.

The context flags inDelete, inUpdate, inCreate, and inWhere are declared but never set or read in the Enter method. If these are intended for future use, add a comment explaining their purpose. Otherwise, remove them to reduce code clutter.

database/pg/driver.go (2)

93-95: Consider using a proper context instead of context.TODO().

The Mapper method uses context.TODO() which is generally discouraged. If a proper context is not available (e.g., from a request), consider adding a context parameter or documenting why context.TODO() is used here (e.g., for backward compatibility).


179-182: Consider immutability for WithGraph to avoid shared state issues.

WithGraph mutates the receiver by setting targetGraph, which could cause issues if the driver is shared across goroutines or multiple queries. Consider returning a new instance with the updated target graph to ensure thread safety.

Apply this diff if immutability is desired:

 func (s *dawgsDriver) WithGraph(targetGraph database.Graph) database.Driver {
-	s.targetGraph = &targetGraph
-	return s
+	return &dawgsDriver{
+		internalConn:       s.internalConn,
+		queryExecMode:      s.queryExecMode,
+		queryResultFormats: s.queryResultFormats,
+		schemaManager:      s.schemaManager,
+		targetGraph:        &targetGraph,
+	}
 }
database/neo4j/driver.go (2)

26-29: Guard against nil nextRecord in Values()

Avoid a panic if Values() is called before HasNext or after exhaustion.

 func (s *sessionResult) Values() []any {
-    return s.nextRecord.Values
+    if s.nextRecord == nil {
+        return nil
+    }
+    return s.nextRecord.Values
 }

72-79: Optionally always consume to release resources even on error

Returning early on s.err skips Consume; consider consuming to ensure resources are freed.

 func (s *sessionResult) Close(ctx context.Context) error {
-    if s.err != nil {
-        return s.err
-    }
-
-    _, err := s.result.Consume(ctx)
-    return err
+    _, consumeErr := s.result.Consume(ctx)
+    if s.err != nil {
+        // Prefer the original error but still consume
+        return s.err
+    }
+    return consumeErr
 }
database/pg/manager.go (1)

84-94: Minor cleanup: avoid redundant self-assignment in fetch()

Range the fetched map directly and drop the no-op write-back.

-   s.kindsByID = kinds
-
-   for kind, kindID := range s.kindsByID {
-       s.kindsByID[kind] = kindID
-       s.kindIDsByKind[kindID] = kind
-   }
+   s.kindsByID = kinds
+   for kind, kindID := range kinds {
+       s.kindIDsByKind[kindID] = kind
+   }
database/v1compat/traversal/traversal.go (1)

213-213: Consider adding parentheses for clarity.

The boolean expression relies on operator precedence (&& binds tighter than ||), which evaluates as (tag.depth > 0 && currentExpansion.minDepth == 0) || tag.depth >= currentExpansion.minDepth. While logically correct, explicit parentheses would improve readability.

Apply this diff to add explicit grouping:

-        if tag.depth > 0 && currentExpansion.minDepth == 0 || tag.depth >= currentExpansion.minDepth {
+        if (tag.depth > 0 && currentExpansion.minDepth == 0) || tag.depth >= currentExpansion.minDepth {
database/v1compat/wrapper.go (1)

169-175: SetWriteFlushSize appears unused

writeFlushSize is set but not referenced. Either apply it where intended (e.g., writer flush cadence) or remove to avoid dead config.

database/pg/query/query.go (1)

5-5: Remove unused embed import (if no //go:embed in this file)

The blank import of embed is unnecessary here.

-import (
+import (
 	"context"
-	_ "embed"
 	"fmt"
 	"strings"
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 90a694e and cfc29c0.

📒 Files selected for processing (92)
  • cypher/models/cypher/model.go (1 hunks)
  • cypher/models/pgsql/format/format.go (3 hunks)
  • cypher/models/pgsql/test/testcase.go (3 hunks)
  • cypher/models/pgsql/test/translation_test.go (1 hunks)
  • cypher/models/pgsql/test/validation_integration_test.go (3 hunks)
  • cypher/models/pgsql/visualization/visualizer_test.go (1 hunks)
  • database/driver.go (1 hunks)
  • database/neo4j/database.go (1 hunks)
  • database/neo4j/database_integration_test.go (1 hunks)
  • database/neo4j/driver.go (1 hunks)
  • database/neo4j/mapper.go (3 hunks)
  • database/neo4j/neo4j.go (1 hunks)
  • database/neo4j/schema.go (1 hunks)
  • database/pg/batch.go (1 hunks)
  • database/pg/config.go (1 hunks)
  • database/pg/database.go (1 hunks)
  • database/pg/driver.go (1 hunks)
  • database/pg/manager.go (10 hunks)
  • database/pg/mapper.go (3 hunks)
  • database/pg/model/format.go (3 hunks)
  • database/pg/model/model.go (1 hunks)
  • database/pg/pg.go (2 hunks)
  • database/pg/query/format.go (2 hunks)
  • database/pg/query/query.go (1 hunks)
  • database/pg/query/sql.go (1 hunks)
  • database/pg/query/sql/select_graph_by_name.sql (1 hunks)
  • database/pg/query/sql/select_graphs.sql (1 hunks)
  • database/pg/statements.go (1 hunks)
  • database/pg/types.go (2 hunks)
  • database/pg/util.go (1 hunks)
  • database/schema.go (1 hunks)
  • database/tooling.go (1 hunks)
  • database/v1compat/database.go (1 hunks)
  • database/v1compat/errors.go (1 hunks)
  • database/v1compat/graph.go (1 hunks)
  • database/v1compat/node.go (1 hunks)
  • database/v1compat/ops/ops.go (1 hunks)
  • database/v1compat/ops/parallel.go (1 hunks)
  • database/v1compat/ops/traversal.go (1 hunks)
  • database/v1compat/query.go (7 hunks)
  • database/v1compat/query/builder.go (3 hunks)
  • database/v1compat/query/model.go (1 hunks)
  • database/v1compat/query/sort.go (1 hunks)
  • database/v1compat/relationship.go (1 hunks)
  • database/v1compat/result.go (4 hunks)
  • database/v1compat/switch.go (2 hunks)
  • database/v1compat/traversal/collection.go (2 hunks)
  • database/v1compat/traversal/query.go (1 hunks)
  • database/v1compat/traversal/traversal.go (1 hunks)
  • database/v1compat/traversal/traversal_test.go (1 hunks)
  • database/v1compat/types.go (1 hunks)
  • database/v1compat/wrapper.go (1 hunks)
  • dawgs.go (0 hunks)
  • drivers/neo4j/batch.go (0 hunks)
  • drivers/neo4j/const.go (0 hunks)
  • drivers/neo4j/cypher.go (0 hunks)
  • drivers/neo4j/cypher_internal_test.go (0 hunks)
  • drivers/neo4j/driver.go (0 hunks)
  • drivers/neo4j/index.go (0 hunks)
  • drivers/neo4j/neo4j.go (0 hunks)
  • drivers/neo4j/node.go (0 hunks)
  • drivers/neo4j/relationship.go (0 hunks)
  • drivers/neo4j/result.go (0 hunks)
  • drivers/neo4j/result_internal_test.go (0 hunks)
  • drivers/neo4j/transaction.go (0 hunks)
  • drivers/neo4j/wrapper.go (0 hunks)
  • drivers/pg/batch.go (0 hunks)
  • drivers/pg/driver.go (0 hunks)
  • drivers/pg/model/model.go (0 hunks)
  • drivers/pg/node.go (0 hunks)
  • drivers/pg/query.go (0 hunks)
  • drivers/pg/query/query.go (0 hunks)
  • drivers/pg/relationship.go (0 hunks)
  • drivers/pg/result.go (0 hunks)
  • drivers/pg/tooling.go (0 hunks)
  • drivers/pg/transaction.go (0 hunks)
  • drivers/pg/util.go (0 hunks)
  • graph/error.go (0 hunks)
  • graph/graph.go (0 hunks)
  • graph/properties.go (1 hunks)
  • graph/schema.go (0 hunks)
  • graphcache/cache.go (0 hunks)
  • query/model.go (0 hunks)
  • query/neo4j/neo4j.go (0 hunks)
  • query/neo4j/neo4j_test.go (0 hunks)
  • query/neo4j/rewrite.go (0 hunks)
  • query/query.go (1 hunks)
  • query/query_test.go (1 hunks)
  • query/util.go (1 hunks)
  • registry.go (1 hunks)
  • registry_integration_test.go (1 hunks)
  • traversal/traversal.go (10 hunks)
💤 Files with no reviewable changes (33)
  • query/neo4j/neo4j_test.go
  • drivers/neo4j/result_internal_test.go
  • drivers/pg/tooling.go
  • graph/error.go
  • dawgs.go
  • drivers/pg/transaction.go
  • drivers/neo4j/batch.go
  • drivers/pg/util.go
  • drivers/pg/result.go
  • drivers/pg/relationship.go
  • drivers/neo4j/transaction.go
  • drivers/neo4j/driver.go
  • drivers/pg/model/model.go
  • drivers/neo4j/const.go
  • drivers/pg/query.go
  • drivers/neo4j/cypher_internal_test.go
  • drivers/neo4j/result.go
  • drivers/neo4j/cypher.go
  • drivers/neo4j/node.go
  • graphcache/cache.go
  • drivers/neo4j/index.go
  • drivers/neo4j/relationship.go
  • drivers/pg/node.go
  • graph/graph.go
  • query/model.go
  • query/neo4j/rewrite.go
  • drivers/neo4j/wrapper.go
  • drivers/neo4j/neo4j.go
  • drivers/pg/driver.go
  • drivers/pg/query/query.go
  • graph/schema.go
  • query/neo4j/neo4j.go
  • drivers/pg/batch.go
✅ Files skipped from review due to trivial changes (1)
  • database/pg/batch.go
🚧 Files skipped from review as they are similar to previous changes (18)
  • database/v1compat/ops/traversal.go
  • database/v1compat/errors.go
  • database/v1compat/traversal/query.go
  • database/v1compat/node.go
  • database/neo4j/database_integration_test.go
  • cypher/models/pgsql/test/testcase.go
  • database/v1compat/query/sort.go
  • database/pg/statements.go
  • database/pg/model/model.go
  • database/v1compat/traversal/collection.go
  • cypher/models/pgsql/test/translation_test.go
  • database/pg/config.go
  • query/query_test.go
  • database/pg/util.go
  • database/pg/query/sql/select_graphs.sql
  • database/v1compat/ops/parallel.go
  • cypher/models/cypher/model.go
  • database/neo4j/schema.go
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-06-27T15:24:24.735Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/pg/v2/driver.go:82-85
Timestamp: 2025-06-27T15:24:24.735Z
Learning: The pgx.Rows.Close() method in the jackc/pgx library (including v5) does not return an error - it has the signature Close() with no return value. This is different from the standard database/sql library where Rows.Close() does return an error. Do not suggest changing patterns like "s.rows.Close(); return nil" to "return s.rows.Close()" for pgx.Rows.

Applied to files:

  • database/pg/driver.go
📚 Learning: 2025-06-27T15:24:24.735Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/pg/v2/driver.go:82-85
Timestamp: 2025-06-27T15:24:24.735Z
Learning: The pgx.Rows.Close() method in the jackc/pgx/v5 library does not return an error - it has the signature Close() with no return value. Do not suggest changing patterns like "s.rows.Close(); return nil" to "return s.rows.Close()" for pgx.Rows.

Applied to files:

  • database/pg/driver.go
📚 Learning: 2025-08-14T01:22:17.837Z
Learnt from: zinic
PR: SpecterOps/DAWGS#10
File: database/v1compat/traversal/traversal.go:227-227
Timestamp: 2025-08-14T01:22:17.837Z
Learning: In database/v1compat/traversal/traversal.go, the fetchDirection variable should remain consistent during expansion. The fetchDirection represents the query perspective for retrieving relationships and must be the same for both current and next expansion steps to maintain proper traversal semantics.

Applied to files:

  • database/v1compat/relationship.go
  • database/v1compat/traversal/traversal.go
  • traversal/traversal.go
📚 Learning: 2025-06-27T15:32:33.244Z
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/neo4j/v2/schema.go:136-140
Timestamp: 2025-06-27T15:32:33.244Z
Learning: The Neo4j Go driver v5 may not have the same transaction rollback error handling patterns as suggested. Check the specific v5 documentation before recommending `errors.Is(err, neo4j.ErrTransactionTerminated)` or similar patterns.

Applied to files:

  • database/v1compat/wrapper.go
  • database/pg/database.go
🧬 Code graph analysis (28)
cypher/models/pgsql/format/format.go (1)
cypher/models/pgsql/pgtypes.go (1)
  • TextArray (89-89)
database/pg/query/format.go (1)
database/schema.go (4)
  • IndexType (5-5)
  • IndexTypeBTree (9-9)
  • IndexTypeTextSearch (10-10)
  • IndexTypeUnsupported (8-8)
database/neo4j/mapper.go (1)
graph/mapper.go (1)
  • ValueMapper (358-360)
database/v1compat/database.go (1)
database/v1compat/query.go (2)
  • NodeQuery (118-160)
  • RelationshipQuery (164-217)
database/pg/pg.go (1)
registry.go (1)
  • Register (23-25)
database/v1compat/query.go (3)
graph/mapper.go (1)
  • ValueMapper (358-360)
database/v1compat/graph.go (1)
  • Criteria (26-26)
database/v1compat/result.go (2)
  • DirectionalResult (28-32)
  • RelationshipTripleResult (17-21)
database/neo4j/driver.go (1)
graph/mapper.go (1)
  • ValueMapper (358-360)
database/pg/driver.go (3)
graph/mapper.go (1)
  • ValueMapper (358-360)
database/v1compat/wrapper.go (1)
  • BackwardCompatibleDriver (73-82)
cypher/models/pgsql/translate/translator.go (1)
  • Translate (446-454)
database/neo4j/neo4j.go (1)
registry.go (1)
  • Register (23-25)
database/v1compat/relationship.go (5)
query/query.go (1)
  • QueryBuilder (375-387)
database/v1compat/query.go (2)
  • RelationshipQuery (164-217)
  • CriteriaProvider (114-114)
database/v1compat/graph.go (1)
  • Criteria (26-26)
database/v1compat/query/model.go (3)
  • SetProperties (80-92)
  • Offset (225-229)
  • Count (148-153)
database/v1compat/result.go (4)
  • NewResultIterator (67-81)
  • DirectionalResult (28-32)
  • RelationshipTripleResult (17-21)
  • RelationshipKindsResult (23-26)
query/query.go (3)
cypher/models/cypher/model.go (31)
  • NewVariableWithSymbol (1153-1157)
  • Comparison (1073-1076)
  • NewPartialComparison (1131-1136)
  • KindMatcher (744-747)
  • NewComparison (1078-1086)
  • NewSimpleFunctionInvocation (1037-1042)
  • NewStringLiteral (785-790)
  • NewNegation (930-934)
  • NewStringListLiteral (883-891)
  • Remove (546-548)
  • Set (594-596)
  • SetItem (614-618)
  • RemoveItem (566-569)
  • NewSetItem (620-626)
  • RemoveProperty (577-581)
  • FunctionInvocation (1028-1035)
  • RemoveKindsByMatcher (571-575)
  • NewKindMatcher (749-754)
  • Skip (1496-1498)
  • NewSet (598-602)
  • NewRemove (550-554)
  • NewDelete (524-529)
  • NewProjection (1386-1390)
  • ProjectionItem (1169-1172)
  • NewProjectionItemWithExpr (1178-1182)
  • NewSkip (1500-1504)
  • NewLimit (1480-1484)
  • NewRegularQueryWithSingleQuery (142-150)
  • Match (411-415)
  • NewWhere (1333-1335)
  • NewReadingClause (207-209)
cypher/models/cypher/functions.go (4)
  • EdgeTypeFunction (15-15)
  • CountFunction (4-4)
  • IdentityFunction (11-11)
  • NodeLabelsFunction (14-14)
cypher/models/pgsql/operators.go (10)
  • OperatorNotEquals (71-71)
  • OperatorIn (89-89)
  • OperatorAnd (79-79)
  • OperatorOr (80-80)
  • OperatorEquals (70-70)
  • OperatorGreaterThan (72-72)
  • OperatorGreaterThanOrEqualTo (73-73)
  • OperatorLessThan (74-74)
  • OperatorLessThanOrEqualTo (75-75)
  • OperatorAssignment (94-94)
registry.go (3)
database/driver.go (1)
  • Instance (57-65)
database/v1compat/database.go (1)
  • Database (111-153)
database/v1compat/wrapper.go (1)
  • V1Wrapper (91-99)
database/v1compat/types.go (1)
cardinality/cardinality.go (1)
  • Duplex (46-58)
database/v1compat/traversal/traversal.go (7)
database/v1compat/database.go (2)
  • Transaction (57-91)
  • Database (111-153)
database/v1compat/result.go (1)
  • DirectionalResult (28-32)
util/channels/pipe.go (1)
  • BufferedPipe (10-81)
database/v1compat/ops/traversal.go (1)
  • ErrGraphQueryMemoryLimit (42-42)
database/v1compat/query.go (1)
  • RelationshipQuery (164-217)
database/v1compat/traversal/query.go (1)
  • ShallowFetchNodesByID (16-45)
database/v1compat/types.go (1)
  • DuplexToGraphIDs (159-169)
database/neo4j/database.go (3)
database/neo4j/neo4j.go (2)
  • DefaultNeo4jTransactionTimeout (15-15)
  • DefaultConcurrentConnections (20-20)
database/driver.go (3)
  • Option (11-11)
  • OptionReadOnly (14-14)
  • QueryLogic (55-55)
database/v1compat/database.go (1)
  • Transaction (57-91)
database/pg/mapper.go (1)
graph/mapper.go (1)
  • ValueMapper (358-360)
query/util.go (3)
query/query.go (1)
  • Identifiers (39-45)
cypher/models/cypher/model.go (4)
  • Match (411-415)
  • NewPatternPart (1441-1443)
  • PatternRange (903-906)
  • ProjectionItem (1169-1172)
cypher/models/walk/walk.go (2)
  • NewVisitor (86-90)
  • Cypher (218-220)
database/pg/manager.go (1)
database/pg/query/query.go (1)
  • On (20-24)
traversal/traversal.go (1)
database/v1compat/query/model.go (1)
  • Equals (267-269)
registry_integration_test.go (3)
registry.go (2)
  • Open (39-45)
  • Config (28-35)
database/schema.go (2)
  • NewSchema (52-63)
  • IndexTypeTextSearch (10-10)
database/driver.go (1)
  • Driver (30-53)
cypher/models/pgsql/test/validation_integration_test.go (2)
registry.go (1)
  • Open (39-45)
database/schema.go (1)
  • NewSchema (52-63)
database/v1compat/wrapper.go (3)
cypher/frontend/context.go (2)
  • Context (17-21)
  • NewContext (23-33)
database/v1compat/database.go (6)
  • Database (111-153)
  • TransactionDelegate (96-96)
  • TransactionOption (108-108)
  • BatchDelegate (99-99)
  • Batch (12-53)
  • Transaction (57-91)
database/driver.go (1)
  • OptionReadOnly (14-14)
database/pg/database.go (3)
database/driver.go (3)
  • Option (11-11)
  • OptionReadOnly (14-14)
  • QueryLogic (55-55)
database/v1compat/wrapper.go (1)
  • BackwardCompatibleInstance (66-71)
database/v1compat/database.go (1)
  • Transaction (57-91)
database/v1compat/query/model.go (3)
cypher/models/cypher/model.go (32)
  • UpdatingClause (459-463)
  • NewUpdatingClause (465-469)
  • Set (594-596)
  • SetItem (614-618)
  • Remove (546-548)
  • RemoveItem (566-569)
  • KindMatcher (744-747)
  • PropertyLookup (1201-1204)
  • NewKindMatcher (749-754)
  • FunctionInvocation (1028-1035)
  • Conjunction (1006-1008)
  • NewConjunction (1010-1016)
  • NewDisjunction (988-994)
  • ExclusiveDisjunction (962-964)
  • NewExclusiveDisjunction (966-972)
  • WithErrors (112-118)
  • Skip (1496-1498)
  • Comparison (1073-1076)
  • NewComparison (1078-1086)
  • NewSimpleFunctionInvocation (1037-1042)
  • PatternPredicate (1516-1518)
  • NewPatternPredicate (1520-1522)
  • NewVariableWithSymbol (1153-1157)
  • NewWhere (1333-1335)
  • SortOrder (10-10)
  • Return (1411-1413)
  • ProjectionItem (1169-1172)
  • Negation (926-928)
  • ReadingClause (202-205)
  • SingleQuery (162-165)
  • Match (411-415)
  • Unwind (182-185)
cypher/models/pgsql/operators.go (9)
  • OperatorAssignment (94-94)
  • OperatorEquals (70-70)
  • OperatorGreaterThan (72-72)
  • OperatorGreaterThanOrEqualTo (73-73)
  • OperatorLessThan (74-74)
  • OperatorLessThanOrEqualTo (75-75)
  • OperatorIsNot (91-91)
  • OperatorIn (89-89)
  • OperatorIs (90-90)
database/v1compat/query/identifiers.go (5)
  • NodeSymbol (22-22)
  • EdgeSymbol (23-23)
  • EdgeStartSymbol (24-24)
  • EdgeEndSymbol (25-25)
  • Identity (13-18)
database/v1compat/switch.go (1)
database/driver.go (1)
  • Instance (57-65)
database/v1compat/graph.go (1)
database/schema.go (2)
  • IndexTypeBTree (9-9)
  • IndexTypeTextSearch (10-10)
database/driver.go (2)
graph/mapper.go (1)
  • ValueMapper (358-360)
database/v1compat/database.go (1)
  • Transaction (57-91)
database/pg/query/query.go (3)
database/pg/model/model.go (6)
  • GraphPartition (27-31)
  • NewGraphPartition (33-39)
  • GraphPartitions (59-62)
  • IndexChangeSet (7-16)
  • NewGraphPartitionFromSchema (41-57)
  • NewIndexChangeSet (18-25)
database/pg/model/format.go (4)
  • NodePartitionTableName (19-21)
  • EdgePartitionTableName (23-25)
  • NodeTable (11-11)
  • EdgeTable (12-12)
database/schema.go (2)
  • IndexType (5-5)
  • IndexTypeBTree (9-9)
🔇 Additional comments (69)
cypher/models/pgsql/format/format.go (3)

74-76: LGTM! Sub-second precision preserved.

The use of RFC3339Nano correctly addresses the previous concern about precision loss when formatting timestamps for PostgreSQL's timestamp with time zone type.

Based on learnings


123-125: LGTM! SQL injection risk mitigated.

The string escaping implementation correctly doubles single quotes per SQL literal rules, addressing the previous concern about broken SQL and potential injection vulnerabilities.

Based on learnings


126-128: LGTM! Array formatting delegates safely.

The []string case correctly delegates to formatSlice with TextArray datatype. Since formatSlice calls formatValue on each element, the string escaping implemented at lines 123-125 automatically protects array elements, ensuring safe SQL formatting.
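
For illustration, a minimal Go sketch of the three formatting rules covered by these comments (this is not the PR's formatter; formatLiteral and its exact output shape are assumptions):

package main

import (
	"fmt"
	"strings"
	"time"
)

// formatLiteral is an illustrative stand-in for the behavior described above,
// not the package's actual API.
func formatLiteral(value any) string {
	switch typed := value.(type) {
	case string:
		// Double single quotes per SQL literal rules.
		return "'" + strings.ReplaceAll(typed, "'", "''") + "'"
	case time.Time:
		// Keep sub-second precision and cast to timestamptz.
		return "'" + typed.Format(time.RFC3339Nano) + "'::timestamptz"
	case []string:
		// Each element reuses the string case, so escaping is inherited.
		escaped := make([]string, len(typed))
		for idx, element := range typed {
			escaped[idx] = formatLiteral(element)
		}
		return "array[" + strings.Join(escaped, ", ") + "]::text[]"
	default:
		return fmt.Sprintf("%v", value)
	}
}

func main() {
	fmt.Println(formatLiteral("O'Neil"))              // 'O''Neil'
	fmt.Println(formatLiteral([]string{"a", "b'c"}))  // array['a', 'b''c']::text[]
	fmt.Println(formatLiteral(time.Unix(0, 0).UTC())) // '1970-01-01T00:00:00Z'::timestamptz
}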

graph/properties.go (2)

238-265: LGTM! Efficient map initialization with capacity hints.

The changes correctly initialize the receiver's maps with capacity hints based on the other parameter's map sizes. This optimization reduces allocations during the merge operation. The merge logic correctly handles:

  • Copying properties from other.Map
  • Propagating modification flags while removing from deleted set
  • Propagating deletion flags while cleaning up from both map and modified set
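
For readers unfamiliar with the capacity-hint idiom, a generic sketch of the pattern (not the Properties implementation):

package main

import "fmt"

// mergeInto copies src into dst, pre-sizing dst when it has not been allocated
// yet so the copy does not trigger repeated map growth.
func mergeInto(dst, src map[string]any) map[string]any {
	if dst == nil {
		dst = make(map[string]any, len(src))
	}

	for key, value := range src {
		dst[key] = value
	}

	return dst
}

func main() {
	merged := mergeInto(nil, map[string]any{"name": "THAT NODE", "my_order": 1})
	fmt.Println(len(merged)) // 2
}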

1-547: No remaining references to NewPropertiesRed() constructor found.

database/pg/query/sql.go (1)

49-49: LGTM!

The update correctly loads select_kind_id.sql for the sqlSelectKindID variable, aligning with the PR's schema and metadata handling improvements.

database/pg/query/sql/select_graph_by_name.sql (1)

4-5: LGTM!

Adding LIMIT 1 ensures deterministic behavior when retrieving a graph by name, which is appropriate since graph names should be unique. This prevents potential issues if the uniqueness constraint is temporarily violated.

database/v1compat/traversal/traversal_test.go (4)

10-36: LGTM!

The test fixtures are well-structured with clear comments documenting the path relationships. The inclusion of both multi-step and cyclic paths provides comprehensive coverage for testing traversal filters.


38-45: LGTM!

The test correctly validates that AcyclicNodeFilter rejects cycles while allowing the pass-through delegate to accept non-cyclic segments.


47-63: LGTM!

The test comprehensively validates UniquePathSegmentFilter behavior: allowing traversal to the same node via different paths while preventing retraversal of the same path and rejecting cycles.


65-93: LGTM!

The test correctly validates FilteredSkipLimit behavior with skip=1 and limit=1, ensuring the filter skips the first matching node, collects the second, and rejects subsequent matches while properly controlling traversal descent.
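
A generic sketch of the skip/limit semantics this test exercises (illustrative only; it is not the FilteredSkipLimit implementation and the callback shape is an assumption):

package main

import "fmt"

// skipLimitFilter skips the first `skip` accepted values, then accepts up to
// `limit` more, and rejects everything afterwards.
func skipLimitFilter(skip, limit int) func(matched bool) bool {
	var skipped, collected int

	return func(matched bool) bool {
		if !matched {
			return false
		}

		if skipped < skip {
			skipped++
			return false
		}

		if limit > 0 && collected >= limit {
			return false
		}

		collected++
		return true
	}
}

func main() {
	accept := skipLimitFilter(1, 1)

	for _, matched := range []bool{true, true, true} {
		fmt.Println(accept(matched)) // false, true, false
	}
}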

database/v1compat/ops/ops.go (1)

14-15: LGTM!

The import changes correctly adopt the v1compat package aliases, maintaining the existing function signatures while routing through the v1 compatibility layer.

database/neo4j/mapper.go (5)

12-12: LGTM!

The function is correctly renamed to asTime (private) as it's only used internally within the Neo4j mapper package.


31-39: LGTM!

The function correctly converts Neo4j nodes to graph nodes with proper nil-safety for properties.


41-55: LGTM!

The function correctly converts Neo4j relationships to graph relationships with proper nil-safety for properties.


57-69: LGTM!

The function correctly converts Neo4j paths by aggregating nodes and relationships using the helper functions.


113-113: LGTM!

The function is correctly renamed to newValueMapper (private) as it's only used internally within the Neo4j mapper package.

database/pg/model/format.go (3)

7-7: LGTM!

The import change correctly adopts the database package for schema types.


27-27: LGTM!

The parameter type change from graph.Index to database.Index aligns with the broader schema representation migration while preserving function behavior.


38-38: LGTM!

The parameter type change from graph.Constraint to database.Constraint aligns with the broader schema representation migration while preserving function behavior.

database/v1compat/query/builder.go (4)

8-8: LGTM!

The import reordering is a non-functional formatting change.


31-31: LGTM!

The signature change from graph.Criteria to cypher.SyntaxNode aligns with the v1compat query builder's shift to working directly with cypher syntax nodes.


222-222: LGTM!

The signature change from graph.Criteria to cypher.SyntaxNode is consistent with the builder's refactored approach.


225-225: LGTM!

The type switch case update from []graph.Criteria to []cypher.SyntaxNode is consistent with the signature changes.

registry.go (1)

47-54: OpenV1 now correctly propagates constructor errors. LGTM.

Good fix; no masking of non-missing-driver errors.

database/neo4j/database.go (1)

127-144: Consume/check result errors; current code drops errors from result.Consume/Err.

The deferred Consume result is ignored. Check and return any errors after iteration.

-        consumeKindResult := func(result neo4j.ResultWithContext) error {
-            defer result.Consume(ctx)
-
-            for result.Next(ctx) {
+        consumeKindResult := func(result neo4j.ResultWithContext) error {
+            for result.Next(ctx) {
                 values := result.Record().Values
@@
-            return nil
+            if err := result.Err(); err != nil {
+                return err
+            }
+            if _, err := result.Consume(ctx); err != nil {
+                return err
+            }
+            return nil
         }
⛔ Skipped due to learnings
Learnt from: zinic
PR: SpecterOps/DAWGS#4
File: drivers/neo4j/v2/schema.go:231-231
Timestamp: 2025-06-27T15:56:17.950Z
Learning: In the Neo4j driver v2 schema.go file, cleanup operations in defer statements like result.Consume(ctx) are intentionally treated as best-effort and don't require explicit error checking. This is a conscious design pattern used consistently throughout the codebase for cleanup operations.
database/neo4j/neo4j.go (2)

14-24: LGTM!

The configuration constants are well-defined with reasonable defaults for Neo4j operations.


44-48: LGTM!

The driver registration correctly wires the Neo4j backend into the Dawgs registry.

query/util.go (5)

12-24: LGTM!

The pattern detection logic correctly identifies node and relationship patterns based on identifier presence.


26-36: LGTM!

The function correctly enforces mutual exclusivity between node and relationship patterns.
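
A minimal illustration of that rule (invented names, not the query package's actual code):

package main

import (
	"errors"
	"fmt"
)

// validatePatternKind rejects queries that reference both node and
// relationship identifiers, mirroring the mutual-exclusivity check above.
func validatePatternKind(hasNodeIdentifiers, hasRelationshipIdentifiers bool) error {
	if hasNodeIdentifiers && hasRelationshipIdentifiers {
		return errors.New("query may reference node or relationship patterns, not both")
	}

	return nil
}

func main() {
	fmt.Println(validatePatternKind(true, true))  // error
	fmt.Println(validatePatternKind(true, false)) // <nil>
}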


38-86: LGTM!

The relationship pattern construction logic correctly handles all cases, including shortest path options and conditional node/relationship identifiers.


88-120: LGTM!

The identifier set implementation is clean and provides the necessary set operations for identifier tracking.


167-174: LGTM!

The identifier extraction logic correctly traverses the Cypher AST and collects identifiers using the visitor pattern.

database/pg/driver.go (5)

67-69: LGTM!

The Error method correctly returns rows.Err() to propagate any underlying row errors.


76-83: LGTM!

The interface and type definitions are clean and appropriate for the driver abstraction.


97-151: LGTM!

The driver initialization and query translation logic is clean and correctly wires together the translation pipeline.


153-177: LGTM!

The query execution methods correctly handle translation and SQL transformation for explain/profile variants.


184-252: LGTM!

Both CreateRelationship and CreateNode correctly handle the no-results case by returning ErrNoResultsFound. The past issue flagged for CreateNode has been addressed.

database/schema.go (4)

5-27: LGTM!

The IndexType enum and String method are well-defined and provide clear string representations.


29-35: LGTM!

The Index and Constraint types provide a clean and reusable structure for schema metadata.


37-45: LGTM!

The Graph type provides a comprehensive structure for representing graph schemas with nodes, edges, constraints, and indexes.


47-68: LGTM!

The Schema type and methods provide a clean API for managing graph schemas with a default graph accessor.
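
To make the shape of that API concrete, here is a rough sketch inferred from the names used elsewhere in this review (GraphSchemas, DefaultGraphName, a default-graph accessor); the field sets and signatures are assumptions, not the PR's definitions:

package sketch

import "fmt"

// Graph and Schema are trimmed-down stand-ins; the real types also carry node
// and edge kinds, indexes, and constraints.
type Graph struct {
	Name string
}

type Schema struct {
	GraphSchemas     map[string]Graph
	DefaultGraphName string
}

// DefaultGraph returns the graph registered under DefaultGraphName, if any.
func (s Schema) DefaultGraph() (Graph, error) {
	if graphSchema, found := s.GraphSchemas[s.DefaultGraphName]; found {
		return graphSchema, nil
	}

	return Graph{}, fmt.Errorf("no default graph set for schema")
}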

database/pg/database.go (3)

18-33: LGTM!

The beginTx helper correctly selects transaction access mode based on options.


35-75: LGTM!

The instance initialization and session management logic is clean and correctly wires together the database components.


101-117: LGTM!

The FetchKinds and Close methods correctly handle kind retrieval and resource cleanup.

database/pg/query/format.go (3)

14-34: LGTM!

The index type mapping functions correctly translate between database.IndexType and PostgreSQL-specific index types.


48-68: LGTM!

The constraint and index creation functions correctly handle different index types, including the special GIN tri-gram ops for text search.


36-300: LGTM!

The remaining SQL formatting and batch update logic is well-structured and handles the various PostgreSQL-specific patterns correctly.

database/driver.go (5)

11-16: LGTM!

The Option enum provides clear transaction access mode constants.


18-28: LGTM!

The Result interface provides a clean abstraction for query results with clear deprecation notices for legacy methods.


30-53: LGTM!

The Driver interface provides a comprehensive abstraction with clear deprecation notices for backward compatibility methods.


55-65: LGTM!

The QueryLogic type and Instance interface provide clean abstractions for database operations and lifecycle management.


67-95: LGTM!

The errorResult type provides a clean way to wrap and propagate errors through the Result interface.
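
The error-wrapping pattern approved here generalizes well; a hedged sketch (the Result method set below is inferred from how results are used elsewhere in this review, not copied from the PR):

package sketch

import "context"

// Result is a trimmed-down stand-in for the interface discussed above.
type Result interface {
	HasNext(ctx context.Context) bool
	Values() []any
	Error() error
	Close(ctx context.Context) error
}

// errorResult yields no rows and surfaces the wrapped error to any caller that
// checks Error() after iteration.
type errorResult struct {
	err error
}

func (s errorResult) HasNext(ctx context.Context) bool { return false }
func (s errorResult) Values() []any                    { return nil }
func (s errorResult) Error() error                     { return s.err }
func (s errorResult) Close(ctx context.Context) error  { return nil }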

database/v1compat/relationship.go (8)

12-43: LGTM!

The query initialization and filtering methods correctly implement the chainable query builder pattern.


45-80: LGTM!

The update, delete, and query modifier methods correctly handle result lifecycle and chainable operations.


82-132: LGTM!

The Count and First methods correctly handle result iteration and error propagation, including the no-results case.


134-163: LGTM!

The Query and Fetch methods correctly handle result delegation and cursor-based iteration.


165-198: LGTM!

The FetchDirection method correctly handles directional queries and only scans once, returning the result with a nil error. The past duplicate scanning issue has been addressed.


200-214: LGTM!

The FetchIDs method correctly streams relationship IDs using the cursor pattern.


216-240: LGTM!

The FetchTriples method correctly assigns the edge ID to the ID field in RelationshipTripleResult. The past incorrect assignment issue has been addressed.


242-260: LGTM!

The FetchAllShortestPaths method correctly configures the query for shortest paths and streams results.

database/v1compat/result.go (1)

67-81: Iterator wiring to database.Result — LGTM

Constructor and fields align with v2 Result; loop uses HasNext(ctx) correctly.

database/v1compat/types.go (1)

159-169: Ignore the ID import suggestion; v1compat already defines ID as an alias for graph.ID

Likely an incorrect or invalid review comment.

query/query.go (2)

61-86: LGTM! Correct special handling for relationship kind negation.

The Not() function correctly handles the special case for relationship kind matchers by:

  • Type-asserting the reference to *cypher.Variable
  • Checking the Symbol matches the relationship identifier
  • Using EdgeTypeFunction for single vs multiple kinds
  • Generating appropriate negation expressions

607-683: LGTM! Comprehensive projection handling with correct type checks.

The buildProjection() method properly handles:

  • Multiple projection types (Return, QualifiedExpression, continuations)
  • Correct Symbol field usage for identifier comparisons (lines 639-645, 652-658)
  • Skip/Limit parameter application
  • Projection ordering
  • Type safety with error returns for invalid types
database/v1compat/traversal/traversal.go (3)

107-154: LGTM! Good defensive programming with depth validation.

Both OutboundWithDepth() and InboundWithDepth() correctly validate depth parameters:

  • Reject negative min/max depths with warning logs
  • Apply sensible defaults (min=1, max=0 for unbounded)
  • Clear logging helps users understand corrections
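
A minimal sketch of that validation (illustrative; the function and log messages are not from the PR):

package main

import (
	"fmt"
	"log/slog"
)

// normalizeDepth applies the defaults described above: negative values are
// rejected with a warning, the minimum depth falls back to 1, and a maximum
// depth of 0 means unbounded.
func normalizeDepth(minDepth, maxDepth int) (int, int) {
	if minDepth < 0 {
		slog.Warn("negative minimum depth supplied; defaulting to 1", "minDepth", minDepth)
		minDepth = 1
	}

	if maxDepth < 0 {
		slog.Warn("negative maximum depth supplied; defaulting to 0 (unbounded)", "maxDepth", maxDepth)
		maxDepth = 0
	}

	return minDepth, maxDepth
}

func main() {
	minDepth, maxDepth := normalizeDepth(-3, -1)
	fmt.Println(minDepth, maxDepth) // 1 0
}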

271-361: LGTM! Robust concurrent traversal implementation.

The BreadthFirst() method demonstrates solid concurrent programming:

  • Proper synchronization with WaitGroup and atomic.Int64
  • Context cancellation for graceful shutdown
  • Memory limit checks to prevent runaway usage (line 313)
  • Thread-safe error collection
  • Buffered pipe for segment coordination
  • Clean worker goroutine lifecycle

536-594: LGTM! Efficient lightweight driver with proper caching.

The LightweightDriver() correctly implements:

  • Minimal data fetching (IDs and kinds only)
  • Cache interaction via ShallowFetchNodesByID (line 566)
  • Efficient node ID tracking with Bitmap64
  • Proper filter application before collecting segments
  • Terminal visitor invocation when no further descent
database/v1compat/query.go (1)

72-78: Good: wrapResult now carries context

Passing ctx into resultWrapper fixes prior nil ctx issues during Next/Close.

Comment on lines +93 to +109
// Acquire a new transaction
if transaction, err := session.BeginTransaction(ctx); err != nil {
return err
} else {
defer func() {
if err := transaction.Rollback(ctx); err != nil {
slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
}
}()

if err := driverLogic(ctx, newInternalDriver(transaction)); err != nil {
return err
}

return transaction.Commit(ctx)
}
}

🛠️ Refactor suggestion | 🟠 Major

Roll back only on error; the current defer always attempts a rollback after commit.

Unconditional rollback after commit causes noisy errors. Rollback should happen only if commit isn’t reached.

-        // Acquire a new transaction
+        // Acquire a new transaction
         if transaction, err := session.BeginTransaction(ctx); err != nil {
             return err
         } else {
-            defer func() {
-                if err := transaction.Rollback(ctx); err != nil {
-                    slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
-                }
-            }()
-
-            if err := driverLogic(ctx, newInternalDriver(transaction)); err != nil {
-                return err
-            }
-
-            return transaction.Commit(ctx)
+            committed := false
+            defer func() {
+                if !committed {
+                    if err := transaction.Rollback(ctx); err != nil {
+                        slog.DebugContext(ctx, "failed to rollback transaction", slog.String("err", err.Error()))
+                    }
+                }
+            }()
+
+            if err := driverLogic(ctx, newInternalDriver(transaction)); err != nil {
+                return err
+            }
+            if err := transaction.Commit(ctx); err != nil {
+                return err
+            }
+            committed = true
+            return nil
         }
🤖 Prompt for AI Agents
In database/neo4j/database.go around lines 93 to 109, the defer always attempts
a rollback even after a successful commit causing noisy errors; change the
pattern so rollback only runs when the transaction was not committed: introduce
a named error/result variable or a committed boolean in the outer function
scope, set it to true after a successful Commit, and update the deferred closure
to call Rollback only if committed is false (or if the named error is non-nil).
Ensure the function still returns the original error from driverLogic or Commit,
and that Rollback errors are logged (not returned) when triggered.

Comment on lines +71 to +74
func (s *queryResult) Close(ctx context.Context) error {
s.rows.Close()
return nil
}

🛠️ Refactor suggestion | 🟠 Major

Propagate underlying row errors in Close.

Close should check and return rows.Err() after closing to surface any pending errors from the result set.

Apply this diff:

 func (s *queryResult) Close(ctx context.Context) error {
 	s.rows.Close()
-	return nil
+	return s.rows.Err()
 }
🤖 Prompt for AI Agents
In database/pg/driver.go around lines 71 to 74, Close currently just calls
s.rows.Close() and returns nil; update it to propagate underlying errors by
capturing the error from s.rows.Close() and also checking s.rows.Err()
afterwards, returning the first non-nil error (preferably the Close error if
present, otherwise rows.Err()), otherwise return nil.

Comment on lines +26 to +33
func (s Query) exec(ctx context.Context, statement string, args map[string]any, queryArgs ...any) error {
if args != nil && len(args) > 0 {
queryArgs = append(queryArgs, args)
}

_, err := s.tx.Exec(ctx, statement, queryArgs...)
return err
}

⚠️ Potential issue | 🟡 Minor

exec: use pgx.NamedArgs for named parameters

Passing a raw map won’t bind named params. Mirror query() and wrap args with pgx.NamedArgs.

 func (s Query) exec(ctx context.Context, statement string, args map[string]any, queryArgs ...any) error {
 	if args != nil && len(args) > 0 {
-		queryArgs = append(queryArgs, args)
+		queryArgs = append(queryArgs, pgx.NamedArgs(args))
 	}
 
 	_, err := s.tx.Exec(ctx, statement, queryArgs...)
 	return err
 }
🤖 Prompt for AI Agents
In database/pg/query/query.go around lines 26 to 33, exec currently appends a
raw map to queryArgs which will not bind named parameters; change it to wrap the
map with pgx.NamedArgs (i.e. append pgx.NamedArgs(args)) similar to query(), so
named parameters are bound correctly when calling s.tx.Exec(ctx, statement,
queryArgs...). Ensure the pgx package is imported/available and only wrap args
when args != nil and len(args) > 0.

Comment on lines +124 to +127
if result, err := s.query(ctx, sqlSelectGraphByName, pgx.NamedArgs(map[string]any{
"name": name,
})); err != nil {
return model.Graph{}, err

⚠️ Potential issue | 🔴 Critical

Compile error: wrong argument type for query args

s.query expects map[string]any for args but pgx.NamedArgs was passed. Pass a plain map; s.query will wrap it.

-	if result, err := s.query(ctx, sqlSelectGraphByName, pgx.NamedArgs(map[string]any{
-		"name": name,
-	})); err != nil {
+	if result, err := s.query(ctx, sqlSelectGraphByName, map[string]any{
+		"name": name,
+	}); err != nil {
🤖 Prompt for AI Agents
In database/pg/query/query.go around lines 124 to 127, the call to s.query is
passing pgx.NamedArgs which causes a compile error because s.query expects a
plain map[string]any; replace pgx.NamedArgs(map[string]any{ "name": name }) with
a plain map[string]any{"name": name} (or construct the map in a variable) so
s.query receives the correct type — it will handle wrapping as needed.

Comment on lines +202 to +252
func (s *driverBatchWrapper) tryFlush(batchWriteSize int) error {
if len(s.nodeUpdateByBuffer) >= batchWriteSize {
if err := s.driver.UpdateNodes(s.ctx, s.nodeUpdateByBuffer); err != nil {
return err
}

s.nodeUpdateByBuffer = s.nodeUpdateByBuffer[:0]
}

if len(s.relationshipUpdateByBuffer) >= batchWriteSize {
if err := s.driver.UpdateRelationships(s.ctx, s.relationshipUpdateByBuffer); err != nil {
return err
}

s.relationshipUpdateByBuffer = s.relationshipUpdateByBuffer[:0]
}

if len(s.relationshipCreateBuffer) >= batchWriteSize {
if err := s.driver.CreateRelationships(s.ctx, s.relationshipCreateBuffer); err != nil {
return err
}

s.relationshipCreateBuffer = s.relationshipCreateBuffer[:0]
}

if len(s.nodeCreateBuffer) >= batchWriteSize {
if err := s.driver.CreateNodes(s.ctx, s.nodeCreateBuffer); err != nil {
return err
}

s.nodeCreateBuffer = s.nodeCreateBuffer[:0]
}

if len(s.nodeDeletionBuffer) >= batchWriteSize {
if err := s.driver.DeleteNodes(s.ctx, s.nodeDeletionBuffer); err != nil {
return err
}

s.nodeDeletionBuffer = s.nodeDeletionBuffer[:0]
}

if len(s.relationshipDeletionBuffer) >= batchWriteSize {
if err := s.driver.DeleteRelationships(s.ctx, s.relationshipDeletionBuffer); err != nil {
return err
}

s.relationshipDeletionBuffer = s.relationshipDeletionBuffer[:0]
}

return nil
}

⚠️ Potential issue | 🟠 Major

Fix tryFlush(0): avoid flushing empty buffers

After switching to >=, tryFlush(0) now flushes empties, potentially calling driver methods with zero-length batches. Treat 0 as “flush any non-empty buffer”.

 func (s *driverBatchWrapper) tryFlush(batchWriteSize int) error {
-	if len(s.nodeUpdateByBuffer) >= batchWriteSize {
+	shouldFlush := func(n int) bool {
+		if batchWriteSize == 0 {
+			return n > 0
+		}
+		return n >= batchWriteSize
+	}
+
+	if shouldFlush(len(s.nodeUpdateByBuffer)) {
 		if err := s.driver.UpdateNodes(s.ctx, s.nodeUpdateByBuffer); err != nil {
 			return err
 		}
 
 		s.nodeUpdateByBuffer = s.nodeUpdateByBuffer[:0]
 	}
 
-	if len(s.relationshipUpdateByBuffer) >= batchWriteSize {
+	if shouldFlush(len(s.relationshipUpdateByBuffer)) {
 		if err := s.driver.UpdateRelationships(s.ctx, s.relationshipUpdateByBuffer); err != nil {
 			return err
 		}
 
 		s.relationshipUpdateByBuffer = s.relationshipUpdateByBuffer[:0]
 	}
 
-	if len(s.relationshipCreateBuffer) >= batchWriteSize {
+	if shouldFlush(len(s.relationshipCreateBuffer)) {
 		if err := s.driver.CreateRelationships(s.ctx, s.relationshipCreateBuffer); err != nil {
 			return err
 		}
 
 		s.relationshipCreateBuffer = s.relationshipCreateBuffer[:0]
 	}
 
-	if len(s.nodeCreateBuffer) >= batchWriteSize {
+	if shouldFlush(len(s.nodeCreateBuffer)) {
 		if err := s.driver.CreateNodes(s.ctx, s.nodeCreateBuffer); err != nil {
 			return err
 		}
 
 		s.nodeCreateBuffer = s.nodeCreateBuffer[:0]
 	}
 
-	if len(s.nodeDeletionBuffer) >= batchWriteSize {
+	if shouldFlush(len(s.nodeDeletionBuffer)) {
 		if err := s.driver.DeleteNodes(s.ctx, s.nodeDeletionBuffer); err != nil {
 			return err
 		}
 
 		s.nodeDeletionBuffer = s.nodeDeletionBuffer[:0]
 	}
 
-	if len(s.relationshipDeletionBuffer) >= batchWriteSize {
+	if shouldFlush(len(s.relationshipDeletionBuffer)) {
 		if err := s.driver.DeleteRelationships(s.ctx, s.relationshipDeletionBuffer); err != nil {
 			return err
 		}
 
 		s.relationshipDeletionBuffer = s.relationshipDeletionBuffer[:0]
 	}
 
 	return nil
 }
🤖 Prompt for AI Agents
In database/v1compat/wrapper.go around lines 202 to 252, tryFlush currently uses
>= comparisons so calling tryFlush(0) will flush even zero-length slices; change
the logic so that when batchWriteSize == 0 you only flush buffers with len(...)
> 0, otherwise keep the existing >= batchWriteSize behavior. Update each buffer
check (nodeUpdateByBuffer, relationshipUpdateByBuffer, relationshipCreateBuffer,
nodeCreateBuffer, nodeDeletionBuffer, relationshipDeletionBuffer) to use a
conditional: if batchWriteSize == 0 { if len(buf) > 0 { ... } } else { if
len(buf) >= batchWriteSize { ... } } ensuring you still clear the buffer after a
successful driver call and return any errors as before.

Comment on lines +259 to +262
func (s *driverBatchWrapper) CreateNode(node *graph.Node) error {
_, err := s.driver.CreateNode(s.ctx, node)
return err
}

⚠️ Potential issue | 🟠 Major

Buffer node creates to honor batch thresholds

CreateNode bypasses batching; nodeCreateBuffer is never used. Append to the buffer and flush via tryFlush.

 func (s *driverBatchWrapper) CreateNode(node *graph.Node) error {
-	_, err := s.driver.CreateNode(s.ctx, node)
-	return err
+	s.nodeCreateBuffer = append(s.nodeCreateBuffer, node)
+	return s.tryFlush(s.batchWriteSize)
 }
🤖 Prompt for AI Agents
In database/v1compat/wrapper.go around lines 259 to 262, CreateNode currently
calls s.driver.CreateNode directly bypassing the nodeCreateBuffer; change it to
append the provided node to s.nodeCreateBuffer and then call s.tryFlush with the
appropriate threshold/flush trigger instead of performing the direct create.
Ensure you respect existing synchronization (use the same mutex/lock used for
other buffer operations), update any buffer count/indices if needed, and return
the error result from tryFlush (or nil if buffering succeeds) so callers observe
failures from flush attempts.

Comment on lines +51 to +55
myQuery := query.New().Where(
query.Node().Property("a").Equals(1234),
).OrderBy(
query.Node().Property("my_order"),
)

⚠️ Potential issue | 🔴 Critical

Test filter doesn’t match created node; query will return no rows.

You create a node with property name="THAT NODE" but filter on a==1234 and order by my_order. Adjust to filter on name.

-		myQuery := query.New().Where(
-			query.Node().Property("a").Equals(1234),
-		).OrderBy(
-			query.Node().Property("my_order"),
-		)
+		myQuery := query.New().Where(
+			query.Node().Property("name").Equals("THAT NODE"),
+		)
🤖 Prompt for AI Agents
In registry_integration_test.go around lines 51 to 55, the test builds a query
that filters on property "a" equals 1234 which doesn't match the created node
(name="THAT NODE"), causing the query to return no rows; update the Where clause
to filter on Property("name").Equals("THAT NODE") so it matches the created node
(leave the OrderBy on "my_order" as-is unless the test requires a different
sort).
