
Submission for the ML challenge - Kushagra Kishore #61

Open
Kk-304 wants to merge 1 commit into partcleda:main from Kk-304:solution

Conversation


Kk-304 commented Apr 26, 2026

Results:

Average Overlap: 0.0000
Average Wirelength: 0.3106
Total Runtime: 666.57s

Tested and timed on a local machine (ROG G14).

Testing Methodology:

Overlap loss: Pairwise relu(sep - |dist|) with linear + cubic terms, normalized by N.
Wirelength loss: True HPWL via Union-Find nets + smooth logsumexp bounding boxes.
Initialization: Tutte centroid iteration (80 rounds) for net-aware seeding.
Optimizer: Adam with cosine LR schedule and warmup.
Polish phase: LR reduced to 30% and wirelength weight raised 8× after sustained zero overlap.
Multi-start: best of K runs (K = 4/3/2/1 by design size).
Legalization: force-based pairwise separation as a safety net.
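The two objectives above can be sketched in plain Python. This is a hedged illustration, not the submission's code: the real implementation uses differentiable torch tensors, and the min-combination of the x/y overlaps and the temperature value are assumptions made here for clarity.

```python
import math

def overlap_loss(cells):
    """Pairwise relu(sep - |dist|) with linear + cubic terms, normalized by N.

    `cells` is a hypothetical list of (x, y, w, h) rectangles with
    center coordinates.
    """
    n = len(cells)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            xi, yi, wi, hi = cells[i]
            xj, yj, wj, hj = cells[j]
            # Minimum center separation for the pair not to overlap.
            sep_x = (wi + wj) / 2
            sep_y = (hi + hj) / 2
            ox = max(0.0, sep_x - abs(xi - xj))  # relu(sep - |dist|)
            oy = max(0.0, sep_y - abs(yi - yj))
            o = min(ox, oy)  # assumed combination; the PR may differ
            total += o + o ** 3  # linear + cubic penalty
    return total / max(n, 1)

def smooth_hpwl(net_pins, t=0.1):
    """Smooth half-perimeter wirelength of one net via logsumexp.

    t * logsumexp(x / t) upper-bounds max(x); negating the inputs gives a
    smooth min, so the difference approximates the bounding-box extent.
    """
    def lse(vals, temp):
        m = max(v / temp for v in vals)
        return temp * (m + math.log(sum(math.exp(v / temp - m) for v in vals)))
    xs = [p[0] for p in net_pins]
    ys = [p[1] for p in net_pins]
    return (lse(xs, t) + lse([-v for v in xs], t)
            + lse(ys, t) + lse([-v for v in ys], t))
```

With a small temperature, `smooth_hpwl` converges to the exact HPWL of the net's bounding box while staying differentiable everywhere.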

Copilot AI review requested due to automatic review settings April 26, 2026 14:03

Copilot AI left a comment


Pull request overview

This PR updates the placement optimizer for the ML placement challenge submission by implementing a real overlap loss, adding a net-based (HPWL) wirelength objective, and introducing a multi-start training flow with post-optimization legalization. It also tweaks the test harness to run a smaller subset by default for faster local iteration.

Changes:

  • Implemented differentiable pairwise overlap repulsion loss (with legalization fallback).
  • Added net reconstruction + smooth HPWL loss and integrated them into a multi-start training loop.
  • Updated the test runner entrypoint to run the first 10 cases by default and accept a custom test-case subset.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.

Files changed:
test.py: Makes the runner accept an explicit test-case list and updates main() to run the first 10 tests for quicker feedback.
placement.py: Adds union-find net building, smooth HPWL loss, overlap loss implementation, legalization, and a multi-start training orchestrator.
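The union-find net building mentioned for placement.py can be sketched as follows; the pin indexing and the `connections` pair list are illustrative assumptions, not the PR's actual data layout.

```python
def build_nets(num_pins, connections):
    """Group pins into nets: pins joined by any connection share a net."""
    parent = list(range(num_pins))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving keeps trees shallow
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in connections:
        union(a, b)

    # Each distinct root identifies one net.
    nets = {}
    for p in range(num_pins):
        nets.setdefault(find(p), []).append(p)
    return list(nets.values())
```

Each resulting pin group then gets its own bounding-box (HPWL) term in the wirelength loss.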
Comments suppressed due to low confidence (1)

test.py:134

  • run_all_tests defaults TEST_CASES to None, but the body immediately does len(TEST_CASES) and iterates it, which will raise a TypeError if the function is called without an argument. Consider defaulting to the module-level TEST_CASES when the parameter is None (and/or rename the parameter to avoid shadowing).
def run_all_tests(TEST_CASES=None):
    """Run all test cases and compute aggregate metrics.

    Uses default hyperparameters from train_placement() function.

    Returns:
        Dictionary with all test results and aggregate statistics
    """
    print("=" * 70)
    print("PLACEMENT CHALLENGE TEST SUITE")
    print("=" * 70)
    print(f"\nRunning {len(TEST_CASES)} test cases with various netlist sizes...")
    print("Using default hyperparameters from train_placement()")
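A minimal sketch of the fix the review suggests: rename the parameter to snake_case and fall back to the module-level constant when it is None. The test-case names here are illustrative placeholders.

```python
TEST_CASES = ["case_small", "case_medium", "case_large"]  # illustrative

def run_all_tests(test_cases=None):
    """Run the given test cases, defaulting to the module-level list."""
    if test_cases is None:
        test_cases = TEST_CASES  # avoids TypeError from len(None)
    print(f"Running {len(test_cases)} test cases...")
    return list(test_cases)
```

Calling `run_all_tests()` with no argument now uses the module-level list instead of raising, and the snake_case name no longer shadows the constant.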


Comment thread placement.py
Comment on lines +610 to +614
if zero_streak >= 30 and not polish_mode and epoch > num_epochs // 4:
    polish_mode = True
    for g in opt.param_groups:
        g['lr'] = lr * 0.3
    # Early stop after 500 polish epochs: no point continuing.

Copilot AI Apr 26, 2026


LambdaLR is stepped every epoch, so manually setting g['lr'] = lr * 0.3 here will be overwritten on the next sched.step() (scheduler recomputes LR from its stored base_lrs). If polish mode is meant to lower LR, incorporate the scale into the scheduler (or update sched.base_lrs / stop stepping the scheduler while in polish mode).
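One way to follow this advice is to fold the polish scale into the schedule closure itself, so the next scheduler step keeps the reduction. This is a sketch under assumptions: the warmup-plus-cosine shape and the mutable `state` flag are illustrative, not taken from the PR.

```python
import math

state = {"polish": False}  # hypothetical flag toggled by the training loop

def lr_lambda(epoch, warmup=10, total=1000):
    """Warmup then cosine decay, with polish mode as a multiplier."""
    if epoch < warmup:
        scale = (epoch + 1) / warmup
    else:
        progress = (epoch - warmup) / max(total - warmup, 1)
        scale = 0.5 * (1 + math.cos(math.pi * min(progress, 1.0)))
    # Multiplying here, instead of poking g['lr'] directly, means
    # LambdaLR's next step() recomputes an LR that still includes it.
    return scale * (0.3 if state["polish"] else 1.0)
```

The closure would then be handed to `torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)`; toggling `state["polish"]` changes the LR from the next `sched.step()` on, with no manual `param_groups` edits to be overwritten.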

Comment thread placement.py
Comment on lines +619 to +622
if polish_mode:
    polish_mode = False
    for g in opt.param_groups:
        g['lr'] = lr

Copilot AI Apr 26, 2026


Similarly, resetting g['lr'] = lr when leaving polish mode is likely ineffective with LambdaLR still active, since the scheduler will continue to control the LR. Align the polish-mode LR behavior with the scheduler to avoid confusing/ineffective hyperparameter changes.

Comment thread placement.py
Comment on lines +371 to +374
cell_positions = cell_features[:, 2:4]
cell_indices = pin_features[:, 0].long()
pin_x = cell_positions[cell_indices, 0] + pin_features[:, 1]
pin_y = cell_positions[cell_indices, 1] + pin_features[:, 2]

Copilot AI Apr 26, 2026


pin_features[:, 1:3] are generated as offsets from the cell corner (see generate_placement_input), while cell x/y are treated as cell centers elsewhere (overlap math + plotting subtracts width/2). In that coordinate system, absolute pin positions should include the (-width/2, -height/2) corner shift; otherwise wirelength/HPWL gradients are computed with systematically shifted pin locations.

Suggested change
cell_positions = cell_features[:, 2:4]
cell_indices = pin_features[:, 0].long()
pin_x = cell_positions[cell_indices, 0] + pin_features[:, 1]
pin_y = cell_positions[cell_indices, 1] + pin_features[:, 2]
# Cell X/Y are stored as cell centers, while PIN_X/PIN_Y are offsets
# from the cell corner. Convert centers to corners before adding the
# stored pin offsets so HPWL uses the same coordinate system as overlap
# checks and plotting.
cell_positions = cell_features[:, CellFeatureIdx.X:CellFeatureIdx.Y + 1]
cell_indices = pin_features[:, PinFeatureIdx.CELL_IDX].long()
cell_widths = cell_features[cell_indices, CellFeatureIdx.WIDTH]
cell_heights = cell_features[cell_indices, CellFeatureIdx.HEIGHT]
pin_x = (
    cell_positions[cell_indices, 0]
    - cell_widths / 2
    + pin_features[:, PinFeatureIdx.PIN_X]
)
pin_y = (
    cell_positions[cell_indices, 1]
    - cell_heights / 2
    + pin_features[:, PinFeatureIdx.PIN_Y]
)

Comment thread test.py
Comment on lines +122 to 123
def run_all_tests(TEST_CASES=None):
"""Run all test cases and compute aggregate metrics.

Copilot AI Apr 26, 2026


The run_all_tests parameter is named TEST_CASES, which shadows the module-level constant of the same name and violates typical Python naming (parameters are usually snake_case). Renaming the parameter (e.g., test_cases) will make the API clearer and avoid confusion when reading stack traces/logs.

Comment thread placement.py
Comment on lines +638 to 644
    num_epochs=None,
    lr=None,
    lambda_wirelength=1.0,
    lambda_overlap=10.0,
    lambda_overlap=None,
    verbose=True,
    log_interval=100,
):

Copilot AI Apr 26, 2026


lambda_wirelength is still exposed in the public train_placement API and documented as a weight for wirelength, but it is never used when computing the optimization loss (wirelength is effectively hard-coded via wl_weight). This makes the argument misleading and breaks callers that tune it; either incorporate lambda_wirelength into the loss or remove/deprecate the parameter and update the docstring accordingly.
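A sketch of the first resolution the review offers, wiring both weights into the combined loss; the term names and the standalone function are illustrative assumptions, since the actual loss assembly lives inside train_placement.

```python
def total_loss(wl_term, overlap_term,
               lambda_wirelength=1.0, lambda_overlap=10.0):
    """Weighted sum of wirelength and overlap objectives.

    Using lambda_wirelength here (instead of a hard-coded wl_weight)
    makes the public parameter actually control the optimization.
    """
    return lambda_wirelength * wl_term + lambda_overlap * overlap_term
```

Callers tuning `lambda_wirelength` then see a real effect; alternatively the parameter could be removed and the docstring updated.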
