Hi,
Overall, I think you did a good job: the algorithm works and finds solutions that seem to be good, near the optimum!
The README file is short but clear; the code is not too difficult to understand, but a few comments would help.
The choice that caught my attention the most is the initialization of the initial state: `initial_state = State(np.array([input_state[0]]))`. If I understand correctly, it acts as a heuristic that fixes one set in advance, reducing the computations (and the execution time) at the expense of a possibly worse solution; in practice it imposes a constraint on the search (but I do not know whether this is intended). Without the initialization, i.e. `initial_state = State(np.array([]))`, for N=20 the weight of the solution found is 24 (the solutions obtained with Dijkstra or A* have weight 23), but, obviously, the number of visited nodes and states increases; with the initialization the weight is 27 (the result reported in the README file).
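Just to make the point concrete, here is a minimal sketch of what I mean. `State` below is only a stand-in for your class (I am assuming a simple "sets taken so far" interface), and `input_state` is a toy instance, not your generated one:

```python
import numpy as np

class State:
    """Minimal stand-in for the project's State class (assumed interface)."""
    def __init__(self, taken):
        # taken: the sets already included in the partial solution
        self.taken = taken

# Toy deduplicated instance (hypothetical values, just for illustration)
input_state = [(0, 2), (1, 3), (0, 1, 4), (2, 4)]

# Seeded start: input_state[0] is forced into every solution explored,
# pruning the search but possibly excluding the true optimum.
seeded = State(np.array([input_state[0]]))

# Unconstrained start: the search is free to take or discard input_state[0].
unconstrained = State(np.array([]))
```

With the seeded start, every path in the search tree contains `input_state[0]`, which is why the explored node count drops but the final weight can be worse (27 vs 24 for N=20).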
I believe there is a redundant check, although it is a minor point: the `possible_actions(state)` function already verifies `is_valid(state, item)`, so the check `if not is_valid(state, a): continue` inside the `search(...)` function should not be necessary.
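A simplified sketch of why the second check can never trigger (the validity rule here is a toy one I made up; your `is_valid` is of course different, but the structure is the same):

```python
def is_valid(state, action):
    # Toy validity rule (assumed): an action is valid if not already taken.
    return action not in state

def possible_actions(state, actions):
    # Yields only actions that pass is_valid, exactly once.
    return [a for a in actions if is_valid(state, a)]

# Every action returned by possible_actions already passes is_valid,
# so a second `if not is_valid(state, a): continue` in search() is dead code:
state = {1, 2}
actions = [1, 2, 3, 4]
assert all(is_valid(state, a) for a in possible_actions(state, actions))
```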
It is also interesting that the initial transformation of `input_state` removes duplicates, which optimizes the search: for example, for N=5 it goes from 25 lists to 10 tuples.
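For reference, the deduplication can be done in one line by converting each list to a (hashable) tuple and collecting them in a set. The `problem` generator below is a sketch in the style of the lab's instance generator, so the exact counts depend on the seed:

```python
import random

def problem(N, seed=None):
    # Sketch of the lab's random instance generator (assumed form).
    random.seed(seed)
    return [
        list(set(random.randint(0, N - 1) for _ in range(random.randint(N // 5, N // 2))))
        for _ in range(random.randint(N, N * 5))
    ]

# Duplicates (as tuples) collapse inside the set; sorting keeps the order stable.
input_state = sorted({tuple(sorted(lst)) for lst in problem(5, seed=42)})
```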