Push Grasp - Clear Toys Adversarial - EfficientNet-B0 Test Results v0.3.2
Pre-release
Adversarial Pushing Grasping Results v0.3.2
- Average clearance: 99.1%
- Average grasp success per clearance: 62.8%
- Average action efficiency: 50.9%
- Average grasp-to-push ratio: 90.8%
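For reference, the summary numbers above are averages over the preset test trials. A minimal sketch of that aggregation step, where the per-trial records and field names are illustrative placeholders rather than the actual `evaluate.py` output format:

```python
# Aggregate per-trial percentage metrics into release-summary averages.
# The trial records below are hypothetical examples, not real results.

def summarize(trials):
    """Average each percentage metric across all test trials."""
    n = len(trials)
    return {
        "clearance": sum(t["clearance"] for t in trials) / n,
        "grasp_success": sum(t["grasp_success"] for t in trials) / n,
        "action_efficiency": sum(t["action_efficiency"] for t in trials) / n,
    }

trials = [
    {"clearance": 100.0, "grasp_success": 60.0, "action_efficiency": 50.0},
    {"clearance": 98.0, "grasp_success": 65.0, "action_efficiency": 52.0},
]
print(summarize(trials))  # averages: clearance 99.0, grasp_success 62.5, action_efficiency 51.0
```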
video:
status printout:
Testing iteration: 994
Change detected: True (value: 1439)
Primitive confidence scores: 1.498990 (push), 1.806918 (grasp)
Strategy: exploit (exploration probability: 0.000000)
Action: grasp at (15, 100, 123)
Executing: grasp at (-0.478000, -0.024000, 0.051004)
Trainer.get_label_value(): Current reward: 1.000000 Future reward: 1.847297 Expected reward: 1.000000 + 0.500000 x 1.847297 = 1.923648
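The expected-reward line is a one-step temporal-difference target: the current reward plus the discounted future reward, here with a discount factor of 0.5. Reproducing the arithmetic from the printout above:

```python
# TD target as printed by Trainer.get_label_value():
#   expected = current_reward + discount * future_reward
discount = 0.5
current_reward = 1.0
future_reward = 1.847297

expected = current_reward + discount * future_reward
print(expected)  # ~1.9236485, matching the printed 1.923648 to six decimals
```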
Training loss: 0.055023
gripper position: 0.030909866094589233
gripper position: 0.02564963698387146
gripper position: 0.0006891787052154541
gripper position: -0.010575711727142334
Grasp successful: False
Grasp Count: 882, grasp success rate: 0.5850340136054422
Time elapsed: 6.893619
Trainer iteration: 995.000000
Testing iteration: 995
Change detected: True (value: 876)
Primitive confidence scores: 1.365109 (push), 2.030846 (grasp)
Strategy: exploit (exploration probability: 0.000000)
Action: grasp at (9, 140, 124)
Executing: grasp at (-0.476000, 0.056000, 0.032713)
Trainer.get_label_value(): Current reward: 0.000000 Future reward: 2.069253 Expected reward: 0.000000 + 0.500000 x 2.069253 = 1.034626
Training loss: 0.076920
gripper position: 0.03028649091720581
gripper position: 0.02625584602355957
gripper position: 0.004431545734405518
gripper position: 0.003403604030609131
gripper position: 0.003234654664993286
gripper position: -0.001414567232131958
Grasp successful: True
Grasp Count: 883, grasp success rate: 0.5855039637599094
Time elapsed: 7.405238
Trainer iteration: 996.000000
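The gripper-position lines in each iteration are polled while the fingers close, and grasp success can be inferred from where they stop: if the fingers close completely, nothing was caught. A sketch of that kind of check, assuming the simulated gripper reports a more negative joint position the further it closes; the -0.005 cutoff is an illustrative guess, not the threshold used by `main.py`:

```python
# Infer grasp success from the final gripper joint position.
# Assumption (hypothetical threshold): fingers end up well below zero
# only when they close fully on empty air.
FULLY_CLOSED = -0.005

def grasp_succeeded(gripper_positions):
    """Return True if the fingers stopped before closing fully."""
    return gripper_positions[-1] > FULLY_CLOSED

# Final readings from the two iterations in the printout above:
print(grasp_succeeded([0.030909, 0.025649, 0.000689, -0.010575]))  # -> False (iteration 994)
print(grasp_succeeded([0.030286, 0.003234, -0.001414]))            # -> True  (iteration 995)
```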
Testing iteration: 996
There have not been changes to the objects for a long time [push, grasp]: [1, 0], or there are not enough objects in view (value: 0)! Repositioning objects.
loading case file: /home/costar/src/costar_visual_stacking/simulation/test-cases/test-10-obj-10.txt
Testing iteration: 996
Change detected: True (value: 3244)
Trainer.get_label_value(): Current reward: 1.000000 Future reward: 1.899974 Expected reward: 1.000000 + 0.500000 x 1.899974 = 1.949987
Trial logging complete: 110 --------------------------------------------------------------
Training loss: 0.001421
Test command:

```shell
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim \
    --obj_mesh_dir 'objects/toys' --num_obj 10 \
    --push_rewards --experience_replay --explore_rate_decay \
    --load_snapshot --snapshot_file '/home/costar/src/costar_visual_stacking/logs/2019-08-17.20:54:32-train-grasp-place-split-efficientnet-21k-acc-0.80/models/snapshot.reinforcement.pth' \
    --random_seed 1238 --is_testing --save_visualizations \
    --test_preset_cases --max_test_trials 10
```
Evaluate command:

```shell
python3 evaluate.py \
    --session_directory '/home/costar/src/costar_visual_stacking/logs/2019-08-17.20:54:32-train-grasp-place-split-efficientnet-21k-acc-0.80-TEST-ADVERSARIAL-PRESET-2019-09-04.10:29:04' \
    --method reinforcement --preset --preset_num_trials 10 \
    > /home/costar/src/costar_visual_stacking/logs/2019-08-17.20:54:32-train-grasp-place-split-efficientnet-21k-acc-0.80-TEST-ADVERSARIAL-PRESET-2019-09-04.10:29:04/grasp_push_evaluation_clearance_results.txt
```