✅ Add e2e tests for metamodeling #8457
base: master
Conversation
Signed-off-by: Alejandro Parcet <[email protected]>
Codecov Report
✅ All modified and coverable lines are covered by tests.
@@ Coverage Diff @@
## master #8457 +/- ##
==========================================
- Coverage 87.59% 87.55% -0.05%
==========================================
Files 2001 2001
Lines 77978 77937 -41
Branches 1338 1338
==========================================
- Hits 68306 68236 -70
- Misses 9272 9301 +29
Partials 400 400
This pull request uses carry forward flags.
🧪 CI Insights
Here's what we observed from your CI run for b4cc6f0.
❌ Job Failures
✅ Passed Jobs With Interesting Signals
Thanks for the PR!
We usually like to include a bit more context to help us track changes, connect related issues, and coordinate work more easily. It shouldn’t take much time.
A few suggestions:
- Assign yourself to the issue (I went ahead and did that for you this time).
- Add the relevant project and labels.
- Include a short description (you can even use AI for that) and fill out the PR template sections where applicable (e.g. enumerate related PO issues).
Signed-off-by: Alejandro Parcet <[email protected]>
…lexpargon/osparc-simcore into add-e2e-tests-for-metamodeling
…hboard each time Signed-off-by: Alejandro Parcet <[email protected]>
with log_context(logging.INFO, "Waiting for the sampling to complete..."):
    plotly_graph = service_iframe.locator(".js-plotly-plot")
    plotly_graph.wait_for(state="visible", timeout=300000)
I am a bit confused here: so you wait for the graph to show up? But do you check whether the created samples are included?
The function is created specifically for the test, so it starts out empty with no jobs. We then launch a sampling campaign, which updates the UI as jobs become ready. Since the runner is a jsonifier and we have timeouts, more than five jobs consistently finish by the time the launch is done, and the plot is then displayed.
The plot can only be displayed once more than 5 jobs are listed and finished.
If we want to wait for every job to finish, we can wait longer before refreshing, or implement a refresh mechanism every 10 seconds until all of them are complete.
Do you require the test to also check for failed job executions? We can fail the test if any job fails.
I'm not saying every job has to finish, but I think it would be best to at least check that the jobs are listed?
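A sketch of what such a check could look like, assuming the jobs list is rendered inside the service iframe and reachable via a dedicated test id (the "jobs-table" testid below is a hypothetical name, not the actual UI attribute):

    # Sketch: assert that at least one sampling job is listed before waiting for the plot.
    # "jobs-table" is a hypothetical mmux-testid; replace it with the real locator.
    jobs_table = service_iframe.locator('[mmux-testid="jobs-table"]')
    jobs_table.wait_for(state="visible", timeout=300000)
    assert jobs_table.get_by_role("row").count() > 0, "expected at least one job to be listed"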
for i in range(count_min):
    input_field = min_inputs.nth(i)
    input_field.fill(str(i + 1))
    print(f"Filled Min input {i} with value {i + 1}")
Use logs, not print. Get the logger from log_context().
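A sketch of the same loop using the logger returned by log_context instead of print (the context message is reused from the existing code):

    with log_context(logging.INFO, "Filling the input parameters...") as ctx:
        for i in range(count_min):
            input_field = min_inputs.nth(i)
            input_field.fill(str(i + 1))
            ctx.logger.info("Filled Min input %d with value %d", i, i + 1)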
toast = service_iframe.locator("div.Toastify__toast").filter(
    has_text="Sampling started running successfully, please wait for completion."
)
toast.wait_for(state="visible", timeout=120000)  # waits up to 120 seconds
Look at the style of the other tests: we use constants to parametrize key timeouts so we can tune them later.
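For example, following the pattern of _WAITING_FOR_SERVICE_TO_APPEAR already used in the file (the constant name below is an assumption, and MINUTE is assumed to be the millisecond constant already used elsewhere in these tests):

    from typing import Final

    _WAITING_FOR_SAMPLING_TO_START: Final[int] = 2 * MINUTE  # assumed name; tune in one place

    toast.wait_for(state="visible", timeout=_WAITING_FOR_SAMPLING_TO_START)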
toast.wait_for(state="visible", timeout=120000)  # waits up to 120 seconds

with log_context(logging.INFO, "Waiting for the sampling to complete..."):
    moga_container = service_iframe.locator("[mmux-testid='moga-pareto-plot']")
Q: is this unused on purpose?
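If the locator is meant to assert that the Pareto plot eventually renders, one possible use would be (a sketch, reusing the timeout already applied to the plotly graph above):

    moga_container = service_iframe.locator("[mmux-testid='moga-pareto-plot']")
    moga_container.wait_for(state="visible", timeout=300000)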
# # then we wait a long time
# page.wait_for_timeout(1 * MINUTE)

# TODO: more steps
No TODOs allowed :-) Please create a follow-up issue.
page.wait_for_timeout(15000)

# # then we wait a long time
# page.wait_for_timeout(1 * MINUTE)
Remove comments if not used. No commented-out code allowed.
elif message_2.is_visible():
    message_2.wait_for(state="detached", timeout=300000)
else:
    print("No blocking message found — continuing.")
Use the logger here as well, not print.
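Assuming the branch runs inside a `with log_context(...) as ctx:` block, the print could become (sketch):

    else:
        # sketch: assumes the surrounding code is bound to a log_context as ctx
        ctx.logger.info("No blocking message found — continuing.")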
service_iframe.locator('[mmux-testid="run-sampling-btn"]').click()
page.wait_for_timeout(1000)

with log_context(logging.INFO, "Waiting for the sampling to complete..."):
FYI: there is no need to log everything as a start/stop context. Note that you can also log steps within it using the returned object:
with log_context(
    logging.INFO,
    f"Convert {project_uuid=} / {_STUDY_FUNCTION_NAME} to a function",
) as ctx:
    ...
    ctx.logger.info(
        "Created function: %s", f"{json.dumps(function_data['data'], indent=2)}"
    )
This looks good; there are a few things I would suggest:
- no need to wait so much everywhere with timeouts in the production test (it shall run as fast as possible) if there is no functional need (click already has an integrated 30 seconds default timeout)
- mmux-testid: this is very nice! Please check the locations where you do not use it, as those selectors are fragile and will break as soon as you modify the UI (see the sketch below)
- ideally, after the test everything should be deleted and cleaned up so we do not accumulate e2e data (and actually pay for it)
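As an illustration of the second point, the SELECT-button lookup from the diff could be switched from a role/text match to a dedicated test id (sketch only; the "select-function-btn" testid does not exist yet and would have to be added to the UI):

    # fragile: depends on the visible label and on ordering
    service_iframe.get_by_role("button", name="SELECT").nth(0).click()

    # more robust: dedicated test id (hypothetical, to be added to the UI)
    service_iframe.locator('[mmux-testid="select-function-btn"]').first.click()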
page.wait_for_timeout(1000)
service_iframe.locator('[mmux-testid="new-sampling-campaign-btn"]').click()
page.wait_for_timeout(1000)
service_iframe.locator('[mmux-testid="run-sampling-btn"]').click()
page.wait_for_timeout(1000)
Why all these timeouts in between? Is this left over from your debugging?
All the clicks already have a 30 s default timeout.
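If the pauses were only debugging aids, the sequence can likely be reduced to the clicks themselves, since click() auto-waits for the element to become actionable within its default 30 s timeout (sketch):

    service_iframe.locator('[mmux-testid="new-sampling-campaign-btn"]').click()
    service_iframe.locator('[mmux-testid="run-sampling-btn"]').click()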
# # then we wait a long time
# page.wait_for_timeout(1 * MINUTE)
page.wait_for_timeout(1 * MINUTE)
What is this for? If it came from me, it was only for debugging locally and inspecting things in the opened browser debugger. You can safely remove it, as it is just wasting time and money ;)
service_iframe.get_by_role("grid").wait_for(
    state="visible", timeout=_WAITING_FOR_SERVICE_TO_APPEAR
)

service_iframe = page.frame_locator("iframe")
with log_context(logging.INFO, "Waiting for the RSM to be ready..."):
    service_iframe.get_by_role("grid").wait_for(
        state="visible", timeout=_WAITING_FOR_SERVICE_TO_APPEAR
    )
# select the function
with log_context(logging.INFO, "Selected test function..."):
    service_iframe.get_by_role("button", name="SELECT").nth(0).click()

page.wait_for_timeout(10000)
with log_context(logging.INFO, "Filling the input parameters..."):
    min_test_id = "Mean" if "uq" in service_key.lower() else "Min"
    min_inputs = service_iframe.locator(
        f'[mmux-testid="input-block-{min_test_id}"] input[type="number"]'
    )
    count_min = min_inputs.count()
I would also use a mmux-testid if possible in lines 258 and 264
What do these changes do?
Added tests for the Meta Modeling services; to this end, mmux-testids were added to the meta modeling service.
Related issue/s
How to test
The changes include a new test phase which executes e2e Playwright tests on the osparc service by creating a project with JSONIFIER, creating a function from this project, and then launching MMuX to populate the function with jobs and use them to test each of the tools MMuX enables.
Dev-ops