base: v4
add trigger gc api #379
Conversation
Codecov Report

@@           Coverage Diff            @@
##              v4     #379    +/-   ##
=======================================
  Coverage       ?   55.94%
=======================================
  Files          ?       36
  Lines          ?     4869
  Branches       ?      602
=======================================
  Hits           ?     2724
  Misses         ?     1861
  Partials       ?      284
force-pushed from ec6d1c7 to 6d9a9d0
force-pushed from 6d9a9d0 to 9cac1b5
void HttpManager::get_gc_job_status(const Pistache::Rest::Request& request, Pistache::Http::ResponseWriter response) {
    auto job_id_param = request.query().get("job_id");
    if (!job_id_param) {
If no job_id_param, how about returning all jobs?
Is it reasonable that one client can see the job results submitted by others?
I suggest postponing this until we really need the function.
Just want to know what is happening on this SM.
ok, will try to add this
Done. Use the following command to query the status of all GC jobs:
curl -k "https://localhost:10443/api/v1/gc_job_status"
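For readers of this thread, a minimal sketch of what such a handler can look like when job_id is optional; get_all_gc_job_status() and get_gc_job_status_by_id() are hypothetical helper names for illustration, not necessarily what this PR uses.

// Sketch only: branch on the optional job_id query parameter.
// get_all_gc_job_status() and get_gc_job_status_by_id() are assumed helpers.
void HttpManager::get_gc_job_status(const Pistache::Rest::Request& request,
                                    Pistache::Http::ResponseWriter response) {
    auto job_id_param = request.query().get("job_id");
    if (!job_id_param) {
        // No job_id given: return the status of all retained GC jobs.
        response.send(Pistache::Http::Code::Ok, get_all_gc_job_status());
        return;
    }
    if (auto status = get_gc_job_status_by_id(*job_id_param)) {
        response.send(Pistache::Http::Code::Ok, *status);
    } else {
        response.send(Pistache::Http::Code::Not_Found, "gc job not found");
    }
}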
std::vector<folly::SemiFuture<bool>> gc_task_futures;
for (const auto& pg_id : pg_ids) {
It doesn't matter this time, as we decided to stop everything.
As a more generic enhancement:
- Could we do it PG by PG? That would quiesce only one PG at a time and resume it once that PG completes, which has much less impact on traffic if next time we want to run GC with traffic on.
- If we had an hs_pg::gc_chunks(vector<chunk_id> chunks) that quiesces, GCs, and resumes, could we consolidate most of the duplicated code between the two paths (all_chunks and one_chunk)? Then we could also support GC of a whole PG.
I have changed the API to support triggering GC for a specific chunk or PG.
If we trigger GC for all chunks, then it will be scheduled PG by PG.
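A rough sketch of one way such PG-by-PG scheduling can be expressed with folly futures; gc_pg_chunks(pg_id), pg_id_t, and schedule_gc_pg_by_pg are assumed names standing in for a helper that quiesces a PG, GCs its chunks, and resumes it, and this is not the PR's actual code.

#include <folly/futures/Future.h>
#include <vector>

// Sketch only: chain the futures so the next PG starts only after the
// previous one has finished, i.e. at most one PG is quiesced at a time.
// gc_pg_chunks(pg_id) is an assumed helper returning SemiFuture<bool>.
folly::SemiFuture<bool> schedule_gc_pg_by_pg(const std::vector<pg_id_t>& pg_ids) {
    auto fut = folly::makeSemiFuture(true);
    for (const auto& pg_id : pg_ids) {
        fut = std::move(fut).deferValue([pg_id](bool ok) {
            // Stop scheduling further PGs once one of them fails.
            if (!ok) return folly::makeSemiFuture(false);
            return gc_pg_chunks(pg_id);
        });
    }
    return fut;
}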
force-pushed from f1d478b to b76b917
force-pushed from b76b917 to a2e4e2e
1. Trigger GC for all chunks:
curl -X POST "http://127.0.0.1:5000/api/v1/trigger_gc"
2. Trigger GC for a specific chunk:
curl -X POST "http://127.0.0.1:5000/api/v1/trigger_gc?chunk_id=87"
3. Trigger GC for a specific PG:
curl -X POST "http://127.0.0.1:5000/api/v1/trigger_gc?pg_id=87"
4. Query the status of a GC job (at the moment, we only retain the status of the latest 100 GC jobs):
curl -k "https://localhost:10443/api/v1/gc_job_status?job_id=trigger-gc-task-0"
5. Query the status of all GC jobs:
curl -k "https://localhost:10443/api/v1/gc_job_status"

All the above APIs have been tested manually; UTs will be added later in a separate PR.
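For context, wiring endpoints like these into a Pistache router generally looks like the sketch below; setup_routes, router_, and the handler method names are assumptions for illustration, not necessarily the PR's actual symbols.

#include <pistache/router.h>

// Sketch only: register the trigger and status endpoints on the REST router.
// trigger_gc() / get_gc_job_status() handler names are assumed.
void HttpManager::setup_routes() {
    using namespace Pistache::Rest;
    Routes::Post(router_, "/api/v1/trigger_gc",
                 Routes::bind(&HttpManager::trigger_gc, this));
    Routes::Get(router_, "/api/v1/gc_job_status",
                Routes::bind(&HttpManager::get_gc_job_status, this));
}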