I modified a part of the code to enable parallel inference with multiple num_batch #1113
zhugelaozei wants to merge 10 commits into obss:main from
Conversation
Great @zhugelaozei! Can you please fix the formatting by:
@tonyreina hey. I believe it refers to one image; this is also the usual way to use it.

On the topic: I tried the modifications today and they work. Here are my observations:
With regard to the latter, I suspect it's similar to what I observed when running multiple instances of inference in separate processes. One reason is likely the data-loading limitation, but beyond that, I suspect it has to do with some low-level locking of GPU operations. I'm not an expert in hardware utilization, but perhaps someone with more experience could shed some light on this topic. Either way, it's a welcome addition, and I hope this change is seriously considered.
Hello. I did quite a lot of work on this a while ago. While I did not submit a PR for it, I thought I would share the link here for reference, in the hope that some of that implementation helps get this PR merged.
PR Overview
This PR enables parallel inference within SAHI when using YOLOv11 by adapting the Ultralytics code for batch processing. Key changes include:
- Removing the conversion of the image to PIL in get_prediction.
- Adding a new num_batch parameter to get_sliced_prediction and updating the slice loop to support multiple batches.
- Adjusting type checks for list-based image inputs in both prediction and model inference methods.
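The num_batch change described above essentially groups slice images into fixed-size batches before they are handed to the model. A minimal, illustrative sketch of that grouping step (the function name and data are hypothetical, not SAHI's actual internals):

```python
def chunk_slices(slice_images, num_batch):
    """Yield consecutive groups of at most num_batch slice images."""
    for i in range(0, len(slice_images), num_batch):
        yield slice_images[i : i + num_batch]

# With 10 slices and num_batch=4, we get batches of sizes 4, 4, and 2.
batches = list(chunk_slices(list(range(10)), num_batch=4))
```

Each batch would then be passed as a list to `perform_inference`, which is why the list-based type checks below become necessary.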
Reviewed Changes
| File | Description |
|---|---|
| sahi/predict.py | Updated prediction and slicing logic to support parallel inference with num_batch. |
| sahi/prediction.py | Modified image input handling in the ObjectPrediction constructor. |
| sahi/models/ultralytics.py | Adjusted perform_inference logic to handle list inputs and set original image shape correctly. |
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (2)
sahi/prediction.py:168
- [nitpick] Consider using isinstance(image, list) instead of comparing the type directly for improved robustness.
if type(image) is list:
sahi/models/ultralytics.py:114
- [nitpick] Consider using isinstance(image, list) instead of a direct type equality check for improved robustness.
if type(image) == list:
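The distinction the reviewer is pointing at: `type(x) is list` fails for list subclasses, whereas `isinstance` respects inheritance. A small illustration (the `ImageList` subclass is hypothetical):

```python
class ImageList(list):
    """Hypothetical list subclass, e.g. a custom container of images."""

images = ImageList(["img0", "img1"])

type_check = type(images) is list            # False: exact type differs
isinstance_check = isinstance(images, list)  # True: subclasses count
```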
```
for _predicion_result in prediction_result.object_prediction_list:
    object_prediction_list.extend(_predicion_result)
```

[nitpick] Typo detected: '_predicion_result' should be renamed to '_prediction_result' for clarity.

Suggested change:

```
for _prediction_result in prediction_result.object_prediction_list:
    object_prediction_list.extend(_prediction_result)
```
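For context on the loop in the suggestion above: with num_batch greater than one, object_prediction_list appears (by my reading of the diff) to hold one list of predictions per slice, so the inner lists must be flattened into a single result list. A self-contained sketch with dummy stand-ins for the prediction objects:

```python
# Dummy stand-ins for ObjectPrediction instances; the nested structure
# mirrors what batched inference is assumed to return.
batched = [["pred_a", "pred_b"], ["pred_c"], ["pred_d", "pred_e"]]

object_prediction_list = []
for _prediction_result in batched:
    object_prediction_list.extend(_prediction_result)
```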
Hi! Since I found that SAHI cannot perform parallel inference when using YOLOv11 for sliced inference, I modified part of the code and adapted it to work with the relevant parts of the Ultralytics code. Unfortunately, I have only adapted the Ultralytics part of the code so far. I hope this is helpful to you.