
Conversation


@GalAvineri GalAvineri commented Feb 23, 2025

Motivation

Sampler yields indices one at a time from a generator, forcing BatchSampler to iterate over the indices one by one before grouping them into batches.
If the Sampler instead constructed the whole sequence of indices before yielding, batching could be done more efficiently by slicing the sequence rather than iterating over a generator.

This happens, for example, in the widely used RandomSampler.
This PR replaces iteration with slicing by merging RandomSampler and BatchSampler into a single RandomBatchSampler.
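
As an illustration of the idea (a minimal sketch, not the PR's actual code), the whole permutation can be materialized up front with torch.randperm and then sliced into batches, instead of pulling indices one at a time from the sampler's generator:

```python
import torch

# Hypothetical sizes, for illustration only.
data_len, batch_size = 10, 4

# Materialize every index in random order, then slice into batches.
perm = torch.randperm(data_len)
batches = [perm[i : i + batch_size] for i in range(0, data_len, batch_size)]
print([b.tolist() for b in batches])  # e.g. [[3, 7, 0, 9], [4, 1, 8, 2], [6, 5]]
```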

Builds upon #137423
Benchmarking code is based on #76950

Benchmark results (times are in seconds):

```
 batch_size  drop_last  replacement avg and std original   avg and std new  speedup
          4       True         True    0.0085 +- 3.7e-04 0.0032 +- 2.3e-04  166.63%
          4       True        False    0.0126 +- 3.2e-04 0.0041 +- 1.1e-03  210.52%
          4      False         True    0.0100 +- 2.3e-04 0.0031 +- 3.9e-05  222.77%
          4      False        False    0.0068 +- 5.0e-04 0.0037 +- 9.6e-05   85.69%
          8       True         True    0.0083 +- 1.2e-04 0.0016 +- 1.9e-05  403.48%
          8       True        False    0.0054 +- 1.2e-04 0.0022 +- 7.5e-05  147.67%
          8      False         True    0.0090 +- 9.0e-05 0.0016 +- 3.5e-05  452.94%
          8      False        False    0.0060 +- 1.2e-04 0.0022 +- 7.9e-05  172.32%
         64       True         True    0.0079 +- 1.0e-04 0.0003 +- 1.9e-05 2257.91%
         64       True        False    0.0050 +- 1.1e-04 0.0009 +- 2.0e-05  457.21%
         64      False         True    0.0082 +- 5.7e-05 0.0003 +- 1.7e-05 2418.74%
         64      False        False    0.0052 +- 8.8e-05 0.0009 +- 2.1e-05  475.84%
        256       True         True    0.0078 +- 9.2e-05 0.0002 +- 1.6e-05 3696.33%
        256       True        False    0.0052 +- 4.9e-05 0.0008 +- 2.1e-05  555.58%
        256      False         True    0.0084 +- 5.4e-05 0.0002 +- 1.1e-05 3676.29%
        256      False        False    0.0056 +- 1.2e-03 0.0008 +- 2.9e-05  601.11%
       1024       True         True    0.0082 +- 6.0e-05 0.0002 +- 1.6e-05 4226.53%
       1024       True        False    0.0052 +- 4.9e-05 0.0008 +- 1.8e-05  589.77%
       1024      False         True    0.0083 +- 7.4e-05 0.0002 +- 1.6e-05 4216.05%
       1024      False        False    0.0053 +- 7.3e-05 0.0008 +- 1.8e-05  598.53%
       4096       True         True    0.0080 +- 1.0e-04 0.0002 +- 1.9e-05 4200.74%
       4096       True        False    0.0053 +- 8.2e-05 0.0007 +- 1.5e-05  608.29%
       4096      False         True    0.0081 +- 1.3e-04 0.0002 +- 1.4e-05 4398.71%
       4096      False        False    0.0052 +- 6.9e-05 0.0007 +- 1.2e-05  604.97%
       8192       True         True    0.0079 +- 7.3e-05 0.0002 +- 1.5e-05 4324.38%
       8192       True        False    0.0053 +- 8.5e-05 0.0007 +- 1.9e-05  613.55%
       8192      False         True    0.0080 +- 5.8e-05 0.0002 +- 1.3e-05 4545.14%
       8192      False        False    0.0053 +- 1.0e-04 0.0007 +- 1.2e-05  613.44%
      16384       True         True    0.0081 +- 1.6e-04 0.0002 +- 1.1e-05 4527.95%
      16384       True        False    0.0052 +- 1.1e-04 0.0007 +- 2.2e-05  606.50%
      16384      False         True    0.0080 +- 6.5e-05 0.0002 +- 1.2e-05 4462.77%
      16384      False        False    0.0052 +- 4.0e-05 0.0007 +- 1.7e-05  604.40%
```

To support the replacement argument I used numpy's choice, since I couldn't find an efficient alternative in PyTorch. Consequently, the generator argument takes a numpy.random.Generator.

If numpy must not be used, I can look into finding an efficient torch alternative to choice.
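
For reference, a sketch of the two paths in question, assuming a dataset of length n (the benchmark code below uses numpy's Generator.integers, which for sampling from range(n) with replacement is equivalent to choice; torch.randint would be one torch-side alternative):

```python
import numpy as np
import torch

n = 100_000  # illustrative dataset length

# numpy path: n indices drawn uniformly from [0, n) with replacement.
rng = np.random.default_rng()
np_indices = rng.integers(0, n, n)

# A possible torch alternative for the replacement=True case.
torch_indices = torch.randint(high=n, size=(n,))
```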

@pytorch-bot added the `release notes: dataloader` label on Feb 23, 2025

linux-foundation-easycla bot commented Feb 23, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.


pytorch-bot bot commented Feb 23, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/147706

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 648ef95 with merge base 56039b5:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.


GalAvineri commented Feb 23, 2025

This is the speed testing code:

```python
from typing import Iterator, Optional, Sized
import timeit

import numpy as np
from numpy.typing import NDArray
import pandas as pd

from torch.utils.data import BatchSampler, RandomSampler, Sampler

class RandomBatchSampler(Sampler[NDArray[np.int_]]):
    def __init__(
        self,
        data_source: Sized,
        replacement: bool = False,
        generator: Optional[np.random.Generator] = None,
        batch_size: int = 32,
        drop_last: bool = False,
    ) -> None:
        super().__init__()
        self.data_source = data_source
        self.replacement = replacement
        self.generator = generator
        self.batch_size = batch_size
        # Only honor drop_last when there actually is a partial final batch.
        self.drop_last = drop_last and len(data_source) % self.batch_size > 0

        if not isinstance(self.replacement, bool):
            raise TypeError(
                f"replacement should be a boolean value, but got replacement={self.replacement}"
            )

        # Floor division when the partial final batch is dropped, ceiling otherwise.
        if self.drop_last:
            self.n_batches = len(data_source) // batch_size
        else:
            self.n_batches = -(len(data_source) // -batch_size)

    def sample_indices(self) -> NDArray[np.int_]:
        generator = (
            self.generator if self.generator is not None else np.random.default_rng()
        )

        if self.replacement:
            indices = generator.integers(0, len(self.data_source), len(self.data_source))
        else:
            indices = np.arange(len(self.data_source))
            generator.shuffle(indices)

        return indices

    def __iter__(self) -> Iterator[NDArray[np.int_]]:
        indices = self.sample_indices()
        indices_batches = [
            indices[i : i + self.batch_size]
            for i in range(0, len(indices), self.batch_size)
        ]
        if self.drop_last:
            indices_batches.pop()
        yield from indices_batches

    def __len__(self):
        return self.n_batches

def _iter_on_origin_sampler(batch_size, drop_last, replacement):
    for _ in BatchSampler(RandomSampler(range(DATA_SIZE), replacement=replacement), batch_size=batch_size, drop_last=drop_last):
        pass


def _iter_on_new_sampler(batch_size, drop_last, replacement):
    for _ in RandomBatchSampler(range(DATA_SIZE), batch_size=batch_size, drop_last=drop_last, replacement=replacement):
        pass


if __name__ == '__main__':
    DATA_SIZE = 100_000
    AVG_TIMES = 10

    data = np.zeros(DATA_SIZE)

    results = []
    for batch_size in [4, 8, 64, 256, 1024, 4096, 8192, 16384]:
        for drop_last in [True, False]:
            for replacement in [True, False]:
                timer = timeit.Timer(lambda: _iter_on_origin_sampler(batch_size, drop_last, replacement))
                times_original = timer.repeat(AVG_TIMES, 1)
                original_avg = np.mean(times_original)
                original_std = np.std(times_original)
                desc_original = f"{original_avg:.4f} +- {original_std:.1e}"

                timer = timeit.Timer(lambda: _iter_on_new_sampler(batch_size, drop_last, replacement))
                times_new = timer.repeat(AVG_TIMES, 1)
                new_avg = np.mean(times_new)
                new_std = np.std(times_new)
                desc_new = f"{new_avg:.4f} +- {new_std:.1e}"

                # Relative speedup in percent: (original / new - 1) * 100.
                speedup_percent = f"{(original_avg / new_avg - 1) * 100:.2f}%"

                current_row = [batch_size, drop_last, replacement,
                               desc_original,
                               desc_new,
                               speedup_percent]

                results.append(current_row)


    columns = ["batch_size", "drop_last", "replacement", "avg and std original", "avg and std new", "speedup"]
    results = pd.DataFrame(results, columns=columns)

    pd.set_option('display.max_columns', None)
    pd.set_option('display.width', 1000)
    print(results.to_string(index=False))
```

@mikaylagawarecki added the `triaged` label on Feb 26, 2025
huydhn (Contributor) commented Mar 7, 2025

/easycla

huydhn (Contributor) commented Mar 7, 2025

@divyanshk Please help take a look at the PR, I think you're already on it.

In the meantime, I have unblocked CI on your PR; please sign the CLA first, following the instructions in #147706 (comment)

divyanshk (Contributor):

Thanks for waiting on me @GalAvineri

  1. Do the speedup numbers in the PR compare RandomBatchSampler(...) vs BatchSampler(RandomSampler(...))?
  2. Can we add unit tests for RandomBatchSampler?
  3. Regarding supporting the replacement argument, it would be better if we could use a torch alternative instead of importing numpy, unless there are performance implications.

Thanks.

```python
        self,
        data_source: Sized,
        replacement: bool = False,
        generator: Optional[np.random.Generator] = None,
```
Contributor:

This probably shouldn't be optional.

GalAvineri (Author) commented Apr 29, 2025

I'll change the arguments to be exactly as in RandomSampler and BatchSampler.


```python
    def sample_indices(self) -> NDArray[np.int_]:
        generator = (
            self.generator if self.generator is not None else np.random.default_rng()
```
Contributor:

In case self.generator is None, we can use torch.Generator()?

Author:

I'll remove all numpy from the implementation.
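
For reference, a minimal sketch of what a torch-only sampling routine could look like, assuming it mirrors the numpy version above (this is not the merged code; the function signature is illustrative):

```python
from typing import Optional

import torch


def sample_indices(
    n: int, replacement: bool, generator: Optional[torch.Generator] = None
) -> torch.Tensor:
    # Fall back to a fresh Generator when none is supplied.
    gen = generator if generator is not None else torch.Generator()
    if replacement:
        # n indices drawn uniformly from [0, n) with replacement.
        return torch.randint(high=n, size=(n,), generator=gen)
    # Without replacement: a random permutation of [0, n).
    return torch.randperm(n, generator=gen)
```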


GalAvineri commented Mar 19, 2025

I've opened an alternative PR #149441 that generalizes the approach in this PR to other samplers besides RandomSampler.
@divyanshk If you prefer #149441, I'll focus my efforts on that PR.


GalAvineri commented Apr 29, 2025

Since #149441 got a bit complicated, I decided to proceed with this simpler PR.

I removed the numpy-based implementation and replaced it with torch.
I also added unit tests for RandomBatchSampler.

@divyanshk Please let me know if there is anything else you would like :)

GalAvineri (Author):

These are the speedups after removing the numpy usage:

```
                                        original(avg)  original(std)  new(avg)     new(std)    speedup
batch_size   drop_last    replacement                                                                      
4            True         True          0.077361       0.001752       0.046196     0.001111       67.46%   
             False        True          0.096277       0.001955       0.042328     0.000543      127.45%   
             True         False         0.070717       0.001534       0.051294     0.001127       37.87%   
             False        False         0.086170       0.001569       0.046817     0.000441       84.06%   
8            True         True          0.074448       0.000945       0.036348     0.000260      104.82%   
             False        True          0.084396       0.000754       0.032137     0.000481      162.61%   
             True         False         0.067243       0.000538       0.040406     0.000324       66.42%   
             False        False         0.076455       0.000561       0.036644     0.000401      108.64%   
16           True         True          0.072844       0.001015       0.051147     0.001097       42.42%   
             False        True          0.077790       0.001318       0.049599     0.000539       56.84%   
             True         False         0.066606       0.000690       0.055055     0.001180       20.98%   
             False        False         0.071119       0.001579       0.053538     0.001417       32.84%   
32           True         True          0.071427       0.000895       0.025986     0.000430      174.86%   
             False        True          0.075948       0.001242       0.026491     0.000308      186.69%   
             True         False         0.066858       0.000910       0.031280     0.001601      113.74%   
             False        False         0.069822       0.000942       0.031740     0.002089      119.98%   
64           True         True          0.072361       0.001379       0.014195     0.000402      409.78%   
             False        True          0.075627       0.002324       0.014001     0.000331      440.14%   
             True         False         0.065734       0.001027       0.019085     0.000318      244.43%   
             False        False         0.068074       0.000818       0.019414     0.000291      250.65%   
128          True         True          0.070799       0.001911       0.008154     0.000037      768.29%   
             False        True          0.073799       0.000367       0.008164     0.000058      803.96%   
             True         False         0.064791       0.000556       0.011636     0.000836      456.81%   
             False        False         0.068400       0.000777       0.013891     0.000113      392.40%   
256          True         True          0.070954       0.000564       0.005279     0.000042     1244.17%   
             False        True          0.074482       0.000988       0.005361     0.000058     1289.45%   
             True         False         0.065680       0.000660       0.009615     0.000362      583.13%   
             False        False         0.067947       0.001012       0.008243     0.001056      724.29%   
512          True         True          0.072583       0.000923       0.003893     0.000086     1764.50%   
             False        True          0.073721       0.002031       0.003776     0.000050     1852.28%   
             True         False         0.065812       0.000692       0.008533     0.000334      671.26%   
             False        False         0.066747       0.001561       0.008604     0.000311      675.73%   
1024         True         True          0.072452       0.000382       0.003198     0.000045     2165.72%   
             False        True          0.072604       0.000823       0.003105     0.000031     2238.26%   
             True         False         0.065036       0.001010       0.007973     0.000299      715.67%   
             False        False         0.067535       0.000658       0.005608     0.000962     1104.21%   
2048         True         True          0.074144       0.001107       0.002782     0.000059     2564.97%   
             False        True          0.072833       0.001297       0.002799     0.000038     2501.97%   
             True         False         0.065317       0.000707       0.007677     0.000222      750.79%   
             False        False         0.067148       0.000610       0.008302     0.000137      708.77%   
4096         True         True          0.071696       0.000822       0.002542     0.000035     2720.77%   
             False        True          0.072413       0.001693       0.002511     0.000029     2783.91%   
             True         False         0.066336       0.000471       0.006060     0.000799      994.62%   
             False        False         0.066454       0.000905       0.007068     0.000326      840.27%   
8192         True         True          0.073054       0.001202       0.002504     0.000051     2817.67%   
             False        True          0.073117       0.001131       0.002403     0.000041     2942.23%   
             True         False         0.064256       0.001185       0.005114     0.000776     1156.54%   
             False        False         0.062578       0.006234       0.005196     0.000408     1104.37%   
16384        True         True          0.075046       0.000456       0.002383     0.000045     3049.44%   
             False        True          0.072383       0.000784       0.002357     0.000024     2970.89%   
             True         False         0.064347       0.000682       0.005994     0.000627      973.55%   
             False        False         0.064587       0.001146       0.007257     0.000292      789.99%
```

Comment on lines 3628 to 3629
```python
for replacement in [False, True]:
    for drop_last in [False, True]:
```
Contributor:

Can we use the @parametrize decorator to handle this?

Author:

Of course :)
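
For reference, a minimal sketch of the @parametrize version, using PyTorch's test utilities (class and test names are illustrative; the test body is a placeholder):

```python
from torch.testing._internal.common_utils import (
    TestCase,
    instantiate_parametrized_tests,
    parametrize,
    run_tests,
)


class TestRandomBatchSampler(TestCase):
    @parametrize("replacement", [False, True])
    @parametrize("drop_last", [False, True])
    def test_random_batch_sampler(self, replacement, drop_last):
        # Assertions over the sampler's output would go here.
        ...


instantiate_parametrized_tests(TestRandomBatchSampler)

if __name__ == "__main__":
    run_tests()
```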

```python
    def __len__(self) -> int:
        return self.num_samples

```
Contributor:

I think the linter requires two blank lines before a class definition.

Author:

My bad, I haven't run the linter yet. Thank you for catching that!

```python
        self.generator = generator

    def __iter__(self) -> Iterator[int]:
        for i in torch.randperm(len(self.indices), generator=self.generator):
```
Contributor:

This change will go away with a rebase on the latest branch.


```python
    def init_generator(self):
        if self.generator is None:
            seed = int(torch.empty((), dtype=torch.int64).random_().item())
```
vadimkantorov (Contributor) commented Apr 30, 2025

Maybe this could be simplified as:

```python
seed = int(torch.empty((), dtype=torch.int64).random_())
generator = torch.Generator().manual_seed(seed)
```

GalAvineri (Author) commented Apr 30, 2025

I originally just refactored existing code and extracted it into a method.
If you'd like, I'll apply the change you suggest :)

Contributor:

Just a suggestion. It's nice that manual_seed also returns a Generator :)

Author:

I agree with the suggestion :) Thank you!

Comment on lines 359 to 360
```python
        batch_size: int = 32,
        drop_last: bool = False,
```
Contributor:

Let's not have defaults for batch_size and drop_last.

Author:

Done :)

```python
        indices = self.sample_indices()

        # Slicing is faster on list when batch size is small
        # if self.batch_size < 16:
```
GalAvineri (Author) commented Apr 30, 2025

The `if` should not be commented out; this is a typo.

Contributor:

Perfect timing, was literally about to type that, thanks
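
For clarity, what the uncommented branch presumably looks like (assumed from the surrounding comment, not the actual diff; the .tolist() body is a guess):

```python
        indices = self.sample_indices()

        # Slicing is faster on list when batch size is small
        if self.batch_size < 16:
            indices = indices.tolist()
```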

GalAvineri (Author):

Thank you for the review! Let me know your thoughts about this draft :)

```python
        return (len(self.sampler) + self.batch_size - 1) // self.batch_size  # type: ignore[arg-type]


class RandomBatchSampler(Sampler[Union[torch.Tensor, list[int]]]):
```
Author:

When batch_size < 16 the yielded type is list[int]; otherwise it is Tensor.
Is this an issue?

divyanshk (Contributor) commented May 15, 2025

Do we decide on 16 based on the benchmarking?

> Slicing is faster on list when batch size is small

Do you mind mentioning how much the difference is? I'm just wondering if we can get away with just one type - it is not intuitive how the types switch unless the user goes looking at the code.

Also, do we see a significant perf drop if we just do list[int] like in BatchSampler?

GalAvineri (Author) commented May 19, 2025

This is a comparison of the speedups between outputting Tensor and list[int]:

```
                                 tensor speedup list speedup
batch_size drop_last replacement                            
4          True      True               -59.74%       70.91%
           False     True               -48.60%      105.75%
           True      False              -66.06%       32.98%
           False     False              -58.86%       72.66%
8          True      True               -21.46%      128.56%
           False     True                -8.93%      162.25%
           True      False              -38.05%       73.13%
           False     False              -29.65%       92.26%
16         True      True                53.87%      156.65%
           False     True                63.77%      177.13%
           True      False               15.04%       92.64%
           False     False               19.29%      107.54%
32         True      True               184.59%      171.52%
           False     True               204.93%      187.73%
           True      False               99.53%      102.54%
           False     False              106.50%      111.35%
64         True      True               418.91%      165.22%
           False     True               442.52%      169.36%
           True      False              218.32%       97.65%
           False     False              238.59%      106.67%
128        True      True               825.41%      223.34%
           False     True               855.68%      238.24%
           True      False              354.56%      127.31%
           False     False              375.22%      136.33%
256        True      True              1229.66%      201.41%
           False     True              1330.88%      217.68%
           True      False              562.71%      119.89%
           False     False              528.40%      129.21%
512        True      True              1883.99%      210.61%
           False     True              1830.45%      219.07%
           True      False              628.83%      118.31%
           False     False              744.82%      133.60%
1024       True      True              2177.97%      205.58%
           False     True              2314.08%      214.30%
           True      False              605.68%      120.53%
           False     False             1060.29%      124.61%
2048       True      True              2551.70%      208.84%
           False     True              2664.56%      208.69%
           True      False              655.49%      121.70%
           False     False              732.72%      133.70%
4096       True      True              2781.90%      208.66%
           False     True              2954.09%      209.90%
           True      False              684.58%      121.75%
           False     False              683.95%      125.92%
8192       True      True              2882.72%      215.19%
           False     True              2808.82%      209.92%
           True      False              669.49%      124.20%
           False     False              723.95%      124.53%
16384      True      True              3021.89%      219.85%
           False     True              3062.53%      218.42%
           True      False              722.83%      130.07%
           False     False              766.00%      130.34%
```

The Tensor speedup is negative when batch_size < 16, which is why I used the list implementation in those cases.
The perf gap between Tensor and list[int] grows with batch_size.
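
For context, a sketch of the two yielding strategies being compared (assumed shapes, not the PR's exact code):

```python
import torch

indices = torch.randperm(100_000)
batch_size = 256

# Strategy 1: yield Tensor slices (views, cheap even for large batches).
tensor_batches = [
    indices[i : i + batch_size] for i in range(0, len(indices), batch_size)
]

# Strategy 2: yield list[int], matching BatchSampler's contract; the
# .tolist() conversion cost grows with batch_size.
list_batches = [
    indices[i : i + batch_size].tolist() for i in range(0, len(indices), batch_size)
]
```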

Author:

If you'd like, we could just go for the list[int] implementation.
When I have an improvement that outputs only Tensor, I'll open another PR and we can discuss this again :)

Contributor:

@GalAvineri Let us keep the list[int] implementation. It aligns with other samplers, and we shouldn't have a branch in the return type (it would be confusing for users).


GalAvineri commented May 6, 2025

@divyanshk @vadimkantorov Looking forward to your response!

pytorchmergebot pushed a commit that referenced this pull request on Jun 27, 2025:
## Motivation
Many PRs optimizing samplers (e.g. #147706, #137423) leverage an ad-hoc script for benchmarking samplers. The script and its outputs are often copied over into PRs. We want to begin centralizing benchmarks for torch.utils.data components.

## What?
* This PR adds a new sub-folder in `benchmarks` for `data`. It is aimed at covering benchmarking scripts for torch.utils.data components like the dataloader and samplers.
* Specifically, this PR includes a simple script to time samplers, which is often copy-pasted into PRs optimizing samplers. Having it in a centralized location should prevent that and allow a common standard.

## Output
```
Benchmark Results:
+--------------+-------------+----------------+-----------+-----------+
|   Batch Size | Drop Last   |   Original (s) |   New (s) | Speedup   |
+==============+=============+================+===========+===========+
|            4 | True        |         0.004  |    0.0088 | -119.62%  |
+--------------+-------------+----------------+-----------+-----------+
|            4 | False       |         0.0083 |    0.009  | -9.23%    |
+--------------+-------------+----------------+-----------+-----------+
|            8 | True        |         0.003  |    0.0074 | -147.64%  |
+--------------+-------------+----------------+-----------+-----------+
|            8 | False       |         0.0054 |    0.0075 | -38.72%   |
+--------------+-------------+----------------+-----------+-----------+
|           64 | True        |         0.0021 |    0.0056 | -161.92%  |
+--------------+-------------+----------------+-----------+-----------+
|           64 | False       |         0.0029 |    0.0055 | -92.50%   |
+--------------+-------------+----------------+-----------+-----------+
|          640 | True        |         0.002  |    0.0055 | -168.75%  |
+--------------+-------------+----------------+-----------+-----------+
|          640 | False       |         0.0024 |    0.0062 | -161.35%  |
+--------------+-------------+----------------+-----------+-----------+
|         6400 | True        |         0.0021 |    0.0055 | -160.13%  |
+--------------+-------------+----------------+-----------+-----------+
|         6400 | False       |         0.0021 |    0.0068 | -215.46%  |
+--------------+-------------+----------------+-----------+-----------+
|        64000 | True        |         0.0042 |    0.0065 | -55.29%   |
+--------------+-------------+----------------+-----------+-----------+
|        64000 | False       |         0.0029 |    0.0077 | -169.56%  |
+--------------+-------------+----------------+-----------+-----------+
```
Pull Request resolved: #156974
Approved by: https://github.com/ramanishsingh
github-actions bot:

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

github-actions bot added the `Stale` label on Jul 29, 2025
github-actions bot closed this on Aug 28, 2025