[Build] CUDA execution provider library is needed even though we only use the TensorRT execution provider #22960

@jcdatin

Description

Describe the issue

There should be a way to build onnxruntime with TensorRT without the CUDA execution provider and its unused CUDA dependencies.
libonnxruntime_providers_cuda.so is large (220 MB) and drags in other large dependencies, such as libcufft and libcublas, that we do not use during inference (another 400 MB).
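For context, a typical TensorRT build invocation looks like the sketch below (the install paths are illustrative assumptions for this machine, not part of the report). Even though only the TensorRT execution provider is wanted, this configuration also builds and links the CUDA execution provider:

```shell
# Illustrative onnxruntime build for the TensorRT execution provider.
# Paths to CUDA/cuDNN/TensorRT are assumptions; adjust for your system.
./build.sh \
    --config Release \
    --build_shared_lib \
    --parallel \
    --use_tensorrt \
    --cuda_home /usr/local/cuda \
    --cudnn_home /usr/lib/x86_64-linux-gnu \
    --tensorrt_home /usr/lib/x86_64-linux-gnu
```

The resulting build directory contains libonnxruntime_providers_tensorrt.so alongside libonnxruntime_providers_cuda.so; it is the latter (plus its cuBLAS/cuFFT dependencies) that this issue asks to make optional.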

Urgency

non blocking

Target platform

linux

Build script

build.py

Error / output

N/A

Visual Studio Version

N/A

GCC / Compiler Version

gcc11

Metadata

Assignees

No one assigned

    Labels

    build — build issues; typically submitted using template
    ep:CUDA — issues related to the CUDA execution provider
    stale — issues that have not been addressed in a while; categorized by a bot

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests