{"id":1132465,"date":"2025-01-08T20:57:31","date_gmt":"2025-01-08T12:57:31","guid":{"rendered":""},"modified":"2025-01-08T20:57:40","modified_gmt":"2025-01-08T12:57:40","slug":"%e5%a6%82%e4%bd%95%e6%9f%a5%e7%9c%8bpython%e4%b8%adgpu%e7%9a%84%e4%bd%bf%e7%94%a8%e6%83%85%e5%86%b5","status":"publish","type":"post","link":"https:\/\/docs.pingcode.com\/ask\/1132465.html","title":{"rendered":"\u5982\u4f55\u67e5\u770bpython\u4e2dgpu\u7684\u4f7f\u7528\u60c5\u51b5"},"content":{"rendered":"<p style=\"text-align:center;\" ><img decoding=\"async\" src=\"https:\/\/cdn-kb.worktile.com\/kb\/wp-content\/uploads\/2024\/04\/25102009\/02ee59d1-28fc-42d9-8983-a8164f922971.webp\" alt=\"\u5982\u4f55\u67e5\u770bpython\u4e2dgpu\u7684\u4f7f\u7528\u60c5\u51b5\" \/><\/p>\n<p><p> <strong>\u5728Python\u4e2d\u67e5\u770bGPU\u7684\u4f7f\u7528\u60c5\u51b5\uff0c\u53ef\u4ee5\u901a\u8fc7\u4ee5\u4e0b\u51e0\u79cd\u65b9\u6cd5\uff1a\u4f7f\u7528NVIDIA\u7684nvidia-smi\u5de5\u5177\u3001\u4f7f\u7528Python\u7684GPUtil\u5e93\u3001\u4f7f\u7528TensorFlow\u6216PyTorch\u7b49\u6df1\u5ea6\u5b66\u4e60\u6846\u67b6\u7684\u5185\u7f6e\u65b9\u6cd5\u3002<\/strong> \u5176\u4e2d\uff0c<strong>nvidia-smi\u5de5\u5177<\/strong>\u662f\u6700\u76f4\u63a5\u548c\u5e38\u7528\u7684\u65b9\u6cd5\uff0c\u56e0\u4e3a\u5b83\u662f\u7531NVIDIA\u5b98\u65b9\u63d0\u4f9b\u7684\uff0c\u53ef\u4ee5\u51c6\u786e\u5730\u663e\u793aGPU\u7684\u8be6\u7ec6\u4fe1\u606f\u3002\u6211\u4eec\u63a5\u4e0b\u6765\u5c06\u8be6\u7ec6\u8ba8\u8bba\u5982\u4f55\u4f7f\u7528\u8fd9\u4e9b\u65b9\u6cd5\u6765\u67e5\u770b\u548c\u76d1\u63a7GPU\u7684\u4f7f\u7528\u60c5\u51b5\u3002<\/p>\n<\/p>\n<p><h3>\u4e00\u3001NVIDIA\u7684nvidia-smi\u5de5\u5177<\/h3>\n<\/p>\n<p><h4>\u4ec0\u4e48\u662fnvidia-smi\u5de5\u5177\uff1f<\/h4>\n<\/p>\n<p><p>nvidia-smi\uff08NVIDIA System Management 
Interface\uff09\u662fNVIDIA\u63d0\u4f9b\u7684\u4e00\u6b3e\u547d\u4ee4\u884c\u5de5\u5177\uff0c\u7528\u4e8e\u7ba1\u7406\u548c\u76d1\u63a7GPU\u8bbe\u5907\u3002\u5b83\u53ef\u4ee5\u663e\u793aGPU\u7684\u6e29\u5ea6\u3001\u4f7f\u7528\u7387\u3001\u663e\u5b58\u5360\u7528\u7b49\u8be6\u7ec6\u4fe1\u606f\u3002<\/p>\n<\/p>\n<p><h4>\u5982\u4f55\u4f7f\u7528nvidia-smi\u5de5\u5177\uff1f<\/h4>\n<\/p>\n<ol>\n<li>\n<p><strong>\u5b89\u88c5NVIDIA\u9a71\u52a8<\/strong>\uff1a\u9996\u5148\uff0c\u786e\u4fdd\u4f60\u5df2\u7ecf\u5b89\u88c5\u4e86NVIDIA\u7684\u663e\u5361\u9a71\u52a8\uff0c\u56e0\u4e3anvidia-smi\u5de5\u5177\u662f\u9a71\u52a8\u7684\u4e00\u90e8\u5206\u3002<\/p>\n<\/p>\n<\/li>\n<li>\n<p><strong>\u8fd0\u884cnvidia-smi\u547d\u4ee4<\/strong>\uff1a\u5728\u547d\u4ee4\u884c\u4e2d\u8f93\u5165 <code>nvidia-smi<\/code>\uff0c\u4f60\u4f1a\u770b\u5230\u5982\u4e0b\u8f93\u51fa\uff1a<\/p>\n<\/p>\n<p><pre><code class=\"language-plaintext\">+-----------------------------------------------------------------------------+<\/p>\n<p>| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |<\/p>\n<p>|-------------------------------+----------------------+----------------------+<\/p>\n<p>| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |<\/p>\n<p>| Fan  Temp  Perf  Pwr:Usage\/Cap|         Memory-Usage | GPU-Util  Compute M. |<\/p>\n<p>|                               |                      |               MIG M. 
|<\/p>\n<p>|===============================+======================+======================|<\/p>\n<p>|   0  Tesla K80           On   | 00000000:00:1E.0 Off |                    0 |<\/p>\n<p>| N\/A   54C    P8    30W \/ 149W |    777MiB \/ 11441MiB |      0%      Default |<\/p>\n<p>+-------------------------------+----------------------+----------------------+<\/p>\n<p><\/code><\/pre>\n<\/p>\n<\/li>\n<li>\n<p><strong>\u89e3\u91ca\u8f93\u51fa\u4fe1\u606f<\/strong>\uff1a<\/p>\n<\/p>\n<ul>\n<li><strong>GPU Name<\/strong>\uff1aGPU\u7684\u578b\u53f7\u540d\u79f0\u3002<\/li>\n<li><strong>Fan<\/strong>\uff1a\u98ce\u6247\u7684\u8f6c\u901f\u60c5\u51b5\u3002<\/li>\n<li><strong>Temp<\/strong>\uff1aGPU\u7684\u6e29\u5ea6\u3002<\/li>\n<li><strong>Perf<\/strong>\uff1a\u6027\u80fd\u72b6\u6001\u3002<\/li>\n<li><strong>Pwr:Usage\/Cap<\/strong>\uff1a\u5f53\u524d\u529f\u8017\u548c\u6700\u5927\u529f\u8017\u3002<\/li>\n<li><strong>Memory-Usage<\/strong>\uff1a\u663e\u5b58\u4f7f\u7528\u60c5\u51b5\u3002<\/li>\n<li><strong>GPU-Util<\/strong>\uff1aGPU\u4f7f\u7528\u7387\u3002<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p><p>\u901a\u8fc7nvidia-smi\u547d\u4ee4\uff0c\u4f60\u53ef\u4ee5\u5b9e\u65f6\u76d1\u63a7GPU\u7684\u5404\u9879\u6307\u6807\uff0c\u65b9\u4fbf\u8fdb\u884c\u8c03\u8bd5\u548c\u4f18\u5316\u3002<\/p>\n<\/p>\n<p><h3>\u4e8c\u3001\u4f7f\u7528Python\u7684GPUtil\u5e93<\/h3>\n<\/p>\n<p><h4>\u4ec0\u4e48\u662fGPUtil\u5e93\uff1f<\/h4>\n<\/p>\n<p><p>GPUtil\u662f\u4e00\u4e2aPython\u5e93\uff0c\u7528\u4e8e\u83b7\u53d6NVIDIA GPU\u7684\u4f7f\u7528\u60c5\u51b5\u3002\u5b83\u53ef\u4ee5\u5f88\u65b9\u4fbf\u5730\u5728Python\u811a\u672c\u4e2d\u5d4c\u5165GPU\u76d1\u63a7\u529f\u80fd\u3002<\/p>\n<\/p>\n<p><h4>\u5982\u4f55\u5b89\u88c5\u548c\u4f7f\u7528GPUtil\u5e93\uff1f<\/h4>\n<\/p>\n<ol>\n<li>\n<p><strong>\u5b89\u88c5GPUtil<\/strong>\uff1a\u4f60\u53ef\u4ee5\u901a\u8fc7pip\u547d\u4ee4\u5b89\u88c5GPUtil\u5e93\uff1a<\/p>\n<\/p>\n<p><pre><code class=\"language-shell\">pip install 
gputil<\/p>\n<p><\/code><\/pre>\n<\/p>\n<\/li>\n<li>\n<p><strong>\u4f7f\u7528GPUtil\u83b7\u53d6GPU\u4fe1\u606f<\/strong>\uff1a<\/p>\n<\/p>\n<p><pre><code class=\"language-python\">import GPUtil<\/p>\n<h2><strong>\u83b7\u53d6\u6240\u6709\u53ef\u7528\u7684GPU<\/strong><\/h2>\n<p>gpus = GPUtil.getGPUs()<\/p>\n<p>for gpu in gpus:<\/p>\n<p>    print(f&quot;GPU ID: {gpu.id}&quot;)<\/p>\n<p>    print(f&quot;GPU Name: {gpu.name}&quot;)<\/p>\n<p>    print(f&quot;GPU Load: {gpu.load * 100}%&quot;)<\/p>\n<p>    print(f&quot;GPU Free Memory: {gpu.memoryFree}MB&quot;)<\/p>\n<p>    print(f&quot;GPU Used Memory: {gpu.memoryUsed}MB&quot;)<\/p>\n<p>    print(f&quot;GPU Total Memory: {gpu.memoryTotal}MB&quot;)<\/p>\n<p>    print(f&quot;GPU Temperature: {gpu.temperature} \u00b0C&quot;)<\/p>\n<p><\/code><\/pre>\n<\/p>\n<\/li>\n<li>\n<p><strong>\u89e3\u91caGPUtil\u7684\u8f93\u51fa\u4fe1\u606f<\/strong>\uff1a<\/p>\n<\/p>\n<ul>\n<li><strong>GPU ID<\/strong>\uff1aGPU\u7684ID\u7f16\u53f7\u3002<\/li>\n<li><strong>GPU Name<\/strong>\uff1aGPU\u7684\u578b\u53f7\u540d\u79f0\u3002<\/li>\n<li><strong>GPU Load<\/strong>\uff1aGPU\u7684\u4f7f\u7528\u7387\u3002<\/li>\n<li><strong>GPU Free Memory<\/strong>\uff1aGPU\u7a7a\u95f2\u663e\u5b58\u3002<\/li>\n<li><strong>GPU Used Memory<\/strong>\uff1aGPU\u5df2\u7528\u663e\u5b58\u3002<\/li>\n<li><strong>GPU Total Memory<\/strong>\uff1aGPU\u603b\u663e\u5b58\u3002<\/li>\n<li><strong>GPU 
Temperature<\/strong>\uff1aGPU\u7684\u6e29\u5ea6\u3002<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p><p>GPUtil\u5e93\u63d0\u4f9b\u4e86\u4e00\u4e2a\u7b80\u6d01\u7684\u63a5\u53e3\uff0c\u4f7f\u5f97\u5728Python\u811a\u672c\u4e2d\u76d1\u63a7GPU\u53d8\u5f97\u975e\u5e38\u7b80\u5355\u3002<\/p>\n<\/p>\n<p><h3>\u4e09\u3001\u4f7f\u7528TensorFlow\u67e5\u770bGPU\u4f7f\u7528\u60c5\u51b5<\/h3>\n<\/p>\n<p><h4>TensorFlow\u4e2d\u7684GPU\u76d1\u63a7<\/h4>\n<\/p>\n<p><p>TensorFlow\u662f\u4e00\u4e2a\u5e7f\u6cdb\u4f7f\u7528\u7684\u6df1\u5ea6\u5b66\u4e60\u6846\u67b6\uff0c\u5b83\u63d0\u4f9b\u4e86\u5185\u7f6e\u7684\u65b9\u6cd5\u6765\u67e5\u770b\u548c\u7ba1\u7406GPU\u7684\u4f7f\u7528\u60c5\u51b5\u3002<\/p>\n<\/p>\n<p><h4>\u5982\u4f55\u67e5\u770bTensorFlow\u4e2d\u7684GPU\u4fe1\u606f\uff1f<\/h4>\n<\/p>\n<ol>\n<li>\n<p><strong>\u5b89\u88c5TensorFlow<\/strong>\uff1a\u786e\u4fdd\u4f60\u5df2\u7ecf\u5b89\u88c5\u4e86TensorFlow\uff0c\u53ef\u4ee5\u901a\u8fc7pip\u547d\u4ee4\u5b89\u88c5\uff1a<\/p>\n<\/p>\n<p><pre><code class=\"language-shell\">pip install tensorflow<\/p>\n<p><\/code><\/pre>\n<\/p>\n<\/li>\n<li>\n<p><strong>\u67e5\u770b\u53ef\u7528\u7684GPU<\/strong>\uff1a<\/p>\n<\/p>\n<p><pre><code class=\"language-python\">import tensorflow as tf<\/p>\n<h2><strong>\u5217\u51fa\u6240\u6709\u53ef\u7528\u7684GPU<\/strong><\/h2>\n<p>gpus = tf.config.experimental.list_physical_devices(&#39;GPU&#39;)<\/p>\n<p>for gpu in gpus:<\/p>\n<p>    print(gpu)<\/p>\n<p><\/code><\/pre>\n<\/p>\n<\/li>\n<li>\n<p><strong>\u76d1\u63a7TensorFlow\u4e2d\u7684GPU\u4f7f\u7528\u60c5\u51b5<\/strong>\uff1a<\/p>\n<\/p>\n<p><pre><code class=\"language-python\">import tensorflow as tf<\/p>\n<h2><strong>\u521b\u5efa\u4e00\u4e2a\u7b80\u5355\u7684TensorFlow\u6a21\u578b<\/strong><\/h2>\n<p>model = tf.keras.models.Sequential([<\/p>\n<p>    tf.keras.layers.Dense(64, activation=&#39;relu&#39;, input_shape=(1000,)),<\/p>\n<p>    tf.keras.layers.Dense(64, activation=&#39;relu&#39;),<\/p>\n<p>    tf.keras.layers.Dense(10, 
activation=&#39;softmax&#39;)<\/p>\n<p>])<\/p>\n<p>model.compile(optimizer=&#39;adam&#39;, loss=&#39;categorical_crossentropy&#39;)<\/p>\n<h2><strong>\u751f\u6210\u4e00\u4e9b\u968f\u673a\u6570\u636e<\/strong><\/h2>\n<p>import numpy as np<\/p>\n<p>x_tr<a href=\"https:\/\/docs.pingcode.com\/blog\/59162.html\" target=\"_blank\">AI<\/a>n = np.random.random((1000, 1000))<\/p>\n<p>y_train = np.random.random((1000, 10))<\/p>\n<h2><strong>\u8bad\u7ec3\u6a21\u578b<\/strong><\/h2>\n<p>model.fit(x_train, y_train, epochs=10)<\/p>\n<h2><strong>\u67e5\u770bGPU\u4f7f\u7528\u60c5\u51b5<\/strong><\/h2>\n<p>from tensorflow.python.client import device_lib<\/p>\n<p>print(device_lib.list_local_devices())<\/p>\n<p><\/code><\/pre>\n<\/p>\n<\/li>\n<\/ol>\n<p><p>\u901a\u8fc7\u4e0a\u8ff0\u4ee3\u7801\uff0c\u4f60\u53ef\u4ee5\u67e5\u770bTensorFlow\u5728\u8bad\u7ec3\u8fc7\u7a0b\u4e2d\u4f7f\u7528\u7684GPU\u8bbe\u5907\u3002<\/p>\n<\/p>\n<p><h3>\u56db\u3001\u4f7f\u7528PyTorch\u67e5\u770bGPU\u4f7f\u7528\u60c5\u51b5<\/h3>\n<\/p>\n<p><h4>PyTorch\u4e2d\u7684GPU\u76d1\u63a7<\/h4>\n<\/p>\n<p><p>PyTorch\u662f\u53e6\u4e00\u4e2a\u5e7f\u6cdb\u4f7f\u7528\u7684\u6df1\u5ea6\u5b66\u4e60\u6846\u67b6\uff0c\u5b83\u540c\u6837\u63d0\u4f9b\u4e86\u65b9\u4fbf\u7684\u65b9\u6cd5\u6765\u67e5\u770b\u548c\u7ba1\u7406GPU\u7684\u4f7f\u7528\u60c5\u51b5\u3002<\/p>\n<\/p>\n<p><h4>\u5982\u4f55\u67e5\u770bPyTorch\u4e2d\u7684GPU\u4fe1\u606f\uff1f<\/h4>\n<\/p>\n<ol>\n<li>\n<p><strong>\u5b89\u88c5PyTorch<\/strong>\uff1a\u786e\u4fdd\u4f60\u5df2\u7ecf\u5b89\u88c5\u4e86PyTorch\uff0c\u53ef\u4ee5\u901a\u8fc7pip\u547d\u4ee4\u5b89\u88c5\uff1a<\/p>\n<\/p>\n<p><pre><code class=\"language-shell\">pip install torch<\/p>\n<p><\/code><\/pre>\n<\/p>\n<\/li>\n<li>\n<p><strong>\u67e5\u770b\u53ef\u7528\u7684GPU<\/strong>\uff1a<\/p>\n<\/p>\n<p><pre><code class=\"language-python\">import torch<\/p>\n<p>if torch.cuda.is_available():<\/p>\n<p>    device = torch.device(&quot;cuda&quot;)<\/p>\n<p>    print(&quot;GPU is available&quot;)<\/p>\n<p>    
print(f&quot;GPU Name: {torch.cuda.get_device_name(0)}&quot;)<\/p>\n<p>    print(f&quot;Total Memory: {torch.cuda.get_device_properties(0).total_memory} bytes&quot;)<\/p>\n<p>    print(f&quot;Memory Allocated: {torch.cuda.memory_allocated(0)} bytes&quot;)<\/p>\n<p>    print(f&quot;Memory Cached: {torch.cuda.memory_reserved(0)} bytes&quot;)<\/p>\n<p>else:<\/p>\n<p>    print(&quot;GPU is not available&quot;)<\/p>\n<p><\/code><\/pre>\n<\/p>\n<\/li>\n<li>\n<p><strong>\u76d1\u63a7PyTorch\u4e2d\u7684GPU\u4f7f\u7528\u60c5\u51b5<\/strong>\uff1a<\/p>\n<\/p>\n<p><pre><code class=\"language-python\">import torch<\/p>\n<h2><strong>\u521b\u5efa\u4e00\u4e2a\u7b80\u5355\u7684PyTorch\u6a21\u578b<\/strong><\/h2>\n<p>model = torch.nn.Sequential(<\/p>\n<p>    torch.nn.Linear(1000, 64),<\/p>\n<p>    torch.nn.ReLU(),<\/p>\n<p>    torch.nn.Linear(64, 64),<\/p>\n<p>    torch.nn.ReLU(),<\/p>\n<p>    torch.nn.Linear(64, 10),<\/p>\n<p>    torch.nn.Softmax(dim=1)<\/p>\n<p>)<\/p>\n<p>model.to(device)<\/p>\n<h2><strong>\u751f\u6210\u4e00\u4e9b\u968f\u673a\u6570\u636e<\/strong><\/h2>\n<p>x_train = torch.randn(1000, 1000).to(device)<\/p>\n<p>y_train = torch.randn(1000, 10).to(device)<\/p>\n<h2><strong>\u5b9a\u4e49\u635f\u5931\u51fd\u6570\u548c\u4f18\u5316\u5668<\/strong><\/h2>\n<p>criterion = torch.nn.CrossEntropyLoss()<\/p>\n<p>optimizer = torch.optim.Adam(model.parameters())<\/p>\n<h2><strong>\u8bad\u7ec3\u6a21\u578b<\/strong><\/h2>\n<p>for epoch in range(10):<\/p>\n<p>    optimizer.zero_grad()<\/p>\n<p>    output = model(x_train)<\/p>\n<p>    loss = criterion(output, y_train)<\/p>\n<p>    loss.backward()<\/p>\n<p>    optimizer.step()<\/p>\n<h2><strong>\u67e5\u770bGPU\u4f7f\u7528\u60c5\u51b5<\/strong><\/h2>\n<p>print(f&quot;Memory Allocated: {torch.cuda.memory_allocated(0)} bytes&quot;)<\/p>\n<p>print(f&quot;Memory Cached: {torch.cuda.memory_reserved(0)} 
bytes&quot;)<\/p>\n<p><\/code><\/pre>\n<\/p>\n<\/li>\n<\/ol>\n<p><p>\u901a\u8fc7\u4e0a\u8ff0\u4ee3\u7801\uff0c\u4f60\u53ef\u4ee5\u67e5\u770bPyTorch\u5728\u8bad\u7ec3\u8fc7\u7a0b\u4e2d\u4f7f\u7528\u7684GPU\u8bbe\u5907\u3002<\/p>\n<\/p>\n<p><h3>\u4e94\u3001\u603b\u7ed3<\/h3>\n<\/p>\n<p><p>\u901a\u8fc7\u4f7f\u7528nvidia-smi\u5de5\u5177\u3001GPUtil\u5e93\u3001TensorFlow\u548cPyTorch\u7684\u5185\u7f6e\u65b9\u6cd5\uff0c\u4f60\u53ef\u4ee5\u975e\u5e38\u65b9\u4fbf\u5730\u5728Python\u4e2d\u67e5\u770b\u548c\u76d1\u63a7GPU\u7684\u4f7f\u7528\u60c5\u51b5\u3002\u6bcf\u79cd\u65b9\u6cd5\u90fd\u6709\u5176\u72ec\u7279\u7684\u4f18\u52bf\u548c\u9002\u7528\u573a\u666f\uff0c\u9009\u62e9\u5408\u9002\u7684\u65b9\u6cd5\u53ef\u4ee5\u5e2e\u52a9\u4f60\u66f4\u597d\u5730\u4f18\u5316\u548c\u8c03\u8bd5\u4f60\u7684\u7a0b\u5e8f\u3002<strong>nvidia-smi\u5de5\u5177\u6700\u4e3a\u76f4\u63a5\u548c\u8be6\u7ec6\uff0cGPUtil\u5e93\u5728Python\u811a\u672c\u4e2d\u4f7f\u7528\u975e\u5e38\u65b9\u4fbf\uff0c\u800cTensorFlow\u548cPyTorch\u63d0\u4f9b\u4e86\u4e0e\u6df1\u5ea6\u5b66\u4e60\u6846\u67b6\u7d27\u5bc6\u7ed3\u5408\u7684\u89e3\u51b3\u65b9\u6848\u3002<\/strong><\/p>\n<\/p>\n<h2><strong>\u76f8\u5173\u95ee\u7b54FAQs\uff1a<\/strong><\/h2>\n<p> <strong>\u5982\u4f55\u786e\u8ba4\u6211\u7684Python\u73af\u5883\u662f\u5426\u80fd\u591f\u4f7f\u7528GPU\uff1f<\/strong><br \/>\u8981\u786e\u8ba4\u60a8\u7684Python\u73af\u5883\u662f\u5426\u80fd\u591f\u4f7f\u7528GPU\uff0c\u60a8\u53ef\u4ee5\u5c1d\u8bd5\u5b89\u88c5\u5e76\u5bfc\u5165\u9002\u7528\u4e8eGPU\u8ba1\u7b97\u7684\u5e93\uff0c\u5982TensorFlow\u6216PyTorch\u3002\u5728\u5b89\u88c5\u5b8c\u6210\u540e\uff0c\u8fd0\u884c\u4ee5\u4e0b\u4ee3\u7801\u6765\u68c0\u67e5GPU\u7684\u53ef\u7528\u6027\uff1a  <\/p>\n<pre><code class=\"language-python\">import tensorflow as tf\nprint(&quot;Num GPUs Available: &quot;, len(tf.config.list_physical_devices(&#39;GPU&#39;)))\n<\/code><\/pre>\n<p>\u5982\u679c\u60a8\u4f7f\u7528\u7684\u662fPyTorch\uff0c\u53ef\u4ee5\u6267\u884c\uff1a  <\/p>\n<pre><code 
class=\"language-python\">import torch\nprint(&quot;Is GPU available: &quot;, torch.cuda.is_available())\n<\/code><\/pre>\n<p>\u8fd9\u4e9b\u4ee3\u7801\u5c06\u544a\u8bc9\u60a8\u5f53\u524d\u73af\u5883\u662f\u5426\u68c0\u6d4b\u5230GPU\u3002<\/p>\n<p><strong>\u6211\u53ef\u4ee5\u901a\u8fc7\u54ea\u4e9b\u5de5\u5177\u76d1\u63a7GPU\u7684\u5b9e\u65f6\u4f7f\u7528\u60c5\u51b5\uff1f<\/strong><br \/>\u53ef\u4ee5\u4f7f\u7528NVIDIA\u7684nvidia-smi\u5de5\u5177\u6765\u76d1\u63a7GPU\u7684\u5b9e\u65f6\u4f7f\u7528\u60c5\u51b5\u3002\u53ea\u9700\u5728\u547d\u4ee4\u884c\u4e2d\u8f93\u5165<code>nvidia-smi<\/code>\uff0c\u5373\u53ef\u67e5\u770bGPU\u7684\u4f7f\u7528\u7387\u3001\u5185\u5b58\u5360\u7528\u3001\u6e29\u5ea6\u7b49\u4fe1\u606f\u3002\u6b64\u5916\uff0c\u4e00\u4e9b\u56fe\u5f62\u5316\u5de5\u5177\u5982GPU-Z\u548cMSI Afterburner\u4e5f\u80fd\u591f\u63d0\u4f9b\u8be6\u7ec6\u7684\u76d1\u63a7\u529f\u80fd\u3002<\/p>\n<p><strong>\u5728Python\u4e2d\u5982\u4f55\u4f18\u5316\u6211\u7684\u4ee3\u7801\u4ee5\u5229\u7528GPU\uff1f<\/strong><br \/>\u8981\u4f18\u5316\u60a8\u7684Python\u4ee3\u7801\u4ee5\u66f4\u597d\u5730\u5229\u7528GPU\uff0c\u60a8\u53ef\u4ee5\u8003\u8651\u4ee5\u4e0b\u51e0\u4e2a\u65b9\u9762\uff1a  <\/p>\n<ol>\n<li>\u4f7f\u7528GPU\u52a0\u901f\u7684\u5e93\uff0c\u5982CuPy\u3001TensorFlow\u6216PyTorch\uff0c\u8fd9\u4e9b\u5e93\u63d0\u4f9b\u4e86GPU\u652f\u6301\u7684\u9ad8\u6548\u8fd0\u7b97\u3002  <\/li>\n<li>\u5c3d\u91cf\u51cf\u5c11\u6570\u636e\u4f20\u8f93\u7684\u9891\u7387\uff0c\u56e0\u4e3a\u6570\u636e\u5728CPU\u548cGPU\u4e4b\u95f4\u4f20\u8f93\u4f1a\u9020\u6210\u5ef6\u8fdf\u3002\u5c06\u6570\u636e\u5c3d\u53ef\u80fd\u591a\u5730\u4fdd\u7559\u5728GPU\u5185\u5b58\u4e2d\u3002  <\/li>\n<li>\u5bf9\u4e8e\u5927\u89c4\u6a21\u7684\u77e9\u9635\u8fd0\u7b97\uff0c\u4f7f\u7528\u6279\u5904\u7406\u7684\u65b9\u5f0f\u6765\u63d0\u9ad8\u8ba1\u7b97\u6548\u7387\u3002  
<\/li>\n<li>\u786e\u4fdd\u4ee3\u7801\u4e2d\u6ca1\u6709\u4e0d\u5fc5\u8981\u7684\u540c\u6b65\u64cd\u4f5c\uff0c\u4ee5\u51cf\u5c11GPU\u7684\u7a7a\u95f2\u65f6\u95f4\u3002<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"\u5728Python\u4e2d\u67e5\u770bGPU\u7684\u4f7f\u7528\u60c5\u51b5\uff0c\u53ef\u4ee5\u901a\u8fc7\u4ee5\u4e0b\u51e0\u79cd\u65b9\u6cd5\uff1a\u4f7f\u7528NVIDIA\u7684nvidia-smi\u5de5\u5177\u3001\u4f7f\u7528P [&hellip;]","protected":false},"author":3,"featured_media":1132477,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[37],"tags":[],"acf":[],"_links":{"self":[{"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/posts\/1132465"}],"collection":[{"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/comments?post=1132465"}],"version-history":[{"count":"1","href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/posts\/1132465\/revisions"}],"predecessor-version":[{"id":1132480,"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/posts\/1132465\/revisions\/1132480"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/media\/1132477"}],"wp:attachment":[{"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/media?parent=1132465"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/categories?post=1132465"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/docs.pingcode.com\/wp-json\/wp\/v2\/tags?post=1132465"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
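The nvidia-smi table shown in section 1 can also be collected from Python itself, since nvidia-smi supports a machine-readable mode via its `--query-gpu` and `--format=csv` flags. The sketch below assumes `nvidia-smi` is on the `PATH`; the helper names (`parse_gpu_csv`, `query_gpus`) are my own, not part of any library.

```python
import csv
import subprocess
from io import StringIO

def parse_gpu_csv(text):
    """Parse CSV rows of (utilization %, used MiB, total MiB, temp °C)."""
    rows = []
    for rec in csv.reader(StringIO(text)):
        util, used, total, temp = (field.strip() for field in rec)
        rows.append({
            "util_pct": int(util),
            "mem_used_mib": int(used),
            "mem_total_mib": int(total),
            "temp_c": int(temp),
        })
    return rows

def query_gpus():
    """Call nvidia-smi in machine-readable mode (requires the NVIDIA driver)."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total,temperature.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)

# The raw output is one CSV line per GPU, e.g. for the Tesla K80 above:
sample = "0, 777, 11441, 54\n"
print(parse_gpu_csv(sample))
```

Calling `query_gpus()` in a loop with a short `time.sleep` gives a minimal logger you can run alongside training, without depending on GPUtil or a framework.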
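One unit mismatch worth noting: PyTorch's `torch.cuda.memory_allocated` and `torch.cuda.memory_reserved` return raw byte counts, while nvidia-smi's Memory-Usage column reports MiB. A tiny helper (the name `fmt_mib` is my own) makes the two directly comparable:

```python
def fmt_mib(num_bytes):
    """Convert a raw byte count (as returned by torch.cuda.memory_allocated)
    to a MiB string, matching the units nvidia-smi displays."""
    return f"{num_bytes / (1024 ** 2):.1f}MiB"

# e.g. wrap the PyTorch calls from section 4:
#   print(f"Memory Allocated: {fmt_mib(torch.cuda.memory_allocated(0))}")
print(fmt_mib(777 * 1024 * 1024))  # → 777.0MiB
```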