  1. What is the command to install pytorch with cuda 12.8?

    Mar 27, 2025

  2. Installing PyTorch with Python version 3.11 in a Docker container

    Mar 12, 2024 · Inside the container I see that torch.version.cuda is 12.1. PyTorch claims to be compatible with Python 3.11; has anybody actually been able to use PyTorch+CUDA with Python 3.11? I tried running Docker images with Python 3.11.4, and tried running the Conda Docker image and installing PyTorch, but kept getting errors that the images couldn't be found.
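A quick way to reproduce the asker's check inside the container is to print the interpreter version alongside the CUDA version the installed torch wheel was built for. This is a minimal sketch, assuming a cu121-style wheel may or may not be present:

```python
import sys

print("Python:", sys.version.split()[0])
try:
    import torch
    # torch.version.cuda is the CUDA version the wheel was built against (e.g. "12.1")
    print("torch:", torch.__version__, "| built for CUDA:", torch.version.cuda)
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
```

Running this both on the host and inside the container makes version mismatches visible immediately.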

  3. python - How do I check if PyTorch is using the GPU? - Stack …

    Jan 8, 2018 · Edit: torch.cuda.memory_cached has been renamed to torch.cuda.memory_reserved; use memory_cached only on older versions. Example output: Using device: cuda (Tesla K80), Memory Usage: Allocated: 0.3 GB, Cached: 0.6 GB. As mentioned above, device makes it possible to move tensors to the respective device, e.g. torch.rand(10).to(device)
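The fragments quoted above fit together into one short, self-contained check. This is a sketch of the commonly posted pattern, not the answer's exact code:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

if device.type == "cuda":
    print(torch.cuda.get_device_name(0))
    # memory_cached was renamed to memory_reserved in newer PyTorch releases
    print(f"Allocated: {torch.cuda.memory_allocated(0) / 1024**3:.1f} GB")
    print(f"Reserved:  {torch.cuda.memory_reserved(0) / 1024**3:.1f} GB")

# move a tensor to whichever device was selected
x = torch.rand(10).to(device)
```

On a CPU-only machine this falls back to cpu rather than raising, which is why the pattern is popular in portable scripts.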

  4. How to get the CUDA version? - Stack Overflow

    Mar 16, 2012 · As Jared mentions in a comment, from the command line: nvcc --version (or /usr/local/cuda/bin/nvcc --version) gives the CUDA compiler version (which matches the toolkit version).

  5. How to use supported numpy and math functions with CUDA in …

    Feb 20, 2021 · According to the numba 0.51.2 documentation, CUDA Python supports several math functions. However, it doesn't work in the following kernel function:

  6. python - PyTorch Segmentation Fault (core dumped) when …

    Mar 20, 2024 · However, the CUDA toolkit version you have installed - according to the image you are using - is 11.3.0 which is lower than the minimum supported version by a RTX 6000 Ada. In other words, you should use an image that comes with a higher CUDA toolkit version.
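When diagnosing this kind of mismatch, it can help to compare the GPU's compute capability against the architectures the installed PyTorch build supports. A sketch, assuming PyTorch is installed (an RTX 6000 Ada is sm_89, which needs a newer toolkit than 11.3):

```python
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU compute capability: sm_{major}{minor}")
    # architectures this PyTorch build was compiled for; if the GPU's
    # sm_XY is not in this list, kernels can crash or fail to launch
    print("supported arches:", torch.cuda.get_arch_list())
```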

  7. Use GPU on python docker image - Stack Overflow

    I'm using a python:3.7.4-slim-buster Docker image and I can't change it. I'm wondering how to use my NVIDIA GPUs on it.

  8. Which TensorFlow and CUDA version combinations are compatible?

    Jul 31, 2018 · Anyway, I just moved /usr/local/cuda-10.0 to /usr/local/old-cuda-10.0 so TF couldn't find it any more and everything then worked like a charm. It was all very frustrating, and I still feel like I just did a random hack.
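Rather than moving toolkit directories around, newer TensorFlow releases can report which CUDA/cuDNN versions they were built against. A sketch, assuming TensorFlow 2.3 or later (get_build_info is absent from older releases, and the keys may be missing on CPU-only builds):

```python
import tensorflow as tf

info = tf.sysconfig.get_build_info()
print("built for CUDA:", info.get("cuda_version"))
print("built for cuDNN:", info.get("cudnn_version"))
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
```

Comparing these values against nvcc --version usually explains why TF "can't find" a toolkit.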

  9. cuda - How do I select which GPU to run a job on ... - Stack Overflow

    Sep 23, 2016 · The comma is not needed: CUDA_VISIBLE_DEVICES=5 python test_script.py will work, as well as CUDA_VISIBLE_DEVICES=1,2,3 python test_script.py for multi-GPU. In this case it doesn't make a difference because the variable allows lists, but for other cases it wouldn't.
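The same variable can also be set from inside a script; a minimal sketch, with the caveat that it only takes effect if set before CUDA is initialised (i.e. before the first CUDA call by torch or another framework):

```python
import os

# a single id, no comma needed
os.environ["CUDA_VISIBLE_DEVICES"] = "5"
# or several ids, comma-separated:
# os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3"

print(os.environ["CUDA_VISIBLE_DEVICES"])
```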

  10. python - How to use multiple GPUs in pytorch? - Stack Overflow

    Jan 16, 2019 · If you want to run your code only on specific GPUs (e.g. only on GPU ids 2 and 3), you can specify that using the CUDA_VISIBLE_DEVICES=2,3 variable when triggering the Python code from the terminal: CUDA_VISIBLE_DEVICES=2,3 python lstm_demo_example.py --epochs=30 --lr=0.001 and inside the code, leave it as:
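The in-code side of this pattern is commonly a DataParallel wrapper over whatever model the script builds. A sketch with a stand-in model (the Linear layer here is hypothetical, replacing the question's LSTM):

```python
import torch
import torch.nn as nn

# hypothetical model standing in for the LSTM from the question
model = nn.Linear(10, 2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
    # with CUDA_VISIBLE_DEVICES=2,3 set, the two physical GPUs
    # appear to the process as cuda:0 and cuda:1
    model = nn.DataParallel(model)
model = model.to(device)

out = model(torch.rand(4, 10).to(device))  # batch is split across visible GPUs
print(out.shape)
```

On a single-GPU or CPU-only machine the wrapper is simply skipped and the code still runs.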