Installing Dependencies
Every project relies on external libraries, known as dependencies. In this section, we explain how to manage them with UV.
Installing Core Data Science Libraries
With your virtual environment active, adding the core data science libraries to your project is as simple as running a single command. UV’s add command not only downloads and installs packages into your environment but also records them in your project’s dependency list. For example, typing:
```
uv add numpy pandas matplotlib seaborn jupyterlab
```

will fetch each library and bring it into your isolated .venv folder. Behind the scenes, UV first updates your pyproject.toml, adding each package and its version constraint to the dependencies array under the [project] table. Then it resolves any sub-dependencies—such as python-dateutil for Pandas or cycler for Matplotlib—and writes a fully detailed lockfile (uv.lock) that freezes exact versions for reproducibility.
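As a sketch of what UV records, your pyproject.toml might then look something like this (the project name and the exact version constraints shown here are illustrative, not what UV will necessarily write for you):

```toml
[project]
name = "your-project"          # hypothetical project name
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "numpy>=2.0",
    "pandas>=2.2",
    "matplotlib>=3.9",
    "seaborn>=0.13",
    "jupyterlab>=4.2",
]
```

The human-readable constraints live here, while the exact resolved versions of these packages and all their sub-dependencies go into uv.lock.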
Once UV has recorded your needs, it installs every library into the environment’s own site-packages directory. On macOS or Linux this lives at:
```
your-project/.venv/lib/python3.x/site-packages/
```

and on Windows at:

```
your-project\.venv\Lib\site-packages\
```

Within that folder you’ll find each package’s code—modules, data files, and any compiled extensions—all neatly contained so they cannot collide with packages in other projects or your system installation. When you import NumPy or launch JupyterLab, Python knows to look in this private site-packages location first, ensuring your project always uses exactly the versions you declared.
Meanwhile, your source of truth for what the project depends on remains pyproject.toml (and the companion uv.lock). If you ever clone this project to a new machine or share it with teammates, running uv sync will read those files, reconstruct the exact same environment, and reinstall every library into a fresh .venv. This two-file approach—declarative dependencies in pyproject.toml plus pinned versions in uv.lock—strikes a balance between readability and reproducibility, giving beginners confidence that “it worked on my machine” will hold true everywhere.
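For example, a teammate reproducing the environment on a fresh machine would run something along these lines (the repository URL is a placeholder):

```shell
# Clone the project (placeholder URL), then rebuild the environment
git clone https://example.com/your-project.git
cd your-project

# Reads pyproject.toml and uv.lock, creates .venv, and installs
# the exact pinned versions recorded in the lockfile
uv sync
```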
Installing PyTorch
Now that you have the core data science libraries in place, it’s time to add PyTorch, the deep learning framework that lets you build neural networks and perform GPU-accelerated tensor computations. Installing PyTorch with UV follows the same simple pattern you’ve already used: declare the dependency with uv add and then fetch and install it into your isolated environment with uv sync. At the time of writing, the latest stable PyTorch release is 2.7.0, and for most students—especially those without a GPU—you can install the CPU-only build by running:
```
uv add torch torchvision torchaudio
uv sync
```

Under the hood, UV records these packages in your pyproject.toml (and in the accompanying uv.lock file), then installs their code into your project’s private site-packages directory (.venv/lib/python3.x/site-packages on macOS/Linux or .venv\Lib\site-packages on Windows) alongside the libraries you added earlier. When you import torch in your scripts or notebooks, Python will look in this folder first, ensuring you always use the exact versions you declared.
If you have an NVIDIA GPU and want to leverage its parallel computing power, you’ll need a CUDA-enabled build of PyTorch. CUDA (Compute Unified Device Architecture) is NVIDIA’s platform and programming model for offloading compute-intensive tasks to the GPU, which can accelerate deep learning training by orders of magnitude compared to a CPU alone. In contrast, a CPU-only environment executes every tensor operation on your machine’s central processor, which is perfectly fine for learning and small experiments but becomes slow as model and data size grow.
Before installing the GPU build, make sure your NVIDIA driver is up to date and supports the CUDA version you plan to use; the official PyTorch wheels bundle the CUDA runtime they need, so a separate CUDA toolkit install is usually unnecessary. The CUDA versions supported by the current binaries are listed on pytorch.org, so choose the build that matches your driver. To fetch the GPU-optimized wheels, you’ll invoke pip through UV, pointing it at PyTorch’s CUDA index:
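As a sketch, an install against the CUDA 11.8 index looks like this (cu118 is just one example; substitute the index URL that pytorch.org shows for your CUDA version):

```shell
# CUDA-enabled wheels come from PyTorch's own package index;
# the cu118 suffix selects the CUDA 11.8 builds
uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```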
This tells UV to pass the installation command into your active virtual environment, downloading the CUDA-enabled packages instead of the CPU-only variants. Once installation finishes, you can quickly verify GPU availability by running:
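A minimal check, run inside the activated environment:

```python
import torch

# Prints True when PyTorch can see a CUDA-capable GPU
# with compatible drivers, False otherwise
print(torch.cuda.is_available())
```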
If it returns True, your GPU and CUDA drivers are correctly configured and PyTorch is ready to use.
Not everyone has access to an NVIDIA GPU locally, but you can still explore CUDA builds of PyTorch using cloud-hosted notebooks like Google Colaboratory. Colab provides free, no-setup access to NVIDIA GPUs (and even TPUs) in a familiar Jupyter interface, making it ideal for students who want to experiment with GPU-accelerated training without installing anything on their own machine. Just select a GPU runtime under Runtime → Change runtime type, and you’re off to the races.
Whether you choose the simplicity of a CPU-only install or the power of GPU acceleration, UV’s unified commands—uv add, uv sync, and uv pip install—ensure that your PyTorch environment remains reproducible, isolated, and perfectly tailored to your hardware. Enjoy building and training your first deep learning models!
Activating and Deactivating the Virtual Environment
Once you’ve created your virtual environment, the next step is to “activate” it so that any Python commands you run—whether installing new packages, launching a Jupyter notebook, or running your script—use the isolated interpreter and libraries inside your project folder rather than the global Python installation. Activation is simply a small shell script that adjusts a few environment variables in your current terminal session.
On macOS and Linux, you activate the environment by sourcing the activate script that lives inside your .venv/bin directory. From your project root, you would run:
After you press Enter, you’ll notice your shell prompt changes—often it will prepend something like (.venv)—indicating that the environment is now active. Behind the scenes, this script has added .venv/bin to the front of your PATH variable and set a VIRTUAL_ENV variable pointing to the environment’s path. From this point on, typing commands like python, pip, or even jupyter lab launches the executables inside .venv/bin rather than any system-wide versions. This guarantees that when you import NumPy, Pandas, or PyTorch, you’re using the exact versions installed in this environment.
If you’re on Windows, activation works a little differently depending on your shell. In PowerShell, you run:
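The PowerShell command, again from the project root, is:

```powershell
# Runs the activation script; if scripts are blocked, you may first need:
#   Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
.venv\Scripts\Activate.ps1
```

In the classic Command Prompt, the equivalent is `.venv\Scripts\activate.bat`.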
Just like on Unix-like systems, your prompt will change—often to show the environment name in parentheses—and all subsequent Python-related commands occur within the virtual environment.
Deactivating the environment is equally simple. In any shell, once you’ve finished working, just run:
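That command is:

```shell
# Restores your previous PATH and unsets VIRTUAL_ENV
deactivate
```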
The same deactivate command works on Windows, in both PowerShell and Command Prompt. This reverses the changes made by activation: your PATH is reset to its previous state and the VIRTUAL_ENV variable is removed. Your prompt returns to normal, and subsequent calls to python or pip will once again use your system installation. If you forget to deactivate before closing the terminal window, no worries—the environment is only active in that session, so it automatically resets when you open a new terminal.
Because activation only affects the current terminal session, you can safely have multiple projects open in different terminals, each with its own environment active. Whenever you switch between projects, simply navigate to the project directory and re-run the activation command there. This habit ensures that each project uses the correct dependencies and interpreter version without any risk of cross-contamination.