Unveiling Python and PyTorch 2025 | Supercharge Your AI Journey from Scratch
PyTorch has become one of the most popular frameworks in modern artificial intelligence, powering breakthroughs across research and industry. In recent years, AI has advanced at a breathtaking pace, with remarkable progress in fields ranging from natural language processing to computer vision. Driving this revolution requires not only powerful algorithms but also a flexible and efficient development framework. Among the many options available, PyTorch stands out as the tool of choice for researchers and developers, thanks to its intuitive design, dynamic computation graph, and seamless hardware acceleration on NVIDIA GPUs (CUDA) and Apple Silicon (MPS).
Whether you are a student, an engineer, or a self-learner looking to transition into the AI field, mastering Python along with today's leading deep learning framework will enable you to quickly train, deploy, and optimize models. This article, built around a "from scratch" approach, guides you step by step through a complete AI development workflow while incorporating the latest tools and practical trends of 2025, making it easier to get started and significantly boosting your productivity.

What is PyTorch?
PyTorch is an open-source machine learning framework originally developed by Meta (formerly Facebook) AI Research and now governed by the PyTorch Foundation. Built on Python, it is widely used in two major areas:
- Tensor Computation
Similar to NumPy, but with one key difference: tensors can run directly on GPUs, dramatically accelerating large-scale mathematical operations such as matrix multiplication. Performance is often tens of times faster than on CPUs.
- Deep Neural Networks
Equipped with an automatic differentiation system (autograd), it can compute gradients automatically, which is essential for training models with backpropagation. Combined with the torch.nn module, developers can quickly build complex network architectures as if they were stacking building blocks.
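Both pillars fit in a few lines of code. The following minimal sketch multiplies two small matrices, then uses autograd to differentiate a simple function (the printed values hold for the exact tensors shown here):

```python
import torch

# Tensor computation: create tensors and multiply them, much like NumPy arrays
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.tensor([[5.0, 6.0], [7.0, 8.0]])
c = a @ b  # matrix multiplication
print(c)   # tensor([[19., 22.], [43., 50.]])

# The same tensors can be moved onto an accelerator when one is available
if torch.cuda.is_available():
    a = a.to("cuda")

# Automatic differentiation: autograd records operations and computes gradients
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x  # y = x^2 + 3x
y.backward()        # dy/dx = 2x + 3
print(x.grad)       # tensor(7.) at x = 2
```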
The framework's design philosophy emphasizes intuition, flexibility, and efficiency. By adopting a dynamic computation graph, model definition and execution happen simultaneously, giving developers an interactive and highly adaptable workflow.
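A short sketch of what "define-by-run" means in practice: ordinary Python control flow can depend on tensor values, because the graph is built while the code executes, and gradients flow through whichever branch actually ran.

```python
import torch

def dynamic_forward(x):
    # Plain Python control flow inside the model code:
    # the computation graph is recorded on the fly as this executes
    if x.sum() > 0:
        return (x * 2).sum()
    return (x * 3).sum()

x = torch.tensor([1.0, -0.5], requires_grad=True)
out = dynamic_forward(x)  # sum is 0.5 > 0, so the *2 branch runs
out.backward()
print(x.grad)  # tensor([2., 2.]), the gradient of the branch that executed
```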
Why Choose PyTorch?
By 2025, the advantages of this framework are more evident than ever:
- Intuitive Syntax: Built on Pythonic principles, the code is highly readable, almost like pseudocode, making it easy to learn and smooth to develop with.
- Flexible and Powerful: The dynamic computation graph is ideal for research and rapid prototyping. You can print tensors or modify network architectures on the fly without recompiling the entire model.
- Comprehensive Ecosystem: Extended with domain-specific libraries such as torchvision for computer vision, torchaudio for audio processing, and, for NLP, the Hugging Face transformers library (the older torchtext is now deprecated). Combined with a large open-source community, solutions to common problems are never far away.
- Production-Ready Deployment: With the maturity of TorchScript and torch.compile (a core feature of PyTorch 2.0), the framework is not only research-friendly but also seamlessly deployable to production environments.
- Cross-Device Support: The same code can run on CPUs, NVIDIA GPUs (CUDA), or Apple Silicon (MPS), making hardware acceleration simple and accessible.
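As a small illustration of the torch.compile API: it wraps an existing module or function and returns an optimized drop-in replacement with the same interface. The backend="eager" argument is used here only so the sketch runs without a compiler toolchain; in real use you would typically call torch.compile(model) with the default backend.

```python
import torch
from torch import nn

model = nn.Linear(4, 2)

# torch.compile returns a callable with the same interface as the module.
# backend="eager" skips code generation so this sketch runs anywhere;
# the default (inductor) backend needs a working C++/Triton toolchain.
compiled_model = torch.compile(model, backend="eager")

x = torch.randn(1, 4)
with torch.no_grad():
    assert torch.allclose(model(x), compiled_model(x))  # same results, same API
```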
Development Environment
Every breakthrough begins with the right tools. Let's prepare your environment for what's ahead.
Operating System
- macOS / Linux / Windows are all supported
Python Installation
- Recommended versions: Python 3.9 โ 3.12
- Check if Python is installed:
python3 --version
If Python is not installed, download and install it from the official Python website.
Install VSCode
- Download: Visual Studio Code
- Install the Python extension (officially provided by Microsoft)
Git (for version control)
- Check if Git is installed:
git --version
If Git is not installed, download it from the official Git website.
Installing PyTorch
Go to the official PyTorch website and choose the appropriate installation command based on your platform (operating system), package manager (pip/conda), and CUDA version (if you have an NVIDIA GPU).
Verify Installation and Device
Open your Python environment (Jupyter Notebook, VS Code, PyCharm, etc.) and run the following code to check your device:
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA (NVIDIA GPU) available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"CUDA device name: {torch.cuda.get_device_name(0)}")
print(f"MPS (Apple Silicon) available: {torch.backends.mps.is_available()}")

# Select which device to use
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"
print(f"Using device: {device}")
Project Structure
my_python_project/        # Project root directory
├── my_project_env/       # venv folder (not pushed to Git)
├── app.py                # Main application
├── requirements.txt      # List of dependencies
└── .gitignore            # Git ignore rules
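As a reference point, a minimal .gitignore for this layout might look like the following; the data/ and outputs/ entries match the folders used by the training example in this article:

```gitignore
# Virtual environment
my_project_env/

# Python bytecode
__pycache__/
*.pyc

# Downloaded datasets and saved models
data/
outputs/
```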
Code
Let's implement a classic example: training a simple neural network on the Fashion-MNIST dataset to classify images of clothing. This code demonstrates the full workflow and automatically leverages the available device for acceleration.
# app.py
import os

import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# 1. Set device
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"
print(f"Using {device} device")

# 2. Load and prepare dataset
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])
train_dataset = datasets.FashionMNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.FashionMNIST(root='./data', train=False, download=True, transform=transform)

train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)

# Check the shape of one batch
for X, y in train_loader:
    print(f"Shape of X [Batch, Channel, Height, Width]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break

# 3. Build neural network model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)  # Move model to device (GPU/MPS/CPU)
print(model)

# 4. Define loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# 5. Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)  # Move data to the same device

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")

# 6. Test loop
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Results: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
    return correct

# 7. Run training
epochs = 5
accuracy_history = []
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_loader, model, loss_fn, optimizer)
    acc = test(test_loader, model, loss_fn)
    accuracy_history.append(acc)
print("Training Done!")

# 8. Save trained model
os.makedirs("outputs", exist_ok=True)  # Make sure the outputs folder exists
torch.save(model.state_dict(), "outputs/model.pth")
print("Saved model state to outputs/model.pth")
Output
When you run the code above, you will see something like the following in the console, and a model file will be generated in the outputs folder.
Console Output:
Using cuda device # or: Using mps device / Using cpu device
Shape of X [Batch, Channel, Height, Width]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
NeuralNetwork(
(flatten): Flatten(start_dim=1, end_dim=-1)
(linear_relu_stack): Sequential(
(0): Linear(in_features=784, out_features=512, bias=True)
(1): ReLU()
(2): Linear(in_features=512, out_features=512, bias=True)
(3): ReLU()
(4): Linear(in_features=512, out_features=10, bias=True)
)
)
Epoch 1
-------------------------------
loss: 2.301106 [ 0/60000]
loss: 0.558233 [ 6400/60000]
...
Test Results:
Accuracy: 84.0%, Avg loss: 0.412345
Epoch 5
-------------------------------
...
Test Results:
Accuracy: 87.2%, Avg loss: 0.352123
Training Done!
Saved model state to outputs/model.pth
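To reuse the saved weights later, rebuild the same architecture and load the state dict back in. Below is a minimal, self-contained sketch: it saves a freshly initialized model first so it can run on its own, whereas in the real workflow the file would come from the training run above.

```python
import torch
from torch import nn

# Same architecture as the training script; loading a state_dict requires
# rebuilding a model with the identical structure first.
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        return self.linear_relu_stack(self.flatten(x))

device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"

# In the article's workflow this file is outputs/model.pth from the training
# run; here we save a freshly initialized model so the sketch is runnable.
torch.save(NeuralNetwork().state_dict(), "model.pth")

model = NeuralNetwork().to(device)
model.load_state_dict(torch.load("model.pth", map_location=device))
model.eval()  # switch to inference mode (good practice before evaluation)

# Classify a single dummy 28x28 grayscale "image"
with torch.no_grad():
    x = torch.randn(1, 1, 28, 28, device=device)
    pred = model(x).argmax(1)
print(f"Predicted class index: {pred.item()}")  # an integer from 0 to 9
```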
Conclusion
Through this "from scratch" PyTorch guide, we've uncovered the core tools and practical applications of AI development in 2025. You've learned:
- Why PyTorch is a key tool for modern AI: intuitive, flexible, and backed by a powerful ecosystem.
- How to set up your environment and leverage hardware acceleration in PyTorch: seamlessly switching between CPU, GPU (CUDA), and MPS with .to(device).
- A standard PyTorch project workflow: from data loading and model definition to training loops and saving models.
And this is only the beginning. The world of AI is vast and full of opportunities, waiting for you to explore more advanced models (CNNs, RNNs, Transformers), cutting-edge techniques (transfer learning, GANs), and real-world deployment strategies with PyTorch.
Now, you hold the key to accelerating development. Open your editor, harness the power of PyTorch, and start building your intelligent future!
