Planet Python
Last update: February 20, 2026 09:44 PM UTC
February 20, 2026
Graham Dumpleton
Teaching an AI about Educates
The way we direct AI coding agents has changed significantly over the past couple of years. Early on, the interaction was purely conversational. You'd open a chat, explain what you wanted, provide whatever context seemed relevant, and hope the model could work with it. If it got something wrong or went down the wrong path, you'd correct it and try again. It worked, but it was ad hoc. Every session started from scratch. Every conversation required re-establishing context.
What's happened since then is a steady progression toward giving agents more structured, persistent knowledge to work with. Each step in that progression has made agents meaningfully more capable, to the point where they can now handle tasks that would have been unrealistic even a year ago. I've been putting these capabilities to work on a specific challenge: getting an AI to author interactive workshops for the Educates training platform. In my previous posts I talked about why workshop content is actually a good fit for AI generation. Here I want to explain how I've been making that work in practice.
How agent steering has evolved
The first real step beyond raw prompting was agent steering files. These are files you place in a project directory that give the agent standing instructions whenever it works in that context. Think of it as a persistent briefing document. You describe the project structure, the conventions to follow, the tools to use, and the agent picks that up automatically each time you interact with it. No need to re-explain the basics of your codebase every session. This was a genuine improvement, but the instructions are necessarily general-purpose. They tell the agent about the project, not about any particular domain of expertise.
The next step was giving agents access to external tools and data sources through protocols like the Model Context Protocol (MCP). Instead of the agent only being able to read and write files, it could now make API calls, query databases, fetch documentation, and interact with external services. The agent went from being a conversationalist that could edit code to something that could actually do things in the world. That opened up a lot of possibilities, but the agent still needed you to explain what to do and how to approach it.
Planning modes added another layer. Rather than the agent diving straight into implementation, it could first think through the approach, break a complex task into steps, and present a plan for review before acting. This was especially valuable for larger tasks where getting the overall approach right matters more than any individual step. The agent became more deliberate and less likely to charge off in the wrong direction.
Skills represent where things stand now, and they're the piece that ties the rest together. A skill is a self-contained package of domain knowledge, workflow guidance, and reference material that an agent can invoke when working on a specific type of task. Rather than the agent relying solely on what it learned during training, a skill gives it authoritative, up-to-date, structured knowledge about a particular domain. The agent knows when to use the skill, what workflow to follow, and which reference material to consult for specific questions.
Combine the advances in what LLMs are capable of with these structured ways of steering them, and agents are reaching a point where they are genuinely useful for real work.
Why model knowledge isn't enough
Large language models know something about most topics. If you ask an AI about Educates, it will probably have some general awareness of the project. But general awareness is not the same as the detailed, precise knowledge you need to produce correct output for a specialised platform.
Educates workshops have specific YAML structures for their configuration files. The interactive instructions use a system of clickable actions with particular syntax for each action type. There are conventions around how learners interact with terminals and editors, how dashboard tabs are managed, how Kubernetes resources are configured, and how data variables are used for parameterisation. Getting any of these wrong doesn't just produce suboptimal content, it produces content that simply won't work when someone tries to use it.
I covered the clickable actions system in detail in my last post. There are eight categories of actions covering terminal execution, file viewing and editing, YAML-aware modifications, validation, and more. Each has its own syntax and conventions. An AI that generates workshop content needs to use all of these correctly, not approximately, not most of the time, but reliably.
This is where skills make the difference. Rather than hoping the model has absorbed enough Educates documentation during its training to get these details right, you give it the specific knowledge it needs. The skill becomes the agent's reference manual for the domain, structured in a way that supports the workflow rather than dumping everything into context at once.
The Educates workshop authoring skill
The obvious approach would be to take the full Educates documentation and load it into the agent's context. But AI agents work within a finite context window, and that window is shared between the knowledge you give the agent and the working space it needs for the actual task. Generating a workshop involves reasoning about structure, producing instruction pages, writing clickable action syntax, and keeping track of what's been created so far. If you consume most of the context with raw documentation, there's not enough room left for the agent to do its real work. You have to be strategic about what goes in.
The skill I built for Educates workshop authoring is a deliberate distillation. At its core is a main skill definition of around 25 kilobytes that captures the essential workflow an agent follows when creating a workshop. It covers gathering requirements from the user, creating the directory structure, generating the workshop configuration file, writing instruction pages with clickable actions, and running through verification checklists at the end. This isn't a copy of the documentation. It's the key knowledge extracted and organised to drive the workflow correctly.
Supporting that are 20+ reference files totalling around 300 kilobytes. These cover specific aspects of the platform in the detail needed to get things right: the complete clickable actions system across all eight action categories, Kubernetes access patterns and namespace isolation, data variables for parameterising workshop content, language-specific references for Python and Java workshops, dashboard configuration and tab management, workshop image selection, setup scripts, and more.
The skill is organised around the workflow rather than being a flat dump of information. The main definition tells the agent what to do at each step, and the reference files are there for it to consult when it needs detail on a particular topic. If it's generating a terminal action, it knows to check the terminal actions reference for the correct syntax. If it's setting up Kubernetes access, it consults the Kubernetes reference for namespace configuration patterns. The agent pulls in the knowledge it needs when it needs it, keeping the active context focused on the task at hand.
There's also a companion skill for course design that handles the higher-level task of planning multi-workshop courses, breaking topics into individual workshops, and creating detailed plans for each one. But the workshop authoring skill is where the actual content generation happens, and it's the one I want to demonstrate.
Putting it to the test with Air
To show what the skill can do, I decided to use it to generate a workshop for the Air web framework. Air is a Python web framework written by friends in the Python community. It's built on FastAPI, Starlette, and HTMX, with a focus on simplicity and minimal JavaScript. What caught my attention about it as a test case is the claim on their website: "The first web framework designed for AI to write. Every framework claims AI compatibility. Air was architected for it." That's a bold statement, and using Air as the subject for this exercise is partly a way to see how that claim holds up in practice, not just for writing applications with the framework but for creating training material about it.
There's another reason Air makes for a good test. I haven't used the framework myself. I know the people behind it, but I haven't built anything with it. That means I can't fall back on my own knowledge to fill in gaps. The AI needs to research the framework and understand it well enough to teach it to someone, while the skill provides all the Educates platform knowledge needed to structure that understanding into a proper interactive workshop. It's a genuine test of both the skill and the model working together.
The process starts simply enough. You tell the agent what you want: "Create me a workshop for the Educates training platform introducing the Air web framework for Python developers." The phrasing matters here. The agent needs enough context in the request to recognise that a relevant skill exists and should be applied. Mentioning Educates in the prompt is what triggers the connection to the workshop authoring skill. Some agents also support invoking a skill directly through a slash command, which removes the ambiguity entirely. Either way, once the skill is activated, its workflow kicks in. It asks clarifying questions about the workshop requirements. Does it need an editor? (Yes, learners will be writing code.) Kubernetes access? (No, this is a web framework workshop, not a Kubernetes one.) What's the target difficulty and duration?
I'd recommend using the agent's planning mode for this initial step if it supports one. Rather than having the agent jump straight into generating files, planning mode lets it first describe what it intends to put in the workshop: the topics it will cover, the page structure, and the learning progression. You can review that plan and steer it before any files are created. It's a much better starting point than generating everything and then discovering the agent went in a direction you didn't want.
From those answers and the approved plan, it builds up the workshop configuration and starts generating content.
lab-python-air-intro/
├── CLAUDE.md
├── README.md
├── exercises/
│   ├── README.md
│   ├── pyproject.toml
│   └── app.py
├── resources/
│   └── workshop.yaml
└── workshop/
    ├── setup.d/
    │   └── 01-install-packages.sh
    ├── profile
    └── content/
        ├── 00-workshop-overview.md
        ├── 01-first-air-app.md
        ├── 02-air-tags.md
        ├── 03-adding-routes.md
        └── 99-workshop-summary.md
The generated workshop pages cover a natural learning progression:
- Overview, introducing Air and its key features
- Your First Air App, opening the starter app.py, running it, and viewing it in the dashboard
- Building with Air Tags, replacing the simple page with styled headings, lists, and a horizontal rule to demonstrate tag nesting, attributes, and composition
- Adding Routes, creating an about page with @app.page, a dynamic greeting page with path parameters, and navigation links between pages
- Summary, recapping concepts and pointing to further learning
What the skill produced is a complete workshop with properly structured instruction pages that follow the guided experience philosophy. Learners progress through the material entirely through clickable actions. Terminal commands are executed by clicking. Files are opened, created, and modified through editor actions. The workshop configuration includes the correct YAML structure, the right session applications are enabled, and data variables are used where content needs to be parameterised for each learner's environment.

The generated content covers the progression you'd want in an introductory workshop, starting from the basics and building up to more complete applications. At each step, the explanations provide context for what the learner is about to do before the clickable actions guide them through doing it. That rhythm of explain, show, do, observe, the pattern I described in my earlier posts, is maintained consistently throughout.
Is the generated workshop perfect and ready to publish as-is? Realistically, no. Although the AI can generate some pretty amazing content, it doesn't always get things exactly right. In this case three changes were needed before the workshop would run correctly.
The first was removing some unnecessary configuration from the pyproject.toml. The generated file included settings that attempted to turn the application into an installable package, which wasn't needed for a simple workshop exercise. This isn't a surprise. AI agents often struggle to generate correct configuration for uv because the tooling has changed over time and there's plenty of outdated documentation out there that leads models astray.
The second was that the AI generated the sample application as app.py rather than main.py, which meant the air run command in the workshop instructions had to be updated to specify the application name explicitly. A small thing, but the kind of inconsistency that would trip up a learner following the steps.
The third was an unnecessary clickable action. The generated instructions included an action for the learner to click to open the editor on the app.py file, but the editor would already have been displayed by a previous action. This one turned out to be a gap in the skill itself. When using clickable actions to manipulate files in the editor, the editor tab is always brought to the foreground as a side effect. The skill didn't make that clear enough, so the AI added a redundant step to explicitly show the editor tab.
That last issue is a good example of why even small details matter when creating a skill, and also why skills have an advantage over relying purely on model training. Because the skill can be updated at any time, fixing that kind of gap is straightforward. You edit the reference material, and every future workshop generation benefits immediately. You aren't dependent on waiting for some future LLM model release that happens to have seen more up-to-date documentation.
You can browse the generated files in the sample repository on GitHub. If you check the commit history you'll see how little had to be changed from what was originally generated.
Even with those fixes, the changes were minor. The overall structure was correct, the clickable actions worked, and the content provided a coherent learning path. The parts that would have taken hours of manual authoring to produce (writing correct clickable action syntax, getting YAML configuration right, maintaining consistent pacing across instruction pages) are all handled by the skill. A domain expert would still want to review the content, verify the technical accuracy of the explanations, and adjust the pacing or emphasis based on what they think matters most for learners. But the job shifts from writing everything from scratch to reviewing and refining what was generated.
What this means
Skills are a way of packaging expertise so that it can be reused. The knowledge I've accumulated about how to author effective Educates workshops over years of building the platform is now encoded in a form that an AI agent can apply. Someone who has never created an Educates workshop before could use this skill and produce content that follows the platform's conventions correctly. They bring the subject matter knowledge (or the AI researches it), and the skill provides the platform expertise.
That's what makes this different from just asking an AI to "write a workshop." The skill encodes not just facts about the platform but the workflow, the design principles, and the detailed reference material that turn general knowledge into correct, structured output. It's the difference between an AI that knows roughly what a workshop is and one that knows exactly how to build one for this specific platform.
Both the workshop authoring skill and the course design skill are available now, and I'm continuing to refine them as I use them. If the idea of guided, interactive workshops appeals to you, the Educates documentation is the place to start. And if you're interested in exploring the use of AI to generate workshops for Educates, do reach out to me.
February 20, 2026 09:39 PM UTC
Clickable actions in workshops
The idea of guided instruction in tutorials isn't new. Most online tutorials these days provide a click-to-copy icon next to commands and code snippets. It's a useful convenience. You see the command you need to run, you click the icon, and it lands in your clipboard ready to paste. Better than selecting text by hand and hoping you got the right boundaries.
But this convenience only goes so far. The instructions still assume you have a suitable environment set up on your own machine. The commands might reference tools you haven't installed, paths that don't exist in your setup, or configuration that differs from what the tutorial expects. The copy button solves the mechanics of getting text into your clipboard, but the real friction is in the gap between the tutorial and your environment. You end up spending more time troubleshooting your local setup than actually learning the thing the tutorial was supposed to teach you.
Hosted environments and the copy/paste problem
Online training platforms like Instruqt and Strigo improved on this by providing VM-based environments that are pre-configured and ready to go. You don't need to install anything locally. The environment matches what the instructions expect, so commands and paths should work as written. That eliminates the entire class of problems around "works on the tutorial author's machine but not on mine."
The interaction model, though, is still copy and paste. You read instructions in one panel, find the command you need, copy it, switch to the terminal panel, paste it, and run it. For code changes, you copy a snippet from the instructions and paste it into a file in the editor. It works, but it's a manual process that requires constant context switching between panels. Every copy and paste is a small interruption, and over the course of a full workshop those interruptions add up. Learners end up spending mental energy on the mechanics of following instructions rather than on the material itself.
When commands became clickable
Katacoda, before it was shut down by O'Reilly in 2022, included an improvement to this model. Commands embedded in the workshop instructions were clickable. Click on a command and it would automatically execute in the terminal session provided alongside the instructions. No copying, no pasting, no switching between panels. The learner reads the explanation, clicks the command, and watches the result appear in the terminal. The flow from reading to doing became much more seamless.
This was a meaningful step forward for terminal interactions specifically. But it only covered one part of the workflow. For code changes, editing configuration files, or any interaction that involved working with files in an editor, you were still back to the copy and paste model. The guided experience had a gap. Commands were frictionless, but everything else still required manual effort.
Educates and the fully guided experience
Educates takes the idea of clickable actions and extends it across the entire workshop interaction. The workshop dashboard provides instructions alongside live terminals and an embedded VS Code editor. Throughout the instructions, learners encounter clickable actions that cover not just running commands, but the full range of things you'd normally do in a hands-on technical workshop.
Terminal actions work the way Katacoda's did. Click on a command in the instructions and it runs in the terminal. But Educates goes further by providing a full set of editor actions as well. Clickable actions can open a file in the embedded editor, create a new file with specified content, select and highlight specific text within a file, and then replace that selected text with new content. You can append lines to a file, insert content at a specific location, or delete a range of lines. All of it is driven by clicking on actions in the instructions rather than manually editing files.
Educates also includes YAML-aware editor actions, which is significant because YAML editing is notoriously error-prone when done by hand. A misplaced indent or a missing space after a colon can break an entire configuration file, and debugging YAML syntax issues is not what anyone signs up for in a workshop about Kubernetes or application deployment. The YAML actions let you reference property paths like spec.replicas or spec.template.spec.containers[name=nginx] and set values, add items to sequences, or replace entries, all while preserving existing comments and formatting in the file.
Beyond editing, Educates provides examiner actions that run validation scripts to check whether the learner has completed a step correctly. In effect, the workshop can grade the learner's work and provide immediate feedback. If they missed a step or made an error, they find out right away rather than discovering it three steps later when something else breaks. There are also collapsible section actions for hiding optional content or hints until the learner needs them, and file transfer actions for downloading files from the workshop environment to the learner's machine or uploading files into it.
The end result is that learners can progress through an entire workshop without ever manually typing a command, editing a file by hand, or wondering whether they've completed a step correctly. They focus on understanding the concepts being taught while the clickable actions handle the mechanics. That changes the experience fundamentally. Instead of the workshop being something you push through, it becomes something that carries you forward.
The dashboard in action
To get a sense for what this looks like in practice, here are a couple of screenshots from an Educates workshop.

The instructions panel on the left contains a clickable action for running a command. When the learner clicks it, the command executes in the terminal panel and the output appears immediately. No copying, no pasting, no typing.

Here the embedded editor shows the result of a select-and-replace flow. The instructions guided the learner through highlighting specific text in a file and then replacing it with updated content, all through clickable actions. The learner sees exactly what changed and why, without needing to manually locate the right line and make the edit themselves.
How it works in the instructions
Workshop instructions in Educates are written in markdown. Clickable actions are embedded as specially annotated fenced code blocks where the language identifier specifies the action type and the body contains YAML configuration that controls what the action does.
For example, to guide a learner through updating an image reference in a Kubernetes deployment file, you might include two actions in sequence. The first selects the text that needs to change:
```editor:select-matching-text
file: ~/exercises/deployment.yaml
text: "image: nginx:1.19"
```
The second replaces the selected text with the new value:
```editor:replace-text-selection
file: ~/exercises/deployment.yaml
text: "image: nginx:latest"
```
When the learner clicks the first action, the matching text is highlighted in the editor so they can see exactly what will change. When they click the second, the replacement is applied. They understand the change being made because they see both the before and after states, but they don't need to manually find the right line, select the text, and type the replacement. The instructions guide them through it.
For terminal commands, the syntax is even simpler:
```terminal:execute
command: |-
echo "Hello from terminal:execute"
```
The YAML within each code block controls everything about the action: which file to operate on, what text to match or replace, which terminal session to use, and so on. The format is consistent across all action types. Once you understand the pattern of action type as the language identifier and YAML configuration as the body, authoring with actions is straightforward.
The value of removing friction
The progression from copy/paste tutorials to hosted environments to clickable commands to a fully guided experience like Educates is ultimately a progression toward removing every point where a learner might disengage. Each improvement eliminates another source of friction, another moment where someone might lose focus because they're fighting the tools instead of learning the material. When the mechanics of following instructions become invisible, learners stay engaged longer and absorb more of what the workshop is trying to teach.
In my previous post I discussed how this interactive format, combined with thoughtful use of AI for content generation, can produce workshop content that maintains consistent quality throughout. The clickable actions I've described here are what make that format possible. They're the mechanism that turns static instructions into a guided, interactive experience where the learner's attention stays on the concepts rather than the process.
In future posts I plan to write about how I'm using AI agent skills to automate the creation of Educates workshops, including the generation of all the clickable actions that drive the guided process along with the commentary and explanations the workshop instructions include. The goal is that the generated workshop runs out of the box, with the only remaining step being for the domain expert to validate the content and tweak where necessary. That has the potential to save a huge amount of time in creating workshops, making it practical to build high-quality guided learning experiences for topics that would otherwise never get the investment.
February 20, 2026 09:39 PM UTC
Real Python
The Real Python Podcast – Episode #285: Exploring MCP Apps & Adding Interactive UIs to Clients
How can you move your MCP tools beyond plain text? How do you add interactive UI components directly inside chat conversations? This week on the show, Den Delimarsky from Anthropic joins us to discuss MCP Apps and interactive UIs in MCP.
February 20, 2026 12:00 PM UTC
Graham Dumpleton
When AI content isn't slop
In my last post I talked about the forces reshaping developer advocacy. One theme that kept coming up was content saturation. AI has made it trivially easy to produce content, and the result is a flood of generic, shallow material that exists to fill space rather than help anyone. People have started calling this "AI slop," and the term captures something real. Recycled tutorials, SEO-bait blog posts, content that says nothing you couldn't get by asking a chatbot directly. There's a lot of it, and it's getting worse.
The backlash against AI slop is entirely justified. But I've been wondering whether it has started to go too far.
The backlash is justified
To be clear, the problem is real. You can see it every time you search for something technical. The same generic "getting started" guide, rewritten by dozens of different sites (or quite possibly the same AI), each adding nothing original. Shallow tutorials that walk through the basics without any insight from someone who has actually used the technology in practice. Content that was clearly produced to fill a content calendar rather than to answer a question anyone was actually asking.
Developers have become good at spotting this. Most can tell within a few seconds whether something was written by a person with genuine experience or generated to tick a box. That's a healthy instinct. The bar for content worth reading has gone up, and honestly, that's probably a good thing. There was plenty of low-effort content being produced by humans long before AI entered the picture.
But healthy skepticism can tip over into reflexive dismissal. "AI-generated" has become a label that gets applied broadly, and once it sticks, people stop evaluating the content on its merits. The assumption becomes that if AI was involved, the content can't be worth reading. That misses some important distinctions.
Not all AI content serves the same purpose
There are two very different ways to use AI for content. One is to mass-produce generic articles to flood search results or pad out a blog. The goal is volume, not value. Nobody designed the output with a particular audience in mind or thought carefully about what the content needed to achieve. That's slop, and the label fits.
The other is to use AI as a tool within a system you've designed, where the output has a specific structure, a specific audience, and a specific purpose. The human provides the intent and the domain knowledge. The AI helps execute within those constraints.
The problem with AI slop is not that AI generated it. The problem is that nobody designed it with care or purpose. There was no thought behind the structure, no domain expertise informing the content, no consideration for who would read it or what they'd take away from it. If you bring all of those things to the table, the output is a different thing entirely.
Workshop instructions aren't blog posts
I've been thinking about this because of my own project. Educates is an interactive training platform I've been working on for over five years (I mentioned it briefly in my earlier post when I started writing here again). It's designed for hands-on technical workshops where people learn by doing, not just by reading.
Anyone who has run a traditional workshop knows the problem. You give people a set of instructions, and half of them get stuck before they've finished the first exercise. Not because the concepts are hard, but because the mechanics are. They're copying long commands from a document, mistyping a path, missing a flag, getting an error that has nothing to do with what they're supposed to be learning. The experience becomes laborious. People switch off. They stop engaging with the material and start just trying to get through it.
Educates takes a different approach. Workshop instructions are displayed alongside live terminals and an embedded code editor. The instructions include things that learners can click on that perform actions for them. Click to run a command in the terminal. Click to open a file in the editor. Click to apply a code change. Click to run a test. The aim is to make the experience as frictionless as possible so that learners stay engaged throughout.
This creates a rhythm. You see code in context. You read an explanation of what it does and what needs to change. You click to apply the change. You click to run it and observe the result. At every step, learners are actively progressing through a guided flow rather than passively reading a wall of text. Their attention stays on the concepts being taught, not on the mechanics of following instructions. People learn more effectively because nothing about the process gives them a reason to disengage.
Where AI fits into this
Writing good workshop content by hand is hard. Not just because of the volume of writing, but because maintaining that engaging, well-paced flow across a full workshop takes sustained focus. It's one thing to write a good explanation for one section. It's another to keep that quality consistent across dozens of sections covering an entire topic. Humans get tired. Explanations become terse halfway through. Steps that should guide the learner smoothly start to feel rushed or incomplete. The very quality that makes workshops effective, keeping learners engaged from start to finish, is the hardest thing to sustain when you're writing it all by hand.
This is where AI, with the right guidance and steering, can actually do well. When you provide the content conventions for the platform, the structure of the workshop, and clear direction about the learning flow you want, AI can generate content that maintains consistent quality and pacing throughout. It doesn't get fatigued halfway through and start cutting corners on explanations. It follows the same pattern of explaining, showing, applying, and observing as carefully in section twenty as it did in section one.
That said, this only works because the content has a defined structure, a specific format, and a clear purpose. The human still provides the design and the domain expertise. The AI operates within those constraints. With review and iteration, the result can actually be superior to what most people would produce by hand for this kind of structured content. Not because AI is inherently better at explaining things, but because maintaining that engaging flow consistently across a full workshop is something humans genuinely struggle with.
Slop is a design problem, not a tool problem
The backlash against AI slop is well-founded. Content generated without intent, without structure, and without domain expertise behind it deserves to be dismissed. But the line should be drawn at intent and design, not at whether AI was involved in the process. Content that was designed with a clear purpose, structured for a specific use case, and reviewed by someone who understands the domain is not slop, regardless of how it was produced. Content that was generated to fill space with no particular audience in mind is slop, regardless of whether a human wrote it.
I plan to write more about Educates in future posts, including what makes the interactive workshop format effective and how it changes the way people learn. For now, the point is simpler. Before dismissing AI-generated content out of hand, it's worth asking what it was designed to do and whether it does that well.
And yes, this post was itself written with the help of AI, guided by the kind of intent, experience, and hands-on steering I've been talking about. The same approach I'm applying to generating workshop content. If the argument holds, it should hold here too.
February 20, 2026 12:00 AM UTC
February 19, 2026
Paolo Melchiorre
Django ORM Standalone⁽¹⁾: Querying an existing database
A practical step-by-step guide to using Django ORM in standalone mode to connect to and query an existing database using inspectdb.
February 19, 2026 11:00 PM UTC
PyBites
How Even Senior Developers Mess Up Their Git Workflow
There are few things in software engineering that induce panic quite like a massive git merge conflict.
You pull down the latest code, open your editor, and suddenly your screen is bleeding with <<<<<<< HEAD markers. Your logic is tangled with someone else’s, the CSS is conflicting, and you realise you just wasted hours building on top of outdated architecture.
It is easy to think this only happens to juniors, but it happens to us all. Case in point – this week it was the two of us butting… HEADs (get it?).
When you code in isolation, you get comfortable. You stop checking for open pull requests, you ignore issue trackers and you just start writing code. This is the trap I fell into.
And that is exactly how you break your application. It’s exactly how I broke our application!
If you want to avoid spending your weekend untangling a broken repository (ahem… like we did), you need to enforce these three non-negotiable git habits.
1. Stop Coding in a Vacuum and Use Issue Trackers
Don’t go rogue and start redesigning a codebase without talking to your team. It doesn’t matter if it’s a massive enterprise app or a two-person side project.
If two developers are working on the same views and templates without dedicated issue tickets, a collision is inevitable. You need to break generic ideas like “redesign the UI” into highly specific, granular issues (e.g., “fix this menu,” “change the nav bar colour”).
Communication is your first line of defence against code conflicts.
2. Check for Stale Pull Requests Before You Branch
Pulling the latest code from main is the baseline, but as I was painfully reminded, it isn’t enough.
Before you write a single line of code, you have to check for open pull requests. Your teammate might have a massive architectural change sitting in review that hasn’t hit production yet. If you branch off an old version of main while ignoring a pending PR, you are guaranteed to hit merge conflicts when you finally try to integrate your work.
Once your branch is merged, leave it alone. Don’t keep committing to a stale branch. Go ahead and create a brand new one for your next feature.
3. Master the Bailout Commands
Even with the best practices in place, mistakes happen. You might accidentally code a new feature directly on the main branch, or tangle your logic with a bug fix.
When things go wrong, you need to know how to safely extract your work. This is where advanced git commands become lifesavers. You need to know how to use git stash to temporarily park your changes, create a clean branch, and reapply them. You should also understand how to use git cherry-pick to pull specific historical commits out of a messy branch and into a clean one.
These tools give you the comfort to manipulate code without the fear of destroying the repository.
Bob and I got into a deep discussion about this exact issue after we, as I alluded to, broke every single one of these rules over the weekend.
We were working on our privacy-first book tracking app, Pybites Books. Because we hadn’t coded deeply together on the same codebase in a while, I was rusty and complacent. We didn’t use hyper-specific issues, I ignored an open pull request that was three weeks old, and we both changed the colour scheme independently.
It resulted in a massive merge conflict that required a lot of manual reconciliation, stashing, and cherry-picking to fix.
If you want to hear the full breakdown of our git mess, what went wrong, and how we saved the app, listen using the following links!
Listen to the Episode
– Julian
P.S. Check out the app that caused all of this drama! If you want a privacy-first way to track your reading without being farmed for data, head over to Pybites Books. We just shipped a massive new statistics dashboard (that survived the merge conflict!)
February 19, 2026 10:39 PM UTC
The Python Coding Stack
The Journey From LBYL to EAFP • [Club]
LBYL came more naturally to me in my early years of programming. It seemed to have fewer obstacles in those early stages, fewer tricky concepts.
And in my 10+ years of teaching Python, I also preferred teaching LBYL to beginners and delaying EAFP until later.
But over the years, as I came to understand Python’s psyche better, I gradually shifted my programming style—and then, my teaching style, too.
So, what are LBYL and EAFP? And which one is more suited to Python?
I’m running a series of three live workshops starting next week.
Each workshop is 2 hours long, so plenty of time to explore core Python topics:
#1 • Python’s Plumbing: Dunder Methods and Python’s Hidden Interface
#2 • Pythonic Iteration: Iterables, Iterators, itertools
#3 • To Inherit or Not? Inheritance, Composition, Abstract Base Classes, and Protocols
Read more and book your place here:
https://www.thepythoncodingstack.com/p/when-it-works-is-not-good-enough
Look Both Sides Before Crossing the Road
You should definitely look before you leap across a busy road…or any road, really. And programming also has a Look Before You Leap concept—that’s LBYL—when handling potential failure points in your code.
Let’s start by considering this basic example. You define a function that accepts a value and a list. The function adds the value to the list if the value is above a user-supplied threshold:
def add_value_above_threshold(value, threshold, data):
    if value >= threshold:
        data.append(value)

You can confirm this short code works as intended:
# ...
prices = []
add_value_above_threshold(12, 5, prices)
add_value_above_threshold(3, 5, prices)
add_value_above_threshold(9, 5, prices)
print(prices)

This code outputs the list with the two prices above the threshold:
[12, 9]

However, you want to ensure this can't happen:
# ...
products = {}
add_value_above_threshold(12, 5, products)

Now, products is a dictionary, but add_value_above_threshold() was designed to work with lists and not dictionaries:
Traceback (most recent call last):
...
add_value_above_threshold(12, 5, products)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
...
data.append(value)
^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'append'

One option is the look-before-you-leap (LBYL) approach:
def add_value_above_threshold(value, threshold, data):
    if not isinstance(data, list):
        print("Invalid format. 'data' must be a list")
        return
    if value >= threshold:
        data.append(value)

Now, the function prints a warning when you pass a dictionary, and it doesn't crash the program!
But this is too restrictive.
Let’s assume you decide to use a deque instead of a list:
from collections import deque
# ...
prices = deque()
add_value_above_threshold(12, 5, prices)
add_value_above_threshold(3, 5, prices)
add_value_above_threshold(9, 5, prices)
print(prices)

This code still complains that it wants a list and doesn't play ball:
Invalid format. 'data' must be a list
Invalid format. 'data' must be a list
Invalid format. 'data' must be a list
deque([])

But there's no reason why this code shouldn't work, since deque also has an .append() method.
You could change the call to isinstance() to include the deque data type—isinstance(data, list | deque)—but then there may be other data structures that are valid and can be used in this function. You don’t want to have to write them all.
If you're well-versed with the categories of data structures—perhaps because you devoured The Python Data Structure Categories series—then you might conclude you need to check whether the object is a MutableSequence, since all mutable sequences have an .append() method. You can import MutableSequence from collections.abc and use isinstance(data, MutableSequence). Now you're fine to use lists, deques, or any other mutable sequence.
This version fits better with Python’s duck-typing philosophy. It doesn’t restrict the function to a limited number of data types but to a category of data types. This category is defined by what the data types can do. In duck typing, you care about what an object can do rather than what it is. You can read more about duck typing in Python in this post: When a Duck Calls Out • On Duck Typing and Callables in Python
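To make that concrete, here's a small sketch of the version that checks against the MutableSequence abstract base class, as just described:

```python
from collections.abc import MutableSequence

def add_value_above_threshold(value, threshold, data):
    # Accept any mutable sequence (list, deque, ...) rather than just list.
    if not isinstance(data, MutableSequence):
        print("Invalid format. 'data' must be a mutable sequence")
        return
    if value >= threshold:
        data.append(value)
```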
However, you could still have other data types that have an .append() method but may not fully fit into the MutableSequence category. There’s no reason you should exclude those data types from working with your function.
Perhaps, you could use Python’s built-in hasattr() to check whether the object you pass has an .append() attribute. You’re now checking whether the object has the required attribute rather than what the object is.
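That check might look something like this, a small sketch of the hasattr() approach that cares only about the capability the function needs:

```python
def add_value_above_threshold(value, threshold, data):
    # Check for the capability we need rather than the type itself.
    if not hasattr(data, "append"):
        print("Provided data structure does not support appending values.")
        return
    if value >= threshold:
        data.append(value)
```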
But if you’re going through all this trouble, you can go a step further.
Just Go For It and See What Happens
Why not just run the line of code that includes data.append() and see what happens? Ah, but you don’t want the code to fail if you use the wrong data type—you only want to print a warning, say.
That's where the try...except construct comes in:

def add_value_above_threshold(value, threshold, data):
    if value < threshold:  # inequality flipped to avoid nesting
        return
    try:
        data.append(value)
    except AttributeError:
        print(
            "Provided data structure does not support appending values."
        )

This is the Easier to Ask for Forgiveness than Permission (EAFP) philosophy. Just try the code. If it doesn't work, you can then deal with it in the except block. Now, this fits even more nicely with Python's duck typing philosophy. You're asking the program whether data can append a value. It doesn't matter what data is; what matters is whether it can append a value.
You don’t have to think about all the valid data types or which category they fall into. And rather than checking whether the data type has the .append() attribute first, you just try to run the code and deal with the consequences later. That’s why it’s easier to ask for forgiveness than permission.
But don’t use this philosophy when crossing a busy road. Stick with “look before you leap” there!
Another Example Comparing LBYL and EAFP
February 19, 2026 10:26 PM UTC
Django Weblog
Plan to Adopt Contributor Covenant 3 as Django’s New Code of Conduct
Last month we announced our plan to adopt Contributor Covenant 3 as Django's new Code of Conduct through a multi-step process. Today we're excited to share that we've completed the first step of that journey!
What We've Done
We've merged new documentation that outlines how any member of the Django community can propose changes to our Code of Conduct and related policies. This creates a transparent, community-driven process for keeping our policies current and relevant.
The new process includes:
- Proposing Changes: Anyone can open an issue with a clear description of their proposed change and the rationale behind it.
- Community Review: The Code of Conduct Working Group will discuss proposals in our monthly meetings and may solicit broader community feedback through the forum, Discord, or DSF Slack.
- Approval and Announcement: Once consensus is reached, changes are merged and announced to the community. Changes to the Code of Conduct itself will be sent to the DSF Board for final approval.
How You Can Get Involved
We welcome and encourage participation from everyone in the Django community! Here's how you can engage with this process:
- Share Your Ideas: If you have suggestions for improving our Code of Conduct or related documentation, open an issue on our GitHub repo.
- Join the Discussion: Participate in community discussions about proposed changes on the forum, Discord, or DSF Slack. Keep it positive, constructive, and respectful.
- Stay Informed: Watch the Code of Conduct repository to follow along with proposed changes and discussions.
- Provide Feedback: Not comfortable with GitHub? You can also reach out via conduct@djangoproject.com, or look for anyone with the Code of Conduct WG role on Discord.
What's Next
We're moving forward with the remaining steps of our plan:
- Step 2 (target: March 15): Update our Enforcement Manual, Reporting Guidelines, and FAQs via pull request 91.
- Step 3 (target: April 15): Adopt the Contributor Covenant 3 with proposed changes from the working group.
Each step will have its own pull request where the community can review and provide feedback before we merge. We're committed to taking the time needed to incorporate your input thoughtfully.
Thank you for being part of this important work to make Django a more welcoming and inclusive community for everyone!
February 19, 2026 03:51 PM UTC
Real Python
Quiz: Python's tuple Data Type: A Deep Dive With Examples
In this quiz, you’ll test your understanding of Python tuples.
By working through this quiz, you’ll revisit various ways to interact with Python tuples. You’ll also practice recognizing common features and gotchas.
February 19, 2026 12:00 PM UTC
PyCharm
LangChain Python Tutorial: 2026’s Complete Guide
If you’ve read the blog post How to Build Chatbots With LangChain, you may want to know more about LangChain. This blog post will dive deeper into what LangChain offers and guide you through a few more real-world use cases. And even if you haven’t read the first post, you might still find the info in this one helpful for building your next AI agent.
LangChain fundamentals
Let’s have a look at what LangChain is. LangChain provides a standard framework for building AI agents powered by LLMs, like the ones offered by OpenAI, Anthropic, Google, etc., and is therefore the easiest way to get started. LangChain supports most of the commonly used LLMs on the market today.
LangChain is a high-level tool built on LangGraph, which provides a low-level framework for orchestrating the agent and runtime and is suitable for more advanced users. Beginners and those who only need a simple agent build are definitely better off with LangChain.
We’ll start by taking a look at several important components in a LangChain agent build.
Agents
Agents are what we are building. They combine LLMs with tools to create systems that can reason about tasks, decide which tools to use for which steps, analyze intermediate results, and work towards solutions iteratively.
Creating an agent is as simple as using the `create_agent` function with a few parameters:
from langchain.agents import create_agent

agent = create_agent(
    "gpt-5",
    tools=tools,
)
In this example, the LLM used is GPT-5 by OpenAI. In most cases, the provider of the LLM can be inferred. To see a list of all supported providers, head over here.
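If you'd rather be explicit than rely on provider inference, you can name the provider when initialising the model. A small sketch, assuming the OpenAI integration package is installed and an API key is configured:

```python
from langchain.chat_models import init_chat_model

# Name the provider explicitly instead of relying on inference.
model = init_chat_model("gpt-5", model_provider="openai")
```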
LangChain Models: Static and Dynamic
There are two types of agent models that you can build: static and dynamic. Static models, as the name suggests, are straightforward and more common. The agent is configured in advance during creation and remains unchanged during execution.
import os
from langchain.chat_models import init_chat_model
os.environ["OPENAI_API_KEY"] = "sk-..."
model = init_chat_model("gpt-5")
print(model.invoke("What is PyCharm?"))
Dynamic models allow you to build an agent that can switch models during runtime based on customized logic. Different models can then be picked based on the current state and context. For example, we can use ModelFallbackMiddleware (described in the Middleware section below) to have a backup model in case the default one fails.
from langchain.agents import create_agent
from langchain.agents.middleware import ModelFallbackMiddleware

agent = create_agent(
    model="gpt-4o",
    tools=[],
    middleware=[
        ModelFallbackMiddleware(
            "gpt-4o-mini",
            "claude-3-5-sonnet-20241022",
        ),
    ],
)
Tools
Tools are important parts of AI agents. They make AI agents effective at carrying out tasks that involve more than just text as output, which is a fundamental difference between an agent and an LLM. Tools allow agents to interact with external systems – such as APIs, databases, or file systems. Without tools, agents would only be able to provide text output, with no way of performing actions or iteratively working their way toward a result.
LangChain provides decorators for systematically creating tools for your agent, making the whole process more organized and easier to maintain. Here are a couple of examples:
Basic tool
@tool
def search_db(query: str, limit: int = 10) -> str:
    """Search the customer database for records matching the query."""
    ...
    return f"Found {limit} results for '{query}'"
Tool with a custom name
@tool("pycharm_docs_search", return_direct=False)
def pycharm_docs_search(q: str) -> str:
"""Search the local FAISS index of JetBrains PyCharm documentation and return relevant passages."""
...
docs = retriever.get_relevant_documents(q)
return format_docs(docs)
Middleware
Middleware provides ways to define the logic of your agent and customize its behavior. For example, there is middleware that can monitor the agent at runtime, assist with prompting and tool selection, or support advanced use cases such as guardrails.
Here are a few examples of built-in middleware. For the full list, please refer to the LangChain middleware documentation.
| Middleware | Description |
| --- | --- |
| Summarization | Automatically summarize the conversation history when approaching token limits. |
| Human-in-the-loop | Pause execution for human approval of tool calls. |
| Context editing | Manage conversation context by trimming or clearing tool uses. |
| PII detection | Detect and handle personally identifiable information (PII). |
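As a rough sketch of how a built-in middleware from the table might be attached, here's the summarization case. The SummarizationMiddleware class name and its parameters are my assumption here, so check the middleware documentation for the exact API:

```python
from langchain.agents import create_agent
from langchain.agents.middleware import SummarizationMiddleware  # assumed class name

agent = create_agent(
    model="gpt-4o",
    tools=[],
    middleware=[
        # Summarize older messages as the conversation approaches the token limit.
        SummarizationMiddleware(model="gpt-4o-mini"),  # parameters are assumptions
    ],
)
```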
Real-world LangChain use cases
LangChain use cases cover a varied range of fields, with common instances including:
AI-powered chatbots
When we think of AI agents, we often think of chatbots first. If you’ve read the How to Build Chatbots With LangChain blog post, then you’re already up to speed about this use case. If not, I highly recommend checking it out.
Document question answering systems
Another real-world use case for LangChain is a document question answering system. For example, companies often have internal documents and manuals that are rather long and unwieldy. A document question answering system provides a quick way for employees to find the info they need within the documents, without having to manually read through each one.
To demonstrate, we’ll create a script to index the PyCharm documentation. Then we’ll create an AI agent that can answer questions based on the documents we indexed. First let’s take a look at our tool:
@tool("pycharm_docs_search")
def pycharm_docs_search(q: str) -> str:
"""Search the local FAISS index of JetBrains PyCharm documentation and return relevant passages."""
# Load vector store and create retriever
embeddings = OpenAIEmbeddings(
model=settings.openai_embedding_model, api_key=settings.openai_api_key
)
vector_store = FAISS.load_local(
settings.index_dir, embeddings, allow_dangerous_deserialization=True
)
k = 4
retriever = vector_store.as_retriever(
search_type="mmr", search_kwargs={"k": k, "fetch_k": max(k * 3, 12)}
)
docs = retriever.invoke(q)
We use a FAISS vector store with embeddings provided by OpenAI. The documentation is embedded ahead of time, so when the tool is called it can run a similarity search and return the most relevant passages.
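The indexing step that builds that FAISS store isn't shown in the excerpt above, but it looks roughly like the following sketch. The loader, source URL, chunking parameters, and output directory are assumptions for illustration; the real project's indexing script may differ:

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the documentation pages (hypothetical source URL) and split them into chunks.
docs = WebBaseLoader("https://www.jetbrains.com/help/pycharm/").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150).split_documents(docs)

# Embed the chunks and persist the FAISS index to the directory the tool later loads from.
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector_store = FAISS.from_documents(chunks, embeddings)
vector_store.save_local("pycharm_docs_index")
```

With the index in place, the agent side wires the search tool into a small command-line entry point: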
def main():
    parser = argparse.ArgumentParser(
        description="Ask PyCharm docs via an Agent (FAISS + GPT-5)"
    )
    parser.add_argument("question", type=str, nargs="+", help="Your question")
    parser.add_argument(
        "--k", type=int, default=6, help="Number of documents to retrieve"
    )
    args = parser.parse_args()
    question = " ".join(args.question)

    system_prompt = """You are a helpful assistant that answers questions about JetBrains PyCharm using the provided tools.
Always consult the 'pycharm_docs_search' tool to find relevant documentation before answering.
Cite sources by including the 'Source:' lines from the tool output when useful. If information isn't found, say you don't know."""

    agent = create_agent(
        model=settings.openai_chat_model,
        tools=[pycharm_docs_search],
        system_prompt=system_prompt,
        response_format=ToolStrategy(ResponseFormat),
    )

    result = agent.invoke({"messages": [{"role": "user", "content": question}]})
    print(result["structured_response"].content)
System prompts are provided to the LLM together with the user's input prompt. We are using OpenAI as the LLM provider in this example, and we'll need an API key from them. Head to this page to check out OpenAI's integration documentation. When creating the agent, we configure its `model`, `tools`, and `system_prompt`.
For the full scripts and project, see here.
Content generation tools
Another example is an agent that generates text based on content fetched from other sources. For instance, we might use this when we want to generate marketing content with info taken from documentation. In this example, we’ll pretend we’re doing marketing for Python and creating a newsletter for the latest Python release.
In tools.py, a tool is set up to fetch the relevant page, parse it into a structured format, and extract the highlights we need.
@tool("fetch_python_whatsnew", return_direct=False)
def fetch_python_whatsnew() -> str:
"""
Fetch the latest "What's New in Python" article and return a concise, cleaned
text payload including the URL and extracted section highlights.
The tool ignores the input argument.
"""
index_html = _fetch(BASE_URL)
latest = _find_latest_entry(index_html)
if not latest:
return "Could not determine latest What's New entry from the index page."
article_html = _fetch(latest.url)
highlights = _extract_highlights(article_html)
return f"URL: {latest.url}\nVERSION: {latest.version}\n\n{highlights}"
The agent itself is defined in agent.py:
SYSTEM_PROMPT = (
    "You are a senior Product Marketing Manager at the Python Software Foundation. "
    "Task: Draft a clear, engaging release marketing newsletter for end users and developers, "
    "highlighting the most compelling new features, performance improvements, and quality-of-life "
    "changes in the latest Python release.\n\n"
    "Process: Use the tool to fetch the latest 'What's New in Python' page. Read the highlights and craft "
    "a concise newsletter with: (1) an attention-grabbing subject line, (2) a short intro paragraph, "
    "(3) 4–8 bullet points of key features with user benefits, (4) short code snippets only if they add clarity, "
    "(5) a 'How to upgrade' section, and (6) links to official docs/changelog. Keep it accurate and avoid speculation."
)

...

def run_newsletter() -> str:
    load_dotenv()
    agent = create_agent(
        model=os.getenv("OPENAI_MODEL", "gpt-4o"),
        tools=[fetch_python_whatsnew],
        system_prompt=SYSTEM_PROMPT,
        # response_format=ToolStrategy(ResponseFormat),
    )
    ...
As before, we provide a system prompt and the API key for OpenAI to the agent.
For the full scripts and project, see here.
Advanced LangChain concepts
LangChain’s more advanced features can be extremely useful when you’re building a more sophisticated AI agent. Not all AI agents require these extra elements, but they are commonly used in production. Let’s look at some of them.
MCP adapter
The Model Context Protocol (MCP) lets you add extra tools and capabilities to an AI agent, and it has become increasingly popular among AI agent builders and enthusiasts alike.
LangChain's MCP adapters package (langchain-mcp-adapters) provides a MultiServerMCPClient class that lets an AI agent connect to one or more MCP servers. For example:
from langchain_mcp_adapters.client import MultiServerMCPClient
client = MultiServerMCPClient(
    {
        "postman-server": {
            "type": "http",
            "url": "https://mcp.eu.postman.com",
            "headers": {
                "Authorization": "Bearer ${input:postman-api-key}"
            },
        }
    }
)
all_tools = await client.get_tools()
The above connects to the Postman MCP server in the EU with an API key.
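Because get_tools() is a coroutine, the client is normally used from async code. A minimal sketch of handing the fetched tools to an agent might look like this (the model name, the example question, and the use of ainvoke are our assumptions, not taken from the article):

import asyncio

from langchain.agents import create_agent
from langchain_mcp_adapters.client import MultiServerMCPClient


async def main() -> None:
    client = MultiServerMCPClient({
        "postman-server": {
            "type": "http",
            "url": "https://mcp.eu.postman.com",
            "headers": {"Authorization": "Bearer ${input:postman-api-key}"},
        }
    })
    tools = await client.get_tools()
    # The MCP tools plug into create_agent like any other LangChain tools.
    agent = create_agent(model="gpt-4o", tools=tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "List my Postman workspaces."}]}
    )
    print(result["messages"][-1].content)


asyncio.run(main())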
Guardrails
As with many AI technologies, the behavior of an AI agent is non-deterministic because its logic is not pre-determined. Guardrails are necessary for managing that behavior and ensuring it stays policy-compliant.
LangChain middleware can be used to set up specific guardrails. For example, you can use PII detection middleware to protect personal information or human-in-the-loop middleware for human verification. You can even create custom middleware for more specific guardrail policies.
For instance, you can use the `@before_agent` or `@after_agent` decorators to declare guardrails for the agent’s input or output. Below is an example of a code snippet that checks for banned keywords:
from typing import Any

from langchain.agents.middleware import before_agent

banned_keywords = ["kill", "shoot", "genocide", "bomb"]


@before_agent(can_jump_to=["end"])
def content_filter(state) -> dict[str, Any] | None:
    """Block requests containing banned keywords."""
    # The hook receives the agent state, which carries the message list.
    first_message = state["messages"][0]
    content = first_message.content.lower()

    # Check for banned keywords
    for keyword in banned_keywords:
        if keyword in content:
            return {
                "messages": [{
                    "role": "assistant",
                    "content": "I cannot process your requests due to inappropriate content."
                }],
                "jump_to": "end",
            }

    return None


from langchain.agents import create_agent

agent = create_agent(
    model="gpt-4o",
    tools=[search_tool],
    middleware=[content_filter],
)

# This request will be blocked
result = agent.invoke({
    "messages": [{"role": "user", "content": "How to make a bomb?"}]
})
For more details, check out the documentation here.
Testing
Just like in other software development cycles, testing needs to be performed before we can start rolling out AI agent products. LangChain provides testing tools for both unit tests and integration tests.
Unit tests
Just like in other applications, unit tests are used to test out each part of the AI agent and make sure it works individually. The most helpful tools used in unit tests are mock objects and mock responses, which help isolate the specific part of the application you’re testing.
LangChain provides GenericFakeChatModel, a fake chat model that replays canned responses. You hand it an iterator of responses, and each invocation returns the next one. For example:
from langchain_core.language_models.fake_chat_models import GenericFakeChatModel
from langchain_core.messages import AIMessage

responses = iter([AIMessage(content="Hi there!"), AIMessage(content="Pong."), AIMessage(content="Goodbye!")])
model = GenericFakeChatModel(messages=responses)

print(model.invoke("Hello").content)  # -> "Hi there!"
print(model.invoke("Ping").content)   # -> "Pong."
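In a test suite, the fake model lets you exercise code that expects a chat model without touching the network. A small pytest-style sketch (the summarize function is hypothetical, just something to test):

from langchain_core.language_models.fake_chat_models import GenericFakeChatModel
from langchain_core.messages import AIMessage


def summarize(model, text: str) -> str:
    """Hypothetical unit under test: asks the model for a one-line summary."""
    return model.invoke(f"Summarize in one line: {text}").content


def test_summarize_returns_model_text():
    fake = GenericFakeChatModel(messages=iter([AIMessage(content="A short summary.")]))
    assert summarize(fake, "Some long article text...") == "A short summary."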
Integration tests
Once we’re sure that all parts of the agent work individually, we have to test whether they work together. For an AI agent, this means testing the trajectory of its actions. To do so, LangChain provides another package: AgentEvals.
AgentEvals provides two main evaluators to choose from:
- Trajectory match – A reference trajectory is required and is compared against the trajectory the agent actually produced. For this comparison, you can choose from four different matching modes (see the sketch below).
- LLM judge – An LLM judge can be used with or without a reference trajectory; it evaluates whether the resulting trajectory is on the right path.
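As a rough sketch of the trajectory-match flavour, loosely based on the AgentEvals README (treat the exact import path, keyword names, and message format as assumptions): you create an evaluator with one of the four modes and hand it the agent's actual trajectory plus a hand-written reference.

from agentevals.trajectory.match import create_trajectory_match_evaluator

# One of "strict", "unordered", "subset", or "superset".
evaluator = create_trajectory_match_evaluator(trajectory_match_mode="superset")

reference_trajectory = [
    {"role": "user", "content": "What's new in the latest Python release?"},
    {"role": "assistant", "tool_calls": [{"function": {"name": "fetch_python_whatsnew", "arguments": "{}"}}]},
    {"role": "tool", "content": "URL: https://docs.python.org/3/whatsnew/3.14.html"},
    {"role": "assistant", "content": "Here are the highlights of the release."},
]

# In a real test this would be agent.invoke(...)["messages"]; we reuse the
# reference here so the example is self-contained.
actual_trajectory = reference_trajectory

evaluation = evaluator(outputs=actual_trajectory, reference_outputs=reference_trajectory)
print(evaluation)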
LangChain support in PyCharm
With LangChain, you can develop an AI agent that suits your needs in no time. However, to be able to effectively use LangChain in your application, you need an effective debugger. In PyCharm, we have the AI Agents Debugger plugin, which allows you to power up your experience with LangChain.
If you don’t yet have PyCharm, you can download it here.
Using the AI Agents Debugger is very straightforward. Once you install the plug-in, it will appear as an icon on the right-hand side of the IDE.
When you click on this icon, a side window will open with text saying that no extra code is needed – just run your agent and traces will be shown automatically.
As an example, we will run the content generation agent that we built above. If you need a custom run configuration, you will have to set it up now by following this guide on custom run configurations in PyCharm.
Once it is done, you can review all the input prompts and output responses at a glance. To inspect the LangGraph, click on the Graph button in the top-right corner.
The LangGraph view is especially useful if you have an agent that has complicated steps or a customized workflow.
Summing up
LangChain is a powerful tool for building AI agents that work for many use cases and scenarios. It’s built on LangGraph, which provides low-level orchestration and runtime customization, as well as compatibility with a vast variety of LLMs on the market. Together, LangChain and LangGraph set a new industry standard for developing AI agents.
February 19, 2026 10:40 AM UTC
February 18, 2026
Python Engineering at Microsoft
Python Environments Extension for VS Code
Introducing the Python Environments Extension for VS Code
Python development in VS Code now has a unified, streamlined workflow for managing environments, interpreters, and packages. The Python Environments extension brings consistency and clarity to a part of Python development that has historically been fragmented across tools like venv, conda, pyenv, poetry, and pipenv. After a year in preview—refined through community feedback and real-world usage—the extension is being rolled out for general availability. Users can expect to have all environment workflows automatically switched to using the environments extension in the next few weeks or can opt in immediately with the setting python.useEnvsExtension. The extension works alongside the Python extension and requires no setup—open a Python file and your environments are discovered automatically.
A Unified Environment Experience
The extension automatically discovers environments from all major managers:
- venv
- conda
- pyenv
- poetry
- pipenv
- System Python installs
Discovery is powered by PET (Python Environment Tool), a fast Rust-based scanner that finds environments reliably across platforms by checking your PATH, known installation locations, and configurable search paths. PET already powers environment discovery in the Python extension today, so this is the same proven engine—now with a dedicated UI built around it. You can create, delete, switch, and manage environments from a single UI—regardless of which tool created them.
For most users, everything just works out of the box. If you have environments in non-standard locations, you can configure workspace-level search paths with glob patterns or set global search paths for shared directories outside your workspace.
Faster Environment Creation with uv
If uv is installed, the extension uses it automatically for creating venv environments and installing packages—significantly faster than standard tools, especially in large projects. This is enabled by default via the python-envs.alwaysUseUv setting.
Quick Create and Custom Create
Getting a new environment up and running is now just a click away. Quick Create (the + button in the Environment Managers view) builds an environment using your default manager, the latest Python version, and any workspace dependencies it finds in requirements.txt or pyproject.toml. You get a working environment in seconds.
When you need more control, Custom Create (via Python: Create Environment in the Command Palette) lets you choose your environment manager, Python version, environment name, and which dependency files to install from. Both venv and conda support creating environments directly from VS Code; for other managers like pyenv, poetry, and pipenv, the extension discovers environments you create with their respective CLI tools.
Python Projects: Environments That Match Your Code Structure
Python Projects let you map environments to specific folders or files. This solves common problems in monorepos, multi-service workspaces, mixed script/package repositories, and multi-version testing scenarios.
Adding a project is straightforward: right-click a folder in the Explorer and select Add as Python Project, or use Auto Find to discover folders with pyproject.toml or setup.py. Once a folder is a project, you can assign it its own environment—and that environment is used automatically for running, debugging, testing, and terminal activation within that folder.
Portable by design
When you assign an environment to a project, the extension stores the environment manager type—not hardcoded interpreter paths. This means your .vscode/settings.json is portable across machines, operating systems, and teammates. No more fixing broken paths after cloning a repo. Teammates can commit the settings, clone the workspace, run Quick Create, and be up and running immediately.
Scaffold new projects from templates
The Python Envs: Create New Project from Template command scaffolds a new project with the right structure. Choose between a Package template (with pyproject.toml, package directory, and tests) or a Script template (a standalone .py file with inline dependency metadata using PEP 723).
Multi-Project Testing
The Python extension now uses the Python Environments API to support multi-project testing. Each project gets its own test root, its own interpreter, and its own test discovery settings. This prevents cross-contamination between services and ensures each project uses the correct environment. For details, see the Multi-Project Testing guide.
Smarter Terminal Activation
The extension introduces a new terminal activation model with three modes, controlled by the python-envs.terminal.autoActivationType setting:
- shellStartup – Activates your environment using VS Code terminal integration, so it’s ready before the first command runs. This is especially important if you use GitHub Copilot to run terminal commands, and will become the default in a future release.
- command – Runs the activation command visibly in the terminal after it opens (currently the default).
- off – No automatic activation, for users who prefer manual control.
You can also open a terminal with any environment activated by right-clicking an environment in the Environment Managers view and selecting Open in Terminal.
Predictable Interpreter Selection
Interpreter selection now follows a simple, deterministic priority order:
- A project’s configured environment manager
- The workspace’s default environment manager (only if you’ve explicitly set it)
- python.defaultInterpreterPath (legacy)
- Auto-discovery (.venv → system Python)
Only settings you explicitly configure are used. Defaults never override your choices. And importantly, opening a workspace never writes to your settings—the extension only modifies settings.json when you make an explicit change like selecting an interpreter or creating an environment.
Built-In Package Management
You can manage packages directly from the Environment Managers view—search and install packages, uninstall packages, or install from requirements.txt, pyproject.toml, or environment.yml. The extension automatically uses the correct package manager for each environment type (pip for venv, conda for conda environments, or uv pip when uv is enabled).
.env File Support
For developers who use environment variables during development, the extension supports .env files. Set python.terminal.useEnvFile to true and your variables are injected into terminals when they’re created, which is great for development credentials and configuration that shouldn’t live in source control. With that setting enabled, you can also point the extension at a specific file via python.envFilePath.
Extensible by Design
The Python Environments extension isn’t just for the built-in managers. Its API is designed so that any environment or package manager can build an extension that plugs directly into the Python sidebar, appearing alongside venv, conda, and the rest. The community is already building on this—check out the Pixi Extension as an example of what’s possible.
Known Limitations
There are a couple of areas where integration is still catching up. We want to be upfront so you know what to expect. For the full list, see known issues in the documentation. If you run into an issue, report a bug — your VS Code version, Python extension version, and steps to reproduce help us resolve issues faster. If you need to get back to a stable state quickly, you can disable the extension without affecting the core Python extension.
What’s Next
This is just the beginning. The Python Environments extension lays the foundation for a more integrated, intelligent Python development experience in VS Code. Try the extension, share your feedback, and help us shape the future of Python tooling in VS Code.
Try out these new improvements by downloading the Python Environments extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python Environments Extension for VS Code appeared first on Microsoft for Python Developers Blog.
February 18, 2026 10:00 PM UTC
The Python Coding Stack
When "It Works" Is Not Good Enough • Live Workshops
You’ve been reading articles here on The Python Coding Stack. How about live workshops?
I need your help to find out whether to run more of these–and yes, you can still sign up for this series of three workshops that start next week. See the information below.
But can you answer this one question for me please:
If you answered ‘something else’, you can reply with your reason–it will help me judge what readers want.
If you’re interested and haven’t signed up yet, here are the links you need (and details about the content and workshops are further down):
Book your place on all three workshops in one convenient, cost-effective bundle:
Or book workshops individually if you only want to attend one or two:
#1 • Python’s Plumbing: Dunder Methods and Python’s Hidden Interface
#2 • Pythonic Iteration: Iterables, Iterators, itertools
#3 • To Inherit or Not? Inheritance, Composition, Abstract Base Classes, and Protocols
You write code. It works. And that’s great.
But do you feel that “it works” isn’t good enough? You need to understand Python’s “behind the scenes” to write robust and efficient code. If you’re keen to step up your Python coding, then I’ve got a series of live, hands-on workshops coming up. I think you’ll find them interesting and useful.
Here they are:
Python’s Plumbing: Dunder Methods and Python’s Hidden Interface
Pythonic Iteration: Iterables, Iterators, itertools
To Inherit or Not? Inheritance, Composition, Abstract Base Classes, and Protocols
Each live workshop will run for 2+ hours (we’ll keep going if you have more questions!). My live teaching style is quite similar to how I write. So, expect a relaxed, friendly, and fun session. And you have permission to jump in and ask questions at any point.
Here’s a bit more about each workshop. You can sign up to whichever workshop you wish or all three of them. Up to you.
1. Python’s Plumbing: Dunder Methods and Python’s Hidden Interface
I know you’ve heard the phrase “everything is an object in Python.” But why does this matter? And here’s a less-catchy phrase that’s just as important: “Everything Python does goes through dunder methods at some point.”
You can think of a program as a conversation between Python and its objects. Python says “Hey object, are you iterable?” or “Hey object, do you understand what + means?”
The first workshop in this series explores the key special methods–or dunder methods, if you prefer the casual term. You’ll discover how every operation Python performs is managed by each object’s dunder methods. And you’ll start seeing Python programs through a different lens once this all clicks.
2. Pythonic Iteration: Iterables, Iterators, itertools
What happens in a for loop? There’s a lot more than you see on the surface. And if you’re using too many for loops in your code, maybe you’re missing out on some more Pythonic iteration options.
In the second workshop, you’ll finally get to grips with the difference between iterable and iterator. You’ll master the Iterator Protocol and you’ll see how there’s always an iterator somewhere behind every iteration. And you know lots of weird iteration patterns, often consisting of nested for loops? You probably don’t need them. Python has special iteration tools hidden in the standard library, including the itertools treasure trove.
3. To Inherit or Not? Inheritance, Composition, Abstract Base Classes, and Protocols
When you learn object-oriented programming, you learn about inheritance. And that’s cool. But when should you use inheritance? Are there alternatives? [Spoiler alert: yes, there are]
The third and final workshop explores inheritance, yes, but also composition. We’ll explore examples to understand when to use one or when to use the other (or when to use both). This discussion will lead us to some key design principles and how abstract base classes can help. But what if you don’t want to use inheritance? ABCs can’t help, but protocols (the typing.Protocol one) can step in to give you a duck typing-friendly way to organise your code.
I’ll run each workshop in the Python Behind the Scenes series twice so that you can choose the day and time that suits you best. You’ll also get the recording of the session.
The workshops are on Zoom but I won’t use the webinar format–that’s too ‘one-way’ for my liking. Instead, I’ll run them in a standard Zoom meeting where you can jump in and ask questions at any time. It’s friendlier that way!
Any questions? Just reply to this email and ask.
Join the Workshops
Each workshop is $45. Or you can get all three for $100. That’s 6+ hours of live, hands-on, interactive learning…
Here are the dates and times for each workshop. Each workshop runs for 2+ hours. I’m showing times in a few time zones:
1. Python’s Plumbing: Dunder Methods and Python’s Hidden Interface
either
Thursday 26 February 2026 • London 9:00 PM • New York 4:00 PM • Los Angeles 1:00 PM • Berlin 10:00 PM • UTC/GMT 9:00 PM
or
Sunday 1 March 2026 • London 4:00 PM • New York 11:00 AM • Los Angeles 8:00 AM • Berlin 5:00 PM • UTC/GMT 4:00 PM
2. Pythonic Iteration: Iterables, Iterators, itertools
either
Thursday 12 March 2026 • London 9:00 PM • New York 5:00 PM • Los Angeles 2:00 PM • Berlin 10:00 PM • UTC/GMT 9:00 PM
or
Sunday 15 March 2026 • London 4:00 PM • New York 12:00 PM • Los Angeles 9:00 AM • Berlin 5:00 PM • UTC/GMT 4:00 PM
3. To Inherit or Not? Inheritance, Composition, Abstract Base Classes, and Protocols
either
Thursday 19 March 2026 • London 9:00 PM • New York 5:00 PM • Los Angeles 2:00 PM • Berlin 10:00 PM • UTC/GMT 9:00 PM
or
Sunday 22 March 2026 • London 4:00 PM • New York 12:00 PM • Los Angeles 9:00 AM • Berlin 5:00 PM • UTC/GMT 4:00 PM
When you book a workshop, you’ll get access to both sessions for that workshop—so you don’t need to pick a date just yet. You can then join the session that suits you best (or attend both if you don’t mind hearing the same thing twice!)
Book your place on all three workshops in one convenient, cost-effective bundle:
Or book workshops individually if you only want to attend one or two:
#1 • Python’s Plumbing: Dunder Methods and Python’s Hidden Interface
#2 • Pythonic Iteration: Iterables, Iterators, itertools
#3 • To Inherit or Not? Inheritance, Composition, Abstract Base Classes, and Protocols
So, in summary:
3 great live workshops to master core Python and improve the quality of your code
2+ hours each, live, interactive, hands-on
Ask questions during the sessions—your questions always lead to interesting discussions
See you at one (or all) of these workshops…
February 18, 2026 05:13 PM UTC
Anarcat
net-tools to iproute cheat sheet
This is also known as: "ifconfig is not installed by default
anymore, how do I do this only with the ip command?"
I have been slowly training my brain to use the new commands but I sometimes forget some. So, here's a couple of equivalences from the old net-tools package to the new iproute2, about 10 years late:
| net-tools | iproute2 | shorter form | what it does |
|---|---|---|---|
| arp -an | ip neighbor | ip n | |
| ifconfig | ip address | ip a | show current IP address |
| ifconfig | ip link | ip l | show link stats (up/down/packet counts) |
| route | ip route | ip r | show or modify the routing table |
| route add default GATEWAY | ip route add default via GATEWAY | ip r a default via GATEWAY | add default route to GATEWAY |
| route del ROUTE | ip route del ROUTE | ip r d ROUTE | remove ROUTE (e.g. default) |
| netstat -anpe | ss --all --numeric --processes --extended | ss -anpe | list listening processes, less pretty |
Another trick
Also note that I often alias ip to ip -br -c as it provides a
much prettier output.
Compare, before:
anarcat@angela:~> ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff permaddr xx:xx:xx:xx:xx:xx
altname wlp166s0
altname wlx8cf8c57333c7
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
20: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.0.108/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
valid_lft 40699sec preferred_lft 40699sec
After:
anarcat@angela:~> ip -br -c a
lo UNKNOWN 127.0.0.1/8 ::1/128
wlan0 DOWN
virbr0 DOWN 192.168.122.1/24
eth0 UP 192.168.0.108/24
I don't even need to redact MAC addresses! It also affects the display of the other commands, which look similarly neat.
Also imagine pretty colors above.
Finally, I don't have a cheat sheet for iw vs iwconfig (from
wireless-tools) yet. I just use NetworkManager now and rarely have
to mess with wireless interfaces directly.
Background and history
For context, there are traditionally two ways of configuring the network in Linux:
- the old way, with commands like ifconfig, arp, route and netstat; those are part of the net-tools package
- the new way, mostly (but not entirely!) wrapped in a single ip command; that is the iproute2 package
It seems like the latter was made "important" in Debian in 2008,
which means every release since Debian 5 "lenny"
has featured the
ip command.
The former net-tools package was demoted in December 2016 which
means every release since Debian 9 "stretch" ships without an
ifconfig command unless explicitly requested. Note that this was
mentioned in the release notes in a similar (but, IMHO, less
useful) table.
(Technically, the net-tools Debian package source still indicates it
is Priority: important but that's a bug I have just filed.)
Finally, and perhaps more importantly, the name iproute is hilarious
if you are a bilingual french speaker: it can be read as "I proute"
which can be interpreted as "I fart" as "prout!" is the sound a fart
makes. The fact that it's called iproute2 makes it only more
hilarious.
February 18, 2026 04:30 PM UTC
Real Python
How to Install Python on Your System: A Guide
To learn how to install Python on your system, you can follow a few straightforward steps. First, check if Python is already installed by opening a command-line interface and typing python --version or python3 --version.
You can install Python on Windows using the official installer from Python.org or through the Microsoft Store. On macOS, you can use the official installer or Homebrew. For Linux, use your package manager or build Python from source.
By the end of this tutorial, you’ll understand how to:
- Check if Python is installed by running python --version or python3 --version in a command-line interface.
- Upgrade Python by downloading and installing the latest version from Python.org.
- Install and manage multiple Python versions with pyenv to keep them separate.
This tutorial covers installing the latest Python on the most important platforms or operating systems, such as Windows, macOS, Linux, iOS, and Android. However, it doesn’t cover all the existing Linux distributions, as that would be a massive task. Nevertheless, you’ll find instructions for the most popular distributions available today.
To get the most out of this tutorial, you should be comfortable using your operating system’s terminal or command line.
Free Bonus: Click here to get a Python Cheat Sheet and learn the basics of Python 3, like working with data types, dictionaries, lists, and Python functions.
Take the Quiz: Test your knowledge with our interactive “How to Install Python on Your System: A Guide” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Install Python on Your System: A Guide
In this quiz, you'll test your understanding of how to install or update Python on your computer. With this knowledge, you'll be able to set up Python on various operating systems, including Windows, macOS, and Linux.
Windows: How to Check or Get Python
In this section, you’ll learn to check whether Python is installed on your Windows operating system (OS) and which version you have. You’ll also explore three installation options that you can use on Windows.
Note: In this tutorial, you’ll focus on installing the latest version of Python in your current operating system (OS) rather than on installing multiple versions of Python. If you want to install several versions of Python in your OS, then check out the Managing Multiple Python Versions With pyenv tutorial. Note that on Windows machines, you’d have to use pyenv-win instead of pyenv.
For a more comprehensive guide on setting up a Windows machine for Python programming, check out Your Python Coding Environment on Windows: Setup Guide.
Checking the Python Version on Windows
To check whether you already have Python on your Windows machine, open a command-line application like PowerShell or the Windows Terminal.
Follow the steps below to open PowerShell on Windows:
- Press the Win key.
- Type
PowerShell. - Press Enter.
Alternatively, you can right-click the Start button and select Windows PowerShell or Windows PowerShell (Admin). In some versions of Windows, you’ll find Terminal or Terminal (admin).
Note: To learn more about your options for the Windows terminal, check out Your Python Coding Environment on Windows: Setup Guide.
With the command line open, type in the following command and press the Enter key:
PS> python --version
Python 3.x.z
Using the --version switch will show you the installed version. Note that the 3.x.z part is a placeholder here. On your machine, x and z will be numbers corresponding to the specific version you have installed.
Alternatively, you can use the -V switch:
PS> python -V
Python 3.x.z
You can also use the py launcher, which is the Python launcher for Windows and is especially helpful if you plan to work with multiple Python versions:
PS> py --version
Python 3.x.z
Read the full article at https://realpython.com/installing-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 18, 2026 02:00 PM UTC
Hugo van Kemenade
A CLI to fight GitHub spam
gh triage spam
We get a lot of spam in the CPython project.
A lot of it isn’t even slop, but mostly worthless “nothing” issues and PRs that barely fill in the issue template, or add a line of nonsense to some arbitrary file.
They’re often from new accounts with usernames like:
- za9066559-wq
- quanghuynh10111-png
- riffocristobal579-cmd
- sajjad5giot
- satyamchoudhary1430-boop
- SilaMey
- standaell1234-maker
- eedamhmd2005-ui
- ksdmyanmar-lighter
- experments-studios
- madurangarathanayaka5-art
A new issue from a username following the pattern nameNNNN-short_suffix is a dead
giveaway. I think they’re trying to farm “realistic” accounts: open a PR, open an issue,
comment on something, make a fake review.
It’s easy but tedious to:
- close the PR/issue as not planned
- retitle to “spam”
- apply the “invalid” label
- remove other labels
I use the GitHub CLI gh a lot (for example, gh co NNN to check out a PR locally),
and it’s straightforward to write your own Python-based extensions, so I wrote
gh triage.
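To give a sense of how little is involved, here's a toy sketch of what a Python-based gh extension can look like. This is not the actual gh-triage source, just an illustration of the mechanism: an executable named gh-<name> that gh discovers and passes the remaining arguments to, and that can shell out to gh itself for the API calls.

#!/usr/bin/env python3
"""Toy sketch of a Python-based gh extension (not the real gh-triage code)."""
import subprocess
import sys


def main() -> None:
    if len(sys.argv) < 3 or sys.argv[1] != "spam":
        sys.exit("usage: gh triage spam <issue-or-pr-number-or-url>")
    number = sys.argv[2]
    # Retitle and label the issue; the real extension also removes other
    # labels, handles PRs, and checks that the labels exist in the repo.
    subprocess.run(
        ["gh", "issue", "edit", number, "--title", "spam", "--add-label", "invalid"],
        check=True,
    )
    subprocess.run(
        ["gh", "issue", "close", number, "--reason", "not planned"],
        check=True,
    )


if __name__ == "__main__":
    main()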
Install:
$ gh extension install hugovk/gh-triage
Cloning into '/Users/hugo/.local/share/gh/extensions/gh-triage'...
remote: Enumerating objects: 15, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 15 (delta 5), reused 12 (delta 2), pack-reused 0 (from 0)
Receiving objects: 100% (15/15), 5.09 KiB | 5.09 MiB/s, done.
Resolving deltas: 100% (5/5), done.
✓ Installed extension hugovk/gh-triage
Then run like gh triage spam <issue-or-pr-number-or-url>:
$ gh triage spam https://github.com/python/cpython/issues/144900
✅ Removed labels: type-bug
✅ Added labels: invalid
✅ Changed title: spam
✅ Closed
This can be used for any repo that you have permissions for: it applies the “invalid” or “spam” labels, but only if they exist in the repo.
Next step: perhaps it could print out the URL to make it easy to report the account to GitHub (usually for “Spam or inauthentic Activity”).
gh triage unassign
Not spam, but another triage helper.
A less common occurrence is a rebase or merge from main or change of PR base branch
that ends up bringing in lots of code changes. This often assigns the PR to dozens of
people via
CODEOWNERS,
for example:
python/cpython#142564.
Everyone’s already been pinged and subscribed to the PR, so it’s too late to help that, but we can automate unassigning them all so at least the PR is not in their “assigned to” list.
Run gh triage unassign <issue-or-pr-number-or-url> to:
- remove all assignees (issues and PRs)
- remove all requested reviewers (PRs only)
For example:
gh triage unassign 142564
See also
- gh triage homepage
- Adam Johnson’s top gh commands
Header photo: Otto of the Silver Hand written and illustrated by William Pyle, originally published 1888, from the University of California Libraries.
February 18, 2026 01:35 PM UTC
Real Python
Quiz: How to Install Python on Your System: A Guide
In this quiz, you’ll test your understanding of how to install Python. This quiz covers questions about how to check which version of Python is installed on your machine, and how you can install or update Python on various operating systems.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 18, 2026 12:00 PM UTC
PyPodcats
Trailer: Episode 11 With Sheena O'Connell
A preview of our chat with Sheena O'Connell. Watch the full episode on February 26, 2026.
Sneak Peek of our chat with Sheena O’Connell, hosted by Cheuk Ting Ho and Tereza Iofciu.
Sheena began her career as a software engineer and technical leader across multiple startups, but her passion for education led her to spend the last five years reimagining how people learn to code professionally. Working within the nonprofit sector, she built alternative education systems from the ground up and developed deep expertise in effective teaching, educator development, and the structural limitations of traditional education models. Sheena is the founder of Prelude.tech, where she delivers rigorous technical training alongside consultation and coaching for technical educators and organizations with education functions. She also leads the Guild of Educators, a community she founded to empower technology educators through shared resources, support, and evidence-based teaching practices.
In this episode, Sheena O’Connell tells us about her journey, the importance of community and good practices for teachers and educators in Python, organizational psychology, and how she herself became involved in this work. We talk about how to enable a 10x team and how to empower the community through the Guild of Educators.
Full episode is coming on February 26th, 2026! Subscribe to our podcast now!
February 18, 2026 05:00 AM UTC
February 17, 2026
PyCoder’s Weekly
Issue #722: Itertools, Circular Imports, Mock, and More (Feb. 17, 2026)
#722 – FEBRUARY 17, 2026
View in Browser »
5 Essential Itertools for Data Science
Learn 5 essential itertools methods to eliminate manual feature engineering waste. Replace nested loops with systematic functions for interactions, polynomial features, and categorical combinations.
CODECUT.AI • Shared by Khuyen Tran
A Fun Python Puzzle With Circular Imports
A deep inspection of just what happens when you write from ... import ... and how that impacts circular import references in your code.
CHRIS SIEBENMANN
B2B MCP Auth Support
Your users are asking if they can connect their AI agent to your product, but you want to make sure they can do it safely and securely. PropelAuth makes that possible →
PROPELAUTH sponsor
Improving Your Tests With the Python Mock Object Library
Master Python testing with unittest.mock. Create mock objects to tame complex logic and unpredictable dependencies.
REAL PYTHON course
Python Jobs
Python + AI Content Specialist (Anywhere)
Articles & Tutorials
Introducing the PSF Community Partner Program
The Python Software Foundation has announced the new Community Partner Program, a way for the PSF to support Python events and initiatives with non-financial support such as promotion and branding.
PYTHON SOFTWARE FOUNDATION
Better Python Tests With inline-snapshot
inline-snapshot lets you quickly and easily write rigorous tests that automatically update themselves. It combines nicely with dirty-equals to handle dynamic data that’s a pain to normalize.
PYDANTIC.DEV • Shared by Alex Hall
See Why Your CI Is Slow
Your GitHub Actions workflows are burning time and money, but you’re flying blind. Depot’s new Analytics shows exactly where your CI spends resources. Track trends, find bottlenecks, optimize across your org. Get visibility with Depot →
DEPOT sponsor
Django’s Test Runner Is Underrated
Loopwerk never made the switch from unittest to pytest for their Django projects. And after years of building and maintaining Django applications, they still don’t feel like they’re missing out.
LOOPWERK
Webmentions With Batteries Included
A webmention is a W3 standard for one post to refer to another and interlink. This article introduces you to a Python library that helps you implement this feature on your site.
FABIO MANGANIELLO
Python 3.12 vs 3.13 vs 3.14
Compare Python 3.12, 3.13, and 3.14: free-threading, JIT, t-strings, performance, and library changes. Which version should you actually use in 2026?
MATHEUS
Django Steering Council 2025 Year in Review
Want to know what is happening in the world of the Django project? This post talks about all the things the Django Steering Council did in 2025.
FRANK WILES
What Exactly Is the Zen of Python?
The Zen of Python is a collection of 19 guiding principles for writing good Python code. Learn its history, meaning, and hidden jokes.
REAL PYTHON
Open Source AI We Use to Work on Wagtail
One of the core maintainers at Wagtail CMS shares which open source models have been working best for the project so far.
WAGTAIL.ORG • Shared by Meagen Voss
Need Switch-Case in Python? It’s Not Match-Case!
Python’s match-case is not a switch-case statement. If you need switch-case, you can often use a dictionary instead.
TREY HUNNER
Python Time & Space Complexity Reference
Open-source reference documenting time and space O(n) complexity for Python built-in and stdlib operations.
PYTHONCOMPLEXITY.COM • Shared by Heikki Toivonen
Projects & Code
pycaniuse: Query caniuse.com From the Terminal
GITHUB.COM/VISESHRP • Shared by Visesh Prasad
silkworm-rs: Free-Threaded Compatible Async Web Scraper
GITHUB.COM/BITINGSNAKES • Shared by Yehor Smoliakov
Events
Weekly Real Python Office Hours Q&A (Virtual)
February 18, 2026
REALPYTHON.COM
PyData Bristol Meetup
February 19, 2026
MEETUP.COM
PyLadies Dublin
February 19, 2026
PYLADIES.COM
PyCon Namibia 2026
February 20 to February 27, 2026
PYCON.ORG
Chattanooga Python User Group
February 20 to February 21, 2026
MEETUP.COM
PyCon Mini Shizuoka 2026
February 21 to February 22, 2026
PYCON.JP
Happy Pythoning!
This was PyCoder’s Weekly Issue #722.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
February 17, 2026 07:30 PM UTC
Real Python
Write Python Docstrings Effectively
Writing clear, consistent docstrings in Python helps others understand your code’s purpose, parameters, and outputs. In this video course, you’ll learn about best practices, standard formats, and common pitfalls to avoid, ensuring your documentation is accessible to users and tools alike.
By the end of this video course, you’ll understand that:
- Docstrings are strings used to document your Python code and can be accessed at runtime.
- Python comments and docstrings have important differences.
- One-line and multiline docstrings are classifications of docstrings.
- Common docstring formats include reStructuredText, Google-style, NumPy-style, and doctest-style (see the short example after this list).
- Antipatterns such as inconsistent formatting should be avoided when writing docstrings.
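For a quick illustration of one of the formats mentioned above, here's a minimal Google-style docstring (our own sketch, not an excerpt from the course):

def add(a: float, b: float) -> float:
    """Return the sum of two numbers.

    Args:
        a: The first operand.
        b: The second operand.

    Returns:
        The sum of ``a`` and ``b``.
    """
    return a + b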
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 17, 2026 02:00 PM UTC
Python Software Foundation
Join the Python Security Response Team!
Thanks to the work of the Security Developer-in-Residence Seth Larson, the Python Security Response Team (PSRT) now has an approved public governance document (PEP 811). Following the new governance structure the PSRT now publishes a public list of members, has documented responsibilities for members and admins, and a defined process for onboarding and offboarding members to balance the needs of security and sustainability. The document also clarifies the relationship between the Python Steering Council and the PSRT.
And this new onboarding process is already working! The PSF Infrastructure Engineer, Jacob Coffee, has just joined the PSRT as the first new non-"Release Manager" member since Seth joined the PSRT in 2023. We expect more new members to join, further bolstering the sustainability of security work for the Python programming language.
Thanks to Alpha-Omega for their support of Python ecosystem security by sponsoring Seth’s work as the Security Developer-in-Residence at the Python Software Foundation.
Security doesn't happen by accident: it's thanks to the work of volunteers and paid Python Software Foundation staff on the Python Security Response Team to triage and coordinate vulnerability reports and remediations keeping all Python users safe. Just last year the PSRT published 16 vulnerability advisories for CPython and pip, the most in a single year to date!
And the PSRT usually can’t do this work alone: PSRT coordinators are encouraged to involve maintainers and experts on the affected projects and submodules. Involving the experts directly in the remediation process ensures fixes adhere to existing API conventions and threat models, are maintainable long-term, and have minimal impact on existing use-cases.
Sometimes the PSRT even coordinates with other open source projects to avoid catching the Python ecosystem off-guard by publishing a vulnerability advisory that affects multiple other projects. The most recent example of this is PyPI’s ZIP archive differential attack mitigation.
This work deserves recognition and celebration just like contributions to source code and documentation. Seth and Jacob are developing further improvements to the “GitHub Security Advisories” workflows so that the reporter, the coordinator, and the remediation developers and reviewers are recorded in CVE and OSV records, properly thanking everyone involved in this otherwise private contribution to open source projects.
Maybe you’ve read all this and are interested in directly helping the Python programming language be more secure! The process is similar to the Core Team nomination process: you need an existing PSRT member to nominate you, and your nomination must receive at least ⅔ positive votes from existing PSRT members.
You do not need to be a core developer, team member, or triager to be a member of the Python Security Response Team. Anyone with security expertise who is known and highly trusted within the Python community, and who has time to volunteer or time donated by their employer, would make a good candidate for the PSRT. Please note that all PSRT team members have documented responsibilities and are expected to contribute meaningfully to the remediation of vulnerabilities.
Being a member of the PSRT is not required to be notified of vulnerabilities, and it shouldn’t be necessary for receiving “early notification” of vulnerabilities affecting CPython and pip. The Python Software Foundation is a CVE Numbering Authority and publishes CVE and OSV records with up-to-date information about vulnerabilities affecting CPython and pip.
February 17, 2026 02:30 AM UTC
February 16, 2026
Chris Warrick
I Wrote YetAnotherBlogGenerator
Writing a static site generator is a developer rite of passage. For the past 13 years, this blog was generated using Nikola. This week, I finished implementing my own generator, the unoriginally named YetAnotherBlogGenerator.
Why would I do that? Why would I use C# for it? And how fast is it? Continue reading to find out.
OK, but why?
You might have noticed I’m not happy with the Python packaging ecosystem. But the language itself is no longer fun for me to code in either. It is especially not fun to maintain projects in. Elementary quality-of-life features get bogged down in months of discussions and design-by-committee. At the same time, there’s a new release every year, full of removed and deprecated features. A lot of churn, without much benefit. I just don’t feel like doing it anymore.
Python is praised for being fast to develop in. That’s certainly true, but a good high-level statically-typed language can yield similar development speed with more correctness from day one. For example, I coded an entire table-of-contents-sidebar feature in one evening (and one more evening of CSS wrangling to make it look good). This feature extracts headers from either the Markdown AST or the HTML fragment. I could do it in Python, but I’d need to jump through hoops to get Python-Markdown to output headings with IDs. In C#, introspecting what a class can do is easier thanks to great IDE support and much less dynamic magic happening at runtime. There are also decompiler tools that make it easy to look under the hood and see what a library is doing.
Writing a static site generator is also a learning experience. A competent SSG needs to ingest content in various formats (as nobody wants to write blog posts in HTML by hand) and generate HTML (usually from templates) and XML (which you could, in theory, do from templates, but since XML parsers are not at all lenient, you don’t want to). Image processing to generate thumbnails is needed too. And to generate correct RSS feeds, you need to parse HTML to rewrite links. The list of small-but-useful things goes on.
Is C#/.NET a viable technology stack for a static site generator?
C#/.NET is certainly not the most popular technology stack for static site generators. JamStack.org have gathered a list of 377 SSGs. Grouping by language, there are 154 generators written in JavaScript or TypeScript, 55 generators written in Python, and 28 written in PHP of all languages. C#/.NET is in sixth place with 13 (not including YABG; I’m probably not submitting it).
However, it is a pretty good choice. Language-level support for concurrency with async/await (based on a thread pool) and JIT compilation help to make things fast. But it is still a high-level, object-oriented language where you don’t need to manually manage memory (hi Rustaceans!).
The library ecosystem is solid too. There are plenty of good libraries for working with data serialization formats: CsvHelper, YamlDotNet, Microsoft.Data.Sqlite, and the built-in System.Text.Json and System.Xml.Linq. Markdig handles turning Markdown into HTML. Fluid is an excellent templating library that implements the Liquid templating language. HtmlAgilityPack is solid for manipulating HTML, and Magick.NET wraps the ImageMagick library.
There’s one major thing missing from the above list: code highlighting. There are a few highlighting libraries on NuGet, but I decided to stick with Pygments. I still need the Pygments stylesheets around since I’m not converting old reStructuredText posts to Markdown (I’m copying them as HTML directly from Nikola’s cache), so using Pygments for new content keeps things consistent. Staying with Pygments means I still maintain a bit of Python code, but much less: 230 LoC in pygments_better_html and 89 in yabg_pygments_adapter, with just one third-party dependency. Calling a subprocess while rendering listings is slow, but it’s a price worth paying.
Paid libraries in the .NET ecosystem
All the above libraries are open source (MIT, Apache 2.0, BSD-2-Clause). However, one well-known issue of the .NET ecosystem is the number of packages that suddenly become commercial. This trend was started by ImageSharp, a popular 2D image manipulation library. I could probably use it, since it’s licensed to open-source projects under Apache 2.0, but I’d rather not. I initially tried SkiaSharp, but it has terrible image scaling algorithms, so I settled on Magick.NET.
Open-source sustainability is hard, maybe impossible. But I don’t think transitioning from open-source to pay-for-commercial-use is the answer. In practice, many businesses just use the last free version or switch to a different library. I’d rather support open-source projects developed by volunteers in their spare time. They might not be perfect or always do exactly what I want, but I’m happy to contribute fixes and improve things for everyone. I will avoid proprietary or dual-licensed libraries, even for code that never leaves my computer. Some people complain when Microsoft creates a library that competes with a third-party open-source library (e.g. Microsoft.AspNetCore.OpenApi, which was built to replace Swashbuckle.AspNetCore), but I am okay with that, since libraries built or backed by large corporations (like Microsoft) tend to be better maintained.
But at least sometimes trash libraries take themselves out.
Is it fast?
One of the things that sets Nikola apart from other Python static site generators is that it only rebuilds files that need to be rebuilt. This does make Nikola fast when rebuilding things, but it comes at a cost: Nikola needs to track all dependencies very closely. Also, some features that are present in other SSGs are not easy to achieve in Nikola, because they would cause many pages to be rebuilt.
YetAnotherBlogGenerator has almost no caching. The only thing currently cached is code listings, since they’re rendered using Pygments in a subprocess. Additionally, the image scaling service checks the file modification date to skip regenerating thumbnails if the source image hasn’t changed. And yet, even if it rewrites everything, YABG finishes faster than Nikola when the site is fully up-to-date (there is nothing to do).
I ran some quick benchmarks comparing the performance of rendering the final Nikola version of this blog against the first YABG version (before the Bootstrap 5 redesign).
Testing methodology
Here’s the testing setup:
- AWS EC2 instances
  - c7a.xlarge (4 vCPU, 8 GB RAM)
  - 30 GB io2 SSD (30000 IOPS)
  - Total cost: $2.95 + tax for about an hour’s usage ($2.66 of which were storage costs)
- Fedora 43 from official Fedora AMI
  - Python 3.14.2 (latest available in the repos)
  - .NET SDK 10.0.102 / .NET 10.0.2 (latest available in the repos)
  - setenforce 0, SELINUX=disabled
- Windows Server 2025
  - Python 3.14.3 (latest available in winget)
  - .NET SDK 10.0.103 / .NET 10.0.3 (latest available in winget)
  - Windows Defender disabled
I ran three tests. Each test was run 11 times. The first attempt was discarded (as a warmup and to let me verify the log). The other ten attempts were averaged as the final result. I used PowerShell’s Measure-Command cmdlet for measurements.
The tests were as follows:
- Clean build (no cache, no output)
  - Removing .doit.db, cache, and output from the Nikola site, so that everything has to be rebuilt from scratch.
  - Removing .yabg_cache.sqlite3 and output from the YABG site, so that everything has to be rebuilt from scratch; most notably, the Pygments code listings have to be regenerated via a subprocess.
- Build with cache, but no output
  - Removing output from the Nikola site, so that posts rendered to HTML by docutils/Python-Markdown are cached, but the final HTML still needs to be built.
  - Removing output from the YABG site, so that the code listings rendered to HTML by Pygments are cached, but everything else needs to be built.
- Rebuild (cache and output intact)
  - Not removing anything from the Nikola site, so that there is nothing to do.
  - Not removing anything from the YABG site. Things are still rebuilt, except for Pygments code listings and thumbnails.
For YetAnotherBlogGenerator, I tested two builds: one in Release mode (standard), and another in ReadyToRun mode, trading build time and executable size for faster execution.
All the scripts I used for setup and testing can be found in listings.
Test results
| Platform | Build type | Nikola | YABG (ReadyToRun) | YABG (Release) |
|---|---|---|---|---|
| Linux | Clean build (no cache, no output) | 6.438 | 1.901 | 2.178 |
| Linux | Build with cache, but no output | 5.418 | 0.980 | 1.249 |
| Linux | Rebuild (cache and output intact) | 0.997 | 0.969 | 1.248 |
| Windows | Clean build (no cache, no output) | 9.103 | 2.666 | 2.941 |
| Windows | Build with cache, but no output | 7.758 | 1.051 | 1.333 |
| Windows | Rebuild (cache and output intact) | 1.562 | 1.020 | 1.297 |
Design details and highlights
Here are some fun tidbits from development.
Everything is an item
In Nikola, there are several different entities that can generate HTML files. Posts and Pages are both Post objects. Listings and galleries each have their own task generators. There’s no Listing class, everything is handled within the listing plugin. Galleries can optionally have a Post object attached (though that Post is not picked up by the file scanner, and it is not part of the timeline). The listings and galleries task generators both have ways to build directory trees.
In YABG, all of the above are Items. Specifically, they start as SourceItems and become Items when rendered. For listings, the source is just the code and the rendered content is Pygments-generated HTML. For galleries, the source is a TSV file with a list of included gallery images (order, filenames, and descriptions), and the generated content comes from a meta field named galleryIntroHtml. Gallery objects have a GalleryData object attached to their Item object as RichItemData.
This simplifies the final rendering pipeline design. Only four classes (actual classes, not temporary structures in some plugin) can render to HTML: Item, ItemGroup (tags, categories, yearly archives, gallery indexes), DirectoryTreeGroup (listings), and LinkGroup (archive and tag indexes). Each has a corresponding template model. Nikola’s sitemap generator recurses through the output directory to find files, but YABG can just use the lists of items and groups. The sitemap won’t include HTML files from the files folder, but I don’t need them there (though I could add them if needed).
Windows first, Linux in zero time
I developed YABG entirely on Windows. This forced me to think about paths and URLs as separate concepts. I couldn’t use most System.IO.Path facilities for URLs, since they would produce backslashes. As a result, there are zero bugs where backslashes leak into output on Windows. Nikola has such bugs pop up occasionally; indeed, I fixed one yesterday.
But when YABG was nearly complete, I ran it on Linux. And it just worked. No code changes needed. No output differences. (I had to add SkiaSharp.NativeAssets.Linux and apt install libfontconfig1 since I was still using SkiaSharp at that point, but that’s no longer needed with Magick.NET.)
Not everything is perfect, though. I added a --watch mode based on FileSystemWatcher, but it doesn’t work on Linux. I don’t need it there; I’d have to switch to polling to make it work.
Dependency injection everywhere
A good principle used in object-oriented development (though not very often in Python) is dependency injection. I have several grouping services, all implementing either IPostGrouper or IItemGrouper. They’re registered in the DI container as implementations of those interfaces. The GroupEngine doesn’t need to know about specific group types, it just gets them from the container and passes the post and item arrays.
The ItemRenderEngine has a slightly different challenge: it needs to pick the correct renderer for the post (Gallery, HTML, Listing, Markdown). The renderers are registered as keyed services. The render engine does not need to know anything about the specific renderer types, it just gets the renderer name from the SourceItem’s ScanPattern (so ultimately from the configuration file) and asks the DI container to provide it with the right implementation.
In total, there are 37 specific service implementations registered (plus system services like TimeProvider and logging). Beyond these two examples, the main benefit is testability. I can write unit tests without dependencies on unrelated services, and without monkey-patching random names. (In Python, unittest.mock does both monkey-patching and mocking.)
Okay, I haven’t written very many tests, but I could easily ask an LLM to do it.
Immutable data structures and no global state
All classes are immutable. This helps in several ways. It’s easier to reason about state when SourceItem becomes Item during rendering, compared to a single class with a nullable Content property. Immutability also makes concurrency safer. But the biggest win is how easy it was to develop the --watch mode. Every service has Scoped lifetime, and main logic lives in IMainEngine. I can just create a new scope, get the engine, and run it without state leaking between executions. No subprocess launching, no state resetting — everything disappears when the scope is disposed.
Can anyone use it?
On one hand, it’s open source under the 3-clause BSD license and available on GitHub.
On the other hand, it’s more of a source-available project. There are no docs, and it was designed specifically for this site (so some things are probably too hardcoded for your needs). In fact, this blog’s configuration and templates were directly hardcoded in the codebase until the day before launch. But I’m happy to answer questions and review pull requests!
February 16, 2026 09:15 PM UTC
Anarcat
Keeping track of decisions using the ADR model
In the Tor Project system administrators' team (colloquially known as TPA), we've recently changed how we make decisions, which means you'll get clearer communications from us about upcoming changes or more targeted questions about a proposal.
Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams, inside and outside Tor, to evaluate this process and see whether it can improve their own processes and documentation.
The new process
We had traditionally been using an "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").
The ADR process is, for us, pretty simple. It consists of three things:
- a simpler template
- a simpler process
- communication guidelines separate from the decision record
The template
As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. Our variant is similarly simple, with only five headings, and is worth quoting in full:
- Context: What is the issue that we're seeing that is motivating this decision or change?
- Decision: What is the change that we're proposing and/or doing?
- Consequences: What becomes easier or more difficult to do because of this change?
- More Information (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.
- Metadata: status, decision date, decision makers, consulted, informed users, and a link to a discussion forum
The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now the decision record is easier to read and digest at a glance.
An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping all sorts of details (pricing, in-depth comparisons of alternatives) into the document itself, we record those in the discussion issue, keeping the decision record shorter.
The process
The whole process is simple enough that it's worth quoting in full as well:
Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.
Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.
A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision; the new template (and process) explicitly names the decision makers for each decision.
Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, or the "Bike Shed syndrome".
The new process better identifies stakeholders:
- "informed" users (previously "affected users")
- "consulted" (previously undefined!)
- "decision maker" (instead of the vague "approval")
Picking those stakeholders is still tricky, but our definitions are now more explicit and aligned with the classic RACI matrix (Responsible, Accountable, Consulted, Informed).
Communication guidelines
Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.
Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, to keep things simple.
How we got there
The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. In parallel, I was already reviewing our RFC process, prompted by Jacob Kaplan-Moss's criticism of RFC processes. Essentially, he argues that:
- the RFC process "doesn't include any sort of decision-making framework"
- "RFC processes tend to lead to endless discussion"
- the process "rewards people who can write to exhaustion"
- "these processes are insensitive to expertise", "power dynamics and power structures"
And, indeed, I have been guilty of many of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because I hadn't looped in the right stakeholders.
Our discussion issue on the topic has more details on the problems I found with our RFC process. But to give the old process credit, it did serve us well while it lasted: it was better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) over six years of work.
What's next?
We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.
Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but hopefully worth it, as the result is more digestible.
We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in hearing from people who are already using a similar process, or who will adopt one after reading this.
Note: this article was also published on the Tor Blog.
February 16, 2026 08:21 PM UTC
PyBites
We’re launching 60 Rust Exercises Designed for Python Devs
“Rust is too hard.”
We hear it all the time from Python developers.
But after building 60 Rust exercises specifically designed for Pythonistas, we’ve come to a clear conclusion: Rust isn’t harder than Python per se; it’s just a different kind of challenge.
And with the right bridges, you can learn it faster than you think.
Why We Built This
Most Rust learning resources start from zero. They assume you’ve never seen a programming language before, or they assume you’re coming from C++.
Neither fits the Python developer who already knows how to think in code but needs to learn Rust’s ownership model, type system, and borrow checker.
We took a different approach: you already know the pattern, here’s how Rust does it.
Every exercise starts with the Python concept you’re familiar with — list comprehensions, context managers, __str__, defaultdict — and shows you the Rust equivalent.
No starting from scratch. No wasted time on concepts you already understand.
What’s Inside
60 exercises across 10 tracks:
- Intro (15 exercises) — variables, types, control flow, enums, pattern matching
- Ownership (7) — move semantics, borrowing, the borrow checker
- Traits & Generics (8) — Debug, Display, generic functions and structs
- Iterators & Closures (8) — closures, iterator basics, map/filter, chaining
- Error Handling (4) — Result, Option, the ? operator
- Strings (5) — String vs &str, slicing, UTF-8
- Collections (5) — Vec, HashMap, the entry API
- Modules (4) — module system, visibility, re-exports
- Algorithms (4) — recursion, sorting, classic problems in Rust
Each exercise has a teaching description with Python comparisons, a starter template, and a full test suite that validates your solution.
The Python → Rust Map
Every exercise bridges a concept you already know:
| You know this in Python | You’ll learn this in Rust | Track |
|---|---|---|
| __str__ / __repr__ | Display / Debug traits | Traits & Generics |
| defaultdict, Counter | HashMap entry API | Collections |
| list comprehensions | .map().filter().collect() | Iterators & Closures |
| try / except | Result<T, E> + ? operator | Error Handling |
| with context managers | RAII + ownership | Ownership |
| lambda | closures (\|x\| x + 1) | Iterators & Closures |
| Optional / None checks | Option<T> + combinators | Error Handling |
| import / from x import y | mod / use | Modules |
What the Bridges Look Like
Here’s a taste. When teaching functions, we start with what you already know:
```python
def area(width: int, height: int) -> int:
    return width * height
```

Then have you convert it into Rust:

```rust
fn area(width: i32, height: i32) -> i32 {
    width * height
}
```

def becomes fn. Type hints become required. And the last expression — without a semicolon — is the return value. No return needed.
Add a semicolon by accident? The compiler catches it instantly. That’s your first lesson in how Rust turns runtime surprises into compile-time errors.
Or take branching. In Python, if is a statement — it does things. In Rust, if is an expression — it returns things:
Python:
```python
if celsius >= 30:
    label = "Hot"
elif celsius >= 15:
    label = "Mild"
else:
    label = "Cold"
```

Rust:

```rust
let label = if celsius >= 30 {
    "Hot"
} else if celsius >= 15 {
    "Mild"
} else {
    "Cold"
};
```
Same logic, but now the result goes straight into label. No ternary operator needed — if itself returns a value.
You’ll learn the Rust language bit by bit, and we hope that by making it more relatable to your Python knowledge, it will stick faster.
Write, Test, Learn — All in the Browser
No local Rust installation needed. Each exercise gives you a split-screen editor: the teaching description with Python comparisons on the left, a code editor with your starter template on the right (switched to dark mode):
Write your solution, hit Run Tests, and get instant feedback from the compiler and test suite:
Errors show you exactly what went wrong. Iterate until all tests pass — then check the solution to see if there is anything you can do in a different or more idiomatic way.
As on our Python coding platform, your code persists automatically, so you can pick up where you left off. And as you solve exercises, you earn points and progress through ninja belts.
Why Learn Rust in 2026
Three reasons Python developers should care:
Career. Rust has been the most admired language for 8 years running in Stack Overflow surveys. AWS, Microsoft, Google, Discord, and Cloudflare are all investing heavily in Rust. The demand is real and growing.
Ecosystem. Python + Rust is becoming the standard stack for performance-critical Python. The tools you already use — pydantic, ruff, uv, cryptography — are Rust under the hood. Understanding Rust means understanding the layer beneath your Python.
Becoming a better developer. Learning Rust’s ownership model changes how you think about code. You start reasoning about data flow, memory, and error handling more carefully — and that makes your Python better too. It’s one of the best investments you can make in your craft.
Beyond Exercises: The Cohort
If you want to go deeper, our Rust Developer Cohort takes these concepts and applies them to a real project: building a JSON parser from scratch over 6 weeks. You’ll go from tokenizing strings to recursive descent parsing, with PyO3 integration to call your Rust parser from Python.
The exercises are the foundation. The cohort is where you learn app development end-to-end, building something real.
How Developers Experience The Platform
“Who said learning Rust is gonna be difficult? Had tons of fun learning Rust by going through the exercises!” — Aris N
“As someone who is primarily a self taught developer, I learned the importance of learning by doing by completing so many of the ‘Bites’ challenges on the PyBites platform. Now, as someone learning Rust, I’ve come across the Rust platform and have used the exercises in the same way. Some things I will know and be able to solve quickly, while others require me to research and learn more about the language. The new concepts solidify and build over time. They are a great way to be hands on and learn by doing.” — Jesse B
“The Rust Bites are a great way to start learning Rust hands-on. Whether you’re just starting with Rust or already have some experience, they help build real skills and challenge you to understand all the basic data types and design patterns of Rust. Things that are tough to understand, like pattern matching, result handling, and ownership, will feel more understandable and natural after going through these exercises, and they’ll help you be a better programmer in other languages too! Highly recommended!” — Dan D
Key Takeaways
- Rust isn’t harder than Python — it’s a different kind of challenge
- Python-to-Rust bridges make concepts click faster than learning from scratch
- 60 exercises across 10 tracks, from basics to Traits & Generics
- Every exercise starts with the Python pattern you already know
- Learning Rust makes you a better Python developer too
Where to Start
New to Rust? Start with the Intro track — the first 10 exercises are free and cover the fundamentals: variables, types, control flow, enums, and pattern matching. They’ll get your feet wet.
Know the basics already? Jump straight to Ownership — that’s where Rust gets genuinely different from Python, and where the Python bridges help most. Once ownership clicks, the rest of Rust falls into place.
Want a challenge? The Iterators & Closures and Error Handling tracks are where Python developers tend to have the most “aha” moments. We’ll add more advanced concepts, like lifetimes, later.
Try It Yourself
Start with the exercises at Rust Platform — pick a track that matches where you are, and see how the Python bridges make Rust feel less foreign than you expected.
If you’re ready to commit to the full journey, check out the Rust Developer Cohort — our 6-week guided program where you build a real project from the ground up.
Rust isn’t the enemy. It’s your next superpower.
We’re not aware of any other platform that teaches Rust specifically through the lens of Python. If you’re a Python developer curious about Rust, this is built for you.
February 16, 2026 03:41 PM UTC
Real Python
TinyDB: A Lightweight JSON Database for Small Projects
TinyDB is a Python implementation of a NoSQL, document-oriented database. Unlike a traditional relational database, which stores records across multiple linked tables, a document-oriented database stores its information as separate documents in a key-value structure. The keys are similar to the field headings, or attributes, in a relational database table, while the values are similar to the table’s attribute values.
TinyDB uses the familiar Python dictionary for its document structure and stores its documents in a JSON file.
TinyDB is written in Python, making it easily extensible and customizable, with no external dependencies or server setup needed. Despite its small footprint, it still fully supports the familiar database CRUD features of creating, reading, updating, and deleting documents using an API that’s logical to use.
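As a quick taste of what that API looks like (a minimal sketch, not part of the tutorial’s downloadable code):

```python
from tinydb import TinyDB, Query

db = TinyDB("people.json")  # the whole database lives in this JSON file
Person = Query()

db.insert({"name": "Ada", "country": "UK"})                     # Create
print(db.search(Person.name == "Ada"))                          # Read
db.update({"country": "United Kingdom"}, Person.name == "Ada")  # Update
db.remove(Person.name == "Ada")                                 # Delete
```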
The table below will help you decide whether TinyDB is a good fit for your use case:
| Use Case | TinyDB | Possible Alternatives |
|---|---|---|
| Local, small dataset, single-process use (scripts, CLIs, prototypes) | ✅ | simpleJDB, Python’s json module, SQLite |
| Local use that requires SQL, constraints, joins, or stronger durability | — | SQLite, PostgreSQL |
| Multi-user, multi-process, distributed, or production-scale systems | — | PostgreSQL, MySQL, MongoDB |
Whether you’re looking to use a small NoSQL database in one of your projects or you’re just curious how a lightweight database like TinyDB works, this tutorial is for you. By the end, you’ll have a clear sense of when TinyDB shines, and when it’s better to reach for something else.
Get Your Code: Click here to download the free sample code you’ll use in this tutorial to explore TinyDB.
Take the Quiz: Test your knowledge with our interactive “TinyDB: A Lightweight JSON Database for Small Projects” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
TinyDB: A Lightweight JSON Database for Small Projects

If you're looking for a JSON document-oriented database that requires no configuration for your Python project, TinyDB could be what you need.
Get Ready to Explore TinyDB
TinyDB is a standalone library, meaning it doesn’t rely on any other libraries to work. You’ll need to install it, though.
You’ll also use the pprint module to format dictionary documents for easier reading, and Python’s csv module to work with CSV files. You don’t need to install either of these because they’re included in Python’s standard library.
So to follow along, you only need to install the TinyDB library in your environment. First, create and activate a virtual environment, then install the library using pip:
(venv) $ python -m pip install tinydb
Alternatively, you could set up a small pyproject.toml file and manage your dependencies using uv.
When you add documents to your database, you often do so manually by creating Python dictionaries. In this tutorial, you’ll do this, and also learn how to work with documents already stored in a JSON file. You’ll even learn how to add documents from data stored in a CSV file.
These files will be highlighted as needed and are available in this tutorial’s downloads. You might want to download them to your program folder before you start to keep them handy:
Get Your Code: Click here to download the free sample code you’ll use in this tutorial to explore TinyDB.
Regardless of the files you use or the documents you create manually, they all rely on the same world population data. Each document will contain up to six fields, which become the dictionary keys used when the associated values are added to your database:
| Field | Description |
|---|---|
| continent | The continent the country belongs to |
| location | Country |
| date | Date population count made |
| % of world | Percentage of the world’s population |
| population | Population |
| source | Source of population |
As mentioned earlier, the four primary database operations are Create, Read, Update, and Delete—collectively known as the CRUD operations. In the next section, you’ll learn how you can perform each of them.
To begin with, you’ll explore the C in CRUD. It’s time to get creative.
Create Your Database and Documents
The first thing you’ll do is create a new database and add some documents to it. To do this, you create a TinyDB() object that includes the name of a JSON file to store your data. Any documents you add to the database are then saved in that file.
Documents in TinyDB are stored in tables. Although it’s not necessary to create a table manually, doing so can help you organize your documents, especially when working with multiple tables.
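For instance, here is a minimal sketch of inserting into a named table (an illustration only, not the tutorial’s sample code):

```python
from tinydb import TinyDB

db = TinyDB("world_population.json")

# Documents go into a named table instead of the default table.
countries = db.table("countries")
countries.insert({"location": "Japan", "population": 123_000_000})

print(db.tables())  # {'countries'}
```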
To start, you create a script named create_db.py that initializes your first database and adds documents in several different ways. The first part of your script looks like this:
Read the full article at https://realpython.com/tinydb-python/ »
February 16, 2026 02:00 PM UTC
Quiz: TinyDB: A Lightweight JSON Database for Small Projects
In this quiz, you’ll test your understanding of the TinyDB database library and what it has to offer, and you’ll revisit many of the concepts from the TinyDB: A Lightweight JSON Database for Small Projects tutorial.
Remember that the official documentation is also a great reference.


