Friday, January 3, 2025

Better Python Developer Productivity with RDD

"REPL Driven Development is an interactive development experience with fast feedback loops”

I have written about REPL Driven Development (RDD) before, and I use it in my daily workflow. You will find setups for Python development in Emacs, VS Code and PyCharm in this post from 2022. That's the setup I have used since then.

But there's one particular thing that I have missed ever since I began with RDD in Python. I learned this interactive, REPL-driven way of writing code when developing services and apps with Clojure. When you evaluate code in Clojure - such as function calls or vars - the result of the evaluation nicely pops up as an overlay in your code editor, right next to the actual code. This is great, because that's also where the eyes are and what the brain is currently focused on.

The setup from my previous post will output the evaluated result in a separate window (or buffer, as it is called in Emacs). Still within the code editor, but in a separate view. That works well and has improved my Python developer productivity a lot, but I don't like the context switching.

Can I improve this somehow?

I've had that question in the back of my mind for quite some time. My Elisp skills are unfortunately not that great, and I think that has been a blocker for me. I've managed to write my own Emacs config, but that's about it. Until now. During the Christmas holidays I decided to try learning some Emacs Lisp, to be able to develop something that resembles the great experience from Clojure development.

I used an LLM to find out how to do this, and along the way I learned more about what's in the language. I'm not a heavy LLM/GPT/AI user at all. In fact, I rarely use it. But here it made sense to me, and I have used it to learn how to write and understand code in this particular language and environment.

I have done rewrites and a lot of refactoring of the result on my own (my future self will thank me). The code is refactored from a few, quite large and nested Lisp functions into several smaller and logically separated ones. Using an LLM to just get stuff done and move on without learning would be depressing. The same goes for copy-pasting code from StackOverflow, without reflecting on what the code actually does. Don't do that.

Ok, back to the RDD improvements. I can now do this, with a new feature added to my code editor:

  • Selecting a variable, and inspecting what it contains by evaluating it and displaying the result in an overlay.
  • Selecting a function, executing it and displaying the evaluated result.
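
For example, with a buffer containing something like this, selecting data (or the function call on the last line) and evaluating it will display the result in an overlay, right next to the code. A made-up example:

data = {"name": "Ada", "level": 42}


def describe(d: dict) -> str:
    return f"{d['name']} is at level {d['level']}"


describe(data)  # the overlay displays: 'Ada is at level 42'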

The overlay content is syntax highlighted as Python, and will be rendered across several rows when there's a lot of data.

The actual code evaluation is still performed in the in-editor IPython shell. But the result is extracted from the shell output, formatted and rendered as an overlay. I've chosen to also truncate the result if it's too large. The full result will still be printed in the Python shell anyway.
The Emacs Lisp code does this in steps:

  1. Add a hook for a specific command (the elpy-shell-send-buffer-or-region command). The Emacs shortcut is C-c C-c.
  2. Capture the contents of the Python shell.
  3. Create an overlay with the evaluated result, based on the current cursor position.
  4. Remove the overlay when the cursor moves.

This is very much adapted to my current configuration, and I guess the real-world testing of this new addition will happen after the holidays, starting next week. So far, so good!

Future improvements?

I'm currently looking into the possibilities of starting an external IPython or Jupyter kernel session, and how to connect to it from within the code editor. I think that could enable even more REPL Driven Development productivity improvements.

You'll find the Emacs Lisp code at GitHub, in this repo, where I store my current Emacs setup.

Top Photo by Nicolai Berntsen on Unsplash

Sunday, December 22, 2024

Introducing python-hiccup

"All you need is list, set and dict"

Write HTML with Python

Python Hiccup is a library for representing HTML using plain Python data structures. It's a Python implementation of the Hiccup syntax.

You create HTML with Python, using list or tuple to represent HTML elements, and dict to represent the element attributes. The work on this library started out as a fun coding challenge, and is now evolving into something useful for Python dev teams.

Basic syntax

The first item in the list is the element. The rest is attributes, inner text or children. You can define nested structures or siblings by adding lists (or tuples if you prefer).

["div", "Hello world!"]
Using the html.render function of the library, the output will be HTML as a string: <div>Hello world!</div>
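
A minimal runnable example:

from python_hiccup.html import render

render(["div", "Hello world!"])  # returns '<div>Hello world!</div>'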

Adding id and classes: ["div#foo.bar", "Hello world!"]
The HTML equivalent is: <div id="foo" class="bar">Hello world!</div>

If you prefer, you can define the attributes using a Python dict as an alternative to the compact syntax above: {"id": "foo", "class": "bar"}
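
Following the original Hiccup convention, the attribute dict goes right after the element name. A sketch of what that would look like (assuming python-hiccup follows the same convention):

["div", {"id": "foo", "class": "bar"}, "Hello world!"]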

Writing a nested HTML structure, using Python Hiccup:
["div", ["span", ["strong", "Hello world!"]]]
The HTML equivalent is:
<div><span><strong>Hello world!</strong></span></div>
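
Siblings are added in the same way. A small sketch, based on the syntax described above:

["div", ["span", "Hello "], ["span", "world!"]]

The HTML equivalent is:
<div><span>Hello </span><span>world!</span></div>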

Example usage

Server side rendering with FastAPI

Using the example from the FastAPI docs, but without the inline HTML. Instead, using the more compact and programmatic approach with the Python-friendly hiccup syntax.

from python_hiccup.html import render

from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()


def generate_html_response():
    data = ["html",
           ["head", ["title", "Some HTML in here"]],
           ["body", ["h1", "Look ma! HTML!"]]]

    return HTMLResponse(content=render(data), status_code=200)


@app.get("/items/", response_class=HTMLResponse)
async def read_items():
    return generate_html_response()

PyScript

Add python-hiccup as any other third-party dependency in the pyscript.toml config file: packages = ["python-hiccup"]

Write the HTML rendering in your PyScript files:

from pyweb import pydom
from python_hiccup.html import render

pydom["div#main"].html = render(["h1", "Look ma! HTML!"])

That's it!

python-hiccup aims to make HTML rendering programmatic, simple and readable. I hope you will find it useful. The HTML in this blog post was written using python-hiccup.


Top photo by Susan Holt Simpson on Unsplash

Tuesday, August 27, 2024

Simple Kubernetes in Python Monorepos

"Kubernetes, also known as K8s, is an open source system for automating deployment, scaling, and management of containerized applications."
(from kubernetes.io)

Setting up Kubernetes for a set of Microservices can be overwhelming at first sight.

I'm currently learning about K8s and the ecosystem of tooling around it. One thing that I've found difficult is the actual K8s configuration and how the different parts relate to each other. The YAML syntax is readable, but I find it hard to understand how to structure it - impossible to edit without having the documentation close at hand. When I first got the opportunity to work with Python code running in Kubernetes, I realized that I had to put extra effort into understanding what's going on in there.

I was a bit overwhelmed by what looked like a lot of repetitive and duplicated configuration. I don't like that. But there's a tool called Kustomize that can solve this, by incrementally constructing configuration objects with snippets of YAML.

Kustomize is about managing Kubernetes objects in a declarative way, by applying environment-specific transformations to a base setup. You can replace, merge or add the parts of the configuration that are specific to the current environment. It reminds me of how we write reusable Python code in general. The latest versions of the Kubernetes CLI - kubectl - already include Kustomize. It used to be a separate install.

Microservices

From my experience, the most common way of developing Microservices is to isolate the source code of each service in a separate git repository. Sometimes, shared code is extracted into libraries and put in separate repos. This way of working comes with tradeoffs. With the source code spread out in several repositories, there's a risk of having duplicated source code. Did I mention I don't like duplicated code?

Over time, it is likely that the services will run different versions of tools and dependencies, potentially also different Python versions. From a maintainability and code quality perspective, this can be a challenge.

YAML Duplication

In addition to having the Python code spread out in many repos, a common setup for Kubernetes is to do the same thing: having the service-specific configuration in the same repo as the service source code. I think it makes a lot of sense to have the K8s configuration close to the source code. But with the K8s configuration spread across separate repos, the tradeoffs are very much the same as for the Python source code.

For the YAML in particular, it is even likely that the configuration will be duplicated many times across the repos. A lot of boilerplate configuration. This can lead to unnecessary extra work when you need to update something that affects many Microservices.

One solution to the tradeoffs with the source code and the Kubernetes configuration is: Monorepos.

K8s configuration in a Monorepo

A Monorepo is a repository containing source code and multiple deployable artifacts (or projects), i.e. a place where you would have all your Python code and where you would build & package several Microservices from. The purpose of a Monorepo is to simplify code reuse, and to use the same developer tooling setup for all code.

The Polylith Architecture is designed for this kind of workflow (I am the maintainer of the Python tools for the Polylith Architecture).

While learning, struggling and trying out K8s, I wanted to find ways to improve the Configuration Experience by applying the good ideas from the Developer Experience of Polylith. The goal is to make K8s configuration simple and joyful!

Local Development

You can try things out locally with developer tools like Minikube. With Minikube, you will have a local Kubernetes to experiment with, test configurations and to run your containerized microservices. It is possible to dry-run the commands or apply the setup into a local cluster, by using the K8s CLI with the Kustomize configs.

I have added examples of a reusable K8s configuration in the Python Polylith Example repo. This Monorepo contains different types of projects, such as APIs and event handlers.

The K8s configuration is structured in three sections:

  • A basic setup for all types of deployments, i.e. the config that is common for all services.
  • Service-type specific setup (API and event handler specific).
  • Project-specific setup.

All of these sections have overlays for different environments (such as development, staging and production).

As an alternative, the project-specific configuration could also be placed in the top /kubernetes folder.
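
To give an idea, a simplified layout could look something like this (the folder names here are hypothetical, and not necessarily the exact structure of the example repo):

kubernetes/
  base/                    (common for all types of deployments)
    overlays/
      development/
      staging/
      production/
  services/
    api/                   (service-type specific)
    event-handler/
  projects/
    my-api/                (project-specific setup)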

I can run kubectl apply -k to deploy a project into the Minikube cluster, using the Kustomize configuration. Each section adds things to the configuration that is specific for the actual environment, the service type and the project.
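
Something like this, where the path is a hypothetical example (adding --dry-run=client will print the result without applying anything to the cluster):

kubectl apply -k kubernetes/projects/my-api/overlays/development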

The base, overlays and services parts aren't aware of any specific project. The project-specific things are defined in the project section.

Using a structure like this will make the Kubernetes configuration reusable, with almost no duplication. Changing any common configuration only needs to be done in one place, just as with the Python code - the bricks - in a Polylith Monorepo.

That’s all. I hope you will find the ideas and examples in this post useful.




Top photo by Justus Menke on Unsplash

Sunday, May 12, 2024

Pants & Polylith

But who is Luke and who is R2?
"Pants is a fast, scalable, user-friendly build system for codebases of all sizes"
"Polylith helps us build simple, maintainable, testable, and scalable backend systems"

Can we use both? I have tried that out, and here are my notes.

Why?

Because The Developer Experience.

Developer Experience is important, but what does that mean? For me, it is about keeping things simple. The ability to write, try out and reuse code without any context switching. By using one single setup for the REPL and the IDE, you will have everything at your fingertips.

The Polylith Architecture solves this by organizing code into smaller building blocks, or bricks, and separating code from the project-specific configurations. You have all the bricks and configs available in a Monorepo. For Python development, you create one single virtual environment for all your code and dependencies.

There is also tooling support for Polylith that is useful for visualizing the contents of the Monorepo, and for validating the setup. If you are already into Pantsbuild, the Polylith Architecture might be the missing Lego bricks you want to add for a great Developer Experience.

Powerful builds with Pants

Pantsbuild is a powerful build system. The pants tool resolves all the dependencies (by inspecting the source code itself), runs the tests and creates distributions in isolation. The tool also supports the common Python tasks such as linting, type checking and formatting. It also has support for creating virtual environments.

Dude, where's my virtual environment?

In the Python Community, there is a convention to name the virtual environment in a certain way, usually .venv, and to create it at the project root (this will also likely work well with the defaults of your IDE).

The virtual environment created by Pants is placed in a dists folder, and further down in a Pants-specific folder structure. I found that the created virtual environment doesn't seem to include custom source paths (I guess that would be what Pants calls roots).

Custom source paths are important for an IDE to locate the Python source code. Maybe there are built-in ways in Pantsbuild to solve that already? Package management tools like Poetry, Hatch and PDM have support for configuring custom source paths in the pyproject.toml, and also create virtual environments according to the Python Community conventions.

Note: If you are a PyCharm user, you can mark a folder as a source root manually and it will keep that information in a cache (probably a .pth file).

Example code and custom scripts

I have created an example repository: a Monorepo using Pantsbuild and Polylith. You will find Python code organized according to the Polylith Architecture, along with the Pantsbuild configurations that make it possible to use both tools. In the example repo, I have added a script that adds source paths - based on the output from the pants roots command - to the virtual environment created by Pantsbuild. This is accomplished by adding a .pth file to the site-packages folder. For convenience, the script will also create a symlink to a .venv folder at the root of the repo.
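
The core of that script is roughly this (a simplified sketch; the actual script lives in the example repo, and the site-packages location depends on where Pants created the virtual environment):

import subprocess
from pathlib import Path


def add_source_roots(site_packages: Path) -> None:
    """Write the source roots reported by Pants into a .pth file."""
    output = subprocess.run(
        ["pants", "roots"], capture_output=True, text=True, check=True
    ).stdout

    # Each line in a .pth file is a path that Python appends to sys.path at startup.
    roots = [str(Path(line).resolve()) for line in output.splitlines() if line.strip()]

    (site_packages / "source_roots.pth").write_text("\n".join(roots) + "\n")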

Having the virtual environment properly set up, you can use the REPL (my favorite is IPython) with full access to the entire code base:

source .venv/bin/activate
ipython

With an activated virtual environment, you can also use all of the Polylith commands:

poly create
poly info
poly libs
poly deps
poly diff
poly check
poly sync

Pants & Polylith

Pantsbuild has a different take on building and packaging artifacts compared to other tools I've used. It has support for several languages and setups. Some features overlap with what's available in the well-known tooling in the Python community, such as Poetry. Some parts diverge from the common conventions.

Polylith has a different take on sharing code, and also has some overlapping features. Polylith is a Monorepo Architecture, with tooling support for visualizing the Monorepo. From what I've learned so far, the checks and code inspection features are the things you will find in both Pants and Polylith.

Pants operates on the file level, Polylith on the brick level.

My gut feeling after learning about it and by experimenting, is that Pantsbuild and Polylith share the same basic vision of software development in general, and I have found them working really well together. There are some things I would like to be a better fit, such as deciding what goes into the Pants-specific BUILD files vs the project-specific pyproject.toml files.

Maybe I should develop a Pants Polylith plugin to fix that part. 🤔
How does that sound to you?




Top Photo by Studbee on Unsplash

Saturday, April 13, 2024

Write Less Code, You Must

An aspect of Python Software Development that is often overlooked is Architecture (or Design) at the namespace, module & function level. My view on Software Development in general is that it is important to try hard to write code that is Simple, and Easy to move from one place to another.

When the code is written like this, it becomes less important if a feature was added in Service X when a better fit would be Service Y, looking at it from a high-level Architectural perspective. All you need to do is move the code to the proper place, and you're all good. However, this requires that the actual code is movable: i.e. having the features logically separated into functions, modules and namespace packages.

Less Problems

There's a lot of different opinions about this, naturally. I've seen it in several public Python forums, and been surprised by the reactions to Python code with (too) few lines in it. How is it even possible to have too little code?

My take on this in general is Less code is Less Problems.

An example

def my_something_function():
    # Validation

    # if valid
    # else do something
    ...  # python code here

    # Checking

    # if this
    # elif that
    # elif not this or not that
    # else do_something
    ...  # python code here

    # Data transformation

    # for each thing in the things
    #    do a network call and append to a list
    ...  # python code here

    # Yay, done
    return the_result

This type of function - when all of those things are processed within the function body - is not very testable. A unit test would likely need a bunch of mocking, patching and additional boilerplate test data code. Especially when there are network calls involved.

My approach on refactoring the code above would be to first identify the different tasks within this controller type of function, and begin by extracting each task into separate functions. Ideally these would be pure functions, accepting input and returning output.

At first, I would put the functions within the same module, close at hand. Quite quickly, the original function becomes a whole lot more testable, because the extracted functions can now easily be patched (my preference is using pytest monkeypatch). This approach would be my interpretation of developing software towards a clean code ideal. There is no need for a Dependency Injection framework or any unnecessarily complex OOP-style hierarchy to accomplish it.
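
A sketch of what the extracted functions could look like (with hypothetical names, and heavily simplified):

def validate(data: dict) -> dict:
    # pure function: return the validated data, or raise
    ...


def check(data: dict) -> list:
    # pure function: the if/elif/else branching, returning the things to process
    ...


def fetch_details(things: list) -> list:
    # the network calls, isolated in one place - easy to monkeypatch in a test
    ...


def my_something_function(data: dict) -> list:
    valid = validate(data)
    things = check(valid)
    return fetch_details(things)

Each function can now be test-run individually, and only the network-calling one needs to be patched in a unit test.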

In addition to testability, the Python code becomes runnable and REPL-friendly. You can now refactor, develop and test-run the individual functions in the REPL. This is a very fast workflow for a developer. Read more about REPL Driven Development in Python here.

With the features living in separate isolated functions, you will likely begin to identify patterns:

"- Hey, this part does this specific thing & could be put in that namespace"

When moving code into a namespace package, the functions become reusable. Other parts of the application - or, if you have a Monorepo containing several services - can now use one and the same source code. The same lines of code, located in a single place of the repo. You will likely structure the repo with many namespace packages, each one containing one or a couple of modules with functions that ideally do one thing. It kind of sounds like the Unix philosophy, doesn't it?
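
As a tiny illustration, with hypothetical names: once the validation functions live in a namespace package, any service or app in the Monorepo can import them from one and the same place:

# my_namespace/validation/core.py
def validate(data: dict) -> dict:
    ...

# in any service within the Monorepo:
from my_namespace.validation.core import validate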

This is how I try to write code on a daily basis, at work and when developing Open Source things. I use tools like SonarCloud and CodeScene to help me keep going in this direction. I've written about that before. The Open Source code that I focus on these days (Polylith) has 0% Code Duplication, 0% Code Smells and a long-term Code Quality score of about 9.96. The remaining 0.04 is a deliberate decision by me, and is because of endpoints having 5+ input arguments. It makes sense to me to keep it like that there, but not in functions within the app itself, where an options object is a better choice.

This aspect of Software Development is, from my point of view, very important. Even more important than the common Microservices/Events/REST/CQRS debates when Architecture is the topic of discussion. These were my Saturday afternoon reflections, and I thank you for reading this post. ☀️

Top Photo by Remy Gieling on Unsplash