Tuesday, December 19, 2023

The Lost Balkan Tapes: a Christmas story

A couple of days ago, a nice thing happened that has made me very happy. It happened almost by accident. At work, we have a #music channel on Slack and some of us frequently share music that we like and recommend to the folks in the company. I really like those kinds of work-but-not-the-actual-work things. It helps develop a friendly organizational culture, and it is also a simple way to get to know each other better.


I was born and raised in the Stockholm area of Sweden, but my dad and his parents (my grandparents) came to Sweden in the late 1960s from former Yugoslavia. They, like many from the Balkans, Finland, Italy and Greece, got job offers from Swedish companies and decided to give it a try. They began manufacturing chocolate here in Stockholm. I don't know that much about their life over there before Sweden, other than the stories I heard many times when growing up.

One of my favorite stories is about my Grandfather and how he escaped from some kind of prison camp, set up during the World War II occupation of former Yugoslavia. He was only a teenager back then, but managed to run away from the guards into a forest - and then catch a passing train, on its way away from the camp. In my imagination, the train was moving fast and he jumped on it the same way Indiana Jones would. I remember him telling me and my brother about going undercover, calling himself Ivan Something-something when the train conductor asked who that kid from nowhere was. I loved that story and wanted to hear it over and over again.

Another favorite story was about my Grandmother. She used to be a singer in the 1950s and early 1960s and performed all over the country. I remember the day she told me about the famous people she used to sing for, such as the Ethiopian Emperor Haile Selassie! When I grew up, Reggae and the Rastafari culture was a big thing among us kids in the suburbs, and I certainly knew about Haile Selassie. My grandma had not only met him, but had also sung for him! Wow. As I understood it, she was popular, and for a period of time she was often hired to sing when the officials of the country expected a visit from abroad.

I heard only a few songs on cassette when we hung out in the apartment in Fisksätra (a concrete suburb in the Stockholm area where I have spent a lot of time). I remember the music sounding quite good. But honestly, as a kid and later a young adult, I wasn't really that much of a fan of Balkan Folk Music. Still, it was very cool to hear her sing. She occasionally sang some songs for us too, but time & smoking cigarettes had made her voice different from the recordings on those cassette tapes.

Years later, with both grandma and grandpa no longer around, I thought that the tapes were lost & gone forever. If I recall correctly, some of them were even accidentally overwritten. Oh no. I made some attempts to find information (and possibly music) online, but failed. I gave up hope of finding anything. This was many years ago.


Back to work. I decided to share some nice Reggae music in the Slack channel. I found the Gratitude Riddim when browsing my online music app, good stuff! The friends at work got inspired and also shared some Jamaican vibes, and we had a fun conversation going on there. Then, one person added a picture of Haile Selassie that took me right back to those childhood memories. So, naturally, I told them the story about my Grandmother and that she had sung for him. But sadly the music is gone, you know. My friend probably got curious, and I believe he just googled her name.

"Wow, very cool to hear about your grandma. Is this her?"
(with a link containing music)


What? Huh? No, wait. That can't be her. Or. Is it? Naw. Gotta be someone else. Then again, how many Folk Music singers from the Balkans named Ikonija Vujic can there be? After a while, I realized that it is in fact my Grandmother! He actually found about 15 songs. All of them beautiful & melancholic love songs. Old school Balkan Folk Music with the Tamburitza instrument. Is it the great Janika Balaž playing? This is just me guessing, but according to Wikipedia he lived in the exact same area where my grandparents (and my dad) used to hang out back in the day. They must have met at some point!

I can't think of a better Christmas gift than this, and I am forever grateful for these findings. It happened almost randomly, by accident. If I hadn't shared those Reggae songs in Slack, I would still be thinking that the music of my Grandmother was gone. But the Lost Balkan Tapes of Ikonija Vujic have been found again. Thanks to my friend at work and the enthusiast who has published a huge amount of Balkan music from the past. Thank you! 🙏

Wednesday, November 15, 2023

A Coding Copilot

When developing software, I spend most of the time thinking & talking about the problems to solve: the what, the why and the how. The thinking-part works best for me when I'm actually away from the desktop, such as walking outdoors or even when grabbing a cup of coffee & a banana by the office kitchen. 🍌

Talking is a really nice thing; we should try that more often, in combination with listening to what others have to say. Even when there's no human around to chat with, there is always the rubber duck. Speak to it!

Problem Solving

Mob programming takes the Thinking & Talking to the next level. Strangely enough, I am surprised almost every time by how great mob and pair programming is for software development. I tend to forget that. It's probably easy to fall back into old familiar behavior.

Once I have figured out what to try, the actual typing goes pretty fast for me. A big part of the productivity boost is inspiration and the joy of solving problems with programming. I think this is valid for most creative and problem-solving work.

Everyday practicing

I try hard every day to write clear & concise code. Practicing, learning ... and walking. I really like the functional approach to programming: separating data from behavior, actions from calculations. I should probably use GitHub Copilot or ChatGPT more, but I haven't yet found the need to go all-in (aren't the AI suggestions a bit verbose, or am I doing things wrong?). Those types of in-editor copilots are cool, definitely, and are for sure here to stay. Currently, I find more value in a different kind of automated guidance.
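A tiny sketch of what that separation can look like (the function names are my own, just for illustration):

```python
# A calculation: a pure function, easy to test and to try out in a REPL
def total(prices: list[float]) -> float:
    return sum(prices)


# An action: the side effect (printing) is kept in a thin outer layer
def print_receipt(prices: list[float]) -> None:
    print(f"Total: {total(prices):.2f}")
```

The calculation is trivial to test in isolation; the action is pushed out to the edge of the program.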


When working on my Open Source project, the Python tools for the Polylith Architecture, I am guided by tools like SonarCloud and CodeScene. I guess there is some AI quite heavily involved in there too. I especially appreciate the code reviews & even more the CodeScene long-term analysis of the effects of Tech Debt. It's like having a co-pilot, or even a teacher, guiding you during development to make you a better software developer than before.

This is different from the kind of review a team of humans usually does when sharing feedback on Pull Requests, or even during mob programming sessions.

Even if I sometimes disagree with what the tools report as being a problem, I go ahead and refactor the code anyway. Then, while doing the refactoring, I quickly realize that the code is actually transforming into something simpler and clearer than before. SonarCloud is more into the low-level details, and lets me know about code duplication or code smells. We don't want that, no. Better fix that too.

Coding Style

The feedback from these kinds of tools is helping me get closer to writing the type of code I want to write: minimalistic, readable and functional. It's a coding style that is also a very good fit for the Polylith Architecture, even if that thing isn't about how the code is written. You are encouraged to write in a readable & functional style by the structuring of the code, and by having all code at your fingertips, ready to experiment with in the REPL or in a Notebook.

CodeScene, SonarCloud and Polylith: all of them helping me in my journey to be a better Developer. Maybe I'll throw Copilot in there too eventually.

Top Photo by Jamie Templeton on Unsplash

Saturday, September 30, 2023

Software and Projects

How can we simplify things, to add value as early as possible?

Software Projects are a misconception within Software Development. When organizing work into projects, one might think that there is also a need to estimate the things to be done. Because the project itself is, by definition, something that will end - or, more likely, be delayed.

In my experience, estimation only adds anxiety and sometimes actually delays value. I have experienced the same thing with projects and fixed release dates.

Instead, we could flip it: how can we simplify, to add value for our users as early as possible?

Flip it!

In a team that I was part of some time ago, the Product Managers challenged us by asking those kinds of questions. We (the Developers) were intimidated at first, because we had the idea of building the perfect kind of feature, and usually we spent time figuring out & covering as many scenarios as we could imagine. The not-so-great thing with that approach is that it will likely increase the cost, because it would take more time and effort to deliver value to the users of the product.

Think Different

The Product Management folks kept challenging us to think different, and we began to find ways of delivering smaller parts of the product within days (instead of months). We wouldn't be able to release an entire feature, of course. Far from it. Instead, it would be something - sometimes just a tiny addition - that people could use to simplify their daily work life. They actually did that; we saw it ourselves. There was value added, within a couple of days of development!

Smaller parts

There was no need for us (the Dev Team) to assign Fibonacci numbers or estimated hours to the tasks we identified. We stopped doing those stressful Big Bang releases of huge chunks of code. Instead, we found ways to simplify tasks - splitting them into several smaller parts - to make it possible to push the work to production within days. In my opinion, Projects, Estimation and Deadlines rarely add value. Why are there so many organizations out there still doing that?

Top photo by Patrick Fore on Unsplash

Thursday, August 3, 2023

Kafka messaging with Python & Polylith

This article isn't about Franz Kafka or his great novella The Metamorphosis, where the main character one day realizes that he has transformed into a Human-size Bug.

It is about a different kind of Kafka: Apache Kafka, with examples on how to get started producing & consuming messages with Python. All this in a Polylith Monorepo (hopefully without any of the bugs from that Franz Kafka novella).

This article can be seen as Part III of a series of posts about Python & Polylith. Previous ones are:

  1. GCP Cloud Functions with Python and Polylith
  2. Python FastAPI Microservices with Polylith

If you haven't heard about Polylith before, it's about Developer Experience, sharing code, and keeping things simple. You will have all your Python code in a Monorepo, and develop things without the common Microservice trade-offs. Have a look at the docs: Python tools for the Polylith Architecture

Edit: don't know what Kafka is? Have a look at the official Apache Kafka quickstart.

I will use the confluent-kafka library and have read up on the Confluent Getting Started Guide about writing message Producers & Consumers.

The Polylith Architecture encourages you to build features step-by-step, and you can choose from where to begin. I have an idea about producing events with Kafka when items have been stored or updated in a database, but how to actually solve it is a bit vague at the moment. What I do know is that I need a function that will produce a message based on input. So I'll begin there.

All code in a Polylith Workspace is referred to as bricks (just like when building with LEGO). I'll go ahead and create a Kafka brick. I am going to use the Python tooling for Polylith to create the brick.

Note: I already have a Workspace prepared, have a look at the docs for how to set up a Polylith Workspace. Full example at: python-polylith-example
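Creating a component is a one-liner with the poly tool (here using kafka as the component name, following the same command pattern as for bases and projects):

```shell
poetry poly create component --name kafka
```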

The poly tool has created a kafka Python package, and placed it in the components folder. It lives in a top namespace that is used for all the bricks in this Polylith Workspace. I have set it to example here, but you would probably want an organizational name or similar as your top namespace.


There are two types of bricks in Polylith: components and bases. A component is where you write the actual implementation of something. A base is the entry point of an app or service, such as the entry point(s) of a FastAPI microservice or the main function of a CLI. In short: a base is a thin layer between the outside world and the components (containing the features). I will develop the base for my new Kafka feature in a moment.

A Producer and a Consumer

For this example kafka component, I will use code from the Confluent Python guide (with a little bit of refactoring).

def produce(topic: str, key: str, value: str):
    producer = get_producer()

    producer.produce(topic, value, key, callback=_acked)


Full example at: python-polylith-example

I'll go ahead and write a message consumer while I'm at it, and decide to also put the Consumer within the kafka component.

def consume(topic: str, callback: Callable):
    consumer = get_consumer()
    consumer.subscribe([topic])

    try:
        while True:
            msg = consumer.poll(1.0)

            if msg is None:
                continue

            if msg.error():
                print(msg.error())
                continue

            topic, key, value = parse(msg)
            callback(topic, key, value)
    except KeyboardInterrupt:
        pass
    finally:
        consumer.close()

The kafka component now looks like this after some additional coding & refactoring:


Running a Kafka server locally

I continue following along with the Confluent guide to run Kafka locally, and have added a Docker Compose file. I am storing that one in the development folder of the Polylith workspace.


I can now try out the Producer and Consumer in a REPL, making sure messages are correctly sent & received without any Kafkaesque situations (👴 🥁).

Producing a message

I already have a messaging API in my python-polylith-example repo, with endpoints for creating, reading, updating and deleting data. This is the actual service that I want to extend with Kafka messaging abilities. The service is built with FastAPI and the endpoints are found in a base.

@app.post("/message/", response_model=int)
def create(content: str = Body()):
    return message.create(content)

I'll continue the development by adding the newly created kafka component. While developing, I realize that I need to transform the data into simple data structures - and remember that I already have a component that can be used here. This is where Polylith really shines: developing these kinds of smaller bricks makes it easy to re-use them in other places - just by importing them.
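As a rough sketch of what such a transformation could look like (the function name and shape are my own assumptions, not taken from the example repo), turning a stored item into a Kafka key and a JSON value:

```python
import json


def to_kafka_message(data: dict) -> tuple[str, str]:
    # Hypothetical helper: a simple string key and a JSON-serialized value
    key = str(data["id"])
    value = json.dumps({"content": data["content"]})
    return key, value
```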

Consuming messages

I have the kafka component with a consumer in it, and now it's time to create a new base: the entry point for my kafka consumer.

This is the code I add to the base. Note the re-use of another already existing Polylith component (the log).

from example import kafka, log

logger = log.get_logger("Consumer-app-logger")

def parse_message(topic: str, key: str, value: str):
    logger.info(f"Consumed message with topic={topic}, key={key}, value={value}")

def main():
    topic = "message"
    kafka.consumer.consume(topic, parse_message)

Adding a project

I now have all code needed for this feature. What is left is to add the infrastructure for it, the actual deployable artifact. This command will create a new entry in the projects folder.


I'm adding the dependencies and needed packages to the project-specific pyproject.toml file. But I am lazy, and will only add the base in the packages section - and then run poetry poly sync. It will add all needed bricks for this project. The poly tool has some magic in it, yes.

When deploying, just use the build-project command to package it properly without any relative paths, and use the built wheel to deploy it where you want it running. That's all!

In this article, I have written about the usage of Polylith when developing features and services, with Kafka messaging in particular. Adding new features & re-using existing code is a simple thing when working in a Polylith workspace. Don't hesitate to reach out if you have feedback or questions.

Additional info

Full example at: python-polylith-example
Docs about Polylith: Python tools for the Polylith Architecture

Top photo by Thomas Peham on Unsplash

Friday, July 21, 2023

Python FastAPI Microservices with Polylith

I like FastAPI. It's a good choice for developing Python APIs. FastAPI has a modern feel to it, and is also easy to learn. When I think about it, I could say the same about Polylith.

Modern Python Microservices

Building Python services with FastAPI and Polylith is a pretty good combination. With FastAPI, you'll have a modern framework including tools like Pydantic. The simplicity of the Polylith Architecture - using bases & components - helps us build simple, maintainable, testable, and scalable services.

You will have a developer experience without the common Microservice struggles, such as code duplication or maintaining several different (possibly diverging) repositories.

Getting started

Okay, let's build something. I will write a simple CRUD service that will handle messages. Since I'm starting from a clean sheet, I can choose where to begin the actual coding. I'd like to start with writing a function in a message module, and will figure out the solution while coding. The Polylith tooling support (totally optional) will help me add new code according to the structure of the architecture. I already have a Workspace prepared; have a look at the docs for how to set up a Polylith Workspace. Full example at: python-polylith-example

Great! Now I have a namespace package that is ready for coding. I'll continue with writing a create function. While working on it, I realize I need a persistent storage. I decide to try out SQLAlchemy and the Session thing.

def create(content: str) -> int:
    with Session.begin() as session:
        data = crud.create(session, content)
        return data.id

Again, I will use the poly tool to create a database component, where I will put all the SQLAlchemy setup along with a model and the function where data is added to the DB.

def create(session: Session, content: str) -> Message:
    data = Message(content=content)

    session.add(data)
    session.flush()

    return data

In this example, I will go for SQLite to keep things simple. Use your favorite data storage here.

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

DATABASE_URL = "sqlite:///./sql_app.db"
DATABASE_ARGS = {"check_same_thread": False}

engine = create_engine(DATABASE_URL, connect_args=DATABASE_ARGS)
Session = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
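For completeness, here is a minimal sketch of what the Message model could look like (the column names are my assumptions, based on the create function above; in the component, Base would be imported from the setup module rather than redefined):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Message(Base):
    __tablename__ = "messages"

    id = Column(Integer, primary_key=True, index=True)
    content = Column(String)
```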

I now have the basics done and am ready to write the endpoint.

I could (of course) also have started from the endpoint, if I wanted to. Polylith allows you to postpone design decisions until you are ready for it.

Before continuing, I will add the components to the packages section of the workspace pyproject.toml:

packages = [
    {include = "my_namespace/database", from = "components"},
    {include = "my_namespace/message", from = "components"},
]

The Base Plate

API endpoints are recommended to live in what is called a base in Polylith. The idea is very much like LEGO: building things with bricks and placing them on a base plate. Yes, the FastAPI endpoint is the base plate. I will create a new base with the poly tool, and add the FastAPI code in there. It will import the message component I wrote before, and consume the create function.

@app.post("/message/", response_model=int)
def create(content: str = Body()):
    return message.create(content)

Just as with the components, I will add the base to the packages section of the pyproject.toml:

{include = "my_namespace/message_api", from = "bases"},

I can now try out the API in the browser (I have already added the FastAPI dependency). Developing & trying out code is normally done from the development area (i.e. the root of the workspace). This is where you have all the things set up, all code and all dependencies in one single virtual environment. This is a really good thing for a nice Developer Experience.

Reusing existing code

I realize that I need something that will log what is happening in the service. Luckily, there's already a log component there that was added before, for a different service. I'll go ahead and import the logger into my endpoint and start logging.
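The log component's internals aren't shown in this post; a minimal version could look something like this (a sketch, assuming a get_logger function like the one used when consuming Kafka messages):

```python
import logging


def get_logger(name: str) -> logging.Logger:
    # A thin wrapper around the standard library logging module
    logger = logging.getLogger(name)
    if not logger.handlers:
        handler = logging.StreamHandler()
        formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```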

✨ This is where Polylith really shines. ✨ The components that you develop are ready to be re-used right from the start. There's no need to extract code into libraries or do anything additional. All previously written components are already there, and you just add them to your project. You might think that the logger is a silly little thing? Yes, it is. But this is only a simple example of code that is shared between projects in a Polylith Workspace.

The Project

There's one important part missing, and that is the project. A project is the artifact that will be deployed. It is a package of all project-specific code, along with project-specific configuration. Create it using the poly tool, and add the base to the project-specific pyproject.toml. Note the relative path to the bases folder.

packages = [
    {include = "my_namespace/message_api", from = "../../bases"},
]

I'm feeling lazy and will use the poly tool to add the other needed bricks for this specific project (components and bases are often referred to as bricks). The poly tool will analyze the imports and add only the ones needed for this project when running the poetry poly sync command. There's also a check command available that will report any missing bricks.

You still need to add the third-party dependencies, though. There are limits to the magic of the poly tool 🪄 🧙. To keep track of what is needed in the bricks, you can actually run poetry poly check again. It will notify you about the usage of SQLAlchemy in the bricks. You can use the poly check command during development to guide you to the dependencies actually needed for this specific project.

Add sqlalchemy to the [tool.poetry.dependencies] section manually. You can update the lock-file for the project by running the builtin Poetry command:

poetry lock --directory projects/message_fastapi_project

All project-specific dependencies should now be set up properly! When packaging the project, simply run poetry build-project (with --directory projects/message_fastapi_project if you run the command from the workspace root). You can use the built wheel to deploy the FastAPI service. It contains all the needed code, packaged and re-arranged without any relative paths.

What's in my Workspace?

Check the state of the workspace with the poetry poly info command. You will get a nice overview of the added projects, bricks and the usage of bricks in each project.

In this article, I have written about the usage of Polylith when developing services. Adding new services is a simple thing when working in a Polylith workspace, and the tooling is there for a nice Developer Experience. Even if it sometimes might feel like a superpower, it's basically only about keeping things simple. Don't hesitate to reach out if you have feedback or questions.

Additional info

Full example at: python-polylith-example
Docs about Polylith: Python tools for the Polylith Architecture

Top photo by Yulia Matvienko on Unsplash

Monday, July 17, 2023

GCP Cloud Functions with Python and Polylith

Running a full-blown 24/7 service is sometimes not a good fit for a feature. Serverless functions (also known as Lambdas or Cloud Functions) are an alternative for those single-purpose kinds of features. The major cloud providers have their own flavors of Serverless, and Cloud Functions is the one on Google Cloud Platform (GCP).

GCP Cloud Functions & Polylith

Polylith is a good choice for this. The main use case for Polylith is to support having one or more Microservices, Serverless functions or apps, and being able to share code & deploy in a simple and developer-friendly way.

One thing to notice is that GCP Cloud Functions for Python expects a certain file structure when packaging the thing to deploy. There should be a main.py in the top directory and it should contain the entry point for the function. Here's an example, using the functions-framework from GCP to define the entry point.

  import functions_framework

  @functions_framework.http
  def handler(request):
      return "hello world"

You'll find more about Python Cloud Functions on the official GCP docs page.

In addition to the main.py, there should also be a requirements.txt in the top folder, defining the dependencies of the Cloud function. Other than that, it's up to you how to structure the code.


How does this fit in a Polylith workspace?

I would recommend creating a base, containing the entry point.

    poetry poly create base --name my_gcp_function

Never heard about Polylith? Have a look at the documentation.

Polylith will create a namespace package in the bases directory. Put the handler code in the already created core.py file.

Make sure to export the handler function in the __init__.py of the base. We will use this one later, when packaging the cloud function.

    from my_namespace.my_gcp_function.core import handler

    __all__ = ["handler"]

If you haven’t already, create a project:

    poetry poly create project --name my_gcp_project

Just as with any other Polylith project, add the needed bricks to the packages section.

 packages = [{include = "my_namespace/my_gcp_function", from = "../../bases"}]

 # this is specific for GCP Cloud Functions:
 include = ["main.py", "requirements.txt"]

Packaging a Cloud Function

To package the project as a GCP Cloud Function, we will use the include property to make sure the needed main.py and requirements.txt files are included. These ones will be generated by a deploy script (see further down in this article).

Just as we normally would to package a Polylith project, we will use the poetry build-project command. Before running that command, we need to perform some additional tasks.

Polylith lives in a Poetry context, using a pyproject.toml to define dependencies, but we can export it to the requirements.txt format. In addition to that, we are going to copy the "interface" for the base (__init__.py) into a main.py file. This file will be put in the root of the built package. GCP will then find the entry point handler function.

To summarize:

  • Export the dependencies into a requirements.txt format
  • Copy the interface into a main.py
  • Run build-project

Depending on how you actually deploy the function to GCP, it is probably a good idea to also rename the built wheel to "function.zip". A wheel is already a zipped folder, but with a specific file extension.

 # export the dependencies
 poetry export -f requirements.txt --output requirements.txt

 # copy the interface into a new main.py file
 cp ../../bases/my_namespace/my_gcp_function/__init__.py ./main.py

 # build the project
 poetry build-project

 # rename the built wheel
 mv dist/my_gcp_project-0.1.0-py3-none-any.whl dist/function.zip

That's all you need. You are now ready to write & deploy GCP Cloud Functions from your Polylith workspace. 😄

Additional info

Full example at: python-polylith-example
Docs about Polylith: Python tools for the Polylith Architecture

Top photo by Rodion Kutsaiev on Unsplash

Sunday, April 30, 2023

Dad Jokes & Python Developer Experience

"What do you call a Frenchman wearing sandals?"

"Philippe Flop." 🥁 👴 😆

Developer Experience, what does that even mean?

For me, a huge part is being able to quickly write, try out and experiment with code. In Python, that could be about having the dependencies and paths already set up, to easily run snippets in a REPL or in the IDE, without it crashing because of some missing dependencies or configs.

Testable code

When I develop a new feature, I usually want to run & evaluate the code very early in the process. Maybe I'm working on parsing something, transforming or filtering out data: that's when I want to evaluate the code, while writing and figuring out how to solve the actual problem. I do this to verify the output and get ideas about how to refactor the code. I practice a thing called REPL Driven Development, and have written about it before. The workflow is a form of TDD, and the refactoring part comes naturally with this kind of workflow.

Reusable & Composable code

Another important thing for me is to have already existing code nearby and available to use again. I usually think of Python code as building blocks, like LEGO, and try to have that in mind when writing new functions and adding features. I have learned that from functional programming and it is helping me to solve problems using a step-by-step approach.

With building blocks, code is composable and is ready for re-use from start.
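A made-up illustration of the building-blocks idea (these functions are my own examples, not from any specific project): small, single-purpose functions compose into a pipeline.

```python
def parse(raw: str) -> list[str]:
    # Split a comma-separated string into trimmed items
    return [item.strip() for item in raw.split(",")]


def keep_short(items: list[str]) -> list[str]:
    # Filter: keep items of five characters or fewer
    return [item for item in items if len(item) <= 5]


def shout(items: list[str]) -> list[str]:
    # Transform each item to upper case
    return [item.upper() for item in items]


# Compose the blocks into a small feature
result = shout(keep_short(parse("lego, brick, composition")))
```

Each block is reusable on its own, and swapping one out doesn't affect the others.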

"What did Yoda say when he saw himself in 4K?"

"HDMI." 🥁 👴 😆

Delay Design Decisions

I don't think it is that important to put code in the right place from the start (such as in what kind of service or app, domain, repo or folder). At least it isn't as important as writing code that can easily be moved from one place to another. This is very related to the building blocks & LEGO way mentioned above. This approach will likely lead to less wasted time & less design-upfront planning. It is easier to figure out the proper place to put the code eventually, while iterating and developing. In my opinion, it is perfectly alright to rename or move around code while developing. It shouldn't be a difficult thing to do.

I usually like to dive into a feature and start coding, rather than starting off by deciding on the name of a repo, or what type of service the feature should exist in. These things can be decided later, when learning more about what to actually build.

"Where do dads store their dad jokes?"

"In the Dadabase." 🥁 👴 😆

A Developer Friendly Architecture

The Polylith Architecture targets all of these things. During development you have all code available for experimentation, and you compose already existing code with new when building features. All code in a Polylith workspace is referred to as bricks, and you use them just as when building something with LEGO. Pick the ones you need, along with writing new ones, and combine them into features. A brick can be used in several apps, services - or projects, as it is called in Polylith.

During development, you have one single virtual environment, with all paths and dependencies already set up. You also have a separate section especially made for trying out and experimenting. It is very similar to the scratch files in PyCharm, but they are versioned and included in the repo. These ones are very useful for evaluating code.

You can immediately dive into writing code and push the decision about deployment into the future if you like. Add the project to the Polylith workspace, using a simple command, when you are ready for it.

A Dad Joke microservice

To get some insight into how Polylith and the Python tooling change the way you develop and improve the Developer Experience, I have recorded an improvised video with live coding. Phew, it is difficult to speak & code at the same time. 😅

In this video, I'm taking the first steps into developing some kind of Dad Joke Service and use the Development Environment of Polylith to figure out how to build it.

Top photo by Toa Heftiba on Unsplash