Making mitmproxy more easily debuggable

Mitmproxy is just what it says on the tin: a proxy that can act as a man-in-the-middle. By default it will re-sign HTTPS traffic with its own root CA. It can also modify requests and responses in-flight, using Python hooks.

In this post I show how I add a main to mitmproxy hook scripts themselves, so that both the hook and the mitmproxy invocation are contained in one file. I think a Python module without an if __name__ == '__main__' is always a bit of a missed opportunity. Even if a module is nested deep inside your application, it can still be a suitable place for some examples of how to use the code in that module.

Normally, when you run mitmproxy and want to set some hooks, you supply the script as a command line argument to the CLI tool.

mitmdump -q -s intercept.py
# or for the Command Line UI:
mitmproxy -s intercept.py

But when running mitmproxy from the command line, you are not actually running the __main__ of the mitmproxy module. Where the CLI tool actually lives in a code base can usually be found in setup.py or pyproject.toml: there is often a parameter or section called scripts, console_scripts, or something similar, depending on the packaging tool used. For mitmproxy, it is in pyproject.toml, in the project.scripts section:

[project.scripts]
mitmproxy = "mitmproxy.tools.main:mitmproxy"
mitmdump = "mitmproxy.tools.main:mitmdump"
mitmweb = "mitmproxy.tools.main:mitmweb"

In the code below I import the function that contains the CLI tool. I also use Python's special __file__ variable, which contains the full filename of the script it appears in.

from mitmproxy import http
from mitmproxy.tools.main import mitmdump


def websocket_message(flow: http.HTTPFlow):
    """Hook that fires on every websocket message, in either direction."""
    # The flow holds the whole message history; the newest message is the last one
    last_message = flow.websocket.messages[-1]
    print(last_message.content)


if __name__ == "__main__":
    mitmdump(
        [
            "-q",      # quiet flag, only script's output
            "-s",      # script flag
            __file__,  # use the same file as the hook
        ]
    )
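
With this in place, the hook script can be started directly, and since it is now an ordinary Python entry point, debuggers and profilers attach to it like to any other script:

python3 intercept.py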

This way of adding a main is a bit similar to what I did earlier with Streamlit. That solution turned out to have some unforeseen implications: Streamlit became convinced a form was nested in itself. So stay tuned for whatever trouble this mitmproxy hack might cause later.

OpenAI's CLIP inference in C# using ONNX Runtime

CLIP is a model developed by OpenAI back in 2021 that can create embeddings for both text and images. These embeddings live in the same vector space, so they can be compared across the two modalities. Unlike some other OpenAI models, the weights are freely available.
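
Comparing across modalities then boils down to a vector similarity measure, usually cosine similarity. A minimal Python sketch, with toy four-dimensional vectors standing in for the 512-dimensional embeddings the ViT-B/32 model used below produces:

import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for real CLIP embeddings:
image_vector = [0.1, 0.9, 0.2, 0.4]
text_vector_cat = [0.2, 0.8, 0.3, 0.5]
text_vector_dog = [0.9, 0.1, 0.4, 0.0]

# The text vector with the highest similarity is the closest "caption"
print(cosine_similarity(image_vector, text_vector_cat))  # ~0.98
print(cosine_similarity(image_vector, text_vector_dog))  # ~0.26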

The official implementation released by OpenAI is in Python. I, however, needed to calculate CLIP vectors in C#. For the C# implementation, I built on the work of josephrocca, who ported the model from Torch to ONNX. Although he initially ported it to use in JavaScript, we can reuse these weights in C#. The nice thing about the ONNX Runtime is that it is available for many programming languages, and the models and weights are compatible between them.

I’m planning to create a proper library here: clip.dll, which will eventually also support vectorizing text. For the time being, however, the implementation is just the following snippet:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Text.Json;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;
using SixLabors.ImageSharp.PixelFormats;


class CLIP {
    static void Main(string[] args) {
        // Download the model weights if we don't have them in the current directory
        if (!File.Exists("clip-image-vit-32-float32.onnx"))
        {
            WebClient webClient = new WebClient();
            webClient.DownloadFile(
                "https://huggingface.co/rocca/openai-clip-js/resolve/main/clip-image-vit-32-float32.onnx",
                @"clip-image-vit-32-float32.onnx"
            );
        }

        // Load the model
        // Model sourced from: https://huggingface.co/rocca/openai-clip-js/tree/main
        var clipModel = new InferenceSession("clip-image-vit-32-float32.onnx");

        // Load an image specified as a command line argument
        var image = Image.Load<Rgba32>(File.ReadAllBytes(args[0]));

        // Calculate the shortest side, and use that to extract a square from the center
        // Known in other image libraries as Centercrop
        // AFAIK Centercrop is not available in Sixlabors.ImageSharp, so we do it manually
        var smallestSide = Math.Min(image.Width, image.Height);
        image.Mutate(x => x.Crop(
            new Rectangle(
                (image.Width - smallestSide) / 2,
                (image.Height - smallestSide) / 2,
                smallestSide,
                smallestSide
        )));

        // Resize to 224 x 224 (bicubic resizing is the default)
        image.Mutate(x => x.Resize(224, 224));

        // Create a new array for 1 picture, 3 channels (RGB) and 224 pixels height and width
        var inputTensor = new DenseTensor<float>(new[] {1, 3, 224, 224});

        // Put all the pixels in the input tensor
        for (var x = 0; x < 224; x++)
        {
            for (var y = 0; y < 224; y++)
            {
                // Normalize from bytes (0-255) to floats (constants borrowed from CLIP repository)
                inputTensor[0, 0, y, x] = Convert.ToSingle((((float) image[x, y].R / 255) - 0.48145466) / 0.26862954);
                inputTensor[0, 1, y, x] = Convert.ToSingle((((float) image[x, y].G / 255) - 0.4578275 ) / 0.26130258);
                inputTensor[0, 2, y, x] = Convert.ToSingle((((float) image[x, y].B / 255) - 0.40821073) / 0.27577711);
            }
        }

        // Prepare the inputs as a named ONNX variable, name should be "input"
        var inputs = new List<NamedOnnxValue> {NamedOnnxValue.CreateFromTensor("input", inputTensor)};

        // Run the model, and get the output back as an Array of floats
        var outputData = clipModel.Run(inputs).ToList().Last().AsTensor<float>().ToArray();

        // Write the array serialized as JSON
        Console.WriteLine(JsonSerializer.Serialize(outputData));
    }
}
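
Assuming this snippet is the Program.cs of a .NET console project with the ONNX Runtime and ImageSharp NuGet packages referenced, running it could look like this (photo.jpg being any local image); it should print the image embedding, a JSON array of 512 floats for this model, to stdout:

dotnet run -- photo.jpg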

Updated 2024-03-29: Added cropping a square out of the center to match the reference implementation

Why pip and Homebrew make a dangerous cocktail

pip install will happily replace anything in /usr/local/

pip, a Python package manager, can install two types of Python packages: A source distribution and a binary distribution.

With a source distribution, installing is simply running its setup.py, which often contains a call to setuptools.setup. You could argue that pip installing source distributions is RCE by design.

Binary distributions (usually wheels) are not intended to run code during installation. Pip simply copies their files into place, using logic defined in pip itself. This should make them less dangerous.
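
Since a wheel is essentially a zip archive plus metadata, you can peek at what would be copied before installing anything. A small sketch (the wheel filename is a placeholder):

import zipfile

# A wheel is a zip archive; its file list is what pip would copy on install
with zipfile.ZipFile("example_package-1.0-py3-none-any.whl") as wheel:
    for name in wheel.namelist():
        print(name)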

Homebrew is “The Missing Package Manager for macOS”, and probably the most popular way for macOS users to get Python: Python 3.* was brew installed around 500,000 times in the last 30 days. “Homebrew installs packages to their own directory and then symlinks their files into /opt/homebrew (on Apple Silicon).” Note that this is /usr/local/ on Intel Macs. “Homebrew won’t install files outside its prefix and you can place a Homebrew installation wherever you like.”

In this post I’ll assume a Python 3.9 installation, performed with brew install python@3.9. I’ll demonstrate that a malicious Python package can replace files in the Homebrew prefix directory, by default /usr/local/ on Intel Macs and /opt/homebrew/ on ARM Macs.

By defining the following setup.py, we can even replace the python3.9 executable itself. Here I replace python3.9 with an executable that simply outputs Not Python, to demonstrate the issue:

from setuptools import setup

setup(name='malware',
      version='3.2.1',
      description='malware',
      url='https://example.com',
      author='',
      author_email='[email protected]',
      # Every file in /usr/local can be poisoned by including data_files.
      # If they already existed, the executable flag is preserved
      # This is just one example of a file that can be replaced:
      data_files=[("Cellar/python@3.9/3.9.17_1/bin", ["python3.9"])],
      packages=[],
      install_requires=[])


If a data file has the same path as an existing file, and the existing file has its executable bits set, those bits remain set!

Furthermore, real-world attacks will be much more subtle than the one above. An attacker could patch some malware into a dynamic library while preserving its original functionality. Packages like lief help do that with ease.

Of course, before posting this here, I tried to find out how well known this issue is. After some discussion with the people running the security mailing lists at Python and the Python Packaging Authority, the conclusion is: There is no fix for this, at least not without breaking someone’s legitimate uses.

It is possible to think of some countermeasures to reduce the risk, however. One countermeasure could be screening packages for executables and libraries in places they don’t typically belong. This can be done in three steps:

  1. Download the package and its dependencies using pip download, without installing them. The --only-binary=:all: flag is important, because pip download will otherwise run a source distribution's setup.py to find out its dependencies.
mkdir /tmp/scan
python3 -m pip download --only-binary=:all: -d /tmp/scan package_name
  2. Apply the following script to detect any executables or libraries in places they don’t belong (using libmagic):
import glob
import zipfile

import magic  # brew install libmagic && python3.9 -m pip install python-magic
from tqdm import tqdm  # python3.9 -m pip install tqdm

for wheel_path in tqdm(glob.glob("/tmp/scan/*.whl")):
    wheel = zipfile.ZipFile(wheel_path)
    for file in wheel.filelist:
        # this is an indication that data was added using data_files:
        if '.data/data/' in file.filename:
            # let libmagic find out what it is:
            magic_guess = magic.from_buffer(wheel.open(file.filename).read())
            # change this when not on macOS to something relevant to your platform:
            if 'Mach-O' in magic_guess:
                print(wheel_path, file.filename, magic_guess)

Note that in this example I only check for Mach-O libraries and executables. This reduces false positives: if a package author sets include_package_data=True, there will be lots of Python files in the data directory as well. This scan also does not check for shell scripts or other types of executables. To cast a wider net, check for the words 'executable' or 'library' in the output of libmagic.
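
As a rough sketch, the Mach-O check above could be swapped for a broader helper along these lines (looks_suspicious is just an illustrative name; expect more false positives):

import magic  # python3.9 -m pip install python-magic

def looks_suspicious(buffer: bytes) -> bool:
    """Flag anything libmagic describes as an executable or a library."""
    magic_guess = magic.from_buffer(buffer)
    return 'executable' in magic_guess or 'library' in magic_guess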

  3. Inspect the output of step 2, and decide whether you still want to run pip install on the target package and its transitive dependencies.

Of course, it’s best to combine this scan-before-you-install with some long-standing best practices:

Firstly, avoid installing pip source distributions when possible, using the --only-binary=:all: parameter.

Secondly, follow the recommendations here, and configure Homebrew to use a directory that is not on the PATH. This means disregarding Homebrew’s own documentation, which warns that it might be inconvenient not to have the prefix set to /usr/local.

To summarize: installing things always introduces an inherent risk, but the way Homebrew’s Python is configured might pose an additional risk.

Git knapsack; dealing with commit size constraints

Many commercial git servers have limitations on the size of files, commits and pushes to repositories. Typically they enforce file size limits of around 25-100 MB.

When I tried to add an HLS-split video to a GitHub Pages website, I hit this limit. None of my files exceeded GitHub's maximum file size, but combining them all in a single commit and push did exceed the push size limit. The solution is clear: they need to be added using multiple commits and pushes. (The better solution in many cases is Git LFS.)

python3 -m pip install git-knapsack
git knapsack

git knapsack is a simple script that goes over all untracked files and packs them into commits until the limit (currently hardcoded at 30 MB) is reached. It then pushes the changes and continues with the next batch.

Here’s my initial version. There’s a lot to improve, but it gets the job done. Pull requests are always welcome in the bartbroere/git-knapsack repository.

"""
`git_knapsack.py`

Knapsack untracked files in a git repository in many commits, so that each commit does not exceed 30 MB.
Currently the knapsacking algorithm is extremely naive, and it does not expose any custom git features.
Effectively, this script performs git add, git commit and git push.

Note that it also will commit and push any untracked file.
If you run git status before this command and see anything you don't want committed,
either delete it or add it to the .gitignore file.

If any single file exceeds the git server's file limit or commit size limit, this script will not be able to help you.

The dependencies of this script are gitpython and tqdm.
"""
import os

from git import Repo
from tqdm import tqdm

repository = Repo(os.path.curdir)
untracked_files = repository.untracked_files

commit_size = 0
untracked_file_batch = []
for untracked_file in tqdm(untracked_files):
    current_file_size = os.stat(untracked_file).st_size
    if commit_size + current_file_size > 1024 ** 2 * 30 and untracked_file_batch:  # keep commits below 30 MB
        repository.index.add(untracked_file_batch)
        repository.index.commit("Knapsack into multiple commits")
        # For many hosts, pushing after each commit is required.
        # Not only the commit and file size can be limited,
        # but often also the size of a push over HTTPS has a size limit
        origin = repository.remote('origin')
        origin.push()
        untracked_file_batch = [untracked_file]  # start a new batch with the current file
        commit_size = current_file_size  # reset the commit size
    else:
        untracked_file_batch.append(untracked_file)
        commit_size += current_file_size

# Commit and push any files remaining in the final batch
if untracked_file_batch:
    repository.index.add(untracked_file_batch)
    repository.index.commit("Knapsack into multiple commits")
    origin = repository.remote('origin')
    origin.push()

If this script is installed as git-knapsack in a directory that is on the PATH, it is available system wide as git knapsack. This is because git looks for an executable named git-<subcommand> on the PATH when it does not recognize a subcommand (like knapsack).
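
The pip-installed package presumably provides exactly such an executable via a console script entry, similar to the project.scripts section shown in the mitmproxy post above. A hypothetical entry (the module and function name are assumptions, not the actual published layout):

[project.scripts]
git-knapsack = "git_knapsack:main"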

This is just a first draft. Some of the things to improve are:

  • accepting command line arguments
  • forwarding these arguments to add, commit and push respectively
  • adding a command line argument to make the size configurable

(Subjectively) better Streamlit filtering

A common use case for me when creating a dashboard in Streamlit is filtering the displayed data. Some of the examples I read on the Streamlit blog were quite “code heavy”: although their behaviour was nice enough, they all needed custom classes or methods, resulting in many lines of code. The code wasn’t Pythonic enough for my taste.

After some trial and error, I came up with a solution. It only needs one additional function per filter, which can be an anonymous lambda function if the logic is simple enough. Combined with pandas’ apply, each filter defaults to True when it is left empty. This keeps the filters intuitive to use while avoiding lots of logic in classes and methods.

Here’s the solution, as proposed to the official Streamlit documentation in streamlit/docs#709:

Live filtering of a dataset can be achieved by combining st.dataframe with input elements like select_slider, text_input or multiselect. In the example below, a sample DataFrame is filtered using these three elements. We can write custom filtering logic using the apply method provided by pandas. The custom logic is defined using anonymous lambda functions, which default to True if a filter is not used. This ensures that it is not mandatory to provide a value for every filter.

import pandas
import streamlit as st

# Some sample data:
employees = pandas.DataFrame([
    {"Name": "Ava Reynolds", "Age": 38, "Skills": ["Python", "Javascript"]},
    {"Name": "Caleb Roberts", "Age": 29, "Skills": ["juggling", "karate", "Python"]},
    {"Name": "Harper Anderson", "Age": 51, "Skills": ["sailing", "French", "Javascript"]}
])

# Create an input element and apply the filter to the DataFrame with employees
age_input = st.sidebar.select_slider("Minimum age", options=range(0, 100))
age_filter = employees["Age"] >= age_input  # >= so the selected minimum age itself is included

# Filter the name field, but default to True if the filter is not used
name_input = st.sidebar.text_input("Name")
name_filter = employees["Name"].apply(lambda name: name_input in name if name_input else True)

# Filter the skills, but default to True if no skills are selected
# Options contains all unique values in the multilabel column Skills
skills_input = st.sidebar.multiselect("Skills", options=employees["Skills"].explode().unique())
skills_filter = employees["Skills"].apply(
    # We check whether any of the selected skills are in the row, defaulting to True if the input is not specified
    # To check whether all of the selected skills are there, simply replace `any` with `all`
    lambda skills: any(skill in skills for skill in skills_input) if skills_input else True
)

# Apply the three different filters and display the data
# Since the default when the filter is not used is True, we can simply use the & operator
employees_filtered = employees[age_filter & name_filter & skills_filter]
st.dataframe(employees_filtered, hide_index=True)
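
Assuming the snippet is saved as app.py (the filename is up to you), the dashboard can then be started with:

streamlit run app.py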