28 Dec 2023
I noticed Elasticsearch was a bit slow when using cosineSimilarity on a dense_vector.
Before thinking about it at all, I rather naively put all my data in only four indices.
The largest of these indices was easily over 200 GB in size, mostly occupied by 512-float vectors.
These vectors were already indexed for cosine similarity.
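For reference, a mapping for such vectors could look roughly like this sketch (the index and field names are made up; only dims matches my 512-float vectors):

import requests

# create a hypothetical index with a dense_vector field
# indexed for cosine similarity
requests.put(
    "http://localhost:9200/vectors-000001",
    json={
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "dense_vector",
                    "dims": 512,
                    "index": True,
                    "similarity": "cosine",
                }
            }
        }
    },
)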
So what made the queries slow?
Under the hood, Elasticsearch runs on Lucene, which uses one thread per shard for queries.
By default, Elasticsearch uses only one shard per index; before Elasticsearch 7, the default was five.
In my case this meant that my 128-core Elasticsearch cluster was using only 4 threads for a search!
Therefore, one of the simplest ways to use more shards (and cores) is to increase the number of indices.
At query time this doesn’t have to be a problem, since you can use index patterns to search over multiple indices at once.
However, at some point adding more indices will stop helping.
The results from each of the indices need to be merged by Elasticsearch, which will add overhead.
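For example, with data spread over hypothetical indices named vectors-000001 through vectors-000128, one search can still cover all of them. A rough sketch (the field name and query vector are made up):

import requests

query = {
    "size": 10,
    "query": {
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                "source": "cosineSimilarity(params.q, 'embedding') + 1.0",
                "params": {"q": [0.1] * 512},  # your 512-float query vector
            },
        }
    },
}

# the index pattern fans the search out over every matching index,
# so each shard gets its own thread
response = requests.post("http://localhost:9200/vectors-*/_search", json=query)
print(response.json()["hits"]["hits"])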
As a simple rule of thumb I would say: take the number of cores in your cluster and split your data into that many indices, while keeping the size of each index somewhere between 1 and 10 GB.
This maximizes the number of cores used for a search, while keeping the overhead of merging the results relatively small.
If your indices end up a lot smaller than 1 GB, you probably don’t need as many indices as cores.
If your indices are still a lot larger than 10 GB, and your queries are not quick enough, you might want to increase the core count of your Elasticsearch cluster.
You can estimate the size of your indices in advance by inserting a representative sample of documents.
You can check the size of the index on disk (with curl localhost:9200/_cat/indices for example), divide that size by the number of documents, and multiply by the total number of documents you want to index. This gives an idea of the size of the index you will end up with.
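In Python, that estimate could look like this sketch (assuming a sample index called vectors-sample on localhost; the target document count is made up):

import requests

# _cat/indices can return JSON, with sizes in plain bytes
stats = requests.get(
    "http://localhost:9200/_cat/indices/vectors-sample",
    params={"format": "json", "bytes": "b"},
).json()[0]

size_per_doc = int(stats["store.size"]) / int(stats["docs.count"])
total_docs = 10_000_000  # however many documents you plan to index
print(f"projected index size: {size_per_doc * total_docs / 1e9:.1f} GB")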
To summarize: while Elasticsearch recommends index sizes between 10 and 50 GB, I found that the performance of vector search in particular was better when the indices were between 1 and 10 GB.
01 Dec 2023
With the 8.11.0 release of the Python package eland it got a lot easier to dump an Elasticsearch index to csv.
All the code you need is this:
import eland
eland.DataFrame('http://localhost:9200', 'your-index').to_csv('your-csv.csv')
Before the 8.11.0 release, this method already existed in eland, but it needed as much memory as the size of your dataset.
Now it streams the data batch-wise into a csv.
Most of the code to achieve this was written by @V1NAY8.
I only did some minor edits to get it to pass the pull request review.
While testing this, eland successfully dumped hundreds of gigabytes of data without any issues, all without me having to bother with scroll requests myself.
30 Nov 2023
To get ready for Advent of Code 2023, I continued where I stopped last year: day 9.
Here’s me struggling for 1 hour and 40 minutes spread across two days, because I tried to be clever.
For viewing pleasure it has been sped up.
What goes wrong here is that I incorrectly assumed that a negative number raised to the power of zero would be -1.
Quick maths turned out not to be good maths.
The following bit of experimentation led me to believe that.
result = -5 ** 0
assert result == -1
You may have already guessed that the order of operations fooled me. The ** operator is executed before the - operator is applied to the result.
The behaviour is different if I do the same using a variable:
x = -5
result = x ** 0
assert result == 1
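In other words, -5 ** 0 is parsed as -(5 ** 0), while x ** 0 contains no unary minus for the exponentiation to jump ahead of:

assert -5 ** 0 == -(5 ** 0) == -1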
The solution that eventually led to the correct answer is in the code block below.
After submitting the correct answer, I did some cleanup:
- deleted some unreachable code
- linted it
- removed some debug lines
- added some more comments
- deleted the x ** 0 and y ** 0 since they are pointless now
from advent_of_code import *
import requests_cache
requests_cache.install_cache()
test_input = """R 4
U 4
L 3
D 1
R 4
D 1
L 5
R 2"""
input_9 = fetch_input(9)
# input_9 = test_input
movements = [x.split(' ') for x in input_9.splitlines()]
movements = [(x[0], int(x[1])) for x in movements]
visited = set()
head = (0, 0)
tail = (0, 0)
directions = {
    # What coordinates change for each movement
    # x, y
    "U": (1, 0),
    "D": (-1, 0),
    "L": (0, -1),
    "R": (0, 1),
}

def modify_location(location, direction):
    if isinstance(direction, str):
        change = directions[direction]
    else:
        change = direction
    return location[0] + change[0], location[1] + change[1]

def direction_to_move(head, tail):
    x = head[0] - tail[0]
    y = head[1] - tail[1]
    # head and tail are at the same location, don't move
    if x == 0 and y == 0:
        return 0, 0
    # head and tail are less than one square apart (including diagonally)
    elif max(abs(x), abs(y)) == 1:
        return 0, 0
    # head and tail are too far apart, decide which direction to move the tail
    else:
        if x < 0:
            x = x ** 0 * -1
        else:
            x = 0 if not x else 1
        if y < 0:
            y = -1
        else:
            y = 0 if not y else 1
        return x, y

tail_visited = set()
for direction, length in movements:
    for _ in range(length):
        head = modify_location(head, direction)
        tail = modify_location(tail, direction_to_move(head, tail))
        # keep track of where the tail has been:
        tail_visited.add(tail)

submit_answer(level=1, day=9, answer=len(tail_visited))
The lessons I learned: don’t try to be clever, and check your maths.
27 Oct 2023
requests_cache is nice. ratelimit is nice.
But they don’t play nicely together yet: if a request comes from the cache that requests_cache maintains, ratelimit doesn’t “know” that and will still slow your script down for no reason.
That’s why I published the ratelimit_requests_cache module. It offers a rate limiter similar to the one in the ratelimit module, but invocations only count towards the rate limit if the request could not be served from the cache.
The usage is the same as for the normal ratelimit package. You decorate a method with sleep_and_retry and a limiting decorator, in this case limits_if_not_cached:
import requests
import requests_cache
from ratelimit import sleep_and_retry
from ratelimit_requests_cache import limits_if_not_cached

@sleep_and_retry
@limits_if_not_cached(calls=1, period=1)
def get_from_httpbin(i):
    return requests.get(f'https://httpbin.org/anything?i={i}')

# Enable requests caching
requests_cache.install_cache()

# Notice that only the first ten requests will be ratelimited to 1 request / second
# After that, it's a lot quicker since requests can be served from the cache
# and the ratelimiter does not engage
for i in range(100):
    get_from_httpbin(i % 10)
    print(i)
This rate limiter is ideal for when an API call is expensive, in either time or money.
HTTP requests only have to be performed once, and you are less likely to run into HTTP 429 (Too Many Requests) responses.
This rate limiter checks whether a request was served from the cache or not by checking the .from_cache attribute of the Response. That means that if you have a different caching mechanism, you could also set this .from_cache boolean attribute and use the decorator for other purposes just as easily.
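As a minimal sketch of that idea (limits_if_not_cached and sleep_and_retry are real, but the dict cache, the SimpleNamespace result, and expensive_computation are made-up stand-ins):

from types import SimpleNamespace

from ratelimit import sleep_and_retry
from ratelimit_requests_cache import limits_if_not_cached

my_cache = {}

def expensive_computation(key):
    # stand-in for a slow or costly call
    return key * 2

@sleep_and_retry
@limits_if_not_cached(calls=1, period=1)
def cached_lookup(key):
    # flag cache hits so they don't count towards the rate limit
    if key in my_cache:
        return SimpleNamespace(value=my_cache[key], from_cache=True)
    my_cache[key] = expensive_computation(key)
    return SimpleNamespace(value=my_cache[key], from_cache=False)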
To start using it, get it from PyPI:
python3 -m pip install ratelimit_requests_cache
04 Oct 2023
Mitmproxy is just what it says on the tin: a proxy that can act as a man-in-the-middle.
By default it will re-sign HTTPS traffic with its own root CA.
It can also modify other requests in-place, using Python hooks.
In this post I show how I add a main to mitmproxy hook scripts themselves.
This way both your hook and the mitmproxy invocation are contained within one file.
I think a Python module without an if __name__ == '__main__' is always a bit of a missed opportunity.
Even if a module is nested deep in your application, it might still be a suitable place for some examples of how to use the code in the module.
Normally, when you run mitmproxy and want to set some hooks, you supply the script as a command line argument to the CLI tool:
mitmdump -q -s intercept.py
# or for the Command Line UI:
mitmproxy -s intercept.py
But when running mitmproxy from the command line, you will not actually be running the __main__ of the mitmproxy module.
The place where the CLI tool actually lives in a code base can usually be found in setup.py or pyproject.toml. There is often a parameter or section called scripts, console_scripts, or something similar, depending on the packaging tools. For mitmproxy, it was the project.scripts section in pyproject.toml:
[project.scripts]
mitmproxy = "mitmproxy.tools.main:mitmproxy"
mitmdump = "mitmproxy.tools.main:mitmdump"
mitmweb = "mitmproxy.tools.main:mitmweb"
In the code below I import the method that contains the CLI tool. I also use Python’s special __file__ variable, which contains the full filename of the script it appears in.
from mitmproxy import http
from mitmproxy.tools.main import mitmdump

def websocket_message(flow: http.HTTPFlow):
    """Hook on websocket messages"""
    last_message = flow.websocket.messages[-1]
    print(last_message.content)

if __name__ == "__main__":
    mitmdump(
        [
            "-q",  # quiet flag, only script's output
            "-s",  # script flag
            __file__,  # use the same file as the hook
        ]
    )
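Now the script doubles as its own launcher; running it starts mitmdump with the hook from that same file installed:

python3 intercept.py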
This way of adding a main is a bit similar to what I did earlier with streamlit.
That solution turned out to have some unforeseen implications: Streamlit was convinced a form was nested in itself.
So, stay tuned for the trouble that this mitmproxy hack might cause later.