suitkaise
For technical info, see the technical info page.
suitkaise is a Python code library.
It's for developers of all skill levels, and was created with 3 main goals in mind: to make core parts of Python faster, clearer, and easier to use.
There are many things in Python that are difficult or nuanced for beginners, and many more that are just annoying, overwhelming, and time consuming for all developers.
I have taken a few of the most foundational and useful parts of modern Python and made them faster, clearer, and easier to use.
Every module I have made started as a "dream API".
"If it could just be like this..."
"If it worked like this..."
"I would use this so much more if it looked like this..."
I created things that I wished worked like the concepts I wrote. Then I went backwards to actually make them work.
suitkaise does
Think of the printing press, an invention that made the production of paper media faster, more standardized, and less prone to human error. People didn't have to write books by hand anymore, saving them a large amount of time and effort.
The result: the world was flooded with books, and the "Information Age" began.
There are many things in Python that need their own printing press to make using them faster, more standardized, and less prone to human error.
Parallel processing, Python-to-Python serialization, file path handling, and more.
suitkaise gives you these printing presses.
The name is inspired by "hacker laptops", where the user opens a briefcase and hacks some mainframe in 5 seconds. That's the level of speed and ease you get with suitkaise.
processing — Unlocks the full potential of parallel programming in Python.
60% of parallel processing is batch processing, where you process N items at once instead of just 1. processing gives you a pooling class that makes batch processing easy, with 3 standard pooling patterns.
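For comparison, here is what the batch-processing pattern looks like with only the standard library (shown with threads for portability; this is a concept sketch, not suitkaise's pooling API):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n: int) -> int:
    # the per-item work
    return n * n

# the batch-processing pattern: map one function over N inputs at once
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```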
The other 40% is creating long-running, complex subprocesses that do more than just look up data or compute something. Creating these is generally a nightmare even for experienced developers.
processing gives you the Skprocess class, which makes creating these easy, with lifecycle methods for setup, running, cleanup, and more. These classes include timing and error handling, and run the process internally, so you don't have to worry about managing the process yourself.
Finally, processing gives you a shared-resources class.
Every developer knows how to create a class instance and add objects to it.
And that's all you have to do here. Instantiate it, add objects to it, and pass it to your subprocesses. It ensures that everything syncs up and stays in sync for you. Even complex class instances can be added and used just as you would use them normally.
How? cucumber, the serialization engine that can handle a vast range of things that pickle, cloudpickle, and dill cannot, including complex, user-created class instances that would fail to serialize with the other options.
cucumber — Serialize anything. cucumber outperforms all competitors in coverage, almost entirely eliminating errors when converting to and from bytes. Locks, generators, file handles, and more are all covered. It is also faster than cloudpickle and dill for many simple types, and faster in most cases for the more complex types as well.
Why is this awesome? You don't have to worry about errors anymore. You now have access to a custom class, the objects you want to use in it but couldn't before, and the ability to just share data between processes without thinking, all powered by this engine. You don't even have to use the other modules to get an upgrade. This is just simply better.
paths — Everything path-related is much simpler.
It includes a path object that uses an auto-detected project root to normalize all of your paths for you. It is cross-platform compatible: a path made on Murphy's Mac will be the same as the same path made on Gurphy's Windows laptop.
It also includes a decorator that automatically streamlines all of your paths to a specific type, eliminating type mismatches entirely.
timing — Times your code with one line. timing gives you a timer class that is the core piece of this module. It powers the timethis decorator and a context manager, which let you time your code with one line.
Additionally, it collects far more statistical data than timeit, including mean, median, standard deviation, percentiles, and more.
circuits — Manage your execution flow more cleanly. circuits gives you two circuit-breaker patterns to manage your code. What separates them from other circuit-breaker libraries is their use in parallel processing.
Circuit — auto-resets after sleeping; great for rate limiting, resource management, and more
BreakingCircuit — stays broken until manually reset; great for stopping execution after a certain number of failures with extra control
Both are fully thread-safe.
circuits also works with cucumber, so you can share circuit-breaker state across process boundaries.
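The stay-broken-until-reset pattern can be sketched with the standard library. This is a concept demo only; the class and method names below are made up and are not suitkaise's implementation:

```python
import threading

class SimpleBreakingCircuit:
    """Concept sketch: trips after max_failures and stays broken until reset()."""

    def __init__(self, max_failures: int):
        self.max_failures = max_failures
        self.failures = 0
        self.broken = False
        self._lock = threading.Lock()  # keeps the counter thread-safe

    def record_failure(self) -> None:
        with self._lock:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.broken = True  # trip the circuit

    def allow(self) -> bool:
        # callers check this before doing the protected work
        with self._lock:
            return not self.broken

    def reset(self) -> None:
        # manual reset, as with a BreakingCircuit
        with self._lock:
            self.failures = 0
            self.broken = False

breaker = SimpleBreakingCircuit(max_failures=3)
for _ in range(3):
    breaker.record_failure()
print(breaker.allow())  # False
breaker.reset()
print(breaker.allow())  # True
```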
sk — Modify your functions and methods without changing their code. sk can be used as a decorator or a function, and adds special modifiers to your functions and methods.
.retry() — retry it when it fails
.timeout() — return an error if it takes too long
.background() — run it in the background and get the result later using Futures
.asynced() — get an async version of it if it has calls that block your code from running, using asyncio.to_thread()
.rate_limit() — limit the number of calls it makes per second
suitkaise is for the developer.
As previously mentioned, when I created suitkaise, I made the end goal first: what I myself as a developer would want to use.
All of suitkaise is thousands of iterations improving on this goal API, which was created to address different problems I myself have encountered as a developer.
Here are the problems that suitkaise was made to solve.
Parallel processing, or multiprocessing, is one of the essential concepts for modern software development.
But it is also a pain in the ass to set up.
Many users aren't just plugging the same function into multiple processes to make things go faster.
They might be using multiple subprocesses to make software run smoother, manage UI, gather real time data, manage resources or databases, and more.
These are all much more involved than just creating a processing pool to run a single function over different inputs. They require proper setup, coordination between processes, error handling, and teardown.
Trying to do all of this manually for each individual scenario is overwhelming and time consuming, even with Python's multiprocessing.Process class.
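To illustrate the boilerplate, here is the manual version of a worker with its own lifecycle, shown with threading.Thread so it runs anywhere; multiprocessing.Process has the same shape, plus the added pain of pickling arguments and results:

```python
import queue
import threading

# Manual lifecycle management: setup, work, cleanup, and result plumbing
# all crammed into one run() method, with a queue as the hand-off channel.
class ManualWorker(threading.Thread):
    def __init__(self, num_runs: int):
        super().__init__()
        self.num_runs = num_runs
        self._results: "queue.Queue[int]" = queue.Queue()

    def run(self) -> None:
        total = 0                 # setup
        for i in range(self.num_runs):
            total += i            # the actual work, looped by hand
        self._results.put(total)  # manual result hand-off

    def result(self) -> int:
        return self._results.get()

w = ManualWorker(num_runs=10)
w.start()
w.join()
print(w.result())  # 45
```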
So I made my own class.
Inherit from the Skprocess class to create your own custom processes.
from suitkaise.processing import Skprocess

class MyProcess(Skprocess):
    def __init__(self, num_runs: int):
        super().__init__()
        self.counter = 0
        self.process_config.runs = num_runs

    def __prerun__(self):
        # setup before the main part:
        # connect to databases, make API calls, read files
        pass

    def __run__(self):
        # the main part: write your code here
        # it repeats for you, no need to write looping code
        self.counter += 1

    def __postrun__(self):
        # clean up your work:
        # close connections, add results to attributes
        pass

    def __onfinish__(self):
        # clean up the process:
        # calculate summaries, save results to files, send emails
        pass

    def __result__(self):
        # return the result of the process
        # store your results as instance attributes and return them here
        return self.counter

    def __error__(self):
        # __result__() for when an error occurs
        return None
Everything is separated into pieces that make much more sense. You have set spaces for setup, your main work, and cleanup/teardown. And, everything follows simple class practices.
Control the process with simple methods:
p = MyProcess(num_runs=10)
# start the process
p.start()
# wait for the process to finish
p.wait()
# access the result
result = p.result
learn more ⟶
When you try to pickle an object, you might get an error like this:
TypeError: cannot pickle 'MyObject' object
After hours of debugging, these feel like slaps in the face.
So many essential objects in Python are not pickleable, even if you use custom picklers like cloudpickle or dill.
Your thread locks don't pickle.
Your database connections don't pickle.
Your functions don't always pickle.
Your loggers don't pickle correctly.
Your class objects don't pickle unless they are extremely basic.
Your circular references don't reconstruct correctly.
There are also so many other weird BS quirks that you have to account for, like locally-defined functions, lambdas, closures, and more.
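A quick standard-library demonstration of the problem:

```python
import pickle
import threading

# Locks and locally-defined lambdas are classic pickle failures.
errors = []
for obj in (threading.Lock(), lambda x: x + 1):
    try:
        pickle.dumps(obj)
    except (TypeError, AttributeError, pickle.PicklingError) as exc:
        errors.append(type(exc).__name__)

print(errors)  # both objects fail to serialize
```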
So, I made my own serialization engine that handles all of this: cucumber.
I wanted a serialization engine that could handle anything. I never wanted a pickling error again in my life.
But being able to serialize "anything" is a steep challenge. To prove that cucumber could do it, I needed a rival, an enemy, a final boss to defeat. One that, after winning, would let me say "I think I can serialize anything now."
So, I created the WorstPossibleObject.
Basically, "how can I make this as bad as possible?"
Then, I created an engine to beat it.
Then, I battled it thousands of times, forcing the engine to successfully serialize it and deserialize it multiple times per battle. Not a single error was allowed to occur.
Once the engine won, I felt confident that I could serialize anything. But there will always be some gap, some edge case, some exception not accounted for.
Therefore, I have written a special way for you to handle things yourself if need be.
__serialize__() and __deserialize__() methods in your classes will be used first by cucumber, before it falls back to the default handling.
You can use them to override the default serialization and deserialization behavior for your own objects.
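These hooks follow the same idea as pickle's __getstate__/__setstate__ protocol. For comparison, here is the stdlib version of that idea, making a lock-holding class picklable by hand (the exact suitkaise hook signatures may differ):

```python
import pickle
import threading

class Tracker:
    def __init__(self):
        self.count = 0
        self._lock = threading.Lock()  # not picklable on its own

    def __getstate__(self):
        # drop the unpicklable lock from the serialized state
        state = self.__dict__.copy()
        del state["_lock"]
        return state

    def __setstate__(self, state):
        # restore the state, then recreate the lock fresh
        self.__dict__.update(state)
        self._lock = threading.Lock()

t = Tracker()
t.count = 7
clone = pickle.loads(pickle.dumps(t))
print(clone.count)  # 7
```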
learn more ⟶
Do you ever pull a repo from your Windows PC at work and it doesn't work on your Mac at home?
Or, maybe your laptop at home is also Windows, but the project is placed in a different directory than the one you use at work.
And then everything breaks.
Right now, Python doesn't have truly consistent, standardized cross-platform path handling. While it isn't too hard to do manually, doing so opens the door for miscommunications, errors, and more.
paths ensures that you can't make these mistakes, and collapses a lot of the manual work into one line.
First off, there's the path object. It automatically detects the project root and uses it to normalize paths for you. These paths work cross-machine and cross-platform, as long as the project structure is the same.
No need to convert paths between operating systems.
No need to worry about where your project is located.
No manually converting paths to make them work correctly everywhere.
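For comparison, here is what that normalization looks like done by hand with stdlib pathlib; a sketch only, since you must supply the project root yourself instead of having it auto-detected:

```python
from pathlib import Path, PurePosixPath

def to_project_relative(path: str, project_root: str) -> str:
    # normalize an absolute path to a root-relative, forward-slash form
    # so the same string means the same thing on Windows, macOS, and Linux
    rel = Path(path).relative_to(project_root)
    return str(PurePosixPath(*rel.parts))

print(to_project_relative("/home/murphy/proj/data/file.txt", "/home/murphy/proj"))
# data/file.txt
```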
learn more ⟶
I used to hate having to write code to time things.
Set up a start time, stop time, calculate the difference, store it somewhere, calculate statistics manually...
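The manual boilerplate being replaced looks something like this (standard library only):

```python
import statistics
import time

durations = []
for _ in range(100):
    start = time.perf_counter()                    # set up a start time
    sum(range(1000))                               # the code being timed
    durations.append(time.perf_counter() - start)  # stop, diff, store

# calculate the statistics by hand
print(f"mean:   {statistics.mean(durations):.6f}s")
print(f"median: {statistics.median(durations):.6f}s")
print(f"stdev:  {statistics.stdev(durations):.6f}s")
```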
timing removes all of that. One decorator or context manager, and you get automatic timing with deep statistics — mean, median, standard deviation, percentiles, and more.
from suitkaise.timing import timethis
@timethis()
def my_function():
    do_work()

for _ in range(100):
    my_function()
print(my_function.timer.mean)
print(my_function.timer.percentile(95))
learn more ⟶
suitkaise with AI
Currently, AI agents like ChatGPT that you use with something like Cursor are not trained to use suitkaise.
That doesn't mean you can't use suitkaise with AI.
The docs are available for download through the CLI.
pip install suitkaise
Add a setup.sk file to your project root.
Run suitkaise docs from the terminal to download the docs to your project root.
Once you do this, AI agents will have access to the docs, including a detailed API reference for each module and its internal workings.
Use this prompt:
I am using suitkaise, a Python code library.
The docs for the lib are attached under suitkaise-docs.
Please read them and familiarize yourself with the library before continuing.
There are multiple ways to give feedback or report bugs. I am most likely to check the feedback forms first.
suitkaise social media account