Writing algorithms is the fun part; getting them to work in an actual production environment is usually a lot more tedious. You're juggling different projects, so code gets copy-pasted around, your structure has gone AWOL, and bugs keep popping up. Plus, the end users at the other end of the line can't work with your Python code: they need your data science algorithms wrapped in a user-friendly UI, something stable and reliable. What if we told you you could make it work for both you and the operator, while making the error rate plummet, saving you time and eliminating polluted code?

By creating this SDK, we've bridged the gap between hacky Jupyter notebooks and billion-dollar production environments. On your end, dive as deep into the code as you'd like. On the operator's end, everything's up and running, not a bug in sight.

Drawing on the experience of data scientists as well as software developers, we've built a platform with standardised algorithms and an extensive toolkit. We've streamlined the output of different ML frameworks, from Keras and TensorFlow to PyTorch and many others, building an SDK that chops up your pipeline into reusable blocks. It separates data science code from runtime code. It guarantees production-readiness by enforcing thorough, explicit typing. And it integrates with CI/CD. Here's how our SDK will solve six of your most pressing issues.

  1. If only there were a way to standardise algorithms and frameworks.
    Machine learning frameworks like TensorFlow, Keras and PyTorch are great, but they all expose different APIs, which makes them hard to integrate directly with other tools. Worse: even within one and the same framework, different algorithms have different ways of running a training job or performing inference. Annoying, to say the least.
    > What we did was write a generic wrapper that makes any ML framework fit your APIs (the first sketch after this list shows the idea).
  2. Can we skip the double code, please?
    Without a decent framework to turn code snippets into reusable blocks, you'll inevitably end up stuck with a lot of duplicated code. Far from efficient, considering that sharing or reusing code from other projects is time-consuming and error-prone.
    > With our SDK, you bundle your code into reusable packages we call “cells”. Their typed inputs and outputs let you connect cells into pipelines, a fail-safe way of working (see the second sketch after this list).
  3. And the code pollution, while we’re at it.
    Writing the algorithm is one thing; making it run smoothly and efficiently, and making sure it keeps doing so once it's in place, is another. Let's say you're looking to run your algorithm on AWS, Google Cloud, your local cluster, a Raspberry Pi, and an NVIDIA Jetson. Good luck writing a single piece of code that whirrs as smoothly on the Raspberry Pi as on a distributed supercomputer.
    > Frankly, we think you shouldn’t be bothered with how your algorithm is doing once you’ve written it. That’s why we’ve set up this platform to tackle these issues—so you won’t have to. We make sure your algorithm works, wherever it ends up.
  4. Also, is it possible to make sure it won't falter if it's fed something new?
    In most cases, you'll want to link up your algorithm to an HTTP or gRPC API, let it automatically ingest an RTSP stream, or maybe connect it to a Slack bot or a fancy dashboard. Then the user calls start coming in every time some new kind of input is needed, and you end up feeling like a help desk rather than a data expert.
    > Again: we think you shouldn't waste time on this. With pre-set solutions to all these issues, you don't have to worry about new inputs and what they will do to your algorithm. You can use that time for writing new algorithms (the third sketch after this list shows the kind of glue code you'd otherwise maintain by hand).
  5. My life would be a lot easier if I could test the algorithm in different environments.
    How excruciating to have to manually test every new bit of code in the interface. This eats up lots of time, not to mention that you have to go through the motions for every small adjustment you make.
    > Enter our testing modules and CI/CD integration. With every change you make to the code, we automatically run a series of tests, ensuring that whatever you've altered doesn't trip up your operator's environment in Robovision AI (the fourth sketch after this list shows what such a test looks like).
  6. It would be a real help to measure my algorithm’s performance, too.
    So you’ve coded the whole thing, and now you have no way of knowing if it’s actually doing well. Not precisely, at least. Is your algorithm processing ten images per second or a hundred? Who knows.
    > Once you’ve implemented your algorithm, our SDK lets you easily measure its performance and detect which lines of code are acting as a bottleneck. Problem solved.

If the above six won't do, here's one last argument to get you (or your boss) on board: our SDK and the platform combined mean you and the operator finally work in the same environment. They get a neat, nicely designed, user-friendly interface to label and train their applications. You get to work in Python, just like you're used to. In other words: you and your colleagues in operations will finally speak the same language, saving both ends a lot of lost-in-translation time.

Published on 25 January 2021