Manage complex AI projects with the ease of a notebook

Manage complex AI projects and build machine vision pipelines with the ease of a notebook but with the power of a production deployment. Our SDK provides all the tools you need to handle the diverse and dynamic landscape of AI. Streamline model creation, deployment, and management, and start delivering enterprise-grade AI.

Why do you need an SDK?

A tool for data scientists

  • Standardizes the implementation of algorithms
  • Helps developers debug and test algorithms, and automatically parallelizes them at runtime for higher throughput
  • Provides useful tools to connect algorithms with an HTTP API server or a camera stream
  • Lets you work in a single environment, with one place to access all your ML assets

What can you do with our SDK?

  • Promote closer collaboration with project teams
  • Facilitate the creation of new solutions and scale their usage to new use cases
  • Simplify debugging
  • Accelerate releases
  • Extend AI functionalities to the edge

Building Blocks

A cell is the smallest building block of the SDK and can be seen as a function with an input and an output. Any (machine learning) algorithm can be wrapped in a cell by implementing the required methods.
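The cell abstraction can be sketched as follows. This is an illustrative sketch only: the `Cell` base class, the `process` method, and the generic type parameters are assumptions for the sake of the example, not the SDK's actual API.

```python
# Illustrative sketch: a cell as a typed function from input to output.
# `Cell` and `process` are assumed names, not the SDK's real interface.
from abc import ABC, abstractmethod
from typing import Generic, TypeVar

I = TypeVar("I")  # input type of the cell
O = TypeVar("O")  # output type of the cell

class Cell(ABC, Generic[I, O]):
    """Smallest building block: a function with an input and an output."""

    @abstractmethod
    def process(self, data: I) -> O:
        ...

class ThresholdCell(Cell[float, bool]):
    """Wraps a trivial 'algorithm' by implementing the required method."""

    def __init__(self, threshold: float):
        self.threshold = threshold

    def process(self, data: float) -> bool:
        return data >= self.threshold
```

Any algorithm, however complex, would plug into a pipeline the same way: by implementing the required method(s) behind a typed input/output boundary.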

Complex data flows can be set up by connecting multiple cells together in a pipeline. The output of one cell can only be connected to the input of another cell if the IO types match. This also means that you can make use of existing cells without having to know the details of their implementation. Once constructed, the pipeline takes a set of inputs, which we refer to as the pipeline source, and produces outputs based on the internal cell connections, which we refer to as the pipeline sink.
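The source-to-sink flow above can be sketched as a chain of cells whose IO types match. The `Pipeline` class and the two toy cells below are hypothetical, chosen only to illustrate the concept.

```python
# Illustrative sketch: a pipeline as an ordered chain of cells.
# `Pipeline`, `connect`, and `run` are assumed names, not the SDK's API.
from typing import Any, Callable, List

class Pipeline:
    def __init__(self):
        self._cells: List[Callable[[Any], Any]] = []

    def connect(self, cell: Callable[[Any], Any]) -> "Pipeline":
        self._cells.append(cell)
        return self

    def run(self, source: Any) -> Any:
        """Feed the pipeline source and return the pipeline sink output."""
        data = source
        for cell in self._cells:
            data = cell(data)
        return data

# Two tiny 'cells': the output type of the first matches
# the input type of the second, so they can be connected.
def grayscale(pixels: list) -> list:   # list of (r, g, b) -> list of int
    return [sum(p) // 3 for p in pixels]

def brighten(values: list) -> list:    # list of int -> list of int
    return [min(255, v + 10) for v in values]

pipeline = Pipeline().connect(grayscale).connect(brighten)
result = pipeline.run([(30, 60, 90)])  # source in, sink out
```

Because each cell only exposes its input and output types, `brighten` can be reused without knowing how `grayscale` is implemented.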

When you create a pipeline, you define a set of blocks and algorithms. Algorithms still have to be prepared for your application by training a model: an algorithm is just a structure, but when you feed it data, it becomes an application. Once a pipeline is built and trained, it can be loaded back into RVAI and made available to users in the field.

Reuse your code

A pipeline’s source and sink can be driven by a pipeline driver, which feeds input data to the source and automatically processes the results from the sink. This makes it possible to ‘expose’ the pipeline and use its results anywhere. Drivers can also be attached at runtime, so the pipeline implementation does not need to be adjusted to support another driver; the same code can be reused in different circumstances and for different kinds of clients and use cases.

An example of such a pipeline driver is an HTTP API server that converts HTTP prediction requests into pipeline inputs and returns the pipeline outputs as HTTP responses.
Another example links a camera stream to the pipeline and writes the processed results to a database.
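The two examples above can be sketched with one pipeline reused unchanged under two different drivers. Everything here is a hypothetical simplification: the driver classes, the stand-in `pipeline` function, and the request/response shapes are assumptions, not the SDK's actual driver API.

```python
# Illustrative sketch: one pipeline, two interchangeable drivers.
# All names here are assumptions, not the SDK's real interface.
from typing import Callable, Iterable, List

def pipeline(frame: str) -> str:
    """Stand-in pipeline: pretend to run inference on one input."""
    return f"prediction({frame})"

class BatchDriver:
    """Feeds inputs from an iterable and collects the sink outputs,
    like a camera-stream driver writing results to a database would."""

    def __init__(self, run: Callable[[str], str]):
        self.run = run

    def drive(self, frames: Iterable[str]) -> List[str]:
        return [self.run(f) for f in frames]

class RequestDriver:
    """Handles one input per call, like an HTTP API driver turning a
    prediction request into a pipeline input and back into a response."""

    def __init__(self, run: Callable[[str], str]):
        self.run = run

    def handle(self, request: str) -> dict:
        return {"status": 200, "body": self.run(request)}
```

The point of the design is that `pipeline` never changes; only the driver wrapped around it does.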

The Robovision SDK provides a programming environment with various runtimes (engines that move data through the system). A runtime is responsible for executing pipelines and pipeline drivers. The power of this is that, while keeping the same cell/pipeline implementation, it is possible to use a different runtime with different priorities depending on the use case. The functionality of a runtime can also be extended with plugins, for example for monitoring the pipelines.

Ray Runtime

Ray is a distributed computing platform that runs processes in parallel. It enables scaling of duplicated cells: data is distributed between multiple instances, resulting in a significant boost in throughput.

Debug Runtime

Runs cells sequentially. This makes debugging easier and is more suitable for simple pipelines with less strict response-time requirements.

Standardise types & formats

For cells to understand each other’s outputs, the types and serialization formats need to be standardized. Therefore, RVAI types are provided, which cover a whole range of common types and can be extended with custom types.
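A custom type with a standard serialization format might look like the sketch below. The `BoundingBox` type and its `serialize`/`deserialize` methods are illustrative assumptions, not RVAI's actual type system; the point is only that any cell agreeing on the type can consume another cell's output.

```python
# Illustrative sketch: a custom type with an agreed serialization format
# so any two cells can exchange it. Names are assumptions, not RVAI's API.
import json
from dataclasses import dataclass, asdict

@dataclass
class BoundingBox:
    """A custom type cells can agree on: pixel coordinates of a box."""
    x: int
    y: int
    width: int
    height: int

    def serialize(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def deserialize(cls, payload: str) -> "BoundingBox":
        return cls(**json.loads(payload))
```

Because the format is fixed, a detection cell can emit boxes and a tracking cell can consume them without either knowing how the other is implemented.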

This solves the following problem:

Machine learning frameworks often have different APIs, which makes them hard to integrate directly with other platforms and tools. The SDK therefore provides an API to wrap algorithms into standardized cells, which speeds up your work and enables reusability for future ideas.

Unleash the power of ML OPS

Reproducibility: version everything

Datasets, models, parameters, notebooks, and environments

Continuous delivery

Integration of continuous integration and delivery (CI/CD) tools

Accountability

Traceability of data & models

Deployment

Deployment of models via Docker to Kubernetes

Collaboration

Share and collaborate on all components

Monitoring

Statistical model monitoring via the Prometheus time-series database, and visualization with Grafana

On premise

In the cloud

On the edge