SDK

Manage complex AI projects and build machine vision pipelines with the ease of a notebook and the power of a production deployment. Our SDK provides the tools you need to handle the diverse and dynamic landscape of AI: streamline model creation, deployment, and management, and start delivering enterprise-grade AI.

A TOOL FOR DATA SCIENTISTS

  • Standardizes the implementation of algorithms
  • Helps developers debug and test algorithms at runtime and automatically parallelizes them for higher throughput
  • Provides useful tools to connect algorithms with an HTTP API server or a camera stream
  • Lets you work in a single environment with one place to access all your ML assets

WHAT CAN YOU DO WITH OUR SDK?

  • Promote closer collaboration with project teams
  • Facilitate the creation of new solutions and scale them to new use cases
  • Simplify debugging
  • Accelerate releases
  • Extend AI functionalities to the edge

Throughout history, the software programming paradigm has evolved from scripting and functional programming towards object-oriented programming, allowing for a higher level of abstraction and application complexity. To democratize the ability to create automation, programming has shifted towards a more dynamic approach, referred to as Software 2.0, which enables non-technical users with domain experience to automate actions by providing teaching examples. However, for non-technical users, Software 2.0 will need to evolve even further, towards a new level of abstraction that empowers them to combine high-level functions to handle a new generation of challenging applications.

BUILDING BLOCKS

A cell is the smallest building block of the SDK and can be seen as a function with an input and an output. Any (machine learning) algorithm can be wrapped in a cell by implementing the required methods.
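Here is a minimal sketch of what wrapping an algorithm in a cell can look like. The Cell base class, the process method, and the Image/Detections types are hypothetical stand-ins; the SDK's actual interface may differ.

```python
from dataclasses import dataclass

@dataclass
class Image:
    pixels: bytes
    width: int
    height: int

@dataclass
class Detections:
    boxes: list  # e.g. [(x, y, w, h), ...]

class Cell:
    """Hypothetical base class: a function with a typed input and a typed output."""
    def process(self, data):
        raise NotImplementedError

class ObjectDetectionCell(Cell):
    """Wraps an arbitrary detection model behind the cell interface."""
    def __init__(self, model):
        self.model = model  # any ML framework's model object

    def process(self, data: Image) -> Detections:
        # Only the cell contract is fixed; the wrapped algorithm can use
        # whatever framework it wants internally.
        return Detections(boxes=self.model.predict(data))
```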

Complex data flows can be set up by connecting multiple cells together in a pipeline. The output of one cell can only be connected to the input of another cell if the IO types match. This also means that you can make use of existing cells without having to know the details of their implementation. Once constructed, the pipeline takes a set of inputs, which we refer to as the pipeline source, and produces outputs based on the internal cell connections, which we refer to as the pipeline sink.
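A minimal sketch of chaining such cells, reusing the hypothetical Cell interface above; the real pipeline API is likely richer (explicit IO type checks, branching, and so on).

```python
class Pipeline:
    """Hypothetical pipeline: runs cells in order, the output of one feeding the next."""
    def __init__(self, cells):
        self.cells = cells  # cells must be ordered so that their IO types match

    def run(self, source):
        sink = []
        for item in source:           # pipeline source: an iterable of inputs
            data = item
            for cell in self.cells:   # each cell's output is the next cell's input
                data = cell.process(data)
            sink.append(data)         # pipeline sink: the collected outputs
        return sink
```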

When you create a pipeline, you define a set of blocks and algorithms. The algorithms still have to be tailored to your application by training a model: an algorithm is just a structure, but when you feed it data it becomes an application. Once a pipeline is built and trained, it can be loaded back into RVAI and made available to users in the field.

REUSE YOUR CODE

A pipeline’s source and sink can be driven by a pipeline driver, which feeds input data to the source and processes the results from the sink automatically. This makes it possible to ‘expose’ the pipeline and use its results anywhere. Drivers can also be swapped at runtime, so the pipeline implementation does not need to be adjusted to support another driver; the same code can be reused in different circumstances and for different kinds of clients and use cases.

An example of such a pipeline driver is an HTTP API server, which converts HTTP prediction requests into pipeline inputs and returns the pipeline outputs as HTTP responses.
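As an illustration, such a driver could look as follows; Flask is used here only as a stand-in for the SDK's actual HTTP driver, and pipeline is the hypothetical pipeline from the sketches above.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
pipeline = ...  # a built pipeline, e.g. Pipeline([ObjectDetectionCell(model)])

@app.route("/predict", methods=["POST"])
def predict():
    inputs = request.get_json()        # HTTP request body -> pipeline source
    outputs = pipeline.run([inputs])   # drive the pipeline
    return jsonify(outputs[0])         # pipeline sink -> HTTP response

if __name__ == "__main__":
    app.run(port=8080)
```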

Another example is to link a camera stream to the pipeline and write the processed results to a database.
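A sketch of such a camera driver, using OpenCV for the stream and SQLite for the database purely as placeholders; the pipeline object is again the hypothetical one from above.

```python
import sqlite3
import cv2

conn = sqlite3.connect("results.db")
conn.execute("CREATE TABLE IF NOT EXISTS detections (frame INTEGER, result TEXT)")

cap = cv2.VideoCapture(0)                 # the camera stream acts as the pipeline source
frame_id = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = pipeline.run([frame])[0]     # process one frame
    conn.execute("INSERT INTO detections VALUES (?, ?)", (frame_id, str(result)))
    conn.commit()                         # the database acts as the pipeline sink
    frame_id += 1
cap.release()
conn.close()
```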

The Robovision SDK provides a programming environment with various runtimes (engines that pass the data along as it is created). A runtime is responsible for executing pipelines and pipeline drivers. The power of this is that, while keeping the same cell and pipeline implementation, you can switch to a different runtime with different priorities depending on the use case. The functionality of a runtime can also be extended with plugins, for example for monitoring the pipelines.

RAY RUNTIME

Ray is a distributed computing platform for running processes in parallel. It enables scaling by duplicating cells so that data is distributed across multiple instances, resulting in a significant boost in throughput.
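For intuition, this is roughly what duplicating a cell over Ray workers looks like; only the Ray calls are real, while CellWorker, load_model, and frames are illustrative.

```python
import ray

ray.init()

@ray.remote
class CellWorker:
    """One replica of a cell, running in its own Ray worker process."""
    def __init__(self):
        self.cell = ObjectDetectionCell(model=load_model())  # hypothetical loader

    def process(self, frame):
        return self.cell.process(frame)

# Duplicate the cell across four workers and distribute the frames between them.
workers = [CellWorker.remote() for _ in range(4)]
futures = [workers[i % 4].process.remote(frame) for i, frame in enumerate(frames)]
results = ray.get(futures)  # gather the outputs once all workers are done
```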

DEBUG RUNTIME

Runs cells sequentially. It is easier to debug and more suitable for simple pipelines with less strict response-time requirements.

STANDARDIZE TYPES & FORMATS

For cells to understand each other’s output, the types and serialization formats need to be standardized. The SDK therefore provides RVAI types, which cover a whole range of common types and can be extended with custom types.
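A sketch of what a custom type with a standardized serialization format could look like; the serialize/deserialize names are hypothetical, not the SDK's real type API.

```python
from dataclasses import dataclass, asdict

@dataclass
class BarcodeReading:
    code: str
    confidence: float

    def serialize(self) -> dict:
        return asdict(self)  # a shared format any downstream cell can consume

    @classmethod
    def deserialize(cls, payload: dict) -> "BarcodeReading":
        return cls(**payload)
```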

TO SOLVE THE FOLLOWING PROBLEM:
Machine learning frameworks often have different APIs, which makes them hard to integrate directly with other platforms and tools. The SDK therefore provides an API to wrap algorithms into standardized cells, which speeds up your work and enables reuse in future projects.

UNLEASH THE POWER OF ML OPS

REPRODUCIBILITY AKA VERSION EVERYTHING:
Datasets, models, parameters, notebooks, and environments

ACCOUNTABILITY:
Traceability of data & models

COLLABORATION:
Share and collaborate on all components

CONTINUOUS DELIVERY:
Integration of continuous integration and delivery (CI/CD) tools

DEPLOYMENT:
Deployment of models via Docker to Kubernetes

MONITORING:
Statistical model monitoring via the Prometheus time-series database, and visualization with Grafana
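As an illustration of that monitoring setup, a pipeline run can export metrics that Prometheus scrapes and Grafana visualizes; prometheus_client is the standard Python client, while the metric names and the monitored_run wrapper are illustrative.

```python
from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("pipeline_predictions_total", "Number of processed pipeline inputs")
LATENCY = Histogram("pipeline_latency_seconds", "Time spent per pipeline run")

start_http_server(9090)  # exposes a /metrics endpoint for Prometheus to scrape

def monitored_run(pipeline, inputs):
    with LATENCY.time():                 # record how long the run takes
        outputs = pipeline.run(inputs)
    PREDICTIONS.inc(len(outputs))        # count processed items
    return outputs
```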

Run it on premise, in the cloud, or on the edge.