The Portal

The Quix Portal is a web SaaS for developing data-driven and event-driven applications. It provides UI access to our streaming, compute and storage infrastructure, together with tools and workflows to help you manage and explore your data, write and deploy projects, and connect your solutions to external systems and frontend applications.

The Portal is broken down into separate pages to help you work efficiently.

Portal Home

You’ll land on Portal Home when you log in to Quix. You can see the name of your organisation in the top left corner; this is the root of the breadcrumb navigation.

Workspace management is currently the prominent feature of Portal Home. Here you can create workspaces to isolate the resources for each individual application you are working on.

Clicking your name in the top right corner gives you access to admin features such as your profile, billing and user management, as well as quick access to your Personal Access Token.

Workspace Home

A workspace is a discrete, fully featured and highly scalable data platform which can be created on the fly to contain and manage all the data, topics, models, services and people related to a single activity that you are working on. You can use workspaces to:

  • Contain a single application, which might be one web application or one game, or it could be an account for one customer’s product.

  • Create multiple applications, one per workspace, each with discrete human and compute resources.

  • Isolate a group of internal or external people working on a single application.

  • Contain specific datasets, models and services that you want to share with third parties whilst maintaining access control.

Workspaces deploy in a few minutes, depending on how many other workspaces are being created at the same time. It takes a few minutes because we are provisioning you with a private data catalogue and private namespaces in our multi-tenant Kafka and compute clusters.

Once available, a workspace is immediately ready for you to create topics and projects to stream and store data, build and deploy models and services, and consume queries and live streams in your apps.

Note: all users in your organisation currently have unlimited access to all your workspaces, data, models, services, certificates, SDKs and APIs. We’re working on workspace user management.

Topics

The Portal makes topics easy to use. Here you can create, monitor, manage and delete them with our UI.

Built on Kafka, our topics are fault-tolerant, durable containers for streams of data that use a nanosecond timestamp as the primary key. Together with our SDK, topics are a performant and cost-effective way to stream data in the cloud.
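
For illustration, here is a minimal Python sketch of writing timestamped values to a topic. The package and method names follow the pattern of our SDK samples and should be treated as indicative; check the SDK reference for the exact signatures.

    from datetime import datetime
    from quixstreaming import QuixStreamingClient  # package name as used in our samples

    # Connect with the SDK token from your workspace
    client = QuixStreamingClient("your-sdk-token")

    # Open a topic for writing and start a new stream within it
    output_topic = client.open_output_topic("car-telemetry")  # illustrative topic name
    stream = output_topic.create_stream()
    stream.properties.name = "Example session"

    # Each value is keyed by its timestamp (nanosecond precision internally)
    stream.parameters.buffer \
        .add_timestamp(datetime.utcnow()) \
        .add_value("Speed", 123.4) \
        .write()

    stream.close()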

Features

Create

You can create any number of topics in a workspace. Just click Create in our UI and give the topic a name.

Connect

You can connect to any topic to read or write data streams using our wizard.

Data Persistence

You control whether your streams are permanently stored. By default, new topics do not store data to disk. This can be useful in various scenarios, such as reducing storage costs by running a down-sampling model and only storing the down-sampled data. In other scenarios you may wish to store all the data flowing through a topic.
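
The down-sampling step itself can be very simple. Here is a minimal, SDK-agnostic Python sketch that averages values over fixed time windows before the result is written to a persisted topic; the window size and row shape are illustrative.

    from statistics import mean

    def downsample(rows, window_ns=1_000_000_000):
        """Average values over fixed windows of nanosecond timestamps.

        rows: iterable of (timestamp_ns, value) pairs, sorted by time.
        Yields one (window_start_ns, mean_value) pair per window.
        """
        bucket, bucket_start = [], None
        for ts, value in rows:
            start = ts - (ts % window_ns)
            if bucket_start is None:
                bucket_start = start
            if start != bucket_start:
                yield bucket_start, mean(bucket)
                bucket, bucket_start = [], start
            bucket.append(value)
        if bucket:
            yield bucket_start, mean(bucket)

A service built this way would subscribe to the raw (non-persisted) topic, feed incoming rows through a function like this, and publish the output to a second topic with persistence switched on.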

Visualise

You can visualise persisted streams directly from the topic.

Monitoring

We have built live monitoring so you can track the flow of data through each individual topic.

Delete

You can delete topics to keep your workspace tidy. This is a permanent action.

Data

All data persisted from topics is stored in the Quix Catalogue as streams.

It is easy for any user to find data in the catalogue using our UI: navigate by location or topic, or search and filter by column.

All metadata for a stream is appended to it in the streams table and can be quickly accessed by clicking on the ‘open’ arrows.

Streams in the catalogue can be quickly visualised in Quix or external applications such as PowerBI or Grafana to further improve discovery.

Features

The catalogue page helps developers explore their data streams quickly.

Data location

The left context pane of the catalogue page lets you explore your data by location. For some applications the location may be tied to the physical location of the data source; for others it could be product related. For example:

  • Racing teams may want to organise their data into a hierarchy based on the location of their races such as Race Series > Season > Country > Circuit.

  • App developers may want to organise their data based on the type of device used, such as: App Name > Version > Platform; or the physical location of their users, such as: Root > App Name > Region > Country > Town.

  • A MedTech company might want to organise IoT data from wearables by patient, pathology or anatomy.

The data location is an extremely flexible feature which can be tailored to your needs using Location Properties in our SDK.
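
For example, a producer can place its streams in the hierarchy as it creates them. A short sketch in the style of our SDK samples follows; the property and method names are indicative, so check the SDK reference for the exact API.

    from quixstreaming import QuixStreamingClient  # package name as used in our samples

    client = QuixStreamingClient("your-sdk-token")
    output_topic = client.open_output_topic("race-telemetry")  # illustrative topic name

    stream = output_topic.create_stream()
    stream.properties.name = "Silverstone practice run"
    # The location determines where the stream appears in the catalogue tree
    stream.properties.location = "/F1/2021/UK/Silverstone"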

Data Management

You can delete any streams, individually or in batches. Deleted streams will be recoverable in the Trash for one hour.

Mode

You can switch the view of your stream between Stream mode and Pipeline mode.

Stream Mode will give you a view of each individual stream from each producer.

Pipeline Mode will group all the streams in one data pipeline. The parent stream is the first stream in the pipeline, and each topic and deployment creates subsequent child objects.

Visualise

Visualise is the page where you can visualise, analyse and share all your streams, both historic and live.

What Visualise is for

We are building Visualise as an internal tool to make it simple and fast to manage and extract value from your data. The core areas of focus include:

Visual data discovery

Features that enable you to quickly locate regions of interest in your data and compare them to other relevant datasets. These should be accessible to all users in your team regardless of their data expertise.

Data preparation for machine learning

Features that let you prepare and segment datasets for training and back testing.

Analysing the results of data processing models

Features that let you compare results from multiple experiments across models and data, quickly switching between model versions and datasets to optimise the performance of your models.

Building data-driven features into your products

Features that let you build simple data queries such as aggregates, groups, splits and filters, then export the query via our API to be called by your frontend.

Collaborating with colleagues across your product development team

Features that allow you to work with software engineers, data scientists, product owners and designers to build data-driven features.

What Visualise is not for

We do not intend Visualise to be an externally facing dashboard, a data presentation layer, or a robust data analysis tool.

There are plenty of good tools that are freely available on the open market to perform these tasks, so we are not trying to duplicate features or functionality from those products. But we have made it very easy to connect to them with our API:

  • You can build a bespoke dashboard with Quix using your favourite library, like we’ve done with this Flask and Plotly real-time example.

  • We have built some sample code to get you started with Microsoft PowerBI.

  • We use our API to build our own monitoring on Grafana; the Swagger link will help you get started quickly.

  • You can use the connect button to get query code that you can use in your notebooks for analytics and ML tasks.

We’ll continue to add integrations to third-party products - drop us an email if you have a specific requirement.

Features

Waveform View

Our default view allows you to quickly explore historic and live streams, parameters and events in a single view. The waveform view consists of the following features:

  • Quick select of multiple streams, parameters and events with colour coded overlay

  • Timestamp on the X axis and auto-fit of the parameter values on the Y axis

  • Dynamic tooltip values in the context pane

  • Vertical cursor navigation line on the view

  • Dynamic pan, zoom and fit of stream

  • Live data view

  • Colour coded data grouping and splitting

Use the waveform view to navigate a stream, find your region of interest, and extract that specific data for further processing.

Table View

The table view provides a quick way to summarise data and is currently the most useful way of looking at event and binary data. It consists of the following features:

  • Select any number of parameters (columns) in a stream

  • Aggregate by min, max, mean, first and last

  • Further aggregate by tags such as lap number, day, location, or any other predefined tag set up using our SDK

Use the table view to build a query over your data and extract the code for that query for sharing, presentation or further processing.

Grouping

Data in the catalogue can be grouped by tag. Tags can be anything, and are defined by users with our SDK.
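
Tags are attached to values as they are written. Here is a sketch in the style of our SDK samples; the names are indicative rather than exact.

    from datetime import datetime
    from quixstreaming import QuixStreamingClient  # package name as used in our samples

    client = QuixStreamingClient("your-sdk-token")
    stream = client.open_output_topic("car-telemetry").create_stream()  # illustrative

    # Tags written alongside values become group-by keys in the catalogue
    stream.parameters.buffer \
        .add_timestamp(datetime.utcnow()) \
        .add_value("Speed", 201.5) \
        .add_tag("LapNumber", "3") \
        .write()

    stream.close()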

Download Data

You can download data as CSV or JSON. Downloading data from the Visualise pages will give you a file containing the exact data that you had in the visualise view.
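
Once downloaded, the table-view aggregations are easy to reproduce locally, for example with pandas (the file and column names below are illustrative):

    import pandas as pd

    # Load a CSV exported from the Visualise page
    df = pd.read_csv("export.csv")

    # Aggregate a parameter by a tag, as in the table view
    summary = df.groupby("LapNumber")["Speed"].agg(["min", "max", "mean", "first", "last"])
    print(summary)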

Connect to Historic Data

You can connect to any data using our Catalogue API. Clicking connect from the Visualise page will give you pre-configured sample code for the exact data and query that you had in the visualise view – this is useful for a number of use cases, including:

Machine learning: This is very powerful for getting data into your notebook, or for segmenting data for training and back-testing your models.

App development: It provides a pre-configured example of how to call data from your external apps.

Analytics: Use our PowerBI integration to ingest data for visual presentation or build your own frontend dashboard with your favourite Python library and a frontend deployment.
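
In each of these cases the underlying call is a plain HTTP request. The sketch below shows what a query from a notebook might look like; the base URL, endpoint path and field names are illustrative only, so use the pre-configured sample from the connect button and the Swagger docs for the real shape.

    import requests

    BASE_URL = "https://your-workspace.api.example"  # hypothetical base URL
    TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"

    query = {                            # hypothetical field names
        "streamIds": ["my-stream-id"],
        "parameters": ["Speed"],
        "from": "2021-01-01T00:00:00Z",
        "to": "2021-01-02T00:00:00Z",
    }

    response = requests.post(
        f"{BASE_URL}/parameters/data",   # hypothetical endpoint path
        json=query,
        headers={"Authorization": f"bearer {TOKEN}"},
    )
    response.raise_for_status()
    rows = response.json()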

Live View

We have implemented a live view so that you can see data from open streams (whether that be raw data from sources or processed data from models).

You can perform the same data navigation, filtering and grouping on live data as on historic data.

Working with Visualise

As previously mentioned, the Visualise page is designed to help you quickly locate regions of interest in your data and use those regions for further processing and visualisation. Our goal is to make this code-free so that your data becomes more accessible to more people across your teams.

We highly recommend that you use the tools available in Visualise to narrow down regions of interest, then use the connect and download buttons to get that specific data for sharing, processing, consumption in apps or visualisation in a BI tool or bespoke dashboard.

Projects

Projects is where you will create, store and manage the code that will bring your solutions to life. We’ve included a GIT repo and created a simple workflow to help you deploy your project to our serverless compute.

Types of project

In Quix, each repo is called a project. You can write any code in a project and deploy it to Quix, but our SDK (and therefore our Kafka) currently supports only Python and C#.

Projects broadly fall into one of the following categories:

Bridges: A service that connects - ‘bridges’ - data between two or more systems (one of them being Quix). A bridge can be deployed server-side or client-side depending on the application.

Services: Any application code that is continuously running and does not perform complicated data processing. Can be a publisher, a subscriber, or a model. For example, a bridge, a function, a backend operation, or an integration to a third-party service like Twilio messaging.

Models: Any application code that is continuously running, performs complex data processing like ML, and normally(1) pubs/subs to one or more topics. Note: during development a model can be deployed as a job, for example when training the model.

Jobs: Any application code that is run once. For example, use a job to run a batch import of data from an existing data store (CSV, DB or data lake etc.) or to train a machine learning model; see the sketch after this list.

(1) A model can also be called directly from an application using our API, so it doesn’t have to pub and sub!
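
As an example, a batch import job can be little more than a loop over a CSV file. The sketch below uses SDK-sample-style names; the topic, file and column names are illustrative.

    import csv
    from datetime import datetime
    from quixstreaming import QuixStreamingClient  # package name as used in our samples

    client = QuixStreamingClient("your-sdk-token")
    output_topic = client.open_output_topic("historic-import")  # illustrative topic name
    stream = output_topic.create_stream()

    # Replay each CSV row into the stream, keyed by its original timestamp
    with open("history.csv") as f:
        for row in csv.DictReader(f):
            stream.parameters.buffer \
                .add_timestamp(datetime.fromisoformat(row["time"])) \
                .add_value("Speed", float(row["speed"])) \
                .write()

    stream.close()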

Features

Project list

A very simple list of your projects. When you clone a sample from the Library it will be added to the table view here. You can also create a new project from a blank template.

IDE

We’ve included a simple IDE where you can start writing code to develop your project. We provide the My Projects repo, from which you can access our IDE with integrated GIT version control.

The IDE allows you to edit code, create and delete files and folders, commit changes and tag versions ready for deployment.

We expect most of you will want more features so you can also use your favourite IDE by linking Quix to your external GIT client.

GIT

Version control is critical to rigorous product and data engineering practice so we built a native GIT implementation for tracking changes in your project code.

Our GIT implementation follows familiar conventions, so you can:

  • Start with a master

  • Edit the master with a commit log

  • Create a branch, clone, duplicate or download your project

  • Tag versions

  • Connect to external GIT clients using Quix authorisation

Deployments

We’ve always felt that managing clusters was the antithesis of software development; could you imagine provisioning a cluster on your laptop just to run some code? We can’t either, and we believe this is holding back more widespread adoption of data science in the cloud.

A key goal from the very beginning has been to allow any user to deploy their own code to the cloud with the click of a button.

We have achieved that goal.

Gone is the need for custom infrastructure; for large software teams; for DevOps specialists; for rewriting prototype Python code in Scala, Java or C++; for deploying and managing clusters when all you care about is your model.

With Quix you simply click Deploy from inside your project and configure your deployment. We give you control over your compute resources, and your project is built into a production environment where your code will be executed.

Features

One click deployment

With Quix you can deploy your project to our serverless compute quickly from the deployments page, or from inside the project. Quix will package your GIT project and build it into a Docker image, then deploy the image into Kubernetes, reserving enough resources for the maximum demand you configure.

You can even download a copy of your Docker image for exporting to other systems.

Job or Service

You can deploy your project as a job (that is executed once) or a service (that runs continuously).

Logs

There are build logs and deployment logs to provide the details for each deployment.

Monitoring

You can monitor the actual resource consumption of each deployment.

Management

You can start, stop, pause and delete deployments.

Library

We’ve created a Library to store production-ready code.

Write Samples

Write contains some configurable samples for streaming data to a topic. Use them as a starting point for building bridges, crawlers and so on. There’s a particularly useful sample for streaming data from a CSV file: teams have used it, with a CSV file as a simulator, to build a whole data-driven product before connecting to a live data source.

Read Samples

Read contains some configurable samples for streaming data from a topic. Use them as a starting point for consuming data into services, external systems and your apps.

Model Samples

Model contains sample code for reading from an input topic, transforming the data, then writing to an output topic. Use them as a starting point for creating models and services that pub/sub.

Sample Projects

These samples are generally more advanced, not only because they use an extended set of SDK features but also because they provide a complete solution for a specific task.

Currently we include two advanced UDP bridges for consuming streamed data from the popular racing games F1 (developed by Codemasters) and Assetto Corsa. There’s also a bridge for connecting to the popular health and fitness app Strava.

These samples are great for getting started quickly, or for showing you how to build efficient data recorders from UDP-compatible devices.
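
At their core, these bridges listen on a UDP socket and decode each datagram before writing it to a topic. A minimal sketch of the listening side follows; 20777 is the common default telemetry port for the F1 games, and the decoding itself depends on the game’s telemetry spec.

    import socket

    # Bind to the port the game sends telemetry datagrams to
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 20777))  # 20777: common default for the F1 games

    while True:
        payload, sender = sock.recvfrom(2048)
        # Decode 'payload' per the game's telemetry spec,
        # then write the values to a Quix topic with the SDK
        print(f"received {len(payload)} bytes from {sender}")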

We’re going to add many features to the Library in the coming months. In the near future, you will be able to save projects from development to the Library for access by other users in the workspace. We’ll then use it as a way to share code across workspaces so that models and services can be easily reused across an organisation.

We also intend to build a public marketplace where our community can share code to help each other go faster. You’ll be able to mark projects as public to share code with the wider community or private to keep code within your organisation.

Connect

Connect exists to help you access your data, analytics, model results and services in your external applications. With connect you can:

  • Build features into your frontend products like websites, games and apps.

  • Work with business intelligence tools for further analysis and reporting.

  • Build dashboards using tools like Grafana.

  • Use our frontend deployment to build a bespoke dashboard with your favourite Python library.

  • Consume data into physical products and infrastructure like connected cars, machines, drones and buildings.

We will continue to build out Connect with more data-driven widgets and integrations to third-party apps and services.

Features

Catalogue API

Our Catalogue API helps you to build reports and analytic features from historical data in the Catalogue. You can access streams programmatically with C# and Python from your frontend, dashboard or BI tool.

The Connect page will give you a pre-configured code sample in your language of choice; start on the Visualise page by selecting the data and applying query filters.

Use the Swagger docs and SDK to get started quickly.

PowerBI Integration

PowerBI is a powerful data visualisation tool which you can use to present analysis and model results to internal and external stakeholders. As mentioned previously, we’re not building a data visualisation tool; instead we developed a code sample to help you get your data from Quix to PowerBI.

Use the PowerBI integration and our instructions to get started quickly.