Quix Concepts

Here are the fundamental concepts that you need to learn to get the most out of Quix.

Broker at the core

To get the most out of Quix you need to grasp one main concept: A message broker is the centre of your architecture, not a database.

The building blocks

With Quix you combine Topics and Deployments to create any event-driven or data-driven product you can possibly imagine.

Topics

Topics will form the central nervous system of everything you build with Quix.

They group data streams and "broker" information (hence the name) between deployments, allowing you to build low-latency, data-driven applications.

In simplified terms, a topic is similar to a folder in a filesystem (the broker), and the data streams are like the files in that folder.

Using topics you can:

  • Stream data from any source

  • Run real-time models that process streaming data

  • Run services that respond to model results

  • Build data processing pipelines by daisy-chaining models and services

  • Build applications that consume processed data in real-time

  • Control which data to store permanently

Deployments

If topics are the central nervous system, deployments are the organs. Connecting your code directly to the broker means that your deployments receive messages almost instantly, because the data is kept in memory rather than passing through a disk.

Our SDK underpins this performance because it enables any deployment to communicate directly with a topic using a few lines of Python or C# code.
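
To make this concrete, here is a minimal sketch of publishing a message to a topic using the open-source Quix Streams (quixstreams) Python package. The broker address, topic name, and payload are placeholders, and the exact API may differ between SDK versions:

```python
import json

from quixstreams import Application

# Connect to the broker; the address is a placeholder for your own cluster.
app = Application(broker_address="localhost:9092")
topic = app.topic("sensor-data")  # hypothetical topic name

# Publish a single message to the topic.
with app.get_producer() as producer:
    producer.produce(
        topic=topic.name,
        key="sensor-1",  # hypothetical source/stream ID
        value=json.dumps({"temperature_c": 21.4}),
    )
```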

Deployments generally fall into one of the following project categories:

Bridge

A service that connects - ‘bridges’ - data between two or more systems (one of them being Quix). A bridge can be deployed server-side or client-side depending on the application.

Service

Any application code that runs continuously and does not perform complex data processing. A service can be a publisher, a subscriber, or both. For example: a bridge, a function, a backend operation, or an integration with a third-party service such as Twilio messaging.

Model

Any application code that runs continuously, performs complex data processing such as ML, and normally¹ pubs and subs to one or more topics. Note: during development, a model can be deployed as a job, for example when training the model.

Job

Any application code that runs once. For example, use a job to run a batch import of data from an existing data store (CSV, database, data lake, etc.) or to train a machine learning model.
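
As an illustration of a job, the sketch below replays a CSV file into a topic once and then exits. It uses the open-source quixstreams Python package; the file name, topic name, and column names are hypothetical:

```python
import csv
import json

from quixstreams import Application

app = Application(broker_address="localhost:9092")  # placeholder address
topic = app.topic("historic-trades")  # hypothetical topic name

# Run-once batch import: publish every CSV row to the topic, then exit.
with app.get_producer() as producer, open("trades.csv") as f:
    for row in csv.DictReader(f):
        producer.produce(
            topic=topic.name,
            key=row["symbol"],  # hypothetical column used as the stream key
            value=json.dumps(row),
        )
```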

You can deploy any of these project types client-side or server-side using our SDK or API to communicate directly with your topics using the pub/sub pattern.

You can use our API to communicate with a topic if your client-side application isn’t suited to using our SDK.

¹ A model can also be called directly from an application using our API, so it doesn’t have to pub and sub!

Pub/Sub

The key concept is that your deployments connect to topics using the pub/sub (publish/subscribe) pattern:

We commonly refer to these as read and write:

Pub (write): when your deployment streams data to a topic using our SDK or API

Sub (read): when your deployment receives data streams from a topic using our SDK or API

Pub and Sub: when your deployment receives data from a topic, processes it, and writes output to a topic. Typical pattern for models and services.
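
Here is a minimal pub-and-sub sketch using the open-source quixstreams Python package: it reads from one topic, transforms each message, and writes the result to another. The topic names and the transformation are hypothetical, and the exact API may vary by SDK version:

```python
from quixstreams import Application

app = Application(broker_address="localhost:9092", consumer_group="enrich-service")

raw = app.topic("raw-events", value_deserializer="json")          # sub (read) side
enriched = app.topic("enriched-events", value_serializer="json")  # pub (write) side

sdf = app.dataframe(raw)                                     # subscribe to input
sdf = sdf.apply(lambda event: {**event, "processed": True})  # process each message
sdf = sdf.to_topic(enriched)                                 # publish the output

app.run(sdf)  # run until stopped
```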

ML on streaming data

Deploying a model like you would a software service gives developers the ability to apply ML techniques to streaming data.

Streams

Streams are the central context of data in Quix.

A stream is a collection of data that belongs to a single session of a single source. For example:

  • One journey for one car

  • One game session for one player

  • One flight for one aeroplane

Streams make it easy to manage, discover, and work with your data, and they are key to good data governance in your organisation.
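
One common way to model a stream on the underlying broker is to carry the stream identity in the message key, so that all data from one session stays grouped together. The sketch below assumes that convention; the names and IDs are hypothetical:

```python
import json
import uuid

from quixstreams import Application

app = Application(broker_address="localhost:9092")  # placeholder address
topic = app.topic("car-telemetry")  # hypothetical topic name

# One stream per session: every message from this journey shares one key,
# so consumers and persistence can group the data by journey.
journey_id = f"car-42-journey-{uuid.uuid4().hex[:8]}"

with app.get_producer() as producer:
    for sample in ({"speed_kmh": 48.0}, {"speed_kmh": 61.5}):
        producer.produce(topic=topic.name, key=journey_id, value=json.dumps(sample))
```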

Streams not tables

A key concept to grasp is that your publisher defines the data schema, not a table in a database.

Instead of designing a table with rows and columns and sending data to that table, with Quix you use any pub deployment to create a stream that sends data to a topic.

If that topic has persistence enabled, whatever data is included in your stream will be recorded in the optimal database technology, using the context of your topic and stream IDs.

If you stop the pub deployment to change the schema, the new stream will use the new schema; the old one will keep the old schema.

Of course, tables are easy to work with, so our SDK automatically converts your stream into a data frame, with the timestamp as the primary key for each row and your streamed data arranged in columns.
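
As a rough sketch of working with a stream as if it were a table, the open-source quixstreams package exposes a streaming dataframe with pandas-style operations. The topic and field names are hypothetical, and this illustrates the idea rather than the exact conversion described above:

```python
from quixstreams import Application

app = Application(broker_address="localhost:9092", consumer_group="table-demo")
telemetry = app.topic("car-telemetry", value_deserializer="json")

# Treat the live stream like a table: filter rows and derive new columns.
sdf = app.dataframe(telemetry)
sdf = sdf[sdf["speed_kmh"] > 0]           # keep only moving samples (rows)
sdf["speed_ms"] = sdf["speed_kmh"] / 3.6  # derive a new column per row

app.run(sdf)
```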

Data Governance

Streams are key to good data governance. Use our SDK to organise your data by:

  • Creating locations to group data by session, product, feature, etc.

  • Logging separate or continuous sessions, depending on the use case.

Flexibility

A stream is very flexible:

  • It can be short and repeating like individual user sessions in a mobile app, or

  • It can be never-ending, such as a stream of data from a power station, or

  • It can begin and end with the start and finish of a session, such as a football match, or

  • It can be a continuous stream of batches, such as daily stock market prices concatenated at the daily market open/close.

Use our SDK to define the duration of a stream according to the needs of your application.

What about the database?

Didn’t we mention it? The database is dead!

Ha, not quite, but nearly. There will always be very valid use cases for DB tech, including:

  • Long-term storage of data is still very much needed, and message brokers are not designed for storage, so you still need a database for that.

  • Databases provide the easiest way to do analytics, as they all support a query language.

  • Historic data is also very important for training models, which again is another thumbs-up for the humble database.

So you still need them, but we’re saying: get them out of the way of live applications. It’s the old way of thinking about your database that is dead.

Persistence

With Quix, you just have to decide whether you want the data persisted to disk or not; we take care of the rest.

With the Quix Catalogue you simply set persistence to ON or OFF at the topic level. Any data streamed to a topic with persistence enabled will be automatically written to the Catalogue, maintaining the live stream context.

Under the hood, our Catalogue writes each data type to the optimal database technology to provide you with performance and cost efficiency. And you can access all of your data via the Portal or our Catalogue API.