Develop right the first time
Just choose your preferred environment to develop, test and iterate: historic-live, simulated-live or live-live. Because you build and test against real streams, you can be confident your models will behave the same way when deployed to production.
Always on time
Maintain the time domain
In-memory processing reduces latency and costs
Quix changes the way data is handled and processed, from a database-centric approach to a stream-centric approach.
Traditional ML architectures put a database between your data and your compute. Data has to be written to disk, read back into memory, served to your models, and the results written back to disk again. Quix eliminates this long, costly journey by connecting your models directly to Kafka. This keeps data in memory for optimal performance and lower costs.
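The difference is easy to see in a few lines of Python. This is an illustrative sketch, not Quix code: the event shape and the `enrich` step are made up, but the pattern shows how a stream-centric design keeps each event in memory from consumption to production, with no database round trip in between:

```python
# Illustrative only -- not the Quix SDK. A stream-centric pipeline keeps
# each event in memory from the moment it is consumed until the result
# is produced. The event shape and enrich() step are hypothetical.

def enrich(event):
    # Pure in-memory transformation: no disk write, no database query.
    return {**event, "speed_kmh": event["speed_ms"] * 3.6}

def process_stream(events):
    # In production `events` would be a Kafka consumer; here it is any
    # iterable, which also makes the pipeline trivial to test locally.
    for event in events:
        yield enrich(event)  # the result goes straight to the producer

# The database-centric equivalent, for contrast: write the raw event to
# a table, poll it back out, compute, then write the result to another
# table -- four I/O hops where the streaming version has none.
```

Because the pipeline is just an iterator transformation, the same code runs unchanged against a live broker or a list of recorded events.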
Simplify development of scalable and elastic real-time applications with a client library that offers all the tools you need to handle live data.
Decouple your application code from your broker (Kafka, Kinesis, Pub/Sub or Pulsar) to stream and process in memory in any environment.
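Broker abstraction can be sketched as a small interface that application code targets instead of a specific broker client. The names below (`Broker`, `InMemoryBroker`, `run_model`) are illustrative, not part of the Quix library; the point is that swapping Kafka for Kinesis or Pulsar only changes the implementation behind the interface, never the application logic:

```python
# Illustrative broker abstraction -- not the Quix SDK.
from typing import Iterable, Protocol


class Broker(Protocol):
    # Application code depends on this interface only, so any broker
    # (Kafka, Kinesis, Pub/Sub, Pulsar) can sit behind it.
    def produce(self, topic: str, value: dict) -> None: ...
    def consume(self, topic: str) -> Iterable[dict]: ...


class InMemoryBroker:
    # Drop-in implementation for local development and tests.
    def __init__(self) -> None:
        self._topics: dict[str, list[dict]] = {}

    def produce(self, topic: str, value: dict) -> None:
        self._topics.setdefault(topic, []).append(value)

    def consume(self, topic: str) -> Iterable[dict]:
        return iter(self._topics.get(topic, []))


def run_model(broker: Broker) -> None:
    # The model never mentions a concrete broker: it reads one topic,
    # transforms each event in memory, and writes to another topic.
    for event in broker.consume("sensor-input"):
        broker.produce("model-output", {"score": event["value"] * 2})
```

In tests you pass `InMemoryBroker`; in production you pass a wrapper around the real client, and `run_model` is untouched.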
- Stream Writer API delivers streaming data from any source
- Stream Reader API consumes live model results in your applications
- Catalogue API lets you query historic data
- Portal API automates any task in Quix
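To make the division of labour concrete, here is a hypothetical sketch of which API a client would call for each job. None of the hosts, paths or parameters below are real Quix endpoints; they exist only to illustrate the four roles listed above:

```python
# Hypothetical endpoint sketches -- illustrative only, NOT the real
# Quix API surface. Each helper returns (method, url, body) without
# sending anything, so the mapping of API to job stays visible.
BASE = "https://portal.example.com/api"  # placeholder host


def write_stream_data(stream_id: str, rows: list) -> tuple:
    # Stream Writer API: deliver streaming data from any source.
    return ("POST", f"{BASE}/streams/{stream_id}/data", rows)


def read_stream_results(stream_id: str) -> tuple:
    # Stream Reader API: consume live model results in your application.
    return ("GET", f"{BASE}/streams/{stream_id}/results", None)


def query_catalogue(stream_id: str, start: str, end: str) -> tuple:
    # Catalogue API: query historic data by time range.
    return ("GET", f"{BASE}/catalogue/{stream_id}",
            {"start": start, "end": end})


def automate_task(task: str) -> tuple:
    # Portal API: automate workspace tasks, e.g. deploying a model.
    return ("POST", f"{BASE}/portal/{task}", None)
```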
Want to take a closer look under the hood?
Quix handles capacity, resilience and scaling by automatically allocating resources and distributing data across your replicas. Save money with live monitoring and pay only for what you use.