This section contains solutions, fixes, hints and tips to help you solve the most common issues encountered when using Quix.
Ensure the Topic Id is correct. See here for how to find your Topic Id.
You can check the data in / out rates on the Topics tab.
If you want to see the data in the Data Catalogue, make sure you are persisting the data to the topic; otherwise it may appear that there is no data.
If you are using a consumer group, check that no other services are using the same group. If your code runs both locally and in a deployment, and both use the same consumer group, the two instances will share the topic between them and one of them may consume all of the data.
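To see why sharing a consumer group splits the data, here is a minimal, Quix-independent sketch of how a Kafka group coordinator assigns partitions. The function and consumer names are illustrative, not part of the Quix SDK:

```python
# Hypothetical sketch (not the Quix SDK): consumers that share a consumer
# group split the topic's partitions between them, so each message is
# delivered to only ONE member of the group.
from itertools import cycle

def assign_partitions(partitions, consumers):
    """Round-robin partition assignment, as a group coordinator would do."""
    assignment = {c: [] for c in consumers}
    for partition, consumer in zip(partitions, cycle(consumers)):
        assignment[consumer].append(partition)
    return assignment

# One consumer group containing your local run AND the deployed service:
shared = assign_partitions(range(4), ["local-run", "deployed-service"])
print(shared)  # each member receives only half the partitions

# Separate groups: each group independently receives every partition:
local_only = assign_partitions(range(4), ["local-run"])
print(local_only)
```

Giving your local run its own consumer group name avoids the two instances stealing data from each other.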
If you see errors like these in your service or job logs, you may have used the wrong credentials, or you may have specified the wrong Topic Id:
```
Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256
Exception receiving package from Kafka
3/3 brokers are down
Broker: Topic authorization failed
```
Check each of the details very carefully.
The following must be correct:
Username (or Workspace Id)
These can all be found here
If you have deployed a service or job and the logs mention 'broker transport failure', check the workspace name and password in the SecurityOptions.
Also check the broker address list. You should have these by default:
When attempting to access the web APIs you may encounter a 401 error. Check that the bearer token is correct and has not expired. If necessary, generate a new bearer token. How to generate access tokens
Example of the error received when trying to connect to the Streaming Reader API with an expired bearer token:
The APIs that require a valid bearer token are:
Streaming Writer API
Streaming Reader API
Telemetry Query API
Note: `YOUR_WORKSPACE_ID` can be found on the Topic Information.
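Bearer tokens are JWTs, so you can check whether a token has expired before making a request by decoding its payload and reading the `exp` claim. The helper names below are illustrative; the toy token builder exists only so the example runs without a real token:

```python
import base64
import json
import time

def jwt_expiry(token: str) -> int:
    """Return the 'exp' claim (Unix seconds) from a JWT bearer token."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["exp"]

def is_expired(token: str) -> bool:
    """True when the token's expiry time is in the past (expect a 401)."""
    return jwt_expiry(token) < time.time()

# Build a toy token (header.payload.signature) that expired an hour ago:
def fake_jwt(exp: int) -> str:
    enc = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj).encode()).decode().rstrip("=")
    return f"{enc({'alg': 'none'})}.{enc({'exp': exp})}.sig"

print(is_expired(fake_jwt(int(time.time()) - 3600)))  # True
```

If `is_expired` returns True for your real token, generate a new one before retrying the request.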
Errors raised inside an SDK callback can be swallowed or hard to read. To prevent this and make it easier to determine the root cause, use the traceback module.
Begin by importing traceback. Then, inside the SDK callback where you might have an issue, place code similar to this:

```python
def read_stream(new_stream: StreamReader):
    def on_parameter_data_handler(data: ParameterData):
        try:
            data.timestamps  # this does not exist
        except Exception:
            print(traceback.format_exc())

    new_stream.parameters.create_buffer().on_read += on_parameter_data_handler

input_topic.on_stream_received += read_stream
```
Notice that the try clause is within the handler and the except clause prints a formatted exception (shown below):
```
Traceback (most recent call last):
  File "main.py", line 20, in on_parameter_data_handler
    data.timestamps
  File "/usr/local/lib/python3.8/dist-packages/quixstreaming/models/netlist.py", line 22, in __getitem__
    item = self.__wrapped[key]
IndexError: list index out of range
```
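The same pattern works in any callback, not just SDK handlers. A minimal, SDK-free sketch you can run directly (the callback name and failing lookup are illustrative):

```python
import traceback

def on_data(data):
    """A callback in the style of the handler above, without the SDK."""
    try:
        return data["missing"]       # deliberately fails: key does not exist
    except Exception:
        tb = traceback.format_exc()  # full traceback, nothing swallowed
        print(tb)
        return tb

on_data({})  # prints the KeyError traceback instead of failing silently
```

Because `format_exc()` captures the file, line, and exception type, the print output points you straight at the failing line inside the handler.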
If your service continually fails and restarts, you will not be able to view the logs. Redeploy your service as a job instead. This will let you inspect the logs and get a better idea of what is happening.
There are currently two scenarios in which you might encounter an issue caused by DNS propagation.
1. Data catalogue has been deployed but DNS entries have not fully propagated. In this scenario you might see a banner when accessing the data catalogue.
2. A dashboard or other publicly visible deployment is not yet accessible, again due to DNS propagation.
Note: In these scenarios, simply wait while the DNS records propagate. It can take up to 10 minutes for DNS records to propagate fully.
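If you want to script the wait rather than refreshing manually, you can poll until the hostname resolves. This is a generic sketch using only the standard library; the hostname, timeout, and function name are illustrative:

```python
import socket
import time

def wait_for_dns(hostname: str, timeout_s: int = 600, interval_s: int = 15) -> str:
    """Poll until `hostname` resolves to an IP, or give up after timeout_s."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            return socket.gethostbyname(hostname)
        except socket.gaierror:
            time.sleep(interval_s)  # not resolvable yet; records still propagating
    raise TimeoutError(f"{hostname} did not resolve within {timeout_s}s")

# 'localhost' resolves immediately; substitute your deployment's hostname:
print(wait_for_dns("localhost"))
```

The 600-second default mirrors the up-to-10-minutes propagation window mentioned above.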
If you get strange errors when trying to run your Python code locally, check that you are using Python version 3.8.
For example, you may encounter a 'ModuleNotFoundError':
ModuleNotFoundError: No module named 'quixstreaming'
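A quick sanity check of the interpreter version can rule this out before you chase import errors. The helper name is illustrative:

```python
import sys

def version_matches(required=(3, 8)) -> bool:
    """True when the running interpreter matches `required` (major, minor)."""
    return sys.version_info[:2] == required

# quixstreaming targets Python 3.8, so warn early if the version differs:
if not version_matches():
    print(f"Running Python {sys.version_info.major}.{sys.version_info.minor}; "
          "Quix examples expect 3.8")
```

If the versions differ, install the package into a Python 3.8 environment (for example, a virtualenv created with a 3.8 interpreter) rather than your system default.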
For information on how to set up your IDE for working with Quix, check out this How To guide.
If your deployment’s logs report "Killed" or "Out of memory", you may need to increase the amount of memory assigned to the deployment.
You may experience this:
At build time, if you want to load large third-party packages into your code.
At runtime, if you are storing large datasets in memory.
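For the runtime case, an alternative to raising the memory limit is to process the data as a stream, keeping only a small window in memory at a time. A generic sketch, not specific to the Quix SDK; the rolling-mean computation is illustrative:

```python
from collections import deque

def rolling_mean(values, window: int = 3):
    """Stream values, keeping at most `window` items in memory at once."""
    buf = deque(maxlen=window)  # old items are evicted automatically
    for v in values:
        buf.append(v)
        yield sum(buf) / len(buf)

# Works on any iterable, including one far too large to hold in memory:
stream = (x * 0.5 for x in range(1_000_000))
for i, mean in enumerate(rolling_mean(stream)):
    if i == 2:
        print(mean)  # mean of the first three generated values
        break
```

Because both the generator input and the bounded deque avoid materialising the full dataset, peak memory stays constant regardless of how much data flows through.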