
InfluxDB | Time Series Database | TICK Stack | TICKscript

In this post, you will learn about time series databases, InfluxDB, the TICK Stack, and TICKscript. InfluxDB is one of the most popular and useful time series databases. You will also find installation and getting-started guidelines below.

What is a Time Series Database?

A Time Series Database (TSDB) is a database optimized for time-stamped or time series data. Time series data are simply measurements or events that are tracked, monitored, downsampled, and aggregated over time. This could be server metrics, application performance monitoring, network data, sensor data, events, clicks, trades in a market, and many other types of analytics data.

A Time Series Database is built specifically for handling metrics and events or measurements that are time-stamped. A TSDB is optimized for measuring change over time. Properties that make time series data very different from other data workloads are data lifecycle management, summarization, and large range scans of many records.

Why is a Time Series Database Important Now?

Time Series Databases are not new, but the first-generation Time Series Databases were primarily focused on looking at financial data, the volatility of stock trading, and systems built to solve trading. Today, every piece of software that can be broken into components is componentized. In addition, we are witnessing the instrumentation of every available surface in the material world—streets, cars, factories, power grids, ice caps, satellites, clothing, phones, microwaves, milk containers, planets, human bodies. Everything has, or will have, a sensor. So now, everything inside and outside the company is emitting a relentless stream of metrics and events or time series data. This means that the underlying platforms need to evolve to support these new workloads—more data points, more data sources, more monitoring, more controls.

Independent Ranking of Top 15 Time Series Databases

[Figure: ranking of the top 15 time series databases]
Source: https://www.influxdata.com/

Why Is the InfluxDB Time Series Database Unique?

The whole InfluxData platform is built from an open source core. InfluxData is an active contributor to the Telegraf, InfluxDB, Chronograf, and Kapacitor (TICK) projects, and also sells InfluxEnterprise and InfluxCloud on top of this open source core. The InfluxDB data model is quite different from other time series solutions such as Graphite, RRD, or OpenTSDB. InfluxDB has a line protocol for sending time series data which takes the following form:

measurement-name tag-set field-set timestamp

The measurement name is a string, the tag set is a collection of key/value pairs where all values are strings, and the field set is a collection of key/value pairs where the values can be int64, float64, bool, or string.
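For instance, a single point in line protocol might look like the following (the measurement, tag, and field names here are purely illustrative):

weather,location=us-midwest temperature=82.4,humidity=43i 1465839830100400200

Here weather is the measurement, location=us-midwest is a tag, temperature and humidity are fields (a float and an integer), and the trailing number is a nanosecond-precision timestamp.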

Features of InfluxDB

  • DevOps Observability – Observing and automating key customer-facing systems, infrastructure, applications, and business processes.
  • IoT Analytics – Analyzing and automating sensors and devices in real time, delivering insight and value while it still matters.
  • Real-Time Analytics – Leveraging the investment in instrumentation and observability, detecting patterns, and creating new business opportunities.
  • Ease of Scale-Out & Deployment – Millions of writes per second, and clustering to eliminate single points of failure.
  • Quickly find value in data – control systems, identify patterns, and predict the future.

Open Source Time Series Platform

The InfluxData Platform is built upon a set of open source projects — Telegraf, InfluxDB, Chronograf, and Kapacitor — which are collectively called the TICK Stack. Below you can learn more about Telegraf, InfluxDB, Chronograf, and Kapacitor and their specific functions within InfluxData’s open source core.

The Open Source Time Series Platform provides services and functionality to accumulate, analyze, and act on time series data.

[Figure: the TICK Stack platform (Telegraf, InfluxDB, Chronograf, Kapacitor)]
Source: https://www.influxdata.com/

Note: Clustering is only available in InfluxEnterprise and InfluxCloud – Compare Editions.

Telegraf

Telegraf is a plugin-driven server agent for collecting and reporting metrics. Telegraf has plugins or integrations to source a variety of metrics directly from the system it’s running on, to pull metrics from third-party APIs, or even to listen for metrics via StatsD and Kafka consumer services. It also has output plugins to send metrics to a variety of other datastores, services, and message queues, including InfluxDB, Graphite, OpenTSDB, Datadog, Librato, Kafka, MQTT, NSQ, and many others.
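As a rough sketch, a minimal telegraf.conf pairs an input plugin with the InfluxDB output. The plugin names below are real; the URL and database values are placeholders to adapt to your setup:

[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"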

Chronograf

Chronograf is the administrative user interface and visualization engine of the platform. It makes monitoring and alerting for your infrastructure easy to set up and maintain. It is simple to use and includes templates and libraries that allow you to rapidly build dashboards with real-time visualizations of your data and to easily create alerting and automation rules.

Kapacitor

Kapacitor is a native data processing engine. It can process both stream and batch data from InfluxDB. Kapacitor lets you plug in your own custom logic or user-defined functions to process alerts with dynamic thresholds, match metrics for patterns, compute statistical anomalies, and perform specific actions based on these alerts like dynamic load rebalancing. Kapacitor integrates with HipChat, OpsGenie, Alerta, Sensu, PagerDuty, Slack, and more.
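As a quick illustration of the workflow, alert logic is written in a TICKscript file and registered with the kapacitor CLI; the task and file names below are illustrative (a complete TICKscript example appears later in this post):

$ kapacitor define cpu_alert -tick cpu_alert.tick
$ kapacitor enable cpu_alert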

How to Install InfluxDB on Ubuntu

1. First, update all your current system packages by the command

sudo apt-get update

2. Add the InfluxDB key, used to verify the packages that will be installed, with the command below.

curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -

3. Add the InfluxDB repository to your sources list with the command below. (The trusty codename targets Ubuntu 14.04; substitute your release’s codename, e.g. xenial for Ubuntu 16.04.)

echo "deb https://repos.influxdata.com/ubuntu trusty stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

4. Re-update your system packages with the command

sudo apt-get update

5. Now you are ready to run the below command to install InfluxDB.

sudo apt-get -y install influxdb

6. Now that InfluxDB is installed, you can open its web admin interface by entering your server's IP address followed by ":8083" in a browser.

7. Create a super admin with all privileges by entering the command below in the query box.

CREATE USER "admin" WITH PASSWORD 'password' WITH ALL PRIVILEGES

8. Run the query SHOW USERS to make sure that your admin user was created successfully.

Enable authentication

By default, authentication is not enabled in InfluxDB. To enable it, follow the steps below.

1. Open the configuration file in the nano editor with the command below.

sudo nano /etc/influxdb/influxdb.conf

2. Search for auth-enabled under the [http] section and change it from false to true, so the section looks like the snippet below.
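The relevant part of /etc/influxdb/influxdb.conf should then read:

[http]
  auth-enabled = true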

3. Restart InfluxDB so the changes take effect, with the command below.

sudo service influxdb restart
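Once authentication is enabled, the influx shell must be given credentials; for example, with the admin user created earlier:

influx -username admin -password 'password'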

Getting Started with InfluxDB

I use Ubuntu 16.04 LTS. Follow the instructions given below:

  • Download and install InfluxDB

Read the section How to Install InfluxDB on Ubuntu above and follow the instructions.

  • Now check the status of InfluxDB
systemctl status influxdb.service
  • Once it is active, start the influx shell
influx
  • First create a database
> create database mydb
  • To view the list of databases
> show databases
name: databases
---------------
name
_internal
mydb

>
  • To use the database
> use mydb

Note: In InfluxDB, tables are called measurements and columns are called fields.

We don’t need to define measurements (tables) and fields (columns) in advance; InfluxDB creates measurements and adds fields automatically when we insert data.

Inserting Data

The format for writing data is:

measurementName field1=value1,field2=value2,field3=value3 timestamp
  • The timestamp is in nanoseconds. If we don’t provide a timestamp, InfluxDB assigns the local current timestamp.
  • By default, all numbers are treated as doubles. For an integer value we have to append i at the end.
    > insert measurementName field4=12i
  • String values should be in double quotes.
    > insert measurementName field5="qwqw"
  • For boolean values use t, T, true, True, or TRUE for TRUE, and f, F, false, False, or FALSE for FALSE
    > insert measurementName field6=T
  • We can use the \ character to escape commas, spaces, equals signs, and other special characters in field values (a combined example follows below).
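Putting these rules together, a single insert with a tag, mixed field types, and an explicit timestamp could look like this (all names are illustrative):

> insert sensorData,deviceId=esp-01 temperature=23.5,count=3i,status="ok",active=true 1465839830100400200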

For more details, refer to the official documentation.

Querying Data

To select all fields from a measurement

> select * from measurementName

To select particular fields

> select field1, field2 from measurementName

Note: If your measurement name or field name contains characters such as ., #, or =, then use double quotes:

> select "field1.name", "field2.name" from "measurement.name"

Where clause

A typical usage of the where clause:

> select * from measurement where field1 > 12 and field2 = 'sparta' and time > now() - 1d

We can also use OR logic, grouping conditions with parentheses ( and ).
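For example, a query mixing AND with a parenthesized OR condition (field names as above):

> select * from measurementName where field1 > 12 and (field2 = 'sparta' or field2 = 'athens')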

The comparators supported in InfluxDB are:

  • = equal to
  • <> not equal to
  • != not equal to
  • > greater than
  • < less than
  • =~ matches against
  • !~ doesn’t match against
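For example, =~ compares a string value against a regular expression (the tag name here is illustrative):

> select * from measurementName where "deviceId" =~ /^esp-/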

You can learn about queries in detail from the official InfluxDB documentation.

What is the TICK Stack?

The TICK Stack is an acronym for a platform of open source tools built to make collection, storage, graphing, and alerting on time series data incredibly easy; the “I” in TICK stands for InfluxDB. InfluxData provides a modern time series platform, designed from the ground up to handle metrics and events. InfluxData’s products are based on an open source core, which consists of the projects Telegraf, InfluxDB, Chronograf, and Kapacitor — collectively called the TICK Stack.

Telegraf

Telegraf is a plugin-driven server agent for collecting and reporting metrics (described in detail above).

InfluxDB

InfluxDB is a high-performance, efficient datastore built to handle high volumes of time series data.

Chronograf

Chronograf is the administrative user interface and visualization engine of the platform (described in detail above).

Kapacitor

Kapacitor is the native data processing engine for stream and batch data (described in detail above).

Use Cases for TICK

TICK aligns well with many potential use cases. It especially fits use cases that rely on triggering events based on constant real-time data streams. An excellent example of this would be fleet tracking. TICK can monitor the fleet data in real time and create an alert condition if something out of the ordinary occurs. It can also visualize the fleet in its entirety, creating a real-time dashboard of fleet status.

IoT devices are also a strong point for TICK. Solutions that rely on many IoT devices combining data streams to build an overall view, such as an automated manufacturing line, work well with TICK. TICK can trigger alert events and easily visualize the entire status of a production line.

What is TICKscript?

Kapacitor uses a Domain Specific Language (DSL) named TICKscript to define tasks involving the extraction, transformation and loading of data and involving, moreover, the tracking of arbitrary changes and the detection of events within data. One common task is defining alerts. TICKscript is used in .tick files to define pipelines for processing data. The TICKscript language is designed to chain together the invocation of data processing operations defined in nodes.

Each script has a flat scope and each variable in the scope can reference a literal value, such as a string, an integer or a float value, or a node instance with methods that can then be called.

These methods come in two forms; both appear in the short sketch after this list.

  • Property methods – A property method modifies the internal properties of a node and returns a reference to the same node. Property methods are called using dot (‘.’) notation.
  • Chaining methods – A chaining method creates a new child node and returns a reference to it. Chaining methods are called using pipe (‘|’) notation.
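A minimal TICKscript sketch showing both notations together (the measurement name and threshold are illustrative):

stream
    // chaining method (pipe): creates a new from() node
    |from()
        // property method (dot): configures the from() node
        .measurement('cpu')
    // chaining method (pipe): creates an alert node
    |alert()
        // property method (dot): sets the critical condition
        .crit(lambda: "usage_idle" < 10)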

Nodes

In TICKscript the fundamental type is the node. A node has properties and, as mentioned, chaining methods. A new node can be created from a parent or sibling node using a chaining method of that parent or sibling node. For each node type the signature of this method will be the same, regardless of the parent or sibling node type. The chaining method can accept zero or more arguments used to initialize internal properties of the new node instance. Common node types are batch, query, stream, from, eval and alert, though there are dozens of others.

Pipelines

Every TICKscript is broken into one or more pipelines. Pipelines are chains of nodes logically organized along edges that cannot cycle back to earlier nodes in the chain. The nodes within a pipeline can be assigned to variables. This allows the results of different pipelines to be combined using, for example, a join or a union node. It also allows for sections of the pipeline to be broken into reasonably understandable self-descriptive functional units. In a simple TICKscript there may be no need to assign pipeline nodes to variables. The initial node in the pipeline sets the processing type for the Kapacitor task they define. These can be either stream or batch. These two types of pipelines cannot be combined.
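For instance, pipeline sections can be assigned to variables and then combined; a sketch using a union node (measurement names are illustrative):

var cpu = stream
    |from()
        .measurement('cpu')

var mem = stream
    |from()
        .measurement('mem')

cpu
    |union(mem)
        .rename('cpu_mem')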

Stream or batch?

With stream processing, datapoints are read, as in a classic data stream, point by point as they arrive. With stream Kapacitor subscribes to all writes of interest in InfluxDB. With batch processing a frame of ‘historic’ data is read from the database and then processed. With stream processing data can be transformed before being written to InfluxDB. With batch processing, the data should already be stored in InfluxDB. After processing, it can also be written back to it.

Which to use depends upon system resources and the kind of computation being undertaken. When working with a large set of data over a long time frame batch is preferred. It leaves data stored on the disk until it is required, though the query, when triggered, will result in a sudden high load on the database. Processing a large set of data over a long time frame with stream means needlessly holding potentially billions of data points in memory. When working with smaller time frames stream is preferred. It lowers the query load on InfluxDB.
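As a sketch, a batch pipeline queries a window of already-stored data on a schedule (the query, period, and every values are illustrative):

batch
    |query('SELECT mean(usage_idle) FROM "telegraf"."autogen"."cpu"')
        .period(5m)
        .every(5m)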

Pipelines as graphs

Pipelines in Kapacitor are directed acyclic graphs (DAGs). This means that each edge has a direction down which data flows, and that there cannot be any cycles in the pipeline. An edge can also be thought of as the data-flow relationship that exists between a parent node and its child.

One of two fundamental edges is declared at the start of any pipeline. This first edge establishes the type of processing for the task; however, each ensuing node establishes the edge type between itself and its children.

  • stream → from() – an edge that transfers data a single data point at a time.
  • batch → query() – an edge that transfers data in chunks instead of one point at a time.

Examples

An elementary stream → from() pipeline

dbrp "telegraf"."autogen"

stream
    |from()
        .measurement('cpu')
    |httpOut('dump')

The simple script above can be used to create a task against the default Telegraf database.

$ kapacitor define sf_task -tick sf.tick

The task, sf_task, will simply cache the latest cpu data point as JSON at the HTTP REST endpoint (e.g. http://localhost:9092/kapacitor/v1/tasks/sf_task/dump).

This example contains a database and retention policy statement: dbrp.

This example also contains three nodes:

  • The base stream node.
  • The requisite from() node, that defines the stream of data points.
  • The processing node httpOut(), that caches the data it receives to the REST service of Kapacitor.

It contains two edges.

  • stream → from() – sets the processing type of the task and the data stream.
  • from() → httpOut() – passes the data stream to the HTTP output processing node.

It contains one property method, which is the call on the from() node to .measurement('cpu') defining the measurement to be used for further processing.
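To inspect the cached output, enable the task and query the endpoint (assuming Kapacitor’s default port 9092):

$ kapacitor enable sf_task
$ curl http://localhost:9092/kapacitor/v1/tasks/sf_task/dump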

How to Learn TICKscript

Visit the official docs: https://docs.influxdata.com/kapacitor/v1.5/tick/syntax/

Running the TICK Stack on a Raspberry Pi

You will need:

  • MQ135 sensor
  • ESP8266
  • Arduino IDE
  • InfluxDB
  • Node.js
  • MQTT broker

To install Node.js, visit this article – How To Install Node.js on Ubuntu.

To install InfluxDB, see the sections above – How to Install InfluxDB on Ubuntu and Getting Started with InfluxDB.

We use test.mosquitto.org as the MQTT broker, but if you are interested in running your own secure MQTT broker, visit this article – How To Create Secure MQTT Broker.

Using the MQ135 Sensor with InfluxDB

The code is provided below; modify it for your setup before use.

data.js (this code runs on the system where you want to store the data)

**********************************data.js*****************************************

var mqtt = require('mqtt');

// var obj = { 'username': 'user', 'password': 'password' };
var client = mqtt.connect('mqtt://test.mosquitto.org:1883');
// var client = mqtt.connect('mqtt://test.mosquitto.org:1883', obj);

// InfluxDB client pointed at the aqDB database (create it first with: create database aqDB)
const Influx = require('influxdb-nodejs');
const clientInflux = new Influx('http://127.0.0.1:8086/aqDB');

// Field and tag schema for the 'airData' measurement written below
// (the original script declared the schema for 'sendData', which never matched the write)
const fieldSchema = {
  aq: 'f',  // rzero reading (a float)
  ppm: 'f'  // ppm reading (a float)
};
const tagSchema = {
  deviceId: '*'
};
clientInflux.schema('airData', fieldSchema, tagSchema, {
  // drop any fields/tags not declared above (default is false)
  stripUnknown: true,
});

client.on('connect', () => {
  console.log('Connected to server');
  client.subscribe('sendData');
});
client.on('close', () => {
  console.log('Disconnected from server');
});

// Expects JSON payloads of the form {"deviceId":"...","data":{"rzero":...,"ppm":...}}
client.on('message', (topic, message) => {
  console.log('mqtt msg : ' + message.toString());
  const data = JSON.parse(message);
  if (data.deviceId && data.data) {
    clientInflux.write('airData')
      .tag({
        deviceId: data.deviceId,
      })
      .field({
        aq: data.data.rzero,
        ppm: data.data.ppm,
      })
      .then(() => {
        console.log('Success');
      })
      .catch((err) => {
        console.log(err);
      });
  } else {
    console.log('Ignoring message without deviceId/data');
  }
});

*****************************data.js***************************************
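To run the collector, install its two npm dependencies and start it (this assumes Node.js is installed and a local InfluxDB is running):

$ npm install mqtt influxdb-nodejs
$ node data.js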

Node Sensor Code “mq135.ino”

Flash this code to the ESP8266 using the Arduino IDE. If you don’t know how, visit this article – Arduino Support for ESP8266 with simple test code.

*********************************mq135.ino*********************************

#include <ESP8266WiFi.h>
#include <MQ135.h>
#include <PubSubClient.h>
#include <ArduinoJson.h>
#define ANALOGPIN A0

const char* ssid = "enter ssid";
const char* password = "enter password";

const char* mqttServer = "test.mosquitto.org";
const int mqttPort = 1883;
// const char* mqttUser = "user";
// const char* mqttPassword = "password";

WiFiClient espClient;
PubSubClient client(espClient);

MQ135 gasSensor = MQ135(ANALOGPIN);

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  delay(100);
  pinMode(2, OUTPUT);  // on-board LED used as a heartbeat

  // Connect to WiFi (give up waiting after ~5 seconds)
  Serial.println();
  WiFi.begin(ssid, password);
  int i = 0;
  while ((i <= 10) && (WiFi.status() != WL_CONNECTED)) {
    delay(500);
    Serial.print(".");
    i++;
  }
  Serial.println();
  Serial.println("WiFi connected");
  Serial.println("IP address: ");
  Serial.println(WiFi.localIP());

  // Connect to the MQTT broker
  client.setServer(mqttServer, mqttPort);
  client.setCallback(callback);
  while (!client.connected()) {
    Serial.println("Connecting to MQTT...");
    if (client.connect("ESP8266Client1")) {
      Serial.println("connected");
    } else {
      Serial.print("failed with state ");
      Serial.print(client.state());
      delay(2000);
    }
  }

  client.publish("sendData", "Hello from ESP8266");
  client.subscribe("sendData");
}

// Print any message received on a subscribed topic
void callback(char* topic, byte* payload, unsigned int length) {
  Serial.print("Message arrived in topic: ");
  Serial.println(topic);

  Serial.print("Message:");
  for (unsigned int i = 0; i < length; i++) {
    Serial.print((char)payload[i]);
  }
  Serial.println();
  Serial.println("-----------------------");
}

void loop() {
  // Keep the MQTT connection alive and process incoming messages
  client.loop();

  float rzero = gasSensor.getRZero();  // calibration resistance value
  Serial.print("RZero=");
  Serial.println(rzero);

  float ppm = gasSensor.getPPM();      // gas concentration in ppm
  Serial.print("PPM=");
  Serial.println(ppm);

  // Build the JSON payload in the shape data.js expects:
  // {"deviceId":"...","data":{"rzero":...,"ppm":...}}
  DynamicJsonBuffer jsonBuffer;
  JsonObject& json = jsonBuffer.createObject();
  json["deviceId"] = "esp-01";  // illustrative device id; choose your own
  JsonObject& data = json.createNestedObject("data");
  data["rzero"] = rzero;
  data["ppm"] = ppm;
  String d;
  json.printTo(d);
  client.publish("sendData", d.c_str());
  Serial.println(d);

  // Blink the LED once per cycle
  digitalWrite(2, HIGH);
  delay(500);
  digitalWrite(2, LOW);
  delay(500);
}

*********************************mq135.ino******************************

Pin Diagram for the MQ135 Sensor

MQ135                                ESP8266

VCC       ============>  3.3V

GND       ============>  GND

DATA      ============>  A0
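Once data is flowing, you can verify it from the influx shell; the database and measurement names come from data.js above (create the database before starting the collector):

> create database aqDB
> use aqDB
> select * from airData limit 5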


Harshvardhan Mishra

Hi, I'm Harshvardhan Mishra. Tech enthusiast and IT professional with a B.Tech in IT, PG Diploma in IoT from CDAC, and 6 years of industry experience. Founder of HVM Smart Solutions, blending technology for real-world solutions. As a passionate technical author, I simplify complex concepts for diverse audiences. Let's connect and explore the tech world together! If you want to help support me on my journey, consider sharing my articles, or Buy me a Coffee! Thank you for reading my blog! Happy learning! Linkedin
