
Introducing the TICKscript language

What are the TICK Stack and TICKscript?

TICK is an acronym for a platform of open source tools built to make the collection, storage, graphing, and alerting of time series data incredibly easy. InfluxData provides a modern time series platform, designed from the ground up to handle metrics and events. InfluxData’s products are based on an open source core consisting of the projects Telegraf, InfluxDB (the “I” in TICK), Chronograf, and Kapacitor, collectively called the TICK Stack.

For more information, read: An Introduction to TICK Stack for IoT

Kapacitor uses a Domain Specific Language (DSL) named TICKscript to define tasks involving the extraction, transformation, and loading of data, as well as the tracking of arbitrary changes and the detection of events within data. One common task is defining alerts. TICKscript is used in .tick files to define pipelines for processing data. The TICKscript language is designed to chain together the invocation of data processing operations defined in nodes.

Each script has a flat scope and each variable in the scope can reference a literal value, such as a string, an integer or a float value, or a node instance with methods that can then be called.
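
As a minimal sketch of this flat scope (the database, measurement, field, and threshold below are placeholder assumptions, not values from this post):

// literal values held in variables
var db = 'telegraf'
var warn = 80.0

// a node instance held in a variable
var data = stream
    |from()
        .database(db)
        .measurement('cpu')

// methods can then be called on the node variable
data
    |alert()
        .warn(lambda: "usage_user" > warn)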

These methods come in two forms, both shown in the sketch below.

  • Property methods – A property method modifies the internal properties of a node and returns a reference to the same node. Property methods are called using dot (‘.’) notation.
  • Chaining methods – A chaining method creates a new child node and returns a reference to it. Chaining methods are called using pipe (‘|’) notation.
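
A minimal sketch of both notations, assuming a Telegraf-style cpu measurement (the window settings and endpoint name are illustrative):

stream
    |from()                  // chaining method: creates a new from() node
        .measurement('cpu')  // property method: modifies the from() node
    |window()                // chaining method: creates a new window node
        .period(10s)         // property method
        .every(5s)           // property method
    |httpOut('example')      // chaining method: creates an HTTP output node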

Nodes

In TICKscript the fundamental type is the node. A node has properties and, as mentioned, chaining methods. A new node can be created from a parent or sibling node using a chaining method of that parent or sibling node. For each node type the signature of this method will be the same, regardless of the parent or sibling node type. The chaining method can accept zero or more arguments used to initialize internal properties of the new node instance. Common node types are batch, query, stream, from, eval and alert, though there are dozens of others.
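
For illustration, a chaining method can take arguments that initialize the new node, as in this sketch (the mem measurement and its used_percent field are assumed from a standard Telegraf setup):

stream
    |from()
        .measurement('mem')
    // eval() is a chaining method; its lambda argument initializes
    // the new eval node, and .as() names the resulting field
    |eval(lambda: "used_percent" / 100.0)
        .as('used_fraction')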

Pipelines

Every TICKscript is broken into one or more pipelines. Pipelines are chains of nodes logically organized along edges that cannot cycle back to earlier nodes in the chain. The nodes within a pipeline can be assigned to variables. This allows the results of different pipelines to be combined using, for example, a join or a union node (see the sketch below). It also allows sections of the pipeline to be broken into understandable, self-descriptive functional units. In a simple TICKscript there may be no need to assign pipeline nodes to variables. The initial node in a pipeline sets the processing type for the Kapacitor task it defines. This can be either stream or batch, and the two types cannot be combined.
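
A sketch of combining two pipelines through variables and a join node (the measurement names are placeholders):

var cpu = stream
    |from()
        .measurement('cpu')

var mem = stream
    |from()
        .measurement('mem')

// join the two streams; .as() prefixes the fields of each side
cpu
    |join(mem)
        .as('cpu', 'mem')
    |httpOut('joined')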

Stream or batch?

With stream processing, data points are read point by point as they arrive, as in a classic data stream; Kapacitor subscribes to all writes of interest in InfluxDB. With batch processing, a frame of ‘historic’ data is read from the database and then processed. With stream processing, data can be transformed before being written to InfluxDB. With batch processing, the data should already be stored in InfluxDB; after processing, it can also be written back to it.

Which to use depends upon system resources and the kind of computation being undertaken. When working with a large set of data over a long time frame batch is preferred. It leaves data stored on the disk until it is required, though the query, when triggered, will result in a sudden high load on the database. Processing a large set of data over a long time frame with stream means needlessly holding potentially billions of data points in memory. When working with smaller time frames stream is preferred. It lowers the query load on InfluxDB.
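
For contrast with the stream example later in this post, a batch task might look like the following sketch (the query, period, and interval are illustrative assumptions):

dbrp "telegraf"."autogen"

batch
    // query() reads a frame of historic data from InfluxDB
    |query('SELECT mean(usage_idle) FROM "telegraf"."autogen"."cpu"')
        .period(1h)   // how much history each frame covers
        .every(10m)   // how often the query is triggered
    |httpOut('batch_dump')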

Pipelines as graphs

Pipelines in Kapacitor are directed acyclic graphs (DAGs). This means that each edge has a direction down which data flows, and that there cannot be any cycles in the pipeline. An edge can also be thought of as the data-flow relationship that exists between a parent node and its child.

One of two fundamental edges is declared at the start of any pipeline. This first edge establishes the type of processing for the task; each ensuing node, however, establishes the edge type between itself and its children.

  • stream → from() – an edge that transfers data a single data point at a time.
  • batch → query() – an edge that transfers data in chunks instead of one point at a time.

Examples

An elementary stream → from() pipeline

dbrp "telegraf"."autogen"

stream
    |from()
        .measurement('cpu')
    |httpOut('dump')

The simple script above can be used to create a task with the default Telegraf database.

$ kapacitor define sf_task -tick sf.tick

The task, sf_task, will simply cache the latest cpu data point as JSON at the HTTP REST endpoint (e.g., http://localhost:9092/kapacitor/v1/tasks/sf_task/dump).
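
Assuming Kapacitor is running locally on its default port, the task can then be enabled and its output inspected:

$ kapacitor enable sf_task
$ curl http://localhost:9092/kapacitor/v1/tasks/sf_task/dump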

This example contains a database and retention policy statement: dbrp.

This example also contains three nodes:

  • The base stream node.
  • The requisite from() node, which defines the stream of data points.
  • The processing node httpOut(), which caches the data it receives to the REST service of Kapacitor.

It contains two edges.

  • stream → from() – sets the processing type of the task and the data stream.
  • from() → httpOut() – passes the data stream to the HTTP output processing node.

It contains one property method: the call .measurement('cpu') on the from() node, which defines the measurement to be used for further processing.

How to learn TICKscript

Visit the official docs: https://docs.influxdata.com/kapacitor/v1.5/tick/syntax/


I hope you like this post. Do you have any questions? Leave a comment down below! Thanks for reading. If you liked this post, you might like my next ones too, so please support me by subscribing to my blog.


