vanthome / Winston Elasticsearch

License: MIT
An Elasticsearch transport for winston

Programming Languages

javascript
184084 projects - #8 most used programming language

Projects that are alternatives to or similar to Winston Elasticsearch

Elasticsearch Jest Example
ElasticSearch Java Rest Client Examples
Stars: ✭ 189 (-12.9%)
Mutual labels:  elasticsearch
Learningsummary
Covers most of the knowledge needed to advance in Java, including microservices, middleware, caching, database optimization, search engines, distributed systems, and more. Stars welcome~
Stars: ✭ 201 (-7.37%)
Mutual labels:  elasticsearch
Elasticsearch Comrade
Elasticsearch admin panel built for ops and monitoring
Stars: ✭ 214 (-1.38%)
Mutual labels:  elasticsearch
Firecamp
Serverless platform for stateful services
Stars: ✭ 194 (-10.6%)
Mutual labels:  elasticsearch
Amazonriver
amazonriver is a service that syncs real-time data from PostgreSQL to Elasticsearch or Kafka
Stars: ✭ 198 (-8.76%)
Mutual labels:  elasticsearch
Pgsync
Postgres to elasticsearch sync
Stars: ✭ 205 (-5.53%)
Mutual labels:  elasticsearch
Elastiflow
Network flow analytics (Netflow, sFlow and IPFIX) with the Elastic Stack
Stars: ✭ 2,322 (+970.05%)
Mutual labels:  elasticsearch
Wazuh Kibana App
Wazuh - Kibana plugin
Stars: ✭ 212 (-2.3%)
Mutual labels:  elasticsearch
Image To Image Search
A reverse image search engine powered by elastic search and tensorflow
Stars: ✭ 200 (-7.83%)
Mutual labels:  elasticsearch
Py Elasticsearch Django
A search engine built with Python that handles tens of millions of records
Stars: ✭ 207 (-4.61%)
Mutual labels:  elasticsearch
Spandex
Elasticsearch client for Clojure (built on new ES 7.x java client)
Stars: ✭ 195 (-10.14%)
Mutual labels:  elasticsearch
Elasticsearch Test Data
Generate and upload test data to Elasticsearch for performance and load testing
Stars: ✭ 194 (-10.6%)
Mutual labels:  elasticsearch
Book Elastic Search In Action
Elasticsearch development in action (Chinese book)
Stars: ✭ 205 (-5.53%)
Mutual labels:  elasticsearch
Snow Owl
🦉 Snow Owl - production ready, scalable terminology server (SNOMED CT, ICD-10, LOINC, dm+d, ATC and others)
Stars: ✭ 191 (-11.98%)
Mutual labels:  elasticsearch
Wazuh Docker
Wazuh - Docker containers
Stars: ✭ 213 (-1.84%)
Mutual labels:  elasticsearch
Awesome Es
Excellent resources on Jianshu can be submitted to the "elasticsearch" topic; resources from outside Jianshu are welcome via pull requests to this awesome list
Stars: ✭ 188 (-13.36%)
Mutual labels:  elasticsearch
Log4net.elasticsearch
log4net appender to ElasticSearch
Stars: ✭ 202 (-6.91%)
Mutual labels:  elasticsearch
Searchkit Demo
Example imdb search using elasticsearch, searchkit, typescript, react and webpack
Stars: ✭ 217 (+0%)
Mutual labels:  elasticsearch
Gimel
Big Data Processing Framework - Unified Data API or SQL on Any Storage
Stars: ✭ 216 (-0.46%)
Mutual labels:  elasticsearch
Docker Elastic
Deploy Elastic stack in a Docker Swarm cluster. Ship application logs and metrics using beats & GELF plugin to Elasticsearch
Stars: ✭ 202 (-6.91%)
Mutual labels:  elasticsearch

winston-elasticsearch


An Elasticsearch transport for the winston logging toolkit.

Features

  • Logstash-compatible message structure, and thus consumable with Kibana.
  • Date pattern based index names.
  • Custom transformer function to transform logged data into a different message structure.
  • Buffering of messages in case ES is unavailable. The only limit is available memory, since all unwritten messages are kept in memory.

Compatibility

For Winston 3.x with Elasticsearch 7.0 or later, use version 0.7.0 or later. For Elasticsearch 6.0 and later, use 0.6.0. For Elasticsearch 5.0 and later, use 0.5.9. For earlier versions, use the 0.4.x series.
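
For example, a sketch of pinning the install to the series that matches your stack (the version ranges below are assumptions derived from the mapping above):

# Winston 3.x with Elasticsearch 7.0+
npm install --save winston winston-elasticsearch@^0.7.0

# Elasticsearch 6.x
npm install --save winston winston-elasticsearch@0.6.0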

Unsupported / Todo

  • Querying.

Installation

npm install --save winston winston-elasticsearch

Usage

const winston = require('winston');
const { ElasticsearchTransport } = require('winston-elasticsearch');

const esTransportOpts = {
  level: 'info'
};
const esTransport = new ElasticsearchTransport(esTransportOpts);
const logger = winston.createLogger({
  transports: [
    esTransport
  ]
});
// Compulsory error handling
logger.on('error', (error) => {
  console.error('Error caught', error);
});
esTransport.on('warning', (error) => {
  console.error('Error caught', error);
});

The winston API for logging can be used with one restriction: only one JS object can be logged and indexed as such. If multiple objects are provided as arguments, their contents are stringified.
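
To illustrate the restriction, a short sketch (the field names are made up):

// A single meta object is indexed as structured fields
logger.info('User signed in', { userId: 42, method: 'oauth' });

// Multiple objects: their contents are stringified rather than indexed individually
logger.info('User signed in', { userId: 42 }, { method: 'oauth' });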

Options

  • level [info] Messages logged with a severity greater than or equal to the given one are logged to ES; others are discarded.
  • index [none | when dataStream is true, logs-app-default] The index to be used. This option is mutually exclusive with indexPrefix.
  • indexPrefix [logs] The prefix to use to generate the index name according to the pattern <indexPrefix>-<indexSuffixPattern>. Can be a string or a function returning the string to use.
  • indexSuffixPattern [YYYY.MM.DD] A Day.js-compatible date/time pattern.
  • transformer [see below] A transformer function to transform logged data into a different message structure.
  • useTransformer [true] If set to true, the given transformer will be used (or the default). Set to false if you want to apply custom transformers during Winston's createLogger.
  • ensureIndexTemplate [true] If set to true, the given indexTemplate is checked/uploaded to ES when the module sends the first log message, to make sure the log messages are mapped in a sensible manner.
  • indexTemplate [see file index-template-mapping.json] The mapping template to be ensured, as parsed JSON. The bundled default template is used when ensureIndexTemplate is true and indexTemplate is undefined.
  • flushInterval [2000] Time span between bulk writes in ms.
  • retryLimit [400] Number of retries to connect to ES before giving up.
  • healthCheckTimeout [30s] Timeout for one health check (health checks will be retried forever).
  • healthCheckWaitForStatus [yellow] Status to wait for when checking cluster health. See the ES API docs for supported options.
  • healthCheckWaitForNodes [>=1] Number of nodes to wait for when checking cluster health. See the ES API docs for supported options.
  • client An elasticsearch client instance. If given, all following options are ignored.
  • clientOpts An object hash passed to the ES client. See its docs for supported options.
  • waitForActiveShards [1] Sets the number of shard copies that must be active before proceeding with the bulk operation.
  • pipeline [none] Sets the pipeline id to pre-process incoming documents with. See the bulk API docs.
  • buffering [true] Boolean flag to enable or disable messages buffering. The bufferLimit option is ignored if set to false.
  • bufferLimit [null] Limit for the number of log messages in the buffer.
  • apm [null] Inject apm client to link elastic logs with elastic apm traces.
  • dataStream [false] Use Elasticsearch datastreams.
  • source [none] The source of the log message. This can be useful in a microservice setup to see which service a log message originates from.
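
As an illustration, a minimal configuration sketch combining several of the options above (the node URL, prefix, and limits are placeholder values, not recommendations):

const { ElasticsearchTransport } = require('winston-elasticsearch');

const esTransport = new ElasticsearchTransport({
  level: 'info',
  indexPrefix: 'app-logs',          // produces indices like app-logs-2019.09.30
  indexSuffixPattern: 'YYYY.MM.DD',
  flushInterval: 2000,
  buffering: true,
  bufferLimit: 1000,
  clientOpts: { node: 'http://localhost:9200' }
});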

Logging of ES Client

The default client and options log through the console.

Interdependencies of Options

When changing the indexPrefix and/or the transformer, make sure to provide a matching indexTemplate.
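
For example, a sketch of supplying a custom template together with a changed prefix (my-index-template.json is a hypothetical file whose index pattern must cover app-logs-*):

const { ElasticsearchTransport } = require('winston-elasticsearch');
// Hypothetical template file matching the chosen prefix
const indexTemplate = require('./my-index-template.json');

const esTransport = new ElasticsearchTransport({
  indexPrefix: 'app-logs',
  ensureIndexTemplate: true,
  indexTemplate
});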

Transformer

The transformer function allows mutation of log data as provided by winston into a shape more appropriate for indexing in Elasticsearch.

The default transformer generates a @timestamp and rolls any meta objects into an object called fields.

Params:

  • logdata An object with the data to log. Properties are:
    • timestamp [new Date().toISOString()] The timestamp of the log entry
    • level The log level of the entry
    • message The message for the log entry
    • meta The meta data for the log entry

Returns: Object with the following properties

  • @timestamp The timestamp of the log entry
  • severity The log level of the entry
  • message The message for the log entry
  • fields The meta data for the log entry

The default transformer function's transformation is shown below.

Input A:

{
  "message": "Some message",
  "level": "info",
  "meta": {
    "method": "GET",
    "url": "/sitemap.xml",
    ...
  }
}

Output A:

{
  "@timestamp": "2019-09-30T05:09:08.282Z",
  "message": "Some message",
  "severity": "info",
  "fields": {
    "method": "GET",
    "url": "/sitemap.xml",
    ...
  }
}

Note that in current Logstash versions, the only "standard fields" are @timestamp and @version; anything else is free-form.

A custom transformer function can be provided in the options hash.
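
For example, a minimal custom transformer sketch that follows the parameter and return shapes described above (the extra service field is a made-up addition):

const customTransformer = (logData) => ({
  '@timestamp': logData.timestamp || new Date().toISOString(),
  message: logData.message,
  severity: logData.level,
  fields: logData.meta,
  service: 'my-service' // hypothetical extra field
});

const esTransportOpts = {
  transformer: customTransformer
};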

Events

  • error: in case of any error.

Example

An example assuming default settings.

Log Action

logger.info('Some message', {});

Only JSON objects are logged from the meta field. Any non-object is ignored.

Generated Message

The log message generated by this module has the following structure:

{
  "@timestamp": "2019-09-30T05:09:08.282Z",
  "message": "Some log message",
  "severity": "info",
  "fields": {
    "method": "GET",
    "url": "/sitemap.xml",
    "headers": {
      "host": "www.example.com",
      "user-agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
      "accept": "*/*",
      "accept-encoding": "gzip,deflate",
      "from": "googlebot(at)googlebot.com",
      "if-modified-since": "Tue, 30 Sep 2019 11:34:56 GMT",
      "x-forwarded-for": "66.249.78.19"
    }
  }
}

Target Index

This message would be POSTed to the following endpoint:

http://localhost:9200/logs-2019.09.30/log/

So the default mapping uses an index pattern logs-*.

Logs correlation with Elastic APM

Instrument your code

yarn add elastic-apm-node
- or -
npm install elastic-apm-node

Then, before any other require in your code, do:

const apm = require("elastic-apm-node").start({
  serverUrl: "<apm server http url>"
})

// Set up the logger
const winston = require('winston');
const { ElasticsearchTransport } = require('winston-elasticsearch');

const esTransportOpts = {
  apm,
  level: 'info',
  clientOpts: { node: "<elastic server>" }
};
const logger = winston.createLogger({
  transports: [
    new ElasticsearchTransport(esTransportOpts)
  ]
});

Inject apm traces into logs

logger.info('Some log message');

Will produce:

{
  "@timestamp": "2021-03-13T20:35:28.129Z",
  "message": "Some log message",
  "severity": "info",
  "fields": {},
  "transaction": {
    "id": "1f6c801ffc3ae6c6"
  },
  "trace": {
    "id": "1f6c801ffc3ae6c6"
  }
}

Notice

Some "custom" logs may not have the apm trace.

If that is the case, you can retrieve traces using apm.currentTraceIds like so:

logger.info("Some log message", { ...apm.currentTracesIds })

The transformer function (see above) will place the APM trace IDs in the root object so that Kibana can link logs to APM traces.

Custom traces WILL TAKE PRECEDENCE

If you are using a custom transformer, you should add the following code into it:

  if (logData.meta['transaction.id']) transformed.transaction = { id: logData.meta['transaction.id'] };
  if (logData.meta['trace.id']) transformed.trace = { id: logData.meta['trace.id'] };
  if (logData.meta['span.id']) transformed.span = { id: logData.meta['span.id'] };

This scenario may happen on a server (e.g. restify) where you want to log after the response was sent to the client (e.g. using server.on('after', (req, res, route, error) => log.debug("after", { route, error }))). In that case the trace IDs will not end up in the log message because the transaction has already ended when the server sent the response to the client.

In that scenario, you could do something like so:

server.use((req, res, next) => {
  req.apm = apm.currentTraceIds
  next()
})
server.on("after", (req, res, route, error) => log.debug("after", { route, error, ...req.apm }))

Manual Flushing

Flushing can be manually triggered like this:

const esTransport = new ElasticsearchTransport(esTransportOpts);
esTransport.flush();

Datastreams

Elasticsearch version 7.9 and higher supports data streams.

When dataStream: true is set, bulk indexing happens with create instead of index, and the default naming convention is logs-*-*, which matches the built-in index template and ILM policy, automatically creating a data stream.

By default, the data stream will be named logs-app-default, but alternatively you can set the index option to anything that matches logs-*-* to make use of the built-in template and ILM policy.

If dataStream: true is enabled and you are either using Elasticsearch < 7.9, or you have set a custom index that does not match logs-*-* and have not created a matching custom template in Elasticsearch, a normal index will be created instead.
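
A sketch of enabling data streams with a custom stream name that still matches logs-*-* (logs-myapp-production and the node URL are example values only):

const { ElasticsearchTransport } = require('winston-elasticsearch');

const esTransport = new ElasticsearchTransport({
  dataStream: true,
  index: 'logs-myapp-production', // matches the built-in logs-*-* template and ILM policy
  clientOpts: { node: 'http://localhost:9200' }
});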
