LOGBack Appender for Amazon Kinesis

This is an implementation of the AWS Labs log4j appender for LOGBack.

Supports both Kinesis and Kinesis Firehose streams.

Sample Configuration

<configuration>
  <appender name="KINESIS" class="com.gu.logback.appender.kinesis.KinesisAppender">
    <bufferSize>1000</bufferSize><!-- Max number of log events buffered in memory -->
    <threadCount>20</threadCount><!-- Parallel threads used to publish events to the stream -->
    <endpoint>kinesis.us-east-1.amazonaws.com</endpoint><!-- Specify endpoint OR region -->
    <region>us-east-1</region>
    <roleToAssumeArn>foo</roleToAssumeArn><!-- Optional: ARN of role for cross account access -->
    <maxRetries>3</maxRetries><!-- AWS client retry limit -->
    <shutdownTimeout>30</shutdownTimeout><!-- Seconds to wait for the buffer to drain at shutdown -->
    <streamName>testStream</streamName>
    <encoding>UTF-8</encoding>
    <layout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%m</pattern>
    </layout>
  </appender>
  <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%5p [%t] (%F:%L) - %m%n</pattern>
    </encoder>
  </appender>
  <logger name="KinesisLogger" additivity="false" level="INFO">
    <appender-ref ref="KINESIS"/>
  </logger>
  <root level="INFO">
    <appender-ref ref="stdout"/>
  </root>
</configuration>

Use com.gu.logback.appender.kinesis.KinesisAppender for Kinesis or com.gu.logback.appender.kinesis.FirehoseAppender for Kinesis Firehose.
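
For Firehose the configuration is the same apart from the appender class. A minimal sketch, assuming the Firehose appender accepts the same properties as the Kinesis one (the stream name here is a hypothetical example):

<appender name="FIREHOSE" class="com.gu.logback.appender.kinesis.FirehoseAppender">
  <region>us-east-1</region>
  <streamName>testDeliveryStream</streamName><!-- Name of the Firehose delivery stream -->
  <layout class="ch.qos.logback.classic.PatternLayout">
    <pattern>%m</pattern>
  </layout>
</appender>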

Performance and reliability notes

This appender is performant, but it will block if the Kinesis stream's write throughput is exceeded. To guard against this you might want to consider:

  • calculating how many shards you need based on your expected throughput
  • alerting on write-throughput-exceeded errors on the Kinesis stream(s)
  • setting up autoscaling so that your shard count scales up and down appropriately (see the AWS docs)
  • configuring the AWS client not to retry on failure (the maxRetries setting above), so that log lines are discarded when stream throughput is exceeded rather than backing up and causing a cascading failure
  • wrapping the appender in Logback's AsyncAppender, which can be configured to drop overflowing messages instead of blocking (see the sketch after this list)
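
A minimal AsyncAppender wrapper might look like this (the queue size and thresholds are illustrative, not recommendations):

<appender name="ASYNC_KINESIS" class="ch.qos.logback.classic.AsyncAppender">
  <appender-ref ref="KINESIS"/>
  <queueSize>512</queueSize><!-- In-memory event queue -->
  <discardingThreshold>0</discardingThreshold><!-- 0 disables level-based discarding of TRACE/DEBUG/INFO events -->
  <neverBlock>true</neverBlock><!-- Drop events rather than block when the queue is full -->
</appender>

Your logger would then reference ASYNC_KINESIS instead of KINESIS.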

Testing locally

To test this locally you can simply run mvn install, which builds the library and installs it into your local Maven repository.
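
A downstream project on the same machine can then resolve the build with a dependency entry along these lines (coordinates assumed from this project's pom; the version must match the one you installed):

<dependency>
  <groupId>com.gu</groupId>
  <artifactId>kinesis-logback-appender</artifactId>
  <version>1.4.1-SNAPSHOT</version>
</dependency>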

Releasing

Some notes for Guardian employees shipping updates to this.

First of all, confirm that your pom.xml has a SNAPSHOT version in it, e.g.:

<version>1.4.1-SNAPSHOT</version>

In order to release this to Maven you'll need a settings file at ~/.m2/settings.xml containing your Sonatype credentials (you can probably find these in ~/.sbt/0.13/sonatype.sbt if you've shipped Scala libraries):

<settings>
  <servers>
    <server>
      <id>ossrh</id>
      <username>username</username>
      <password>password</password>
    </server>
  </servers>
</settings>

You'll also need the mvn command installed; on macOS you can run brew install maven at the command line.

Once you've got that, mvn clean deploy will deploy your snapshot to Sonatype. This only publishes to the snapshot repository (in an sbt project you can add resolvers += "Sonatype snapshots" at "https://oss.sonatype.org/content/repositories/snapshots" to test resolution).
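
The Maven equivalent, if the consuming project is a Maven build rather than sbt, is a repository entry in its pom.xml like this (the id is arbitrary):

<repositories>
  <repository>
    <id>sonatype-snapshots</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <snapshots>
      <enabled>true</enabled><!-- Allow -SNAPSHOT versions from this repository -->
    </snapshots>
  </repository>
</repositories>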

Finally, when ready, run mvn release:clean release:prepare and follow the prompts. Once this has completed, one more step actually releases it to Maven Central: mvn release:perform.
