README

Logproxy


A microservice which acts as a logdrain and forwards messages to HSDP Foundation logging. Supports the new HSDP v2 single tenant solution.

Features

  • Cloud foundry logdrain endpoint
  • IronIO project logging endpoint
  • Supports v2 of the HSDP logging API
  • Batches messages (max 25 per upload) for good performance
  • Very lean, runs in just 32MB RAM
  • Plugin support
  • Filter only mode
  • Elastic APM support

Distribution

Logproxy is distributed as a Docker image:

docker pull philipssoftware/logproxy

Dependencies

By default Logproxy uses RabbitMQ for log buffering. This is useful for handling spikes in log volume. You can also choose to use an internal Go channel based queue.
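Conceptually, the channel-based queue buffers messages in a buffered Go channel while a worker drains them in batches of up to 25 before delivery (matching the batch size above). The sketch below is illustrative only, not Logproxy's actual implementation:

```go
package main

import "fmt"

// batchSize mirrors Logproxy's max batch of 25 messages per upload.
const batchSize = 25

// drainBatch collects up to batchSize messages from the queue without
// blocking; it returns early with a partial batch when the queue is empty.
func drainBatch(q <-chan string) []string {
	batch := []string{}
	for len(batch) < batchSize {
		select {
		case m := <-q:
			batch = append(batch, m)
		default:
			return batch // queue empty: deliver what we have
		}
	}
	return batch
}

func main() {
	q := make(chan string, 100) // buffered channel absorbs spikes
	for i := 0; i < 30; i++ {
		q <- fmt.Sprintf("msg-%d", i)
	}
	fmt.Println(len(drainBatch(q))) // 25
	fmt.Println(len(drainBatch(q))) // 5
}
```

A channel queue avoids the external RabbitMQ dependency at the cost of losing buffered messages if the process restarts.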

Environment variables

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| TOKEN | Token to use as part of the logdrain URL | Yes | |
| HSDP_LOGINGESTOR_KEY | HSDP logging service key | Yes (hsdp delivery) | |
| HSDP_LOGINGESTOR_SECRET | HSDP logging service secret | Yes (hsdp delivery) | |
| HSDP_LOGINGESTOR_URL | HSDP logging service endpoint | Yes (hsdp delivery) | |
| HSDP_LOGINGESTOR_PRODUCT_KEY | Product key for v2 logging | Yes (hsdp delivery) | |
| LOGPROXY_SYSLOG | Enable or disable the Syslog drain | No | true |
| LOGPROXY_IRONIO | Enable or disable the IronIO drain | No | false |
| LOGPROXY_QUEUE | Use a specific queue (rabbitmq, channel) | No | rabbitmq |
| LOGPROXY_PLUGINDIR | Search for plugins in this directory | No | |
| LOGPROXY_DELIVERY | Select delivery type (hsdp, none) | No | hsdp |
| ELASTIC_APM_SERVICE_NAME | Set the service name for APM | No | logproxy |
| ELASTIC_APM_SERVER_URL | Set the APM server URL | No | |
| ELASTIC_APM_SECRET_TOKEN | Set the APM secret token | No | |

Building

Requirements

Compiling

Clone the repo somewhere (preferably outside your GOPATH):

$ git clone https://github.com/philips-software/logproxy.git
$ cd logproxy
$ docker build .

Installation

See the manifest.yml file below as an example.

applications:
- name: logproxy
  domain: your-domain.com
  docker:
    image: philipssoftware/logproxy:latest
  instances: 2
  memory: 64M
  disk_quota: 512M
  routes:
  - route: logproxy.your-domain.com
  env:
    HSDP_LOGINGESTOR_KEY: SomeKey
    HSDP_LOGINGESTOR_SECRET: SomeSecret
    HSDP_LOGINGESTOR_URL: https://logingestor-int2.us-east.philips-healthsuite.com
    HSDP_LOGINGESTOR_PRODUCT_KEY: product-uuid-here
    TOKEN: RandomTokenHere
  services:
  - rabbitmq
  stack: cflinuxfs3

Push your application:

cf push

If everything went OK, Logproxy should now be reachable at https://logproxy.your-domain.com. The logdrain endpoint would then be:

https://logproxy.your-domain.com/syslog/drain/RandomTokenHere

Configure logdrains

Syslog

In each space with apps whose logs you'd like to drain, define a user-provided service called logproxy:

cf cups logproxy -l https://logproxy.your-domain.com/syslog/drain/RandomTokenHere

Then, bind this service to any app which should deliver its logs:

cf bind-service some-app logproxy

and restart the app to activate the logdrain:

cf restart some-app

Logs should now start flowing from your app all the way to HSDP logging infra through logproxy. You can use Kibana for log searching.

Structured logs

Logproxy supports parsing a structured JSON log format, which it maps to an HSDP LogEvent resource. Example structured log:

{
  "app": "myappname",
  "val": {
    "message": "The actual log message body"
  },
  "ver": "1.0.0",
  "evt": "EventID",
  "sev": "INFO",
  "cmp": "ComponentID",
  "trns": "transactionID",
  "usr": "someUserUUID",
  "srv": "some.host.com",
  "service": "service-name-here",
  "inst": "service-instance-id-here",
  "cat": "Tracelog",
  "time": "2018-09-07T15:39:21Z",
  "custom": {
  		"key1": "val1",
  		"key2": { "innerkey": "innervalue" }
   }
}

Below is an example of an HSDP LogEvent resource type for reference:

{
  "resourceType": "LogEvent",
  "id": "7f4c85a8-e472-479f-b772-2916353d02a4",
  "applicationName": "OPS",
  "eventId": "110114",
  "category": "TRACELOG",
  "component": "TEST",
  "transactionId": "2abd7355-cbdd-43e1-b32a-43ec19cd98f0",
  "serviceName": "OPS",
  "applicationInstance": "INST-00002",
  "applicationVersion": "1.0.0",
  "originatingUser": "SomeUsr",
  "serverName": "ops-dev.apps.internal",
  "logTime": "2017-01-31T08:00:00Z",
  "severity": "INFO",
  "logData": {
    "message": "Test message"
  },
  "custom": {
  		"key1": "val1",
  		"key2": { "innerkey": "innervalue" }
   }
}
Mapping to LogEvent

The structured log fields map to LogEvent fields as follows:

| Structured field | LogEvent field |
|------------------|----------------|
| app | applicationName |
| val.message | logData.message |
| custom | custom |
| ver | applicationVersion |
| evt | eventId |
| sev | severity |
| cmp | component |
| trns | transactionId |
| usr | originatingUser |
| srv | serverName |
| service | serviceName |
| inst | applicationInstance |
| cat | category |
| time | logTime |
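The mapping above can be sketched as a plain Go function. Both types here are hypothetical reductions to string fields; Logproxy's real LogEvent follows the HSDP resource shown earlier:

```go
package main

import "fmt"

// Structured holds the parsed drain message fields (illustrative).
type Structured struct {
	App, Ver, Evt, Sev, Cmp, Trns, Usr, Srv, Service, Inst, Cat, Time string
	Val    map[string]string
	Custom map[string]interface{}
}

// LogEvent is a reduced form of the HSDP LogEvent resource (illustrative).
type LogEvent struct {
	ApplicationName, ApplicationVersion, EventID, Severity, Component string
	TransactionID, OriginatingUser, ServerName, ServiceName           string
	ApplicationInstance, Category, LogTime                            string
	LogData map[string]string
	Custom  map[string]interface{}
}

// toLogEvent applies the field mapping from the table above.
func toLogEvent(s Structured) LogEvent {
	return LogEvent{
		ApplicationName:     s.App,
		ApplicationVersion:  s.Ver,
		EventID:             s.Evt,
		Severity:            s.Sev,
		Component:           s.Cmp,
		TransactionID:       s.Trns,
		OriginatingUser:     s.Usr,
		ServerName:          s.Srv,
		ServiceName:         s.Service,
		ApplicationInstance: s.Inst,
		Category:            s.Cat,
		LogTime:             s.Time,
		LogData:             s.Val, // val.message becomes logData.message
		Custom:              s.Custom,
	}
}

func main() {
	ev := toLogEvent(Structured{
		App: "myappname",
		Val: map[string]string{"message": "hello"},
	})
	fmt.Println(ev.ApplicationName, ev.LogData["message"])
}
```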

IronIO

The IronIO logdrain is available on this endpoint: /ironio/drain/:token

You can configure it via the iron.io settings screen of your project.


Field Mapping

Logproxy maps IronIO fields to Syslog and LogEvent fields as follows:

| IronIO field | Syslog field | LogEvent field |
|--------------|--------------|----------------|
| task_id | ProcID | applicationInstance |
| code_name | AppName | applicationName |
| project_id | Hostname | serverName |
| message | Message | logData.message |

Filter only mode

You may choose to operate Logproxy in filter only mode: it listens for messages on the logdrain endpoints, runs them through any active filter plugins, and then discards them instead of delivering them to HSDP logging. This is useful if you are using plugins for real-time processing only. To enable filter only mode, set LOGPROXY_DELIVERY to none:

...
env:
  LOGPROXY_DELIVERY: none
...

See the Logproxy plugins project for more details on plugins.

TODO

  • Better handling of HTTP 635 errors
