The roles of these three components are, respectively: log collection, indexing and search, and visualization.

- logstash
  As the architecture diagram shows, logstash only handles the collect and index stages; it is started with a .conf file whose configuration has three parts: input, filter and output (a minimal sketch follows this list).
- redis
  Here redis acts as a buffer that decouples log collection from indexing.
- elasticsearch
  The core component, used for search. Key characteristics: real-time, distributed, highly available, document oriented, schema free, RESTful.
- kibana
  The log visualization component; it makes interacting with the data much easier.
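For orientation, here is a minimal sketch of that three-part layout; the stdin/stdout plugins are placeholders for illustration only, and the configurations actually used in this setup follow later in the post.

input {
  stdin { }                        # placeholder source; the real configs below use log4j, redis and file inputs
}
filter {
  # optional; this setup mainly relies on the json and grok filters
}
output {
  stdout { codec => rubydebug }    # placeholder sink; the real configs below write to redis and elasticsearch
}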

Deployment

Required components
- logstash: https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
- redis: http://download.redis.io/releases/redis-stable.tar.gz
- elasticsearch: https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.zip
- kibana: https://github.com/elasticsearch/kibana

Logstash

Logstash 10-minute tutorial: http://logstash.net/docs/1.4.2/tutorials/10-minute-walkthrough/

Download the latest logstash release and unpack it.

Edit the logstash.conf configuration file.

Logstash user documentation: http://logstash.net/docs/1.4.2/

log4j server configuration example: log4j.conf
input {
  log4j {
    data_timeout => 5
    # mode => "server"
    # port => 4560
  }
}

filter {
  json {
    source => "message"
    remove_field => ["message","class","file","host","method","path","priority","thread","type","logger_name"]
  }
}

output {
  # stdout { codec => json }
  redis {
    host => "redis.internal.173"
    port => 6379
    data_type => "list"
    key => "soalog"
  }
}

Logstash output-to-elasticsearch configuration example: soalog-es.conf
input {
  redis {
    host => "redis.internal.173"
    port => "6379"
    key => "soalog"
    data_type => "list"
  }
}

filter {
  json {
    source => "message"
    remove_field => ["message","type"]
  }
}

output {
  elasticsearch {
    #host => "es1.internal.173,es2.internal.173,es3.internal.173"
    cluster => "soaes"
    index => "soa_logs-%{+YYYY.MM.dd}"
  }
}

Here the filter sets source => "message": the JSON string carried in the message field is parsed into separate indexed fields, and remove_field then drops the fields that are not needed.
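For instance (illustrative values; the field names mirror the template-soalogs.json mapping shown later), if an event's message field contains

{"traceId":"abc123","serviceInterface":"com.example.DemoService","processTime":15}

the json filter promotes traceId, serviceInterface and processTime to top-level event fields, and remove_field then discards the raw message together with the listed log4j metadata fields.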

Start both logstash instances:
./logstash -f soalog-es.conf --verbose -l ../soalog-es.log &
./logstash -f log4j.conf --verbose -l ../log4j.log &

Elasticsearch

Download the latest elasticsearch release and unpack it.
Run it in the background:
bin/elasticsearch -d

Verify that it is running:
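One quick check (assuming elasticsearch is listening on the default HTTP port 9200 on this host) is to query the root endpoint, which should report the node name, cluster name and version:

curl http://localhost:9200/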

Elasticsearch cluster configuration:
Edit config/elasticsearch.yml
# Cluster name; the default is elasticsearch. Clients connecting in cluster mode use this name.
cluster.name: soaes
# Data directory; multiple disks can be listed, e.g. /path/to/data1,/path/to/data2
path.data: /mnt/hadoop/esdata
# Log directory
path.logs: /mnt/hadoop/eslogs
# List of cluster hosts used for unicast discovery of new nodes
discovery.zen.ping.unicast.hosts: ["hadoop74", "hadoop75"]
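After the nodes have been restarted with this configuration, the standard cluster health API (again assuming the default host and port) should report the cluster name soaes and the expected number of nodes:

curl 'http://localhost:9200/_cluster/health?pretty'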

Configure an es template; this lets you control whether a field is analyzed and how it is stored.
Create a templates directory under the config directory, then add the template file template-soalogs.json:
{
  "template-soalogs" : {
    "template" : "soa_logs*",
    "settings" : {
      "index.number_of_shards" : 5,
      "number_of_replicas" : 1,
      "index" : {
        "store" : {
          "compress" : {
            "stored" : true,
            "tv" : true
          }
        }
      }
    },
    "mappings" : {
      "logs" : {
        "properties" : {
          "providerNode" : { "index" : "not_analyzed", "type" : "string" },
          "serviceMethod" : { "index" : "not_analyzed", "type" : "string" },
          "appId" : { "index" : "not_analyzed", "type" : "string" },
          "status" : { "type" : "long" },
          "srcAppId" : { "index" : "not_analyzed", "type" : "string" },
          "remark" : { "type" : "string" },
          "serviceVersion" : { "index" : "not_analyzed", "type" : "string" },
          "srcServiceVersion" : { "index" : "not_analyzed", "type" : "string" },
          "logSide" : { "type" : "long" },
          "invokeTime" : { "type" : "long" },
          "@version" : { "type" : "string" },
          "@timestamp" : { "format" : "dateOptionalTime", "type" : "date" },
          "srcServiceInterface" : { "index" : "not_analyzed", "type" : "string" },
          "serviceInterface" : { "index" : "not_analyzed", "type" : "string" },
          "retryCount" : { "type" : "long" },
          "traceId" : { "index" : "not_analyzed", "type" : "string" },
          "processTime" : { "type" : "long" },
          "consumerNode" : { "index" : "not_analyzed", "type" : "string" },
          "rpcId" : { "index" : "not_analyzed", "type" : "string" },
          "srcServiceMethod" : { "index" : "not_analyzed", "type" : "string" }
        }
      }
    }
  }
}
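Because index templates are only applied when an index is created, one way to confirm the template took effect (host and port assumed to be the defaults) is to inspect the mapping of a freshly created daily index and check that fields such as traceId come back as not_analyzed strings:

curl 'http://localhost:9200/soa_logs-*/_mapping?pretty'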

Kibana
Go into the elasticsearch directory and install the plugin:
bin/plugin -install elasticsearch/kibana
Verify:
http://localhost:9200/_plugin/kibana
Kibana needs an index pattern configured for its queries.

Here the index is soa_logs; because the indices are split by day, the date format has to be specified as YYYY-MM-DD.
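In Kibana 3 this is set in the dashboard's index settings. The exact value is not shown in the original notes, but to match the %{+YYYY.MM.dd} suffix used in soalog-es.conf it would presumably be something like

[soa_logs-]YYYY.MM.DD

with timestamping set to day.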

Logstash 8-hour time-zone offset problem

When logstash rolls its daily output to elasticsearch it timestamps with UTC, so in a UTC+8 time zone the current day's index is only created at 8:00, and data logged before 8:00 ends up in the previous day's index.
Modifying logstash/lib/logstash/event.rb works around this. At line 226, change
.withZone(org.joda.time.DateTimeZone::UTC)
to
.withZone(org.joda.time.DateTimeZone.getDefault())

log4j.properties configuration
#remote logging
log4j.additivity.logstash=false
log4j.logger.logstash=INFO,logstash
log4j.appender.logstash = org.apache.log4j.net.SocketAppender
log4j.appender.logstash.RemoteHost = localhost
log4j.appender.logstash.Port = 4560
log4j.appender.logstash.LocationInfo = false

Java log output
private static final org.slf4j.Logger logstash = org.slf4j.LoggerFactory.getLogger("logstash");
logstash.info(JSONObject.toJSONString(rpcLog));
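Put together, a minimal self-contained version of this producer might look like the sketch below. It assumes the slf4j-log4j12 binding and fastjson are on the classpath, and the rpcLog fields are illustrative, chosen to line up with the template-soalogs.json mapping; the actual rpcLog object in the original code is not shown.

import com.alibaba.fastjson.JSONObject;   // provides the JSONObject.toJSONString used above
import java.util.HashMap;
import java.util.Map;

public class RpcLogProducer {

    // Logger name "logstash" matches log4j.logger.logstash above,
    // so these events are routed to the SocketAppender only.
    private static final org.slf4j.Logger logstash =
            org.slf4j.LoggerFactory.getLogger("logstash");

    public static void main(String[] args) {
        // Hypothetical rpc log record; field names follow the es template.
        Map<String, Object> rpcLog = new HashMap<String, Object>();
        rpcLog.put("traceId", "abc123");
        rpcLog.put("serviceInterface", "com.example.DemoService");
        rpcLog.put("serviceMethod", "getHistory");
        rpcLog.put("processTime", 15L);
        rpcLog.put("status", 200L);

        // The whole record is emitted as one JSON string so the logstash
        // json filter in log4j.conf can expand it back into fields.
        logstash.info(JSONObject.toJSONString(rpcLog));
    }
}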

KOPF
Elasticsearch cluster monitoring:
bin/plugin -install lmenezes/elasticsearch-kopf
http://localhost:9200/_plugin/kopf

Example: feeding tomcat logs into logstash
Logstash agent-side configuration, tomcat.conf:
input {
  file {
    type => "usap"
    path => [
      "/opt/17173/apache-tomcat-7.0.50-8090/logs/catalina.out",
      "/opt/17173/apache-tomcat-7.0.50-8088/logs/catalina.out",
      "/opt/17173/apache-tomcat-7.0.50-8086/logs/catalina.out",
      "/opt/17173/apache-tomcat-7.0.50-8085/logs/catalina.out",
      "/opt/17173/apache-tomcat-6.0.37-usap-image/logs/catalina.out"
    ]
    codec => multiline {
      pattern => "(^.+Exception:.+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)"
      what => "previous"
    }
  }
}

filter {
  grok {
    #match => { "message" => "%{COMBINEDAPACHELOG}" }
    match => [ "message", "%{TOMCATLOG}", "message", "%{CATALINALOG}" ]
    remove_field => ["message"]
  }
}

output {
  # stdout { codec => rubydebug }
  redis {
    host => "redis.internal.173"
    data_type => "list"
    key => "usap"
  }
}

Edit logstash/patterns/grok-patterns and add regular expressions for the tomcat log format:
#tomcat log
JAVACLASS (?:[a-zA-Z0-9-]+\:)+[A-Za-z0-9$]+
JAVALOGMESSAGE (.*)
THREAD [A-Za-z0-9\-\[\]]+
# MMM dd, yyyy HH:mm:ss eg: Jan 9, 2014 7:13:13 AM
CATALINA_DATESTAMP %{MONTH} %{MONTHDAY}, 20%{YEAR} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) (?:AM|PM)
# yyyy-MM-dd HH:mm:ss,SSS ZZZ eg: 2014-01-09 17:32:25,527 -0800
TOMCAT_DATESTAMP 20%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) %{ISO8601_TIMEZONE}
LOG_TIME %{HOUR}:?%{MINUTE}(?::?%{SECOND})
CATALINALOG %{CATALINA_DATESTAMP:timestamp} %{JAVACLASS:class} %{JAVALOGMESSAGE:logmessage}
# 11:27:51,786 [http-bio-8088-exec-4] DEBUG JsonRpcServer:504 - Invoking method: getHistory
#TOMCATLOG %{LOG_TIME:timestamp} %{THREAD:thread} %{LOGLEVEL:level} %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}
TOMCATLOG %{TOMCAT_DATESTAMP:timestamp} %{LOGLEVEL:level} %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}

Start the tomcat log agent:
./logstash -f tomcat.conf --verbose -l ../tomcat.log &

Storing tomcat logs in es
Configure tomcat-es.conf:
input {
  redis {
    host => 'redis.internal.173'
    data_type => 'list'
    port => "6379"
    key => 'usap'
    #type => 'redis-input'
    #codec => json
  }
}

output {
  # stdout { codec => rubydebug }
  elasticsearch {
    #host => "es1.internal.173,es2.internal.173,es3.internal.173"
    cluster => "soaes"
    index => "usap-%{+YYYY.MM.dd}"
  }
}

Start the tomcat log indexer:
./logstash -f tomcat-es.conf --verbose -l ../tomcat-es.log &

Example: feeding nginx and syslog logs into logstash
Logstash agent-side configuration, nginx.conf:
input {
  file {
    type => "linux-syslog"
    path => [ "/var/log/*.log", "/var/log/messages" ]
  }
  file {
    type => "nginx-access"
    path => "/usr/local/nginx/logs/access.log"
  }
  file {
    type => "nginx-error"
    path => "/usr/local/nginx/logs/error.log"
  }
}

output {
  # stdout { codec => rubydebug }
  redis {
    host => "redis.internal.173"
    data_type => "list"
    key => "nginx"
  }
}

Start the nginx log agent:
./logstash -f nginx.conf --verbose -l ../nginx.log &

Storing nginx logs in es
Configure nginx-es.conf:
input {
  redis {
    host => 'redis.internal.173'
    data_type => 'list'
    port => "6379"
    key => 'nginx'
    #type => 'redis-input'
    #codec => json
  }
}

filter {
  grok {
    type => "linux-syslog"
    pattern => "%{SYSLOGLINE}"
  }
  grok {
    type => "nginx-access"
    pattern => "%{IPORHOST:source_ip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] %{IPORHOST:host} %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}"
  }
}

output {
  # stdout { codec => rubydebug }
  elasticsearch {
    #host => "es1.internal.173,es2.internal.173,es3.internal.173"
    cluster => "soaes"
    index => "nginx-%{+YYYY.MM.dd}"
  }
}

Start the nginx log indexer:
./logstash -f nginx-es.conf --verbose -l ../nginx-es.log &