Building a Centralized Log Analysis Platform with ELK in Practice
 
Author: wsgzao

2020-1-14
 
Editor's note:
This article gives a brief introduction to ELK: installing and configuring a single-node ELK, starting an ELK from the command line, important Elasticsearch tuning parameters, filebeat configuration, and more. We hope you find it helpful.
The article originally appeared on segmentfault and was edited and recommended by Delores of 火龙果软件.

ǰÑÔ

Elasticsearch + Logstash + Kibana (ELK) is an open-source log management solution. To analyze website traffic we usually embed JavaScript trackers such as Google Analytics, Baidu, or CNZZ, but when a site sees anomalous traffic or comes under attack we need to analyze the actual backend logs, such as Nginx's. Nginx log rotation, GoAccess, and Awstats are all relatively simple single-node solutions that fall short for distributed clusters or larger data volumes, and this is where ELK lets us meet the new challenges with confidence.

Logstash: responsible for collecting, processing, and storing logs

Elasticsearch: responsible for searching and analyzing logs

Kibana: responsible for visualizing logs

ELK (Elasticsearch + Logstash + Kibana)

ELK Overview

According to the ELK official documentation, it is a distributed, scalable, real-time search and data analytics engine. At work I currently use it only to collect server logs, and it is a great helper when developers debug.

Installing a Single-Node ELK

If you want to stand up a single-node ELK quickly, Docker is by far your best option: use the three-in-one image.

Note: after installing Docker, remember to set vm.max_map_count (mmap counts) to at least 262144.

# Set vm.max_map_count
# Temporary change
sysctl -w vm.max_map_count=262144
# Persist it in /etc/sysctl.conf
vim /etc/sysctl.conf
vm.max_map_count=262144
# Save the file, then apply it
sysctl -p

# Install Docker
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce
sudo systemctl start docker

On a single-node machine there is no need to expose ports 9200 (Elasticsearch JSON interface) and 9300 (Elasticsearch transport interface). If you do want to expose ports from Docker, use -p; if no listen address is given, the default is 0.0.0.0 (all interfaces). It is better to specify the listen address explicitly for security.

-p listen_IP:host_port:container_port
-p 192.168.10.10:9300:9300

Starting an ELK from the Command Line

sudo docker run -p 5601:5601 \
    -p 5044:5044 \
    -v /data/elk-data:/var/lib/elasticsearch \
    -v /data/elk/logstash:/etc/logstash/conf.d \
    -it -e TZ="Asia/Singapore" \
    -e ES_HEAP_SIZE="20g" \
    -e LS_HEAP_SIZE="10g" \
    --name elk-ubuntu sebp/elk

Mount the configuration and data out of the container: even if the Docker container runs into trouble, you can destroy it and start a new one immediately, so the service is only briefly affected.

# Watch the permissions on the mounted directories
chmod 755 /data/elk-data
chmod 755 /data/elk/logstash
chown -R root:root /data

# Mount the Elasticsearch data directory so the data is persisted
-v /data/elk-data:/var/lib/elasticsearch
# Mount the Logstash config directory so it can be edited on the host
-v /data/elk/logstash:/etc/logstash/conf.d

Important Elasticsearch Tuning Parameters

ES_HEAP_SIZE: Elasticsearch will assign the entire heap specified in jvm.options via the Xms (minimum heap size) and Xmx (maximum heap size) settings. You should set these two settings equal to each other, and to no more than 50% of your physical RAM. Also keep the heap below the compressed-oops cutoff near 32 GB: the exact threshold varies, but 26 GB is safe on most systems and it can be as large as 30 GB on some.

Trade-off: The more heap available to Elasticsearch, the more memory it can use for its internal caches, but the less memory it leaves available for the operating system to use for the filesystem cache. Larger heaps can also cause longer garbage-collection pauses.

LS_HEAP_SIZE: if the heap size is too low, CPU utilization hits a bottleneck as the JVM keeps running garbage collection. Do not set the heap size larger than physical memory, and leave at least 1 GB for the operating system and other processes.
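The sizing rules above can be illustrated with a jvm.options fragment. The values here are hypothetical, assuming a host with 64 GB of RAM:

```
## /etc/elasticsearch/jvm.options -- illustrative fragment, not a recommendation
## Xms and Xmx must be equal; stay at or below 50% of physical RAM
## and below the compressed-oops threshold (~26 GB is safe on most systems)
-Xms20g
-Xmx20g
```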

Only Logstash Needs Configuring

Next, let's look at logstash.conf. Remember to read the comments.

}
}
}
# Production has many more machines, and by default I do not add a
# "function" field in filebeat for production hosts,
# so in the else branch I add "live".
else {
    mutate {
        add_field => {
            "function" => "live"
        }
    }
}
# The earlier message filter gave us "timestamp"; here we adjust its
# format and add the timezone.
date {
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss Z"]
    target => "@timestamp"
    timezone => "Asia/Singapore"
}
# Replace "/" with "-" in the "path" obtained earlier, because
# Elasticsearch index names have restrictions,
# e.g. feiyang/test -> feiyang-test
mutate {
    gsub => ["path", "/", "-"]
    add_field => {"host_ip" => "%{[fields][host]}"}
    remove_field => ["tags", "@version", "offset", "beat", "fields",
                     "exim_year", "exim_month", "exim_day",
                     "exim_time", "timestamp"]
}
# remove_field drops some redundant fields
}
# With a single node the output is local and needs no SSL,
# but the index naming rules still deserve close attention.
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "sg-%{function}-%{path}-%{+xxxx.ww}"
        # e.g. sg-nginx-feiyang233.club.access-2019.13 (ww is the week number)
    }
}
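The fragment above is truncated at the top (the grok/conditional blocks that its leading closing braces belong to are not shown). For orientation only, a minimal self-contained logstash.conf with the same beats-to-elasticsearch layout might look like this sketch; the grok pattern and field names are assumptions to be adapted to your log format:

```
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    # hypothetical pattern; adapt to your own log format
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\]%{GREEDYDATA:msg}" }
  }
  date {
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss Z"]
    target => "@timestamp"
    timezone => "Asia/Singapore"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "sg-%{function}-%{path}-%{+xxxx.ww}"
  }
}
```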

The final pipeline is shown in the flow diagram below.

Index naming rules

Lowercase only

Cannot include \, /, *, ?, ", <, >, |, ` ` (space character), `,`, `#`

Indices prior to 7.0 could contain a colon (:), but that's been deprecated and won't be supported in 7.0+

Cannot start with -, _, +

Cannot be . or ..

Cannot be longer than 255 bytes (note it is bytes, so multi-byte characters will count towards the 255 limit faster)
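The rules above are mechanical enough to check in code. This is a hypothetical helper (not part of Elasticsearch) that validates a candidate index name against them:

```python
# Hypothetical validator for the Elasticsearch index-naming rules listed above.
INVALID_CHARS = set('\\/*?"<>| ,#')

def is_valid_index_name(name: str) -> bool:
    return (
        name == name.lower()                           # lowercase only
        and not any(c in INVALID_CHARS for c in name)  # no forbidden characters
        and ':' not in name                            # colons unsupported in 7.0+
        and not name.startswith(('-', '_', '+'))       # cannot start with -, _, +
        and name not in ('.', '..')                    # cannot be . or ..
        and len(name.encode('utf-8')) <= 255           # limit is counted in bytes
    )

print(is_valid_index_name("sg-nginx-feiyang233.club.access-2019.13"))  # True
print(is_valid_index_name("Feiyang/Test"))                             # False
```

Running such a check before creating an index (or a template) catches naming mistakes before Elasticsearch rejects them at request time.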

filebeat Configuration

On the client side, we need to install and configure filebeat.

Configuration file: filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:  # logs to collect
    - /var/log/app/**  # '**' needs a recent filebeat to support recursive globs
  fields:  # extra fields to add
    host: "{{inventory_hostname}}"
    function: "xxx"
  multiline:  # multi-line matching
    match: after
    negate: true  # pay attention to the format
    pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'  # \[
  ignore_older: 24h
  clean_inactive: 72h
output.logstash:
  hosts: ["{{elk_server}}:25044"]
  # ssl:
  #   certificate_authorities: ["/etc/filebeat/logstash.crt"]

For bulk-deploying filebeat.yml, ansible is the best choice.

---
- hosts: all
  become: yes
  gather_facts: yes
  tasks:
    - name: stop filebeat
      service:
        name: filebeat
        state: stopped
        enabled: yes
    - name: upload filebeat.yml
      template:
        src: filebeat.yml
        dest: /etc/filebeat/filebeat.yml
        owner: root
        group: root
        mode: 0644
    - name: remove  # delete all files in this directory
      file:
        path: /var/lib/filebeat/registry
        state: absent
    - name: restart filebeat
      service:
        name: filebeat
        state: restarted
        enabled: yes

Inspecting filebeat Output

First modify the configuration so that filebeat writes its output to a local file in JSON format.

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/app/**
  fields:
    host: "x.x.x.x"
    region: "sg"
  multiline:
    match: after
    negate: true
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  ignore_older: 24h
  clean_inactive: 72h
output.file:
  path: "/home/feiyang"
  filename: feiyang.json

With the configuration above we get the output file feiyang.json under /home/feiyang. Note that different filebeat versions produce differently shaped output, which makes parsing and filtering in logstash a little harder. The examples below show how 6.x and 7.x output differ.

{
  "@timestamp": "2019-06-27T15:53:27.682Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.4.2"
  },
  "fields": {
    "host": "x.x.x.x",
    "region": "sg"
  },
  "host": {
    "name": "x.x.x.x"
  },
  "beat": {
    "name": "x.x.x.x",
    "hostname": "feiyang-localhost",
    "version": "6.4.2"
  },
  "offset": 1567983499,
  "message": "[2019-06-27T22:53:25.756327232][Info][@http.go.177] [48552188]request",
  "source": "/var/log/feiyang/scripts/all.log"
}

Structurally, 6.4 and 7.2 still differ quite a lot.

{
  "@timestamp": "2019-06-27T15:41:42.991Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.2.0"
  },
  "agent": {
    "id": "3a38567b-e6c3-4b5a-a420-f0dee3a3bec8",
    "version": "7.2.0",
    "type": "filebeat",
    "ephemeral_id": "b7e3c0b7-b460-4e43-a9af-6d36c25eece7",
    "hostname": "feiyang-localhost"
  },
  "log": {
    "offset": 69132192,
    "file": {
      "path": "/var/log/app/feiyang/scripts/info.log"
    }
  },
  "message": "2019-06-27 22:41:25.312|WARNING|14186|Option|data|unrecognized|fields=set([u'id'])",
  "input": {
    "type": "log"
  },
  "fields": {
    "region": "sg",
    "host": "x.x.x.x"
  },
  "ecs": {
    "version": "1.0.0"
  },
  "host": {
    "name": "feiyang-localhost"
  }
}
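Downstream code that has to cope with both shapes can normalize them first. This is a minimal sketch (not an official Beats API) that flattens the fields that moved between the 6.x and 7.x layouts shown above:

```python
# Minimal sketch: normalize filebeat 6.x and 7.x output documents so
# downstream code can read message/path/offset uniformly.
def normalize_filebeat_event(event: dict) -> dict:
    version = event["@metadata"]["version"]
    if version.startswith("6."):
        # 6.x keeps the file path in "source" and the offset at top level
        path, offset = event["source"], event["offset"]
    else:
        # 7.x moves both under the "log" object (log.file.path, log.offset)
        path, offset = event["log"]["file"]["path"], event["log"]["offset"]
    return {
        "timestamp": event["@timestamp"],
        "message": event["message"],
        "path": path,
        "offset": offset,
        "fields": event.get("fields", {}),
    }
```

In practice the same dispatch usually lives in a logstash conditional, but having it in one function makes the version difference easy to see.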

Basic Kibana Usage

When we brought up ELK, the exposed port 5601 is the Kibana service.

Installing a Clustered ELK (Version 6.7)

ELK installation docs: a cluster is mainly for high availability, and a multi-node Elasticsearch can also be scaled out. This article uses the official image (the base image is centos:7).

Building a Multi-Node Elasticsearch

# The permissions of the mounted directory matter a lot
mkdir -p /data/elk-data && chmod 755 /data/elk-data
chown -R root:root /data
docker run -p WAN_IP:9200:9200 \
    -p 10.66.236.116:9300:9300 \
    -v /data/elk-data:/usr/share/elasticsearch/data \
    --name feiy_elk \
    docker.elastic.co/elasticsearch/elasticsearch:6.7.0

Next, edit the configuration file elasticsearch.yml.

# Master node node-1
# Enter the container: docker exec -it [container_id] bash
# docker exec -it 70ada825aae1 bash
# vi /usr/share/elasticsearch/config/elasticsearch.yml
cluster.name: "feiy_elk"
network.host: 0.0.0.0
node.master: true
node.data: true
node.name: node-1
network.publish_host: 10.66.236.116
discovery.zen.ping.unicast.hosts: ["10.66.236.116:9300","10.66.236.118:9300","10.66.236.115:9300"]
# exit
# docker restart 70ada825aae1

# Slave node node-2
# Enter the container: docker exec -it [container_id] bash
# vi /usr/share/elasticsearch/config/elasticsearch.yml
cluster.name: "feiy_elk"
network.host: "0.0.0.0"
node.name: node-2
node.data: true
network.publish_host: 10.66.236.118
discovery.zen.ping.unicast.hosts: ["10.66.236.116:9300","10.66.236.118:9300","10.66.236.115:9300"]
# exit
# docker restart 70ada825aae1

# Slave node node-3
# Enter the container: docker exec -it [container_id] bash
# vi /usr/share/elasticsearch/config/elasticsearch.yml
cluster.name: "feiy_elk"
network.host: "0.0.0.0"
node.name: node-3
node.data: true
network.publish_host: 10.66.236.115
discovery.zen.ping.unicast.hosts: ["10.66.236.116:9300","10.66.236.118:9300","10.66.236.115:9300"]
# exit
# docker restart 70ada825aae1

Checking the number of cluster nodes, cluster status, and so on:

# curl http://wan_ip:9200/_cluster/health?pretty
{
  "cluster_name" : "feiy_elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 9,
  "active_shards" : 18,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

In the final result, the cluster state can be seen in Kibana.

Setting Up Kibana

# docker run --link YOUR_ELASTICSEARCH_CONTAINER_NAME_OR_ID:elasticsearch -p 5601:5601 {docker-repo}:{version}
docker run -p WAN_IP:5601:5601 --link elasticsearch_container_ID:elasticsearch docker.elastic.co/kibana/kibana:6.7.0
# Note that --link is actually not recommended by the official docs;
# user-defined networks are recommended instead: https://docs.docker.com/network/links/
# Tested: it also works without --link, using the container's IP directly.
docker run -p WAN_IP:5601:5601 docker.elastic.co/kibana/kibana:6.7.0

we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link

# vi /usr/share/kibana/config/kibana.yml
# Change the hosts IP to the elasticsearch container's IP
# Here my elasticsearch container's IP is 172.17.0.2
# To find it: docker inspect elasticsearch_ID
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://172.17.0.2:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
# Exit the container and restart it
docker restart [container_ID]

Setting Up Logstash

# docker -d starts the container in the background;
# --name gives the container an explicit name
docker run -p 5044:5044 -d --name test_logstash docker.elastic.co/logstash/logstash:6.7.0
# You can also bind a specific interface, internal or external.
# Listening on the internal network 192.168.1.2:
docker run -p 192.168.1.2:5044:5044 -d --name test_logstash docker.elastic.co/logstash/logstash:6.7.0
# vi /usr/share/logstash/pipeline/logstash.conf
# For configuration details see the link below; remember to point the
# output hosts IP at the Elasticsearch IP.
# Elasticsearch's default port is 9200, so it can be omitted in the config below.
hosts => ["IP Address 1:port1", "IP Address 2:port2", "IP Address 3"]

For logstash filter rules, see the configuration above and the grok syntax rules.

# vi /usr/share/logstash/config/logstash.yml
# Change the url to the elasticsearch master node's IP
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://elasticsearch_master_IP:9200
node.name: "feiy"
pipeline.workers: 24  # same as the number of cores

¸ÄÍêÅäÖà exit ´ÓÈÝÆ÷ÀïÍ˳öµ½ËÞÖ÷»ú£¬È»ºóÖØÆôÕâ¸öÈÝÆ÷¡£

# How to find the container_ID
docker ps -a
docker restart [container_ID]

Failover Testing

ÎÒÃǰѵ±Ç°µÄ master ½Úµã node-1 ¹Ø»ú£¬Í¨¹ý kibana ¿´¿´¼¯ÈºµÄ״̬ÊÇÔõÑù±ä»¯µÄ¡£

The cluster status turns yellow because there are still 3 unassigned shards (see the official docs for what the colors mean). After a while the cluster status turns green again.
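The same recovery can be watched without Kibana by polling the cluster health API. A small sketch, assuming Elasticsearch answers on localhost:9200:

```python
import json
import time
import urllib.request

def cluster_ready(health: dict) -> bool:
    """True once the status is green and no shards are left unassigned."""
    return health["status"] == "green" and health["unassigned_shards"] == 0

def wait_for_green(base_url: str = "http://localhost:9200",
                   interval: float = 5.0, max_tries: int = 60) -> bool:
    """Poll /_cluster/health until the cluster recovers or we give up."""
    for _ in range(max_tries):
        with urllib.request.urlopen(base_url + "/_cluster/health") as resp:
            health = json.load(resp)
        print(health["status"], "unassigned:", health["unassigned_shards"])
        if cluster_ready(health):
            return True
        time.sleep(interval)
    return False
```

The `/_cluster/health` endpoint and its `status`/`unassigned_shards` fields are the same ones shown in the curl output earlier.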

Kibana Console

Quick intro to the UI

The Console UI is split into two panes:
an editor pane (left) and a response pane (right).
Use the editor to type requests and submit them to Elasticsearch.
The results will be displayed in the response pane on the right side.

Console understands requests in a compact format, similar to cURL:

# index a doc
PUT index/type/1
{
"body": "here"
}
# and get it ...
GET index/type/1

While typing a request, Console will make suggestions
which you can then accept by hitting Enter/Tab.
These suggestions are made based on the request structure as well as your indices and types.

A few quick tips, while I have your attention

Submit requests to ES using the green triangle button.

Use the wrench menu for other useful things.

You can paste requests in cURL format and
they will be translated to the Console syntax.

You can resize the editor and output panes
by dragging the separator between them.

Study the keyboard shortcuts under the Help button. Good stuff in there!

Console ³£ÓõÄÃüÁî


Common query syntax in the ELK stack:

GET _search
{
  "query": {
    "match_all": {}
  }
}
GET /_cat/health?v
GET /_cat/nodes?v
GET /_cluster/allocation/explain
GET /_cluster/state
GET /_cat/thread_pool?v
GET /_cat/indices?health=red&v
GET /_cat/indices?v
# Set replicas of all current indices to 0
PUT /*/_settings
{
  "index" : {
    "number_of_replicas" : 0,
    "refresh_interval": "30s"
  }
}
GET /_template
# On a single node there is no need for backups, so set replicas to 0
PUT _template/app-logstash
{
  "index_patterns": ["app-*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 0,
    "refresh_interval": "30s"
  }
}

Elasticsearch Data Migration

The official documentation on Elasticsearch data migration does not feel very detailed. For migrating containerized data, I was too inexperienced: reindex failed on me, and snapshot fell flat too.

In the end I used an open-source tool, An Elasticsearch Migration Tool, to migrate the data.

wget https://github.com/medcl/esm-abandoned/releases/download/v0.4.2/linux64.tar.gz
tar -xzvf linux64.tar.gz
./esm -s http://127.0.0.1:9200 -d http://192.168.21.55:9200 -x index_name \
    -w=5 -b=10 -c 10000 --copy_settings --copy_mappings --force --refresh

Nginx Proxy Forwarding

Sometimes when Docker restarts, the iptables rules are also flushed, so our restriction rules get changed and a security hole appears. This happens because Docker's network isolation is implemented on top of iptables. To avoid the problem, we can make Docker listen only on the internal network, or on 127.0.0.1, and forward traffic through nginx.

# cat kibana.conf
server {
    listen 25601;
    server_name x.x.x.x;
    access_log /var/log/nginx/kibana.access.log;
    error_log /var/log/nginx/kibana.error.log;

    location / {
        allow x.x.x.x;
        allow x.x.x.x;
        deny all;
        proxy_http_version 1.1;
        proxy_buffer_size 64k;
        proxy_buffers 32 32k;
        proxy_busy_buffers_size 128k;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:5601;
    }
}

! ÕâÀïÐèҪעÒâµÄÊÇ£¬ iptable filter ±í INPUT Á´ ÓÐûÓÐ×èµ² 172.17.0.0/16 docker ĬÈϵÄÍø¶Î¡£ÊÇ·ñ×èµ²ÁË 25601 Õâ¸ö¶Ë¿Ú¡£

Pitfalls Encountered

iptables alone cannot protect you: see the iptables discussion in the previous post, or listen on the internal network and forward with an Nginx proxy.

elk network issues

elk node

discovery.type=single-node works when testing a single node, but this environment variable must not be set when building a cluster; see the official docs for details.

An ELK Throughput Optimization

Too old a filebeat version means recursive glob patterns (**) are unavailable.

Upgrading filebeat with ansible:

---
- hosts: all
  become: yes
  gather_facts: yes
  tasks:
    - name: upload filebeat.repo
      copy:
        src: elasticsearch.repo
        dest: /etc/yum.repos.d/elasticsearch.repo
        owner: root
        group: root
        mode: 0644
    - name: install the latest version of filebeat
      yum:
        name: filebeat
        state: latest
    - name: restart filebeat
      service:
        name: filebeat
        state: restarted
        enabled: yes

# elasticsearch.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

filebeat 7.x is incompatible with 6.x: the keys changed a lot; for example, "source" became log[path].

 
   