ELK Log Analysis System (4) - Elasticsearch Data Storage

 

1. Overview

  After Logstash sends the formatted data to Elasticsearch, Elasticsearch is responsible for storing and searching the log data.

  Elasticsearch's search API is quite powerful, but it is not covered in detail here, since Kibana calls the Elasticsearch API under the hood.

  This article covers the Elasticsearch configuration used here and the problems encountered; Elasticsearch search usage will be written up separately later.

2. Configuration

  

  Configuration path: docker-elk/elasticsearch/config/elasticsearch.yml

  • Disable security authentication, otherwise Kibana cannot connect: xpack.security.enabled: false
  • Enable cross-origin (CORS) requests, otherwise Kibana will report that it cannot connect: http.cors.enabled: true

  Also, since Elasticsearch is a frequent attack target, it is recommended not to expose the Elasticsearch port to the public network.
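  One way to do this in a docker-compose based setup like the one used here (an illustrative sketch, not the project's stock configuration) is to publish the ports on the loopback interface only, so Elasticsearch is reachable from the host but not from other machines:

# docker-elk/docker-compose.yml (excerpt, illustrative)
services:
  elasticsearch:
    ports:
      # bind the published ports to 127.0.0.1 instead of all interfaces
      - "127.0.0.1:9200:9200"
      - "127.0.0.1:9300:9300"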

  

cluster.name: "docker-cluster"
network.host: 0.0.0.0

## Use single node discovery in order to disable production mode and avoid bootstrap checks
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
#
discovery.type: single-node

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"

 

  Elasticsearch stores its data under /usr/share/elasticsearch/data (inside the container).

 

  Verify that it is working:

  Visit http://192.168.1.165:9200 ; if data like the following comes back, Elasticsearch is running:
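  As a rough illustration (names, UUIDs, and version numbers will differ in your environment), the root endpoint of an Elasticsearch node typically returns JSON along these lines:

curl http://192.168.1.165:9200

# Typical response (values are illustrative):
{
  "name" : "elasticsearch",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "xxxxxxxxxxxxxxxxxxxxxx",
  "version" : {
    "number" : "7.x.x",
    ...
  },
  "tagline" : "You Know, for Search"
}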

 

   

3. Troubleshooting

 

  3.1. index has exceeded [1000000] - maximum allowed to be analyzed for highlighting

  The full error message is:

    {"type":"illegal_argument_exception","reason":"The length of [message] field of [l60ZgW0Bv9XMTlnX27A_] doc of [syslog] index has exceeded [1000000] - maximum allowed to be analyzed for highlighting. This maximum can be set by changing the [index.highlight.max_analyzed_offset] index level setting. For large texts, indexing with offsets or term vectors is recommended!"}}

   Cause: the default value of index.highlight.max_analyzed_offset is 1000000, and the message field exceeded it.

  

  This maximum offset cannot be set in the configuration file; it can only be changed through the REST API.

  

# Raise the maximum analyzed offset for highlighting
curl -XPUT "http://192.168.1.165:9200/_settings" -H 'Content-Type: application/json' -d'
{
    "index" : {
        "highlight.max_analyzed_offset" : 100000000
    }
}'
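  To double-check that the new limit is in place, the index settings can be read back (a quick sketch, using the same host and port as above):

# Read the setting back; per-index values appear under each index name
curl -s "http://192.168.1.165:9200/_settings?pretty" | grep max_analyzed_offset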

 

  3.2. circuit_breaking_exception', '[parent] Data too large, data for [<http_request>] would be [246901928/235.4mb], which is larger than the limit of [246546432/235.1mb]

  The full error message is:

    elasticsearch.exceptions.TransportError: TransportError(429, 'circuit_breaking_exception', '[parent] Data too large, data for [<http_request>] would be [246901928/235.4mb], which is larger than the limit of [246546432/235.1mb], real usage: [246901768/235.4mb], new bytes reserved: [160/160b], usages [request=0/0b, fielddata=11733/11.4kb, in_flight_requests=160/160b, accounting=6120593/5.8mb]')

   

  Cause:

    The JVM heap does not have enough memory to load the data for the current query, so the parent circuit breaker trips. See https://github.com/docker-library/elasticsearch/issues/98
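  For reference, the current limit and estimated usage of each circuit breaker can be inspected before and after tuning (a quick check, assuming the same address used throughout this article):

# Show circuit-breaker limits and estimated memory usage per node
curl "http://192.168.1.165:9200/_nodes/stats/breaker?pretty"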

   Solutions:

  • Increase the JVM heap size

    Run on the host: sudo sysctl -w vm.max_map_count=262144

    Pass a parameter to the Docker container (via ES_JAVA_OPTS) that sets the JVM's initial heap size to 1 GB and the maximum heap size to 3 GB.

    docker-compose path: docker-elk/docker-compose.yml

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx3g"
      ELASTIC_PASSWORD: changeme
      LOGSPOUT: ignore
    networks:
      - elk

 

  • Increase the circuit breaker's share of the heap (the default is 70%)

curl -X PUT "http://192.168.1.165:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
    "transient" : {
        "indices.breaker.total.limit" : "90%"
    }
}'
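  After rebuilding and restarting the containers, both changes can be verified; a rough sketch, again assuming the host/port used throughout this article:

# Max heap should now report 3gb per node
curl "http://192.168.1.165:9200/_cat/nodes?v&h=name,heap.max,heap.percent"

# The transient breaker limit set above should be listed here
curl "http://192.168.1.165:9200/_cluster/settings?pretty"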

 

4. Installing a visualization plugin

  Start it with Docker:

  docker run -d --name elasticsearch-head -p 9100:9100 mobz/elasticsearch-head:5

  Elasticsearch must be configured to allow cross-origin requests (the http.cors.* settings in Section 2), otherwise the plugin will report that it cannot connect.

  Elasticsearch head URL: http://192.168.1.165:9100

  The plugin looks like this:

 

  This plugin probably does not support newer Elasticsearch versions very well; it can be replaced later with a plugin that supports recent Elasticsearch releases.

 
