Setting up a docker + elasticsearch + hanlp analyzer environment: configuration walkthrough

2023-06-01 · Architecture · Setup · Tokenization

1. Install elasticsearch with docker

1.1 Write the docker-compose file

version: "2.2"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    container_name: elk-es
    restart: always
    environment:
      # enable memory locking
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      # run as a single node
      - discovery.type=single-node
    ulimits:
      # lift memory limits so memory locking can take effect
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/es/data:/usr/share/elasticsearch/data
      - ./data/es/logs:/usr/share/elasticsearch/logs
      - ./data/es/plugins:/usr/share/elasticsearch/plugins
      - ./data/es/config/analysis-hanlp:/usr/share/elasticsearch/config/analysis-hanlp
      - ./data/es/config/jvm.options:/usr/share/elasticsearch/config/jvm.options
    ports:
      - 9200:9200

Note:

On Linux, remember to give the data directories the right ownership, as sketched below.
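
A minimal sketch of preparing those directories, assuming the compose file sits in the current directory (the official elasticsearch images run as UID/GID 1000):

mkdir -p ./data/es/data ./data/es/logs ./data/es/plugins ./data/es/config
# the official elasticsearch image runs as UID 1000
sudo chown -R 1000:1000 ./data/es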


1.2 Start the docker + elasticsearch environment

docker-compose up -d 
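
Once the container is up, a quick health check (assuming the 9200 port mapping above):

curl http://localhost:9200
# a JSON banner with "cluster_name" and "version" fields confirms ES is running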


2. Install the hanlp analyzer plugin

hanlp analyzer plugin repository:

https://github.com/KennFalcon/elasticsearch-analysis-hanlp

Run inside the container:

./bin/elasticsearch-plugin install https://github.com/KennFalcon/elasticsearch-analysis-hanlp/releases/download/v7.10.2/elasticsearch-analysis-hanlp-7.10.2.zip
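
Alternatively, the same install can be run from the host via docker exec (elk-es is the container_name from the compose file above); a sketch:

docker exec -it elk-es ./bin/elasticsearch-plugin install https://github.com/KennFalcon/elasticsearch-analysis-hanlp/releases/download/v7.10.2/elasticsearch-analysis-hanlp-7.10.2.zip
# restart so the node loads the new plugin
docker restart elk-es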

Note:

The plugin version must match the ES version running in docker (7.10.2 for the image above, hence v7.10.2 in the URL); the repository README documents the available versions in detail.


2.1 Install the tokenization models

The plugin author considers some models unnecessary for most users, so the crf and nlp models are not bundled by default.

Download the data package of the matching version from

github.com/hankcs/HanLP/releases

Mind the version, and restart docker after the models are installed. A sketch of the unpacking step follows.
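
As a sketch of unpacking (the archive name data.zip and the target directory ./hanlp-data are placeholders; use whatever the release actually provides):

unzip data.zip -d ./hanlp-data
ls ./hanlp-data/data/model
# the crf model folder should be listed here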


Install the crf model (a sketch of the commands follows below):

First download the matching data package and locate the crf folder under data/model,

then place that folder into the following directory:

data/es/plugins/analysis-hanlp/data/model
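
A sketch of that copy, reusing the hypothetical ./hanlp-data directory from the unpacking step above:

mkdir -p ./data/es/plugins/analysis-hanlp/data/model
cp -r ./hanlp-data/data/model/crf ./data/es/plugins/analysis-hanlp/data/model/
# restart so HanLP picks up the new model
docker restart elk-es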

Note:

a. The stock FilePermission entries in plugin-security.policy will make ES throw errors;

remember to change them to:

// HanLP data directories
//permission java.io.FilePermission "plugins/analysis-hanlp/data/-", "read,write,delete";
//permission java.io.FilePermission "plugins/analysis-hanlp/hanlp.cache", "read,write,delete";
permission java.io.FilePermission "<<ALL FILES>>", "read,write,delete";
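
Because the plugins directory is mounted to the host in the compose file above, the policy file can be edited without entering the container; a sketch (restart afterwards):

vi ./data/es/plugins/analysis-hanlp/plugin-security.policy
docker restart elk-es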


b. After the hanlp models are installed and docker is restarted, ES will report that it cannot find hanlp.properties unless this is handled.

Solution: mount the key files out of the container ahead of time in the compose file (see the caveat sketched after the snippet):

   - ./data/es/config/analysis-hanlp:/usr/share/elasticsearch/config/analysis-hanlp
   - ./data/es/config/jvm.options:/usr/share/elasticsearch/config/jvm.options
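
One caveat, as a sketch: if these host paths start out empty, the mounts will shadow the container's defaults, so copy the originals out of a running container before adding the mounts:

docker cp elk-es:/usr/share/elasticsearch/config/analysis-hanlp ./data/es/config/
docker cp elk-es:/usr/share/elasticsearch/config/jvm.options ./data/es/config/jvm.options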

Once the above issues are resolved, hanlp tokenization is ready to use.
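
For example, an index can be created with a field that uses the analyzer (a minimal sketch; my_index and content are arbitrary names):

PUT http://localhost:9200/my_index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "hanlp"
      }
    }
  }
}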


A look at the official demo:

POST http://localhost:9200/twitter2/_analyze
{
  "text": "美国阿拉斯加州发生8.0级地震",
  "tokenizer": "hanlp"
}
Response:

{
  "tokens" : [
    {
      "token" : "美国",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "nsf",
      "position" : 0
    },
    {
      "token" : "阿拉斯加州",
      "start_offset" : 2,
      "end_offset" : 7,
      "type" : "nsf",
      "position" : 1
    },
    {
      "token" : "发生",
      "start_offset" : 7,
      "end_offset" : 9,
      "type" : "v",
      "position" : 2
    },
    {
      "token" : "8.0",
      "start_offset" : 9,
      "end_offset" : 12,
      "type" : "m",
      "position" : 3
    },
    {
      "token" : "级",
      "start_offset" : 12,
      "end_offset" : 13,
      "type" : "q",
      "position" : 4
    },
    {
      "token" : "地震",
      "start_offset" : 13,
      "end_offset" : 15,
      "type" : "n",
      "position" : 5
    }
  ]
}
