bin
Binary scripts, including elasticsearch to start a node and elasticsearch-plugin to install plugins.
Default location: $ES_HOME/bin

conf
Configuration files, including elasticsearch.yml.
Default location: $ES_HOME/config (setting: ES_PATH_CONF)

data
The location of the data files of each index / shard allocated on the node. Can hold multiple locations.
Default location: $ES_HOME/data (setting: path.data)

logs
Log files location.
Default location: $ES_HOME/logs (setting: path.logs)

plugins
Plugin files location. Each plugin will be contained in a subdirectory.
Default location: $ES_HOME/plugins

repo
Shared file system repository locations. Can hold multiple locations. A file system repository can be placed into any subdirectory of any directory specified here.
Setting: path.repo (not configured by default)
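In practice path.data and path.logs are usually pointed outside $ES_HOME so that data and logs survive an upgrade of the archive install. A minimal elasticsearch.yml sketch; the /data/es01 and /var/log/es01 paths are placeholders, not values from this setup:

path.data: /data/es01     # where index/shard data is stored
path.logs: /var/log/es01  # where log files are written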
# - nofile  - max number of open file descriptors
# - memlock - max locked-in-memory address space (KB)
# - nproc   - max number of processes
$ vim /etc/security/limits.conf
ec2-user - nofile 65535
ec2-user - memlock unlimited
ec2-user - nproc 4096

# Then log out and log back in for the new limits to take effect

Verify:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63465
max locked memory       (kbytes, -l) unlimited    ## now in effect
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535        ## now in effect
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096         ## now in effect
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
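Once a node is running, you can also confirm the file descriptor limit that Elasticsearch itself sees via the nodes stats API. A quick check; the node address here is the one used later in this post:

$ curl -s 'http://172.17.0.87:9200/_nodes/stats/process?filter_path=**.max_file_descriptors&pretty'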
Disable swap
Run the following command to disable swap immediately:
$ sudo swapoff -a
This only disables swap temporarily; it will come back after the system reboots. To make it permanent, edit the following file and remove the swap mount entry:
$ sudo vim /etc/fstab
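Inside /etc/fstab this usually means commenting out (or deleting) the swap line. A purely illustrative sketch; the device or UUID will differ on your system:

# /dev/mapper/centos-swap swap swap defaults 0 0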
Configure swappiness and virtual memory
This reduces the kernel's tendency to swap and should not lead to swapping under normal circumstances, while still allowing the whole system to swap in emergency conditions.
# Add the following two lines
$ sudo vim /etc/sysctl.conf
vm.swappiness=1
vm.max_map_count=262144
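To apply the new kernel settings without a reboot, reload them:

$ sudo sysctl -p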
The port that HTTP clients should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the http.port is not directly addressable from the outside. Defaults to the actual port assigned via http.port.
http.bind_host (the IP address the HTTP service listens on)
The host address to bind the HTTP service to. Defaults to http.host (if set) or network.bind_host.
http.publish_host
The host address to publish for HTTP clients to connect to. Defaults to http.host (if set) or network.publish_host.
http.host
Used to set the http.bind_host and the http.publish_host.
http.max_content_length
The max content of an HTTP request. Defaults to 100mb.
http.max_initial_line_length
The max length of an HTTP URL. Defaults to 4kb
http.max_header_size
The max size of allowed headers. Defaults to 8kB
http.compression (compression)
Support for compression when possible (with Accept-Encoding). Defaults to true.
http.compression_level (compression level)
Defines the compression level to use for HTTP responses. Valid values are in the range of 1 (minimum compression) and 9 (maximum compression). Defaults to 3.
http.cors.enabled (cross-origin / CORS configuration)
Enable or disable cross-origin resource sharing, i.e. whether a browser on another origin can execute requests against Elasticsearch. Set to true to enable Elasticsearch to process pre-flight CORS requests. Elasticsearch will respond to those requests with the Access-Control-Allow-Origin header if the Origin sent in the request is permitted by the http.cors.allow-origin list. Set to false (the default) to make Elasticsearch ignore the Origin request header, effectively disabling CORS requests because Elasticsearch will never respond with the Access-Control-Allow-Origin response header. Note that if the client does not send a pre-flight request with an Origin header or it does not check the response headers from the server to validate the Access-Control-Allow-Origin response header, then cross-origin security is compromised. If CORS is not enabled on Elasticsearch, the only way for the client to know is to send a pre-flight request and realize the required response headers are missing.
http.cors.allow-origin
Which origins to allow. Defaults to no origins allowed. If you prepend and append a / to the value, this will be treated as a regular expression, allowing you to support HTTP and HTTPS. For example, using /https?:\/\/localhost(:[0-9]+)?/ would return the request header appropriately in both cases. * is a valid value but is considered a security risk, as your Elasticsearch instance is open to cross origin requests from anywhere.
http.cors.max-age
Browsers send a “preflight” OPTIONS-request to determine CORS settings. max-age defines how long the result should be cached for. Defaults to 1728000 (20 days)
http.cors.allow-methods
Which methods to allow. Defaults to OPTIONS, HEAD, GET, POST, PUT, DELETE.
http.cors.allow-headers
Which headers to allow. Defaults to X-Requested-With, Content-Type, Content-Length.
http.cors.allow-credentials
Whether the Access-Control-Allow-Credentials header should be returned. Note: this header is only returned when the setting is set to true. Defaults to false.
http.detailed_errors.enabled
Enables or disables the output of detailed error messages and stack traces in response output. Note: When set to false and the error_trace request parameter is specified, an error will be returned; when error_trace is not specified, a simple message will be returned. Defaults to true
http.pipelining.max_events
The maximum number of events to be queued up in memory before an HTTP connection is closed, defaults to 10000.
http.max_warning_header_count
The maximum number of warning headers in client HTTP responses, defaults to unbounded.
http.max_warning_header_size
The maximum total size of warning headers in client HTTP responses, defaults to unbounded.
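For reference, a few of these HTTP settings as they might look in elasticsearch.yml; the values below are illustrative, not recommendations from this setup:

http.port: 9200
http.compression: true
http.cors.enabled: true
http.cors.allow-origin: "/https?:\/\/localhost(:[0-9]+)?/"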
Transport settings reference:
Setting
Description
transport.port (transport port)
A bind port range. Defaults to 9300-9400.
transport.publish_port
The port that other nodes in the cluster should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the transport.port is not directly addressable from the outside. Defaults to the actual port assigned via transport.port.
transport.bind_host (the IP address the transport service listens on)
The host address to bind the transport service to. Defaults to transport.host (if set) or network.bind_host.
transport.publish_host
The host address to publish for nodes in the cluster to connect to. Defaults to transport.host (if set) or network.publish_host.
transport.host
Used to set the transport.bind_host and the transport.publish_host.
transport.connect_timeout
The connect timeout for initiating a new connection (in time setting format). Defaults to 30s.
transport.compress
Set to true to enable compression (DEFLATE) between all nodes. Defaults to false.
transport.ping_schedule
Schedule a regular application-level ping message to ensure that transport connections between nodes are kept alive. Defaults to 5s in the transport client and -1 (disabled) elsewhere. It is preferable to correctly configure TCP keep-alives instead of using this feature, because TCP keep-alives apply to all kinds of long-lived connections and not just to transport connections.
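And the transport side, again as an illustrative elasticsearch.yml sketch:

transport.port: 9300
transport.compress: true    # enable DEFLATE compression between nodes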
$ ./es01/bin/elasticsearch --help
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
starts elasticsearch
Option                Description
------                -----------
-E <KeyValuePair>     Configure a setting
-V, --version         Prints elasticsearch version information and exits
-d, --daemonize       Starts Elasticsearch in the background               # run as a background daemon
-h, --help            show help
-p, --pidfile <Path>  Creates a pid file in the specified path on start    # write a pid file
-q, --quiet           Turns off standard output/error streams logging in console    # quiet mode
-s, --silent          show minimal output
-v, --verbose         show verbose output
Start the three ES nodes one by one (the start commands are sketched after the directory listing below):
$ ll
total 0
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-01
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-02
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-03
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es01 -> elasticsearch-7.4.2-01
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es02 -> elasticsearch-7.4.2-02
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es03 -> elasticsearch-7.4.2-03
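A minimal way to start them in the background, using the -d and -p flags from the help output above (the pid file paths are placeholders):

$ ./es01/bin/elasticsearch -d -p /tmp/es01.pid
$ ./es02/bin/elasticsearch -d -p /tmp/es02.pid
$ ./es03/bin/elasticsearch -d -p /tmp/es03.pid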
$ ./es01/bin/elasticsearch-setup-passwords --help
Sets the passwords for reserved users
Commands
--------
auto        - Uses randomly generated passwords
interactive - Uses passwords entered by a user
Non-option arguments: command
Option         Description
------         -----------
-h, --help     show help
-s, --silent   show minimal output
-v, --verbose  show verbose output
# Auto-generate the passwords; this fails:
$ ./es01/bin/elasticsearch-setup-passwords auto
Unexpected response code [500] from calling GET http://172.17.0.87:9200/_security/_authenticate?pretty
It doesn't look like the X-Pack security feature is enabled on this Elasticsearch node.
Please check if you have enabled X-Pack security in your elasticsearch.yml configuration file.
ERROR: X-Pack Security is disabled by configuration.
Checking the es01 log, we find the error:
[2019-11-27T14:35:13,391][WARN ][r.suppressed             ] [es01] path: /_security/_authenticate, params: {pretty=}
org.elasticsearch.ElasticsearchException: Security must be explicitly enabled when using a [basic] license. Enable security by setting [xpack.security.enabled] to [true] in the elasticsearch.yml file and restart the node.
......
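The log tells us exactly what to do: enable security explicitly. A minimal sketch of the elasticsearch.yml change it asks for, added to every node:

xpack.security.enabled: true

With only this line, however, the nodes fail a bootstrap check on restart: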
ERROR: [1] bootstrap checks failed
[1]: Transport SSL must be enabled if security is enabled on a [basic] license. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]
# View the command help
$ ./es01/bin/elasticsearch-certutil --help
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.bouncycastle.jcajce.provider.drbg.DRBG (file:/opt/elk74/elasticsearch-7.4.2-01/lib/tools/security-cli/bcprov-jdk15on-1.61.jar) to constructor sun.security.provider.Sun()
WARNING: Please consider reporting this to the maintainers of org.bouncycastle.jcajce.provider.drbg.DRBG
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Simplifies certificate creation for use with the Elastic Stack
Commands
--------
csr  - generate certificate signing requests
cert - generate X.509 certificates and keys
ca   - generate a new local certificate authority
Non-option arguments: command
Option         Description
------         -----------
-h, --help     show help
-s, --silent   show minimal output
-v, --verbose  show verbose output
Create the CA certificate:
# Command help:
$ ./bin/elasticsearch-certutil ca --help
generate a new local certificate authority
Option               Description
------               -----------
-E <KeyValuePair>    Configure a setting
--ca-dn              distinguished name to use for the generated ca. defaults to CN=Elastic Certificate Tool Autogenerated CA
--days <Integer>     number of days that the generated certificates are valid
-h, --help           show help
--keysize <Integer>  size in bits of RSA keys
--out                path to the output file that should be produced
--pass               password for generated private keys
--pem                output certificates and keys in PEM format instead of PKCS#12    ## PKCS#12 is the default; with --pem you get PEM output with separate key, crt, and ca files
-s, --silent         show minimal output
-v, --verbose        show verbose output
# Create the CA certificate
$ ./es01/bin/elasticsearch-certutil ca -v
This tool assists you in the generation of X.509 certificates and certificate signing requests for use with SSL/TLS in the Elastic stack.
The 'ca' mode generates a new 'certificate authority'. This will create a new X.509 certificate and private key that can be used to sign certificates when running in 'cert' mode.
Use the 'ca-dn' option if you wish to configure the 'distinguished name' of the certificate authority
By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]:    # enter the file name to save the CA under
Enter password for elastic-stack-ca.p12 :                       # enter a password for the certificate; we leave it blank here

# By default the CA file is written to the $ES_HOME directory
$ ll es01/
total 560
drwxr-xr-x  2 ec2-user ec2-user   4096 Oct 29 04:45 bin
drwxr-xr-x  2 ec2-user ec2-user    178 Nov 27 13:45 config
drwxrwxr-x  3 ec2-user ec2-user     19 Nov 27 13:46 data
-rw-------  1 ec2-user ec2-user   2527 Nov 27 15:05 elastic-stack-ca.p12    # here it is
drwxr-xr-x  9 ec2-user ec2-user    107 Oct 29 04:45 jdk
drwxr-xr-x  3 ec2-user ec2-user   4096 Oct 29 04:45 lib
-rw-r--r--  1 ec2-user ec2-user  13675 Oct 29 04:38 LICENSE.txt
drwxr-xr-x  2 ec2-user ec2-user   4096 Nov 27 14:48 logs
drwxr-xr-x 37 ec2-user ec2-user   4096 Oct 29 04:45 modules
-rw-r--r--  1 ec2-user ec2-user 523209 Oct 29 04:45 NOTICE.txt
drwxr-xr-x  2 ec2-user ec2-user      6 Oct 29 04:45 plugins
-rw-r--r--  1 ec2-user ec2-user   8500 Oct 29 04:38 README.textile
Option               Description
------               -----------
-E <KeyValuePair>    Configure a setting
--ca                 path to an existing ca key pair (in PKCS#12 format)
--ca-cert            path to an existing ca certificate
--ca-dn              distinguished name to use for the generated ca. defaults to CN=Elastic Certificate Tool Autogenerated CA
--ca-key             path to an existing ca private key
--ca-pass            password for an existing ca private key or the generated ca private key
--days <Integer>     number of days that the generated certificates are valid
--dns                comma separated DNS names    # specify DNS names
-h, --help           show help
--in                 file containing details of the instances in yaml format
--ip                 comma separated IP addresses    # specify IP addresses
--keep-ca-key        retain the CA private key for future use
--keysize <Integer>  size in bits of RSA keys
--multiple           generate files for multiple instances
--name               name of the generated certificate
--out                path to the output file that should be produced
--pass               password for generated private keys
--pem                output certificates and keys in PEM format instead of PKCS#12
-s, --silent         show minimal output
-v, --verbose        show verbose output
# Create the node certificate
$ cd es01
$ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate signing requests for use with SSL/TLS in the Elastic stack.
The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing the details of each instance in a YAML file
    * An instance is any piece of the Elastic Stack that requires an SSL certificate. Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the hostname, which will be used as the Common Name of the certificate. A full distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the name would result in an invalid file or directory name. The name provided here is used as the directory name (within the zip) and the prefix for the key and certificate files. The filename is required if you are prompted and the name is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a comma separated string. If no IP addresses or DNS names are provided, you may disable hostname verification in your SSL configuration.
    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the -ca or -ca-cert command line options.
By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate
If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be a zip file containing individual certificate/key files
Enter password for CA (elastic-stack-ca.p12) :                    # the CA password; we did not set one, so just press Enter
Please enter the desired output file [elastic-certificates.p12]:  # the output file name; keep the default and press Enter
Enter password for elastic-certificates.p12 :                     # a password for the certificate; leave it blank and press Enter
Certificates written to /opt/elk74/elasticsearch-7.4.2-01/elastic-certificates.p12    # output location
This file should be properly secured as it contains the private key for your instance.
This file is a self contained file and can be copied and used 'as is'.
For each Elastic product that you wish to configure, you should copy this '.p12' file to the relevant configuration directory and then follow the SSL configuration instructions in the product guide.
For client applications, you may only need to copy the CA certificate and configure the client to trust this certificate.

$ ll
total 564
drwxr-xr-x  2 ec2-user ec2-user   4096 Oct 29 04:45 bin
drwxr-xr-x  2 ec2-user ec2-user    178 Nov 27 13:45 config
drwxrwxr-x  3 ec2-user ec2-user     19 Nov 27 13:46 data
-rw-------  1 ec2-user ec2-user   3451 Nov 27 15:10 elastic-certificates.p12    # here
-rw-------  1 ec2-user ec2-user   2527 Nov 27 15:05 elastic-stack-ca.p12        # and here
drwxr-xr-x  9 ec2-user ec2-user    107 Oct 29 04:45 jdk
drwxr-xr-x  3 ec2-user ec2-user   4096 Oct 29 04:45 lib
-rw-r--r--  1 ec2-user ec2-user  13675 Oct 29 04:38 LICENSE.txt
drwxr-xr-x  2 ec2-user ec2-user   4096 Nov 27 14:48 logs
drwxr-xr-x 37 ec2-user ec2-user   4096 Oct 29 04:45 modules
-rw-r--r--  1 ec2-user ec2-user 523209 Oct 29 04:45 NOTICE.txt
drwxr-xr-x  2 ec2-user ec2-user      6 Oct 29 04:45 plugins
-rw-r--r--  1 ec2-user ec2-user   8500 Oct 29 04:38 README.textile
full: verifies that the presented certificate is signed by a trusted CA, and also verifies that the server's hostname or IP address matches the names configured in the certificate.
certificate: the mode we use here; it only verifies that the certificate is signed by a trusted CA.
none: performs no verification at all, effectively turning off SSL/TLS certificate validation; use it only in an environment you trust completely.
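Putting the pieces together, here is a minimal sketch of the security/TLS settings for each node's elasticsearch.yml, assuming elastic-certificates.p12 has been copied into each node's config directory (the file names follow the steps above; adjust the paths to your layout):

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12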
With that configured, start the ES nodes again to test:
The nodes now start successfully. Good, let's go back to generating the passwords from before; running the tool on any one node is enough.
$ ./es01/bin/elasticsearch-setup-passwords auto
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y    # enter y to confirm
Changed password for user apm_system
PASSWORD apm_system = yc0GJ9QS4AP69pVzFKiX

Changed password for user kibana
PASSWORD kibana = UKuHceHWudloJk9NvHlX

Changed password for user logstash_system
PASSWORD logstash_system = N6pLSkNSNhT0UR6radrZ

Changed password for user beats_system
PASSWORD beats_system = BmsiDzgx1RzqHIWTri48

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = dflPnqGAQneqjhU1XQiZ

Changed password for user elastic
PASSWORD elastic = Tu8RPllSZz6KXkgZWFHv
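A quick way to confirm that authentication is now in effect, using the elastic superuser and the node address from earlier (the curl call is an illustration, not part of the tool's output):

$ curl -u elastic:Tu8RPllSZz6KXkgZWFHv 'http://172.17.0.87:9200/_cluster/health?pretty'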
Usage: bin/kibana [command=serve] [options]

Kibana is an open source (Apache Licensed), browser based analytics and search dashboard for Elasticsearch.

Commands:
  serve [options]  Run the kibana server
  help <command>   Get the help for a specific command

"serve" Options:
  -e, --elasticsearch <uri1,uri2>  Elasticsearch instances
  -c, --config <path>              Path to the config file, use multiple --config args to include multiple config files (default: ["/opt/elk74/kibana-7.4.2-linux-x86_64/config/kibana.yml"])
  -p, --port <port>                The port to bind to
  -q, --quiet                      Prevent all logging except errors
  -Q, --silent                     Prevent all logging
  --verbose                        Turns on verbose logging
  -H, --host <host>                The host to bind to
  -l, --log-file <path>            The file to log to
  --plugin-dir <path>              A path to scan for plugins, this can be specified multiple times to specify multiple directories (default: ["/opt/elk74/kibana-7.4.2-linux-x86_64/plugins","/opt/elk74/kibana-7.4.2-linux-x86_64/src/legacy/core_plugins"])
  --plugin-path <path>             A path to a plugin which should be included by the server, this can be specified multiple times to specify multiple paths (default: [])
  --plugins <path>                 an alias for --plugin-dir
  --optimize                       Optimize and then stop the server
  -h, --help                       output usage information
A tool for managing settings stored in the Kibana keystore
Options:
  -V, --version  output the version number
  -h, --help     output usage information
Commands:
  create [options]        Creates a new Kibana keystore
  list [options]          List entries in the keystore
  add [options] <key>     Add a string setting to the keystore
  remove [options] <key>  Remove a setting from the keystore
First, create the keystore:
$ bin/kibana-keystore create
Created Kibana keystore in /opt/elk74/kibana-7.4.2-linux-x86_64/data/kibana.keystore    # default location
Options:
  -f, --force   overwrite existing setting without prompting
  -x, --stdin   read setting value from stdin
  -s, --silent  prevent all logging
  -h, --help    output usage information
# Add the elasticsearch.username key; note that the name must match a key used in kibana.yml
$ ./bin/kibana-keystore add elasticsearch.username
Enter value for elasticsearch.username: ******    # enter the value for the key, i.e. the account Kibana uses to connect to ES: kibana
# Add the elasticsearch.password key
$ ./bin/kibana-keystore add elasticsearch.password
Enter value for elasticsearch.password: ********************    # enter the corresponding password: UKuHceHWudloJk9NvHlX
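With the credentials stored in the keystore, kibana.yml no longer needs elasticsearch.username / elasticsearch.password in plain text; a minimal sketch of the remaining connection setting, reusing the node address from the earlier examples:

elasticsearch.hosts: ["http://172.17.0.87:9200"]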