How to move Elasticsearch data from one server to another

Backend · Open · 12 answers · 1448 views
悲哀的现实 · asked 2020-12-12 12:27

How do I move Elasticsearch data from one server to another?

I have server A running Elasticsearch 1.1.1 on one local node with multiple indices.

12 answers
  • 2020-12-12 12:57

    If you don't want to use elasticdump as a command-line tool, you can use a Node.js script instead.

  • 2020-12-12 12:59

    If you can add the second server to cluster, you may do this:

    1. Add Server B to cluster with Server A
    2. Increment number of replicas for indices
    3. ES will automatically copy indices to server B
    4. Close server A
    5. Decrement number of replicas for indices

    This will only work if the number of replicas equals the number of nodes.
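    The steps above can be sketched with the cluster REST API. This is a minimal sketch for ES 1.x; `serverA` and `serverB` are placeholder hostnames:

```shell
# 1. After configuring server B to join the cluster (elasticsearch.yml),
#    verify both nodes are visible:
curl -s 'http://serverA:9200/_cat/nodes?v'

# 2. Raise the replica count so every index gets a copy on server B:
curl -s -XPUT 'http://serverA:9200/_all/_settings' \
  -d '{"index": {"number_of_replicas": 1}}'

# 3. Wait until the cluster turns green, i.e. all replicas are allocated:
curl -s 'http://serverA:9200/_cluster/health?wait_for_status=green'

# 4. Shut down server A, then drop the now-unneeded replicas:
curl -s -XPUT 'http://serverB:9200/_all/_settings' \
  -d '{"index": {"number_of_replicas": 0}}'
```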

  • 2020-12-12 13:02

    The selected answer makes it sound slightly more complex than it is; the following is all you need (install npm on your system first).

    npm install -g elasticdump
    elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=mapping
    elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=data
    

    You can skip the first elasticdump command for subsequent copies if the mappings remain constant.

    I have just done a migration from AWS to Qbox.io with the above without any problems.

    More details over at:

    https://www.npmjs.com/package/elasticdump

    Help page (as of Feb 2016) included for completeness:

    elasticdump: Import and export tools for elasticsearch
    
    Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]
    
    --input
                        Source location (required)
    --input-index
                        Source index and type
                        (default: all, example: index/type)
    --output
                        Destination location (required)
    --output-index
                        Destination index and type
                        (default: all, example: index/type)
    --limit
                        How many objects to move in bulk per operation
                        limit is approximate for file streams
                        (default: 100)
    --debug
                        Display the elasticsearch commands being used
                        (default: false)
    --type
                        What are we exporting?
                        (default: data, options: [data, mapping])
    --delete
                        Delete documents one-by-one from the input as they are
                        moved.  Will not delete the source index
                        (default: false)
    --searchBody
                        Perform a partial extract based on search results
                        (when ES is the input,
                        (default: '{"query": { "match_all": {} } }'))
    --sourceOnly
                        Output only the json contained within the document _source
                        Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
                        sourceOnly: {SOURCE}
                        (default: false)
    --all
                        Load/store documents from ALL indexes
                        (default: false)
    --bulk
                        Leverage elasticsearch Bulk API when writing documents
                        (default: false)
    --ignore-errors
                        Will continue the read/write loop on write error
                        (default: false)
    --scrollTime
                        Time the nodes will hold the requested search in order.
                        (default: 10m)
    --maxSockets
                        How many simultaneous HTTP requests can we make?
                        (default:
                          5 [node <= v0.10.x] /
                          Infinity [node >= v0.11.x] )
    --bulk-mode
                        The mode can be index, delete or update.
                        'index': Add or replace documents on the destination index.
                        'delete': Delete documents on destination index.
                        'update': Use 'doc_as_upsert' option with bulk update API to do partial update.
                        (default: index)
    --bulk-use-output-index-name
                        Force use of destination index name (the actual output URL)
                        as destination while bulk writing to ES. Allows
                        leveraging Bulk API copying data inside the same
                        elasticsearch instance.
                        (default: false)
    --timeout
                        Integer containing the number of milliseconds to wait for
                        a request to respond before aborting the request. Passed
                        directly to the request library. If used in bulk writing,
                        it will result in the entire batch not being written.
                        Mostly used when you don't care too much if you lose some
                        data when importing but rather have speed.
    --skip
                        Integer containing the number of rows you wish to skip
                        ahead from the input transport.  When importing a large
                        index, things can go wrong, be it connectivity, crashes,
                        someone forgetting to `screen`, etc.  This allows you
                        to start the dump again from the last known line written
                        (as logged by the `offset` in the output).  Please be
                        advised that since no sorting is specified when the
                        dump is initially created, there's no real way to
                        guarantee that the skipped rows have already been
                        written/parsed.  This is more of an option for when
                        you want to get most data as possible in the index
                        without concern for losing some rows in the process,
                        similar to the `timeout` option.
    --inputTransport
                        Provide a custom js file to use as the input transport
    --outputTransport
                        Provide a custom js file to use as the output transport
    --toLog
                        When using a custom outputTransport, should log lines
                        be appended to the output stream?
                        (default: true, except for `$`)
    --help
                        This page
    
    Examples:
    
    # Copy an index from production to staging with mappings:
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=http://staging.es.com:9200/my_index \
      --type=mapping
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=http://staging.es.com:9200/my_index \
      --type=data
    
    # Backup index data to a file:
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=/data/my_index_mapping.json \
      --type=mapping
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=/data/my_index.json \
      --type=data
    
    # Backup an index to a gzip file using stdout:
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=$ \
      | gzip > /data/my_index.json.gz
    
    # Backup ALL indices, then use Bulk API to populate another ES cluster:
    elasticdump \
      --all=true \
      --input=http://production-a.es.com:9200/ \
      --output=/data/production.json
    elasticdump \
      --bulk=true \
      --input=/data/production.json \
      --output=http://production-b.es.com:9200/
    
    # Backup the results of a query to a file
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=query.json \
      --searchBody '{"query":{"term":{"username": "admin"}}}'
    
    ------------------------------------------------------------------------------
    Learn more @ https://github.com/taskrabbit/elasticsearch-dump
    
  • 2020-12-12 13:09

    We can use elasticdump or multielasticdump to take a backup and restore it; this lets us move data from one server/cluster to another.

    Please find a detailed answer which I have provided here.
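    A minimal sketch of a full dump-and-load with multielasticdump; the hostnames and the backup path are placeholders, and the flags follow the tool's README:

```shell
# Dump the settings, mappings and data of every index to a directory:
multielasticdump \
  --direction=dump \
  --match='^.*$' \
  --input=http://oldserver:9200 \
  --output=/tmp/es_backup

# Load the dump into the new cluster (input is now the directory,
# output the destination Elasticsearch URL):
multielasticdump \
  --direction=load \
  --input=/tmp/es_backup \
  --output=http://newserver:9200
```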

  • 2020-12-12 13:10

    You can take a snapshot of the complete state of your cluster (including all data indices) and restore it (using the restore API) on the new cluster or server.
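    A sketch of the snapshot/restore flow with a shared-filesystem repository; hostnames, the repository name and the backup path are placeholders:

```shell
# Register a filesystem repository (the directory must be listed under
# path.repo in elasticsearch.yml on every node):
curl -XPUT 'http://oldserver:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/es_backups" }
}'

# Snapshot all indices and wait for completion:
curl -XPUT 'http://oldserver:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

# On the new cluster, register the same repository, then restore:
curl -XPOST 'http://newserver:9200/_snapshot/my_backup/snapshot_1/_restore'
```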

  • 2020-12-12 13:11

    There is also the _reindex option

    From documentation:

    Through the Elasticsearch reindex API, available in version 5.x and later, you can connect your new Elasticsearch Service deployment remotely to your old Elasticsearch cluster. This pulls the data from your old cluster and indexes it into your new one. Reindexing essentially rebuilds the index from scratch and it can be more resource intensive to run.

    POST _reindex
    {
      "source": {
        "remote": {
          "host": "https://REMOTE_ELASTICSEARCH_ENDPOINT:PORT",
          "username": "USER",
          "password": "PASSWORD"
        },
        "index": "INDEX_NAME",
        "query": {
          "match_all": {}
        }
      },
      "dest": {
        "index": "INDEX_NAME"
      }
    }
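
    Note that reindex-from-remote requires the old cluster to be whitelisted in the destination cluster's elasticsearch.yml. A minimal sketch with placeholder hostnames:

```shell
# In elasticsearch.yml on the destination cluster:
#   reindex.remote.whitelist: "oldserver:9200"

# Then issue the reindex request against the destination cluster:
curl -XPOST 'http://newserver:9200/_reindex' \
  -H 'Content-Type: application/json' -d '{
  "source": {
    "remote": { "host": "http://oldserver:9200" },
    "index": "my_index"
  },
  "dest": { "index": "my_index" }
}'
```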
    