I'm trying to forward application logs from Fluentd to Elasticsearch on AWS (Amazon Elasticsearch Service), but it isn't working.
Forwarding to S3 and Redshift succeeds; only ES receives nothing (no documents are added).

When application logs come in, the last thing Fluentd logs is Connection opened to Elasticsearch cluster, and after that nothing else appears, not even an error.

If anyone has an idea what's going on, I'd appreciate any pointers.

Fluentd log

2016-09-14 11:40:45 +0900 [debug]: plugin/out_redshift.rb:95:write: start creating gz.
2016-09-14 11:40:45 +0900 [info]: plugin/out_redshift.rb:116:write: start S3 updating. s3_uri=s3://a-i-ad.net/taka/relay/AppLogA/asp_inclick_09/201609/160914/11/taka-rly02-20160914-114045_00.gz
2016-09-14 11:40:45 +0900 [info]: plugin/out_redshift.rb:121:write: complete S3 updating. s3_Path=taka/relay/AppLogA/asp_inclick_09/201609/160914/11/taka-rly02-20160914-114045_00.gz
2016-09-14 11:40:45 +0900 [debug]: plugin/out_redshift.rb:129:write: start copying. s3_uri=s3://a-i-ad.net/taka/relay/AppLogA/asp_inclick_09/201609/160914/11/taka-rly02-20160914-114045_00.gz
2016-09-14 11:40:45 +0900 [info]: plugin/out_redshift.rb:130:write: start copying. s3_uri=s3://a-i-ad.net/taka/relay/AppLogA/asp_inclick_09/201609/160914/11/taka-rly02-20160914-114045_00.gz
2016-09-14 11:40:46 +0900 [info]: plugin/out_elasticsearch.rb:151:client: Connection opened to Elasticsearch cluster => {:host=>"search-hedgehog-wwyghti6zkhvk3qeob2v4g2p64.us-west-2.es.amazonaws.com", :port=>443, :scheme=>"https"}
2016-09-14 11:40:48 +0900 [info]: plugin/out_redshift.rb:134:write: completed copying to redshift. s3_uri=s3://a-i-ad.net/taka/relay/AppLogA/asp_inclick_09/201609/160914/11/taka-rly02-20160914-114045_00.gz
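
As a sanity check I want to query the cluster directly and see whether any documents made it in at all. Something like the following should list the indices and their document counts (endpoint taken from the log above; this assumes the domain's access policy allows unsigned requests from this host). Note that with logstash_format true the plugin writes to logstash-<logstash_dateformat> indices rather than index_name, so that is where I would expect the documents to show up:

import requests

# Endpoint copied from the Fluentd log above.
ENDPOINT = "https://search-hedgehog-wwyghti6zkhvk3qeob2v4g2p64.us-west-2.es.amazonaws.com"

# _cat/indices is a standard Elasticsearch API; "?v" adds a header row.
resp = requests.get(ENDPOINT + "/_cat/indices?v")
print(resp.status_code)
print(resp.text)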

Fluentd conf file

<match AppLogA.**>
  type copy

    # for debugging only
    <store>
      type stdout
    </store>

    <store>
      type forest
      subtype redshift
      remove_prefix AppLogA.
      <template>

        aws_key_id ***
        aws_sec_key ***
        s3_bucket a-i-ad.net
        s3_endpoint s3-us-west-2.amazonaws.com
        path /taka/relay
        timestamp_key_format AppLogA/${tag_parts[1]}_${tag_parts[2]}/%Y%m/%y%m%d/%H/taka-rly02-%Y%m%d-%H%M%S

        # redshift
        redshift_host uboat.a-i-ad.net
        redshift_port 5439
        redshift_dbname dmt01
        redshift_user relayuser
        redshift_password ***
        redshift_schemaname demand
        redshift_tablename ${tag_parts[1]}_${tag_parts[2]}
        file_type msgpack

        # buffer
        buffer_type file
        buffer_path /var/log/fluent/buffer/${tag_parts[1]}_A_${tag_parts[2]}.buffer
        flush_interval 10s
        buffer_chunk_limit 1024m
        buffer_queue_limit 400
        retry_wait 1s
        retry_limit 17

        num_threads 16
        #flush_at_shutdown true

        <secondary>
          type file
          path /var/log/fluent/failed/${tag_parts[1]}_A_${tag_parts[2]}_failed
        </secondary>

      </template>
    </store>

  <store>
    type forest
    subtype elasticsearch
    remove_prefix AppLogA.
    <template>
        #type elasticsearch
        index_name log
        type_name asp_inclick_201609 #${tag_parts[1]}_2016${tag_parts[2]}
        include_tag_key true
        tag_key @log_name

        host search-***.us-west-2.es.amazonaws.com
        port 443
        scheme https

        logstash_dateformat  %Y.%m.%d-%H
        logstash_format true
        flush_interval 10s
        retry_limit 5

        buffer_type file
        buffer_path /var/log/fluent/buffer/${tag_parts[1]}/${tag_parts[2]}/
        resurrect_after 5s
        reload_connections false
        tag esAppLogA.${tag_parts[1]}.${tag_parts[2]}
    </template>
  </store>

  <store>
    type forest
    subtype flowcounter
    <template>
      count_keys *
      unit minute
      aggregate all
      tag flowAppLogA.${tag_parts[1]}.${tag_parts[2]}
    </template>
  </store>

</match>
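
To isolate the problem, a stripped-down match that sends a fixed tag straight to the elasticsearch output, without forest and without ${tag_parts} placeholders, might also help. A minimal sketch (the esTest tag is only for this test; the other parameters are copied from the config above):

<match esTest.**>
  type elasticsearch
  host search-***.us-west-2.es.amazonaws.com
  port 443
  scheme https
  logstash_format true
  logstash_dateformat %Y.%m.%d-%H
  include_tag_key true
  tag_key @log_name
  flush_interval 5s
  buffer_type memory
</match>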

#
# Write the event counts to a file.
#
<match flowAppLogA.**>
  type file
  path /var/log/fluent/logs/flowAppLogA.log
  time_slice_format %Y%m%d
  time_slice_wait 10m
  time_format %Y-%m-%d %H:%M:%S
</match>
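
And to rule out the domain itself (access policy etc.), one hand-written document posted directly, assuming the ES 2.x-era /index/type URL layout; the /log/asp_inclick_201609 path mirrors index_name and type_name from the config above:

import requests

ENDPOINT = "https://search-hedgehog-wwyghti6zkhvk3qeob2v4g2p64.us-west-2.es.amazonaws.com"

# Index one document into the index/type the config targets.
# A 2xx response means the domain accepts unsigned writes from this host;
# a 403 would point at the access policy rather than at Fluentd.
doc = {"message": "manual test", "@timestamp": "2016-09-14T11:40:45+0900"}
resp = requests.post(ENDPOINT + "/log/asp_inclick_201609", json=doc)
print(resp.status_code, resp.text)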