Failed to flush chunk - how to solve related issues

Environment: Elasticsearch 7.6.2, Kibana 7.6.2, fluent/fluent-bit 1.8.15, k3s 1.19.8 with the docker-ce backend (20.10.12).

Fluent Bit's engine keeps logging warnings like the ones below, and the retry delays keep growing until the same chunks are being retried hundreds of seconds later:

[2022/03/25 07:08:30] [ warn] [engine] failed to flush chunk '1-1648192103.858183.flb', retry in 18 seconds: task_id=5, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920802.180669296.flb', retry in 1160 seconds: task_id=608, input=tail.0 > output=es.0 (out_id=0)

With debug logging enabled, every flush looks successful at the HTTP level:

[2022/03/25 07:08:48] [debug] [outputes.0] HTTP Status=200 URI=/_bulk

but the bulk response body shows that individual items were rejected with status 400:

{"took":2414,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"juMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}
Hi @yangtian9999, enabling debug logging in fluent-bit should give more info. Can you please enable debug log level and share the log? Otherwise, share steps to reproduce, including your config. At debug level you can see the tail input tracking container log files (inotify IN_MODIFY events, files appended with offset=0, deleted files, removed watches) while the es output keeps retrying the same chunks:

[2022/03/25 07:08:41] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=1772861 with offset=0 appended as /var/log/containers/hello-world-wpr5j_argo_wait-76bcd0771f3cc7b5f6b5f15f16ee01cc0c671fb047b93910271bc73e753e26ee.log
[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=3076476 file has been deleted: /var/log/containers/hello-world-89skv_argo_wait-5d919c301d4709b0304c6c65a8389aac10f30b8617bd935a9680a84e1873542b.log
[2022/03/25 07:08:41] [ warn] [engine] failed to flush chunk '1-1648192097.600252923.flb', retry in 26 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)

When the bulk response is large, the HTTP client can also fail to read it at all:

[2022/03/24 04:20:51] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:20:51] [error] [outputes.0] could not pack/validate JSON response
The output plugins group events into chunks, and the Elasticsearch _bulk endpoint returns HTTP 200 even when individual items inside the request fail. The es output sees "errors":true in the response and schedules a retry for the whole chunk:

[2022/03/25 07:08:44] [debug] [retry] new retry created for task_id=17 attempts=1
[2022/03/24 04:19:49] [debug] [retry] re-using retry for task_id=1 attempts=3

It's also possible for the reported HTTP status to be zero because the response is unparseable -- specifically, the source uses atoi() -- but flb_http_do() will still return successfully. Meanwhile the accumulating retries create backpressure and the tail input gets paused:

[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
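Since fluent-bit only logs the chunk-level retry, it can help to pull the per-item error out of a captured _bulk response yourself. A minimal sketch (the helper name `first_bulk_error` and the trimmed sample body are illustrative, not part of fluent-bit):

```python
import json

def first_bulk_error(body: str):
    """Return (status, error type, reason) for the first rejected item
    in an Elasticsearch _bulk response, or None if all items succeeded."""
    resp = json.loads(body)
    if not resp.get("errors"):
        return None
    for item in resp["items"]:
        # Each item is keyed by its action: {"create": {...}}, {"index": {...}}, ...
        result = next(iter(item.values()))
        if result.get("status", 200) >= 300:
            err = result.get("error", {})
            return result["status"], err.get("type"), err.get("reason")
    return None

# Trimmed-down sample shaped like the response in the logs above
body = json.dumps({
    "took": 2414, "errors": True,
    "items": [{"create": {
        "_index": "logstash-2022.03.24", "status": 400,
        "error": {"type": "mapper_parsing_exception",
                  "reason": "Could not dynamically add mapping for field "
                            "[app.kubernetes.io/instance]."}}}]
})
print(first_bulk_error(body))
```

Running this against a saved response immediately surfaces the mapper_parsing_exception instead of an opaque "failed to flush chunk".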
The root cause is a mapping conflict in the target index. logstash-2022.03.24 already maps kubernetes.labels.app as text, but incoming records carry the Kubernetes label app.kubernetes.io/instance; dynamic dot-expansion would turn kubernetes.labels.app into an object, which Elasticsearch rejects:

"reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."

Because the chunk is replayed as-is, every retry fails the same way and the warnings never stop. Two other failure modes show up in similar reports: you're sending more data than the cluster can index, and Fluentd does not handle a large number of buffered chunks well when starting up -- for example:

error_class="ArgumentError" error="Data too big (189382 bytes), would create more than 128 chunks!" plugin_id="object:3fee25617fbc"

Because of this, cache memory increases and td-agent fails to send messages to Graylog.
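One way to avoid the conflict is to control the mapping up front instead of relying on dynamic mapping. A sketch of a legacy index template (the template name logstash-k8s-labels is made up for illustration) that maps kubernetes.labels as a single flattened field, so label keys like app and app.kubernetes.io/instance no longer collide:

```
PUT _template/logstash-k8s-labels
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "kubernetes": {
        "properties": {
          "labels": { "type": "flattened" }
        }
      }
    }
  }
}
```

Note that the flattened type requires Elasticsearch 7.3 or later, and a template only affects newly created indices -- the existing daily index keeps its conflicting mapping.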
Diagnosing this from the Fluent Bit side is hard: when there are lots of messages in the request/chunk and the rejected message is at the end of the list, you never see the cause in the fluent-bit logs. The scattered [OUTPUT] fragments in the logs belong to the Helm chart configuration:

[OUTPUT]
    Name            es
    Host            {{ .Release.Name }}-elasticsearch-master
    Logstash_Format On
    Retry_Limit     False

With Retry_Limit False the chunk is retried forever, so permanently rejected records keep cycling through the retry queue. A separate bug report describes an issue when recycling multiple keep-alive TLS connections to the upstream; when there is only one opened connection to the upstream, or no TLS is used, everything works fine.
I had similar issues with failed to flush chunk in fluent-bit logs, and eventually figured out that the index I was trying to send logs to already had a _type set to doc, while fluent-bit was trying to send with _type set to _doc (which is the default). Setting Type doc in the es [OUTPUT] helped in my case; I only changed the output config since it's a subchart. After that I can see the logs in Kibana that were successfully uploaded, but the records rejected with the mapping error are still missing -- they have to be dropped or re-mapped, not just retried.

Related issues: [1.7] Fails to send data to ElasticSearch (fluent/fluent-bit#3052), chunks are getting stuck (fluent/fluent-bit#3014), Failed to flush chunks (fluent/fluent-bit#3499), Fluentbit to Splunk HEC forwarding issue (fluent/fluent-bit#2150), sassoftware/viya4-monitoring-kubernetes#431.
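Putting the workarounds together, a sketch of what the es output section could look like (values such as the Host template and the retry count come from the chart above or are illustrative, not verified against this cluster). Replace_Dots On makes the es output rewrite dots in field names to underscores, which sidesteps the object-vs-text conflict on kubernetes.labels.app, and a finite Retry_Limit stops permanently rejected chunks from being retried forever:

```
[OUTPUT]
    Name            es
    Match           *
    Host            {{ .Release.Name }}-elasticsearch-master
    Port            9200
    Logstash_Format On
    Type            doc
    Replace_Dots    On
    Retry_Limit     5
```

The trade-off of a finite Retry_Limit is that chunks failing for transient reasons (cluster overload, network) are eventually dropped too, so it pairs best with fixing the mapping itself.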