Hello Splunkers!!

Issue Description

We are experiencing a significant delay in data ingestion (>10 hours) for one index in Project B within our Splunk environment. Interestingly, Project A, which operates with a nearly identical configuration, does not exhibit this issue, and data ingestion occurs as expected.

Steps Taken to Diagnose the Issue

To identify the root cause of the delayed ingestion in Project B, the following checks were performed:

1. Timezone Consistency: Verified that the timezone settings on the database server (the source of the data) and the Splunk server are identical, ruling out timestamp misalignment.
2. Props Configuration: Confirmed that the props.conf settings align with the event patterns, ensuring proper event parsing and processing.
3. System Performance: Monitored CPU usage on the Splunk server and found no resource bottlenecks or excessive load.
4. Configuration Comparison: Conducted a thorough comparison of configurations between Project A and Project B, including inputs, outputs, and indexing settings, and found no apparent differences.

Observations

The issue is isolated to Project B, despite both projects sharing similar configurations and infrastructure. Project A processes data without delays, indicating that the Splunk environment and database connectivity are generally functional.
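One way to quantify the delay described above is to compare each event's index time (_indextime) with its extracted timestamp (_time). The SPL sketch below is generic; the index name wmc_projectb is a placeholder for the affected Project B index:

```
index=wmc_projectb earliest=-24h
| eval lag_hours = round((_indextime - _time) / 3600, 2)
| timechart span=1h avg(lag_hours) max(lag_hours)
```

If _indextime is recent but _time is many hours older, the events arrived late at the indexer (source-side or forwarder backlog); if the lag is constant and close to a whole number of hours, a timestamp or timezone parsing offset is the more likely suspect.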
Screenshot 1: (image attached)

Screenshot 2: (image attached)

Event sample:

TIMESTAMP="2025-04-17T21:17:05.868000Z",SOURCE="TransportControllerManager_x.onStatusChangedTransferRequest",IDEVENT="1312670",EVENTTYPEKEY="TRFREQ_CANCELLED",INSTANCEID="210002100",OBJECTTYPE="TRANSFERREQUEST",OPERATOR="1",OPERATORID="1",TASKID="10030391534",TSULABEL="309360376000158328"

props.conf:

[wmc_events]
CHARSET=AUTO
KV_MODE=AUTO
SHOULD_LINEMERGE=false
description= WMC events received from the Oracle database, formatted as key-value pairs
pulldown_type=true
TIME_PREFIX = ^TIMESTAMP=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TZ = UTC
NO_BINARY_CHECK = true
TRUNCATE = 10000000
#MAX_EVENTS = 100000
ANNOTATE_PUNCT = false
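As a quick sanity check on the props.conf above, the Python sketch below replays the TIME_PREFIX/TIME_FORMAT pair against the event sample. This is illustrative only, not Splunk's actual parser: Splunk's %6N six-digit subsecond code is mapped to Python's %f, and the regex is a stand-in for TIME_PREFIX (note the raw value is wrapped in double quotes after TIMESTAMP=):

```python
import re
from datetime import datetime, timezone

# Raw event sample from the post (truncated to the fields that matter here)
raw = ('TIMESTAMP="2025-04-17T21:17:05.868000Z",'
       'SOURCE="TransportControllerManager_x.onStatusChangedTransferRequest"')

# TIME_PREFIX in props.conf is ^TIMESTAMP= ; the value itself is quoted in
# the raw event, so capture the quoted string.
match = re.search(r'^TIMESTAMP="([^"]+)"', raw)

# Splunk's %Y-%m-%dT%H:%M:%S.%6NZ maps to Python's %f for the 6-digit
# subsecond field; TZ = UTC is applied explicitly, as in the stanza.
ts = datetime.strptime(match.group(1), "%Y-%m-%dT%H:%M:%S.%fZ")
ts = ts.replace(tzinfo=timezone.utc)

print(ts.isoformat())  # 2025-04-17T21:17:05.868000+00:00
```

If this parse succeeds but indexed events still show the wrong _time, the mismatch is between the stanza and the real feed (for example a sourcetype not matching [wmc_events]), not in the format string itself.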