The issue is occurring in PROD and is related to Logstash; the same issue also occurred in a lower-level environment. We went through the logs on the server and noticed the exception below. Our thinking is that once the threshold of 1500000 is reached, the server has to be restarted. Kindly let us know whether this is a good approach. If yes, we will go ahead with it; otherwise, please suggest an alternative solution.
{:timestamp=>"2017-06-22T05:40:45.135000+0000", :message=>"Redis key size has hit a congestion threshold 500000 suspending output for 5 seconds", :level=>:warn}
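For context, this warning comes from the Logstash Redis output plugin's congestion control: when the Redis list grows past `congestion_threshold` items, the plugin pauses writes for `congestion_interval` seconds instead of letting Redis run out of memory. A minimal sketch of the relevant output block (the key name, host, and values here are hypothetical, chosen only to illustrate the settings):

```
output {
  redis {
    host                 => "redis.example.internal"   # hypothetical host
    data_type            => "list"
    key                  => "logstash"                 # hypothetical key name
    congestion_threshold => 500000   # pause when the list exceeds this many entries
    congestion_interval  => 5        # seconds to suspend output before rechecking
  }
}
```

The warning itself is therefore back-pressure working as designed, not a crash: the consumer side is draining the Redis list more slowly than the producer side is filling it.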
Could you please suggest a solution for this?
How have you determined that Logstash is the bottleneck? What throughput are you currently seeing and what throughput are your downstream systems, e.g. Elasticsearch, able to handle?