I am using the s3 input plugin to download/unzip .gz files and feed them to Logstash. Here is the input part of my working Logstash config:
```
input {
  s3 {
    interval => 86400
    bucket => "logstorage-us-west-1"
    aws_credentials_file => "<path_to_cred>"
    region => "us-west-1"
    sincedb_path => "<path_to_sincedb>"
    prefix => "<tenant1_id>/elasticsearch/"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
    type => "elasticsearch"
    add_field => {
     "tenantID" => "<tenant1_id>"
    }    
  }
 
  s3 {
    interval => 86400
    bucket => "logstorage-us-west-1"
    aws_credentials_file => "<path_to_cred>"
    region => "us-west-1"
    sincedb_path => "<path_to_sincedb>"
    prefix => "<tenant2_id>/elasticsearch/"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
    type => "elasticsearch"
    add_field => {
     "tenantID" => "<tenant2_id>"
    }    
  }
}
```
As you can see, most fields are identical except **prefix** and **add_field.tenantID**. I have about 100 tenants, each with a different prefix, but they all follow the same `<tenant#_id>` pattern. I'm new to Logstash and Ruby syntax. How can I remove the duplicated code by moving it into a function, a loop, or something else that's easy to maintain?
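
The only workaround I've come up with so far is to generate the config with a small script. Below is a rough sketch of what I mean (the tenant IDs and output filename are placeholders I made up), but I'd much prefer a way to do this natively in the Logstash config, if one exists:

```
#!/usr/bin/env python3
"""Sketch: generate the repeated s3 input blocks from a list of tenant IDs."""

# Hypothetical tenant IDs -- in reality these would come from a file or an API.
TENANT_IDS = ["tenant1_id", "tenant2_id", "tenant3_id"]

# Template for one s3 block; {{ }} are literal braces, {tenant_id} is substituted.
S3_BLOCK_TEMPLATE = """  s3 {{
    interval => 86400
    bucket => "logstorage-us-west-1"
    aws_credentials_file => "<path_to_cred>"
    region => "us-west-1"
    sincedb_path => "<path_to_sincedb>"
    prefix => "{tenant_id}/elasticsearch/"
    codec => multiline {{
      pattern => "^\\["
      negate => true
      what => "previous"
    }}
    type => "elasticsearch"
    add_field => {{
      "tenantID" => "{tenant_id}"
    }}
  }}
"""

def main():
    # One s3 block per tenant, wrapped in a single input { ... } section.
    blocks = "\n".join(S3_BLOCK_TEMPLATE.format(tenant_id=t) for t in TENANT_IDS)
    config = "input {\n" + blocks + "}\n"
    # Example output path; I'd point Logstash's pipeline config at this file.
    with open("logstash-s3-input.conf", "w") as f:
        f.write(config)

if __name__ == "__main__":
    main()
```

Is something like this the usual approach, or does Logstash itself offer a cleaner way?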

Thanks,
KC
