This paper describes our entry for the INLG 2018 E2E NLG challenge. Generating fluent natural language descriptions from structured data is a key sub-task for conversational agents. In the E2E NLG challenge, the task is to generate these utterances conditioned on multiple attributes and values. Our system extends the general-purpose sequence-to-sequence (S2S) architecture to model the latent content selection process, in particular through different variants of copy attention and coverage decoding. In addition, we propose a new training method based on diverse ensembling that encourages the model to learn latent plans during training. We empirically evaluate these techniques and show that they increase the quality of the generated text across five automated metrics. Out of a total of sixty submitted systems from 16 institutions, our best system ranks first in three of the five metrics, including ROUGE.