I have done research in areas such as natural language understanding and generation, neural and statistical machine translation, syntactic and discourse parsing, question answering, automatic summarization, and evaluation methodologies and quality prediction for structured textual output. I am currently focusing on micro-reading (understanding the meaning of a paragraph or document well enough to answer queries about its content) and abstractive language generation (creating novel language output from representations built over unstructured textual or visual information). The techniques I use are a mix of unsupervised and supervised deep learning methods based on neural networks.